https://en.wikipedia.org/wiki/Daffodil%20Polytechnic%20Institute
Daffodil Polytechnic Institute is a private polytechnic institute located in Dhaka, Bangladesh. The campus is located at Dhanmondi. Daffodil Polytechnic Institute has been functioning since 2006 to develop professionals in different fields of education and training under the Bangladesh Technical Education Board (BTEB). It is the first and only polytechnic institute in the country to have been awarded internationally. Daffodil Polytechnic is one of the top-ranking polytechnics in Bangladesh. https://dpi.ac Departments Currently there are eight departments: Civil Department Electrical Department Computer Science Department Textile Department Apparel Manufacturing Department Telecommunication Department Architecture and Interior Design Department Graphic Design Department History The polytechnic was established in 2006 with the approval of the Bangladesh Technical Education Board and the Government of Bangladesh's Ministry of Education. Campuses The institute has multiple campuses within Dhaka. The main campus and academic building 1 are located in Dhanmondi, and the other campus is in Kalabagan, with library and hostel facilities for both male and female students. Academics Departments Computer Engineering Technology Electrical Engineering Technology Civil Engineering Technology Architecture & Interior Design Technology Textile Engineering Technology Garments Design & Pattern Making Technology Telecommunication Engineering Technology Graphic Design Engineering Technology Principals Mohammad Nuruzzaman (31 July 2006 – 30 April 2013) K M Hasan Ripon (1 May 2013 – 31 May 2016) Wiz Khalifa (1 June 2016 – 30 April 2019) K M Hasan Ripon (1 May 2019 – present) Online admission The polytechnic facilitates online admission for applicants from distant areas. Clubs Kolorob Cultural Club Computer Club Language Club DPI Alumni Association Blood Donating Club Tourism Club International Activities A group of students from Daffodil Polytech Instit
https://en.wikipedia.org/wiki/List%20of%20large%20cardinal%20properties
This page includes a list of cardinals with large cardinal properties. It is arranged roughly in order of the consistency strength of the axiom asserting the existence of cardinals with the given property. Existence of a cardinal number κ of a given type implies the existence of cardinals of most of the types listed above that type, and for most listed cardinal descriptions φ of lesser consistency strength, Vκ satisfies "there is an unbounded class of cardinals satisfying φ". The following table usually arranges cardinals in order of consistency strength, with size of the cardinal used as a tiebreaker. In a few cases (such as strongly compact cardinals) the exact consistency strength is not known and the table uses the current best guess. "Small" cardinals: 0, 1, 2, ..., ℵ₀, ℵ₁, ..., ℵ_α, ... (see Aleph number) worldly cardinals weakly and strongly inaccessible, α-inaccessible, and hyper inaccessible cardinals weakly and strongly Mahlo, α-Mahlo, and hyper Mahlo cardinals. reflecting cardinals weakly compact (= Π¹₁-indescribable), Πᵐₙ-indescribable, totally indescribable cardinals λ-unfoldable, unfoldable cardinals, ν-indescribable cardinals and λ-shrewd, shrewd cardinals (not clear how these relate to each other). ethereal cardinals, subtle cardinals almost ineffable, ineffable, n-ineffable, totally ineffable cardinals remarkable cardinals α-Erdős cardinals (for countable α), 0# (not a cardinal), γ-iterable, γ-Erdős cardinals (for uncountable γ) almost Ramsey, Jónsson, Rowbottom, Ramsey, ineffably Ramsey, completely Ramsey, strongly Ramsey, super Ramsey cardinals measurable cardinals, 0† λ-strong, strong cardinals, tall cardinals Woodin, weakly hyper-Woodin, Shelah, hyper-Woodin cardinals superstrong cardinals (=1-superstrong; for n-superstrong for n≥2 see further down.) subcompact, strongly compact (Woodin < strongly compact ≤ supercompact), supercompact, hypercompact cardinals η-extendible, extendible cardinals Vopěnka cardinals, Shelah for supercompactness,
https://en.wikipedia.org/wiki/Routing%20protocol
A routing protocol specifies how routers communicate with each other to distribute information that enables them to select paths between nodes on a computer network. Routers perform the traffic directing functions on the Internet; data packets are forwarded through the networks of the internet from router to router until they reach their destination computer. Routing algorithms determine the specific choice of route. Each router has a prior knowledge only of networks attached to it directly. A routing protocol shares this information first among immediate neighbors, and then throughout the network. This way, routers gain knowledge of the topology of the network. The ability of routing protocols to dynamically adjust to changing conditions such as disabled connections and components and route data around obstructions is what gives the Internet its fault tolerance and high availability. The specific characteristics of routing protocols include the manner in which they avoid routing loops, the manner in which they select preferred routes, using information about hop costs, the time they require to reach routing convergence, their scalability, and other factors such as relay multiplexing and cloud access framework parameters. Certain additional characteristics such as multilayer interfacing may also be employed as a means of distributing uncompromised networking gateways to authorized ports. This has the added benefit of preventing issues with routing protocol loops. Many routing protocols are defined in technical standards documents called RFCs. Types Although there are many types of routing protocols, three major classes are in widespread use on IP networks: Interior gateway protocols type 1, link-state routing protocols, such as OSPF and IS-IS Interior gateway protocols type 2, distance-vector routing protocols, such as Routing Information Protocol, RIPv2, IGRP. Exterior gateway protocols are routing protocols used on the Internet for exchanging routing info
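A minimal sketch in Python of the distance-vector idea described above: each router initially knows only its directly attached links and learns the rest of the topology by repeatedly exchanging tables with immediate neighbors. The topology, node names and update loop are illustrative assumptions, not the wire format of any real protocol such as RIP.

```python
# Distance-vector (Bellman-Ford style) routing sketch: routers learn paths
# only from their immediate neighbors and iterate until the tables converge.

# topology: cost of each direct link (hop cost), both directions listed
links = {
    ("A", "B"): 1, ("B", "A"): 1,
    ("B", "C"): 2, ("C", "B"): 2,
    ("A", "D"): 5, ("D", "A"): 5,
    ("C", "D"): 1, ("D", "C"): 1,
}
nodes = {n for pair in links for n in pair}

INF = float("inf")
# each router initially knows only itself (cost 0) and its direct neighbors
tables = {n: {m: (0 if m == n else links.get((n, m), INF)) for m in nodes}
          for n in nodes}

def exchange_round(tables):
    """One round of neighbors advertising their tables to each other."""
    changed = False
    for (u, v), cost in links.items():
        for dest, d in tables[v].items():      # v advertises its routes to u
            if cost + d < tables[u][dest]:     # better path via neighbor v?
                tables[u][dest] = cost + d
                changed = True
    return changed

rounds = 0
while exchange_round(tables):                  # iterate until convergence
    rounds += 1

print(f"converged after {rounds} rounds")
print("A's routing costs:", tables["A"])       # e.g. A -> C now costs 3 via B
```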
https://en.wikipedia.org/wiki/Optogenetic%20methods%20to%20record%20cellular%20activity
Optogenetics began with methods to alter neuronal activity with light, using e.g. channelrhodopsins. In a broader sense, optogenetic approaches also include the use of genetically encoded biosensors to monitor the activity of neurons or other cell types by measuring fluorescence or bioluminescence. Genetically encoded calcium indicators (GECIs) are used frequently to monitor neuronal activity, but other cellular parameters such as membrane voltage or second messenger activity can also be recorded optically. The use of optogenetic sensors is not restricted to neuroscience, but plays increasingly important roles in immunology, cardiology and cancer research. History The first experiments to measure intracellular calcium levels via protein expression were based on aequorin, a bioluminescent protein from the jellyfish Aequorea. To produce light, however, this enzyme needs the 'fuel' compound coelenterazine, which has to be added to the preparation. This is not practical in intact animals, and in addition, the temporal resolution of bioluminescence imaging is relatively poor (seconds to minutes). The first genetically encoded fluorescent calcium indicator (GECI) to be used to image activity in an animal was cameleon, designed by Atsushi Miyawaki, Roger Tsien and coworkers in 1997. Cameleon was first used successfully in an animal by Rex Kerr, William Schafer and coworkers to record from neurons and muscle cells of the nematode C. elegans. Cameleon was subsequently used to record neural activity in flies and zebrafish. In mammals, the first GECI to be used in vivo was GCaMP, first developed by Junichi Nakai and coworkers in 2001. GCaMP has undergone numerous improvements, notably by a team of scientists at the Janelia Farm Research Campus (GENIE project, HHMI), and GCaMP6 in particular has become widely used in neuroscience. Very recently, G protein-coupled receptors have been harnessed to generate a series of highly specific indicators for various neurotransmitters. Desi
https://en.wikipedia.org/wiki/List%20of%20letters%20used%20in%20mathematics%2C%20science%2C%20and%20engineering
Latin and Greek letters are used in mathematics, science, engineering, and other areas where mathematical notation is used as symbols for constants, special functions, and also conventionally for variables representing certain quantities. Some common conventions: Intensive quantities in physics are usually denoted with minuscules (lowercase letters), while extensive quantities are denoted with capital letters. Most symbols are written in italics. Vectors can be denoted in boldface. Sets of numbers are typically bold or blackboard bold. Latin Greek Other scripts Hebrew Cyrillic Japanese Modified Latin Modified Greek
https://en.wikipedia.org/wiki/RAM%20image
A RAM image is a sequence of machine code instructions and associated data kept permanently in the non-volatile ROM memory of an embedded system, which is copied into volatile RAM by a bootstrap loader. Typically the RAM image is loaded into RAM when the system is switched on, and it contains a second-level bootstrap loader and basic hardware drivers, enabling the unit to function as desired, or else more sophisticated software to be loaded into the system. Embedded systems
https://en.wikipedia.org/wiki/Circannual%20cycle
A circannual cycle is a biological process that occurs in living creatures over a period of approximately one year. This cycle was first discovered by Ebo Gwinner and Canadian biologist Ted Pengelley. It is classified as an infradian rhythm, that is, a biological process with a period longer than that of a circadian rhythm (a frequency of less than one cycle per 24 hours). These processes continue even in artificial environments in which seasonal cues have been removed by scientists. The term circannual is Latin, circa meaning approximately and annual relating to one year. Chronobiology is the field of biology pertaining to periodic rhythms that occur in living organisms in response to external stimuli such as photoperiod. These cycles arise from genetic evolution in animals, which allows them to create regulatory cycles that improve their fitness. Evolution of these traits comes from the increased reproductive success of animals most capable of predicting regular changes in the environment, such as seasonal changes, and of adapting to capitalize on the times when success was greatest. The idea of evolved biological clocks exists not only for animals but also for plant species, which exhibit cyclic behaviors without environmental cues. Plentiful research has been done on biological clocks and the behaviors they are responsible for in animals; circannual rhythms are just one example of a biological clock. Rhythms are driven by hormone cycles, and seasonal rhythms can endure for long periods of time in animals even without the photoperiod signaling that comes with seasonal changes. They are a driver of annual behaviors such as hibernation, mating and the gain or loss of weight for seasonal changes. Circannual cycles are defined by three main aspects: they must persist without apparent time cues, be able to be phase shifted, and not be changed by temperature. Circannual cycles have important impacts on when animal behaviors are performed and the success of those behaviors. Circannu
https://en.wikipedia.org/wiki/List%20of%20plasma%20physicists
This is a list of physicists who have worked in or made notable contributions to the field of plasma physics. See also Whistler (radio) waves Langmuir waves Plasma physicists Plasma physicists
https://en.wikipedia.org/wiki/List%20of%20derivatives%20and%20integrals%20in%20alternative%20calculi
There are many alternatives to the classical calculus of Newton and Leibniz; for example, each of the infinitely many non-Newtonian calculi. Occasionally an alternative calculus is more suited than the classical calculus for expressing a given scientific or mathematical idea. The table below is intended to assist people working with the alternative calculus called the "geometric calculus" (or its discrete analog). Interested readers are encouraged to improve the table by inserting citations for verification, and by inserting more functions and more calculi. Table In the following table, ψ(x) is the digamma function, K(x) is the K-function, !x is the subfactorial, and B_a(x) are the Bernoulli polynomials generalized to real numbers. See also Indefinite product Product integral Fractal derivative
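For orientation, the standard definitions of the geometric (multiplicative) derivative and integral around which such a table is built, written in LaTeX. These are the usual textbook forms; notation varies between authors.

```latex
% Geometric (multiplicative) derivative and integral -- standard definitions,
% given here for orientation only; notation varies between authors.
\[
  f^{*}(x) \;=\; \lim_{h \to 0} \left( \frac{f(x+h)}{f(x)} \right)^{1/h}
           \;=\; \exp\!\left( \frac{f'(x)}{f(x)} \right)
           \;=\; e^{(\ln f)'(x)},
\qquad
  \int^{*} f(x)^{\,dx} \;=\; \exp\!\left( \int \ln f(x) \, dx \right).
\]
```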
https://en.wikipedia.org/wiki/Memory%20dependence%20prediction
Memory dependence prediction is a technique, employed by high-performance out-of-order execution microprocessors that execute memory access operations (loads and stores) out of program order, to predict true dependencies between loads and stores at instruction execution time. With the predicted dependence information, the processor can then decide to speculatively execute certain loads and stores out of order, while preventing other loads and stores from executing out-of-order (keeping them in-order). Later in the pipeline, memory disambiguation techniques are used to determine if the loads and stores were correctly executed and, if not, to recover. By using the memory dependence predictor to keep most dependent loads and stores in order, the processor gains the benefits of aggressive out-of-order load/store execution but avoids many of the memory dependence violations that occur when loads and stores were incorrectly executed. This increases performance because it reduces the number of pipeline flushes that are required to recover from these memory dependence violations. See the memory disambiguation article for more information on memory dependencies, memory dependence violations, and recovery. In general, memory dependence prediction predicts whether two memory operations are dependent, that is, if they interact by accessing the same memory location. Besides using store to load (RAW or true) memory dependence prediction for the out-of-order scheduling of loads and stores, other applications of memory dependence prediction have been proposed. See for example. Memory dependence prediction is an optimization on top of memory dependency speculation. Sequential execution semantics imply that stores and loads appear to execute in the order specified by the program. However, as with out-of-order execution of other instructions, it may be possible to execute two memory operations in a different order from that implied by the program. This is possible when the two oper
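An illustrative Python sketch of the simplest kind of memory dependence predictor described above: a table of "wait" bits indexed by a hash of the load's program counter. A load that previously caused a store-to-load violation is predicted dependent and held until older stores resolve; otherwise it may issue speculatively. The class name, table size and training policy are assumptions for illustration, not any specific processor's mechanism.

```python
# Simple "wait bit" memory dependence predictor sketch (software model only).

TABLE_SIZE = 256

class DependencePredictor:
    def __init__(self):
        self.wait_bit = [False] * TABLE_SIZE

    def _index(self, load_pc):
        return load_pc % TABLE_SIZE

    def predict_dependent(self, load_pc):
        """True -> keep this load ordered behind older, unresolved stores."""
        return self.wait_bit[self._index(load_pc)]

    def train_on_violation(self, load_pc):
        """Called when memory disambiguation detects the load ran too early."""
        self.wait_bit[self._index(load_pc)] = True

    def periodic_clear(self):
        """Occasionally forget, so loads can become speculative again."""
        self.wait_bit = [False] * TABLE_SIZE

# usage: the scheduler consults the predictor before issuing a load
pred = DependencePredictor()
if pred.predict_dependent(load_pc=0x40123):
    pass  # hold the load until prior store addresses are known
else:
    pass  # issue the load speculatively; on a violation, call train_on_violation()
```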
https://en.wikipedia.org/wiki/Direct%20proof
In mathematics and logic, a direct proof is a way of showing the truth or falsehood of a given statement by a straightforward combination of established facts, usually axioms, existing lemmas and theorems, without making any further assumptions. In order to directly prove a conditional statement of the form "If p, then q", it suffices to consider the situations in which the statement p is true. Logical deduction is employed to reason from assumptions to conclusion. The type of logic employed is almost invariably first-order logic, employing the quantifiers for all and there exists. Common proof rules used are modus ponens and universal instantiation. In contrast, an indirect proof may begin with certain hypothetical scenarios and then proceed to eliminate the uncertainties in each of these scenarios until an inescapable conclusion is forced. For example, instead of showing directly p ⇒ q, one proves its contrapositive ~q ⇒ ~p (one assumes ~q and shows that it leads to ~p). Since p ⇒ q and ~q ⇒ ~p are equivalent by the principle of transposition (see law of excluded middle), p ⇒ q is indirectly proved. Proof methods that are not direct include proof by contradiction, including proof by infinite descent. Direct proof methods include proof by exhaustion and proof by induction. History and etymology A direct proof is the simplest form of proof there is. The word ‘proof’ comes from the Latin word probare, which means “to test”. The earliest use of proofs was prominent in legal proceedings. A person with authority, such as a nobleman, was said to have probity, which meant that evidence given on his relative authority outweighed empirical testimony. In days gone by, mathematics and proof were often intertwined with practical questions – with populations like the Egyptians and the Greeks showing an interest in surveying land. This led to a natural curiosity with regard to geometry and trigonometry – particularly triangles and rectangles. These were the shapes
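A short worked example of a direct proof of a conditional "If p, then q", written in LaTeX: one assumes p and reasons forward to q using only definitions and established facts. The statement chosen here (evenness of n²) is a standard illustration, not drawn from the article.

```latex
% A worked example of a direct proof: "If n is even, then n^2 is even."
% (The proof environment requires the amsthm package.)
\begin{proof}
Assume $p$: $n$ is even. Then $n = 2k$ for some integer $k$.
Hence $n^{2} = (2k)^{2} = 4k^{2} = 2(2k^{2})$, and $2k^{2}$ is an integer,
so $n^{2}$ is even, which is $q$. Thus $p \Rightarrow q$ is established
directly from the definition of evenness, with no further assumptions.
\end{proof}
```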
https://en.wikipedia.org/wiki/Passthrough%20%28electronics%29
In signal processing, a passthrough is a logic gate that enables a signal to "pass through" either unaltered or with little alteration. Sometimes the concept of a "passthrough" can also involve daisy chain logic. Examples of passthroughs Analog passthrough (for digital TV) Sega 32X (passthrough for Sega Genesis video games) VCRs, DVD recorders, etc. act as a "pass-through" for composite video and S-video, though sometimes they act as an RF modulator for use on Channel 3. Tape monitor features allow an AV receiver (sometimes the recording device itself) to act as a "pass-through" for audio. An AV receiver usually allows pass-through of the video signal while amplifying the audio signal to drive speakers. See also Dongle, a device that converts a signal, instead of just being a "passthrough" for an unaltered signal Signal processing Electrical engineering
https://en.wikipedia.org/wiki/Three-dimensional%20integrated%20circuit
A three-dimensional integrated circuit (3D IC) is a MOS (metal-oxide semiconductor) integrated circuit (IC) manufactured by stacking as many as 16 or more ICs and interconnecting them vertically using, for instance, through-silicon vias (TSVs) or Cu-Cu connections, so that they behave as a single device to achieve performance improvements at reduced power and with a smaller footprint than conventional two-dimensional processes. The 3D IC is one of several 3D integration schemes that exploit the z-direction to achieve electrical performance benefits in microelectronics and nanoelectronics. 3D integrated circuits can be classified by their level of interconnect hierarchy at the global (package), intermediate (bond pad) and local (transistor) level. In general, 3D integration is a broad term that includes such technologies as 3D wafer-level packaging (3DWLP); 2.5D and 3D interposer-based integration; 3D stacked ICs (3D-SICs); 3D heterogeneous integration; and 3D systems integration; as well as true monolithic 3D ICs. International organizations such as the Jisso Technology Roadmap Committee (JIC) and the International Technology Roadmap for Semiconductors (ITRS) have worked to classify the various 3D integration technologies to further the establishment of standards and roadmaps of 3D integration. As of the 2010s, 3D ICs are widely used for NAND flash memory and in mobile devices. Types 3D ICs vs. 3D Packaging 3D packaging refers to 3D integration schemes that rely on traditional interconnection methods such as wire bonding and flip chip to achieve vertical stacking. 3D packaging can be divided into 3D system in package (3D SiP) and 3D wafer level package (3D WLP). 3D SiPs that have been in mainstream manufacturing for some time and have a well-established infrastructure include stacked memory dies interconnected with wire bonds and package on package (PoP) configurations interconnected with wire bonds or flip chip technology. PoP is used for vertically integrating dispa
https://en.wikipedia.org/wiki/Tensor%20network
Tensor networks or tensor network states are a class of variational wave functions used in the study of many-body quantum systems. Tensor networks extend one-dimensional matrix product states to higher dimensions while preserving some of their useful mathematical properties. The wave function is encoded as a tensor contraction of a network of individual tensors. The structure of the individual tensors can impose global symmetries on the wave function (such as antisymmetry under exchange of fermions) or restrict the wave function to specific quantum numbers, like total charge, angular momentum, or spin. It is also possible to derive strict bounds on quantities like entanglement and correlation length using the mathematical structure of the tensor network. This has made tensor networks useful in theoretical studies of quantum information in many-body systems. They have also proved useful in variational studies of ground states, excited states, and dynamics of strongly correlated many-body systems. Diagrammatic notation In general, a tensor network diagram (Penrose diagram) can be viewed as a graph where nodes (or vertices) represent individual tensors, while edges represent summation over an index. Free indices are depicted as edges (or legs) attached to a single vertex only. Sometimes, there is also additional meaning to a node's shape. For instance, one can use trapezoids for unitary matrices or tensors with similar behaviour. This way, flipped trapezoids would be interpreted as complex conjugates to them. Connection to machine learning Tensor networks have been adapted for supervised learning, taking advantage of similar mathematical structure in variational studies in quantum mechanics and large-scale machine learning. This crossover has spurred collaboration between researchers in artificial intelligence and quantum information science. In June 2019, Google, the Perimeter Institute for Theoretical Physics, and X (company), released TensorNetwork, an
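A minimal numerical sketch of a tensor network contraction using NumPy: the overlap ⟨ψ|ψ⟩ of a three-site matrix product state (MPS), where each edge of the network becomes a summed index in an einsum expression. The tensor shapes, bond dimension and random values are illustrative assumptions; dedicated libraries (such as the TensorNetwork package mentioned above) wrap the same kind of pairwise contractions.

```python
# Contract a small tensor network: the norm of a 3-site matrix product state.
import numpy as np

d, chi = 2, 4                      # physical dimension, bond dimension
rng = np.random.default_rng(0)

# MPS tensors: A1[s, a], A2[a, t, b], A3[b, u]
A1 = rng.normal(size=(d, chi))
A2 = rng.normal(size=(chi, d, chi))
A3 = rng.normal(size=(chi, d))

# contract the "ket" network with its conjugate "bra" copy over all physical
# indices (s, t, u) and all bond indices (a, b and c, e)
overlap = np.einsum(
    "sa,atb,bu,sc,cte,eu->",
    A1, A2, A3, A1.conj(), A2.conj(), A3.conj(),
)
print(overlap)                     # a single scalar: <psi|psi>
```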
https://en.wikipedia.org/wiki/SACEM%20%28railway%20system%29
The Système d'aide à la conduite, à l'exploitation et à la maintenance (SACEM) is an embedded, automatic speed train protection system for rapid transit railways. The name means "Driver Assistance, Operation, and Maintenance System". It was developed in France by GEC-Alsthom, Matra (now part of Siemens Mobility) and CSEE (now part of Hitachi Rail STS) in the 1980s. It was first deployed on the RER A suburban railway in Paris in 1989. Afterwards it was installed: on the Santiago Metro in Santiago, Chile; on some of the MTR lines in Hong Kong (Kwun Tong line, Tsuen Wan line, Island line, Tseung Kwan O line, Airport Express and Tung Chung line), all enhanced with ATO; on Lines A, B and 8 of the Mexico City Metro in Mexico City; and on Shanghai Metro Line 3. In 2017 the SACEM system in Paris was enhanced with Automatic Train Operation (ATO) and was put in full operation at the end of 2018. The SACEM system in Paris is to be enhanced to a fully fledged CBTC system named NExTEO. First to be deployed on the newly extended line RER E in 2024, it is proposed to replace signalling and control on all RER lines. Operation The SACEM system enables a train to receive signals from devices under the tracks. A receiver in the train cabin interprets the signal, and sends data to the console so the driver can see it. A light on the console indicates the speed control setting: an orange light means slow speed; a red light means full stop. If the driver alters the speed, a warning buzzer may sound. If the system determines that the speed might be unsafe, and the driver does not change it within a few seconds, SACEM engages the emergency brake. SACEM also allows for a reduction in potential train bunching and easier recovery from delays, thereby safely increasing operating frequencies as much as possible, especially during rush hour.
https://en.wikipedia.org/wiki/Closed%20system%20%28control%20theory%29
The terms closed system and open system have long been defined in the widely (and long before any sort of amplifier was invented) established subject of thermodynamics, in terms that have nothing to do with the concepts of feedback and feedforward. The terms 'feedforward' and 'feedback' arose first in the 1920s in the theory of amplifier design, more recently than the thermodynamic terms. Negative feedback was eventually patented by H.S Black in 1934. In thermodynamics, an open system is one that can take in and give out ponderable matter. In thermodynamics, a closed system is one that cannot take in or give out ponderable matter, but may be able to take in or give out radiation and heat and work or any form of energy. In thermodynamics, a closed system can be further restricted, by being 'isolated': an isolated system cannot take in nor give out either ponderable matter or any form of energy. It does not make sense to try to use these well established terms to try to distinguish the presence or absence of feedback in a control system. The theory of control systems leaves room for systems with both feedforward pathways and feedback elements or pathways. The terms 'feedforward' and 'feedback' refer to elements or paths within a system, not to a system as a whole. THE input to the system comes from outside it, as energy from the signal source by way of some possibly leaky or noisy path. Part of the output of a system can be compounded, with the intermediacy of a feedback path, in some way such as addition or subtraction, with a signal derived from the system input, to form a 'return balance signal' that is input to a PART of the system to form a feedback loop within the system. (It is not correct to say that part of the output of a system can be used as THE input to the system.) There can be feedforward paths within the system in parallel with one or more of the feedback loops of the system so that the system output is a compound of the outputs of the feedback loops
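A minimal numeric sketch of the arrangement described above, using static scalar gains only (no dynamics): part of the output is compounded with the input through a feedback path, and an optional feedforward path runs in parallel with the loop. The gain values and function name are arbitrary illustrations, not taken from the article.

```python
# Scalar feedback loop with an optional parallel feedforward path.

def closed_loop_output(u, G=10.0, H=0.1, F=0.0):
    """Return y for input u with forward gain G, feedback gain H,
    and a parallel feedforward gain F.

    Loop algebra:  e = u - H*y_loop,  y_loop = G*e  =>  y_loop = G/(1 + G*H) * u
    Total output:  y = y_loop + F*u
    """
    y_loop = G / (1.0 + G * H) * u   # negative feedback reduces the raw gain G
    return y_loop + F * u

print(closed_loop_output(1.0))           # 10/(1 + 1) = 5.0
print(closed_loop_output(1.0, F=0.5))    # loop output plus the feedforward path
```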
https://en.wikipedia.org/wiki/Phoenix%20Union%20Bioscience%20High%20School
Phoenix Union Bioscience High School is part of the Phoenix Union High School District, with its campus in downtown Phoenix, Arizona, US. The school specializes in science education. A new building was constructed and the existing one renovated, opening in the fall of 2007. Enrollment Bioscience hosts approximately 180 freshmen through seniors. The first class of 43 students graduated from Bioscience in May 2010. In 2009, 97 percent of its 10th graders passed the AIMS Math exam, the highest percentage among public (non-charter) schools in the Valley, and No. 2 in the state. Their science scores were No. 3 in the state among non-charter schools. In its first year of eligibility, Bioscience earned the maximum "Excelling" Achievement Profile from the State. Campus The US$10 million campus, which opened in October 2007, is located in Phoenix's downtown Biotechnology Center and is open to students throughout the District. The Bioscience High School campus, which was designed by The Orcutt-Winslow Partnership, won the American School Board Journal's Learning By Design 2009 Grand Prize Award. The school received this award for its classrooms, collaborative learning spaces, and smooth circulation. Phoenix Union High School District received a $2.4 million small schools grant from the City of Phoenix to renovate Bioscience's existing historic McKinley building for a Bio-medical program. It includes administrative offices, four classrooms, a library/community room and a student demonstration area. In 2014, Bioscience ranked number 27 on the Best Education Degrees Web site's "Most Amazing High School Campuses In The World" list, which ranks campuses by their modern designs. The school has a solar charging station, and is partially powered by solar panels.
https://en.wikipedia.org/wiki/Physical%20system
A physical system is a collection of physical objects under study. The collection differs from a set: all the objects must coexist and have some physical relationship. In other words, it is a portion of the physical universe chosen for analysis. Everything outside the system is known as the environment, which is ignored except for its effects on the system. The split between system and environment is the analyst's choice, generally made to simplify the analysis. For example, the water in a lake, the water in half of a lake, or an individual molecule of water in the lake can each be considered a physical system. An isolated system is one that has negligible interaction with its environment. Often a system in this sense is chosen to correspond to the more usual meaning of system, such as a particular machine. In the study of quantum coherence, the "system" may refer to the microscopic properties of an object (e.g. the mean of a pendulum bob), while the relevant "environment" may be the internal degrees of freedom, described classically by the pendulum's thermal vibrations. Because no quantum system is completely isolated from its surroundings, it is important to develop a theoretical framework for treating these interactions in order to obtain an accurate understanding of quantum systems. In control theory, a physical system being controlled (a "controlled system") is called a "plant". See also Conceptual systems Phase space Physical phenomenon Physical ontology Signal-flow graph Systems engineering Systems science Thermodynamic system Open quantum system
https://en.wikipedia.org/wiki/Semiconductor%20memory
Semiconductor memory is a digital electronic semiconductor device used for digital data storage, such as computer memory. It typically refers to devices in which data is stored within metal–oxide–semiconductor (MOS) memory cells on a silicon integrated circuit memory chip. There are numerous different types using different semiconductor technologies. The two main types of random-access memory (RAM) are static RAM (SRAM), which uses several transistors per memory cell, and dynamic RAM (DRAM), which uses a transistor and a MOS capacitor per cell. Non-volatile memory (such as EPROM, EEPROM and flash memory) uses floating-gate memory cells, which consist of a single floating-gate transistor per cell. Most types of semiconductor memory have the property of random access, which means that it takes the same amount of time to access any memory location, so data can be efficiently accessed in any random order. This contrasts with data storage media such as hard disks and CDs which read and write data consecutively and therefore the data can only be accessed in the same sequence it was written. Semiconductor memory also has much faster access times than other types of data storage; a byte of data can be written to or read from semiconductor memory within a few nanoseconds, while access time for rotating storage such as hard disks is in the range of milliseconds. For these reasons it is used for primary storage, to hold the program and data the computer is currently working on, among other uses. , semiconductor memory chips sell annually, accounting for % of the semiconductor industry. Shift registers, processor registers, data buffers and other small digital registers that have no memory address decoding mechanism are typically not referred to as memory although they also store digital data. Description In a semiconductor memory chip, each bit of binary data is stored in a tiny circuit called a memory cell consisting of one to several transistors. The memory cells are
https://en.wikipedia.org/wiki/Plug%20compatible
Plug compatible refers to "hardware that is designed to perform exactly like another vendor's product." The term PCM was originally applied to manufacturers who made replacements for IBM peripherals. Later this term was used to refer to IBM-compatible computers. PCM and peripherals Before the rise of the PCM peripheral industry, computing systems were either configured with peripherals designed and built by the CPU vendor, or designed to use vendor-selected rebadged devices. The first examples of plug-compatible IBM subsystems were tape drives and controls offered by Telex beginning in 1965. In 1968, Memorex was the first to enter the IBM plug-compatible disk market, followed shortly thereafter by a number of suppliers such as CDC, Itel, and Storage Technology Corporation. This was boosted by the world's largest user of computing equipment in both directions. Ultimately plug-compatible products were offered for most peripherals and system main memory. PCM and computer systems A plug-compatible machine is one that has been designed to be backward compatible with a prior machine. In particular, a new computer system that is plug-compatible has not only the same connectors and protocol interfaces to peripherals, but also binary-code compatibility—it runs the same software as the old system. A plug compatible manufacturer or PCM is a company that makes such products. One recurring theme in plug-compatible systems is the ability to be bug compatible as well. That is, if the forerunner system had software or interface problems, then the successor must have (or simulate) the same problems. Otherwise, the new system may generate unpredictable results, defeating the full compatibility objective. Thus, it is important for customers to understand the difference between a "bug" and a "feature", where the latter is defined as an intentional modification to the previous system (e.g. higher speed, lighter weight, smaller package, better operator controls, etc.). PCM and IBM mainframes The or
https://en.wikipedia.org/wiki/Network%20Centric%20Product%20Support
Network Centric Product Support (NCPS) is an early application of an Internet of Things (IoT) computer architecture developed to leverage new information technologies and global networks to assist in managing the maintenance, support and supply chain of complex products made up of one or more complex systems, such as a mobile aircraft fleet or fixed-location assets such as building systems. This is accomplished by establishing digital threads connecting the physical deployed subsystem with its digital twin design model, by embedding intelligence through networked micro-web servers that also function as a computer workstation within each subsystem component (e.g., an engine control unit on an aircraft) or other controller, and by enabling 2-way communications using existing Internet technologies and communications networks. This allows the extension of a product lifecycle management (PLM) system into a mobile, deployed product at the subsystem level in real time. NCPS can be considered to be the support flip side of Network-centric warfare, as this approach goes beyond traditional logistics and aftermarket support functions by taking a complex adaptive system management approach and integrating field maintenance and logistics in a unified factory and field environment. Its evolution began out of insights gained by CDR Dave Loda (USNR) from Network Centric Warfare-based fleet battle experimentation at the US Naval Warfare Development Command (NWDC) in the late 1990s, who later led commercial research efforts of NCPS in aviation at United Technologies Corporation. Interaction with the MIT Auto-ID Labs, EPCglobal, the Air Transport Association of America ATA Spec 100/iSpec 2200 and other consortia pioneering the emerging machine to machine Internet of Things (IoT) architecture contributed to the evolution of NCPS. Purpose Simply put, this architecture extends the existing World Wide Web infrastructure of networked web servers down into the product at its sub
https://en.wikipedia.org/wiki/Abstraction%20%28mathematics%29
Abstraction in mathematics is the process of extracting the underlying structures, patterns or properties of a mathematical concept, removing any dependence on real world objects with which it might originally have been connected, and generalizing it so that it has wider applications or matching among other abstract descriptions of equivalent phenomena. Two of the most highly abstract areas of modern mathematics are category theory and model theory. Description Many areas of mathematics began with the study of real world problems, before the underlying rules and concepts were identified and defined as abstract structures. For example, geometry has its origins in the calculation of distances and areas in the real world, and algebra started with methods of solving problems in arithmetic. Abstraction is an ongoing process in mathematics and the historical development of many mathematical topics exhibits a progression from the concrete to the abstract. For example, the first steps in the abstraction of geometry were historically made by the ancient Greeks, with Euclid's Elements being the earliest extant documentation of the axioms of plane geometry—though Proclus tells of an earlier axiomatisation by Hippocrates of Chios. In the 17th century, Descartes introduced Cartesian co-ordinates which allowed the development of analytic geometry. Further steps in abstraction were taken by Lobachevsky, Bolyai, Riemann and Gauss, who generalised the concepts of geometry to develop non-Euclidean geometries. Later in the 19th century, mathematicians generalised geometry even further, developing such areas as geometry in n dimensions, projective geometry, affine geometry and finite geometry. Finally Felix Klein's "Erlangen program" identified the underlying theme of all of these geometries, defining each of them as the study of properties invariant under a given group of symmetries. This level of abstraction revealed connections between geometry and abstract algebra. In mathemati
https://en.wikipedia.org/wiki/Domain-specific%20architecture
A domain-specific architecture (DSA) is a programmable computer architecture specifically tailored to operate very efficiently within the confines of a given application domain. The term is often used in contrast to general-purpose architectures, such as CPUs, that are designed to operate on any computer program. History In conjunction with the semiconductor boom that started in the 1960s, computer architects were tasked with finding new ways to exploit the increasingly large number of transistors available. Moore's Law and Dennard Scaling enabled architects to focus on improving the performance of general-purpose microprocessors on general-purpose programs. These efforts yielded several technological innovations, such as multi-level caches, out-of-order execution, deep instruction pipelines, multithreading, and multiprocessing. The impact of these innovations was measured on generalist benchmarks such as SPEC, and architects were not concerned with the internal structure or specific characteristics of these programs. The end of Dennard Scaling pushed computer architects to switch from a single, very fast processor to several processor cores. Performance improvement could no longer be achieved by simply increasing the operating frequency of a single core. The end of Moore's Law shifted the focus away from general-purpose architectures towards more specialized hardware. Although general-purpose CPU will likely have a place in any computer system, heterogeneous systems composed of general-purpose and domain-specific components are the most recent trend for achieving high performance. While hardware accelerators and ASIC have been used in very specialized application domains since the inception of the semiconductor industry, they generally implement a specific function with very limited flexibility. In contrast, the shift towards domain-specific architectures wants to achieve a better balance of flexibility and specialization. A notable early example of a dom
https://en.wikipedia.org/wiki/Keith%20R.%20Porter%20Lecture
This lecture, named in memory of Keith R. Porter, is presented to an eminent cell biologist each year at the ASCB Annual Meeting. The ASCB Program Committee and the ASCB President recommend the Porter Lecturer to the Porter Endowment each year. Lecturers Source: ASCB See also List of biology awards
https://en.wikipedia.org/wiki/List%20of%20commutative%20algebra%20topics
Commutative algebra is the branch of abstract algebra that studies commutative rings, their ideals, and modules over such rings. Both algebraic geometry and algebraic number theory build on commutative algebra. Prominent examples of commutative rings include polynomial rings, rings of algebraic integers, including the ordinary integers , and p-adic integers. Research fields Combinatorial commutative algebra Invariant theory Active research areas Serre's multiplicity conjectures Homological conjectures Basic notions Commutative ring Module (mathematics) Ring ideal, maximal ideal, prime ideal Ring homomorphism Ring monomorphism Ring epimorphism Ring isomorphism Zero divisor Chinese remainder theorem Classes of rings Field (mathematics) Algebraic number field Polynomial ring Integral domain Boolean algebra (structure) Principal ideal domain Euclidean domain Unique factorization domain Dedekind domain Nilpotent elements and reduced rings Dual numbers Tensor product of fields Tensor product of R-algebras Constructions with commutative rings Quotient ring Field of fractions Product of rings Annihilator (ring theory) Integral closure Localization and completion Completion (ring theory) Formal power series Localization of a ring Local ring Regular local ring Localization of a module Valuation (mathematics) Discrete valuation Discrete valuation ring I-adic topology Weierstrass preparation theorem Finiteness properties Noetherian ring Hilbert's basis theorem Artinian ring Ascending chain condition (ACC) and descending chain condition (DCC) Ideal theory Fractional ideal Ideal class group Radical of an ideal Hilbert's Nullstellensatz Homological properties Flat module Flat map Flat map (ring theory) Projective module Injective module Cohen-Macaulay ring Gorenstein ring Complete intersection ring Koszul complex Hilbert's syzygy theorem Quillen–Suslin theorem Dimension theory Height (ring theory)
https://en.wikipedia.org/wiki/Ethnobiology
Ethnobiology is the scientific study of the way living things are treated or used by different human cultures. It studies the dynamic relationships between people, biota, and environments, from the distant past to the immediate present. "People-biota-environment" interactions around the world are documented and studied through time, across cultures, and across disciplines in a search for valid, reliable answers to two 'defining' questions: "How and in what ways do human societies use nature, and how and in what ways do human societies view nature?" History Beginnings (15th century–19th century) Biologists have been interested in local biological knowledge since the time Europeans started colonising the world, from the 15th century onwards. Paul Sillitoe wrote that: Local biological knowledge, collected and sampled over these early centuries significantly informed the early development of modern biology: during the 17th century Georg Eberhard Rumphius benefited from local biological knowledge in producing his catalogue, "Herbarium Amboinense", covering more than 1,200 species of the plants in Indonesia; during the 18th century, Carl Linnaeus relied upon Rumphius's work, and also corresponded with other people all around the world when developing the biological classification scheme that now underlies the arrangement of much of the accumulated knowledge of the biological sciences. during the 19th century, Charles Darwin, the 'father' of evolutionary theory, on his Voyage of the Beagle took interest in the local biological knowledge of peoples he encountered. Phase I (1900s–1940s) Ethnobiology itself, as a distinctive practice, only emerged during the 20th century as part of the records then being made about other peoples, and other cultures. As a practice, it was nearly always ancillary to other pursuits when documenting others' languages, folklore, and natural resource use. Roy Ellen commented that: This 'first phase' in the development of ethnobiology as a
https://en.wikipedia.org/wiki/Proteostasis
Proteostasis is the dynamic regulation of a balanced, functional proteome. The proteostasis network includes competing and integrated biological pathways within cells that control the biogenesis, folding, trafficking, and degradation of proteins present within and outside the cell. Loss of proteostasis is central to understanding the cause of diseases associated with excessive protein misfolding and degradation leading to loss-of-function phenotypes, as well as aggregation-associated degenerative disorders. Therapeutic restoration of proteostasis may treat or resolve these pathologies. Cellular proteostasis is key to ensuring successful development, healthy aging, resistance to environmental stresses, and to minimize homeostatic perturbations from pathogens such as viruses. Cellular mechanisms for maintaining proteostasis include regulated protein translation, chaperone assisted protein folding, and protein degradation pathways. Adjusting each of these mechanisms based on the need for specific proteins is essential to maintain all cellular functions relying on a correctly folded proteome. Mechanisms of proteostasis The roles of the ribosome in proteostasis One of the first points of regulation for proteostasis is during translation. This regulation is accomplished via the structure of the ribosome, a complex central to translation. Its characteristics shape the way the protein folds, and influence the protein's future interactions. The synthesis of a new peptide chain using the ribosome is very slow; the ribosome can even be stalled when it encounters a rare codon, a codon found at low concentrations in the cell. The slow synthesis rate and any such pauses provide an individual protein domain with the necessary time to become folded before the production of subsequent domains. This facilitates the correct folding of multi-domain proteins. The newly synthesized peptide chain exits the ribosome into the cellular environment through the narrow ribosome exit chan
https://en.wikipedia.org/wiki/Engineering%20sample
Engineering samples are the beta versions of integrated circuits that are meant to be used for compatibility qualification or as demonstrators. They are usually loaned to OEM manufacturers prior to the chip's commercial release to allow product development or display. Engineering samples are usually handed out under a non-disclosure agreement or another type of confidentiality agreement. Some engineering samples, such as Pentium 4 processors, were rare and favoured for having unlocked base-clock multipliers. More recently, Core 2 engineering samples have become more common and popular. Asian sellers were selling the Core 2 processors at a major profit. Some engineering samples have been put through strenuous tests. Engineering sample processors are also offered on a technical loan to some full-time employees at Intel, and are usually desktop extreme edition processors.
https://en.wikipedia.org/wiki/Coreu
COREU (French: Correspondance européenne – telex network of European correspondents, also known as the EU-KOR network in Austria) is a communication network of the European Union for communication between the Council of the European Union, the European correspondents of the foreign ministries of the EU member states, permanent representatives of member states in Brussels, the European Commission, and the General Secretariat of the Council of the European Union. The European Parliament is not among the participants. COREU is the European equivalent of the American Secret Internet Protocol Router Network (SIPRNet, also known as Intelink-S). COREU's official aim is fast communication in case of crisis. The network enables closer cooperation in matters regarding foreign affairs. In actuality the system's function exceeds that of mere communication; it also enables decision-making. COREU's first goal is to enable the exchange of information before and after decisions. Relaying upfront negotiations in preparation for meetings is the second goal. In addition, the system also allows the editing of documents and decision-making, especially if there is little time. While the first two goals are preparatory measures for a shared foreign policy, the third is a methodical variant shaped by practice that defines the image of the Common Foreign and Security Policy. Members (the following information dates from 2013): There is one representative in each of the capital cities in the EU (since 1973). In Germany, for example, this is the European correspondent (EU-KOR) from the Foreign Office. In Austria it is the European correspondent from the Referat II.1.a in the Federal Ministry for Europe, Integration and Foreign Affairs. They are the correspondents (since 1982) for the European Commission. They comprise the secretariat for the European Council. They also make up the European External Action Service (EEAS) (responsible for foreign policy issues, since 1987). Data volume and technical details COREU fu
https://en.wikipedia.org/wiki/Reliable%20multicast
A reliable multicast is any computer networking protocol that provides a reliable sequence of packets to multiple recipients simultaneously, making it suitable for applications such as multi-receiver file transfer. Overview Multicast is a network addressing method for the delivery of information to a group of destinations simultaneously using the most efficient strategy to deliver the messages over each link of the network only once, creating copies only when the links to the multiple destinations split (typically network switches and routers). However, like the User Datagram Protocol, multicast does not guarantee the delivery of a message stream. Messages may be dropped, delivered multiple times, or delivered out of order. A reliable multicast protocol adds the ability for receivers to detect lost and/or out-of-order messages and take corrective action (similar in principle to TCP), resulting in a gap-free, in-order message stream. Reliability The exact meaning of reliability depends on the specific protocol instance. A minimal definition of reliable multicast is eventual delivery of all the data to all the group members, without enforcing any particular delivery order. However, not all reliable multicast protocols ensure this level of reliability; many of them trade efficiency for reliability, in different ways. For example, while TCP makes the sender responsible for transmission reliability, multicast NAK-based protocols shift the responsibility to receivers: the sender never knows for sure that all the receivers have in fact received all the data. RFC- 2887 explores the design space for bulk data transfer, with a brief discussion on the various issues and some hints at the possible different meanings of reliable. Reliable Group Data Delivery Reliable Group Data Delivery (RGDD) is a form of multicasting where an object is to be moved from a single source to a fixed set of receivers known before transmission begins. A variety of applications may need su
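A Python sketch of the receiver side of a NAK-based scheme like the one described above: the receiver watches sequence numbers, delivers packets in order, and sends a negative acknowledgement (NAK) for any gap it detects, shifting responsibility for reliability toward the receiver. The message format and the send_nak() transport hook are illustrative assumptions, not part of any specific protocol.

```python
# Receiver-side gap detection for a NAK-based reliable multicast (sketch).

class NakReceiver:
    def __init__(self, send_nak):
        self.expected = 0          # next sequence number we can deliver
        self.buffer = {}           # out-of-order packets held back
        self.send_nak = send_nak   # callback asking the sender to retransmit

    def on_packet(self, seq, payload):
        if seq < self.expected:
            return []              # duplicate of something already delivered
        self.buffer[seq] = payload
        # a gap means at least one earlier packet was lost: request it
        for missing in range(self.expected, seq):
            if missing not in self.buffer:
                self.send_nak(missing)
        # deliver the longest in-order prefix now available
        delivered = []
        while self.expected in self.buffer:
            delivered.append(self.buffer.pop(self.expected))
            self.expected += 1
        return delivered

# usage
naks = []
rx = NakReceiver(send_nak=naks.append)
rx.on_packet(0, "a")               # delivered immediately
rx.on_packet(2, "c")               # gap at 1 -> NAK sent, "c" buffered
rx.on_packet(1, "b")               # now "b" and "c" are delivered in order
print(naks)                        # [1]
```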
https://en.wikipedia.org/wiki/Brazier%20effect
The Brazier effect was first discovered in 1927 by Brazier. He showed that when an initially straight tube was bent uniformly, the longitudinal tension and compression which resist the applied bending moment also tend to flatten or ovalise the cross-section. As the curvature increases, the flexural stiffness decreases. Brazier showed that under steadily increasing curvature the bending moment reaches a maximum value. After the bending moment reaches its maximum value, the structure becomes unstable, and so the object suddenly forms a "kink". From Brazier’s analysis it follows that the crushing pressure increases with the square of the curvature of the section, and thus with the square of the bending moment. See also Bending
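For reference, the limit moment from Brazier's analysis in the form commonly quoted in the shell-buckling literature, for a long thin-walled circular tube of radius r, wall thickness t, Young's modulus E and Poisson's ratio ν. This expression is supplied here as a pointer rather than taken from the text above; consult Brazier (1927) or a standard shell-structures text for the derivation and exact conventions.

```latex
% Commonly quoted form of Brazier's limit (maximum) bending moment for a long,
% thin-walled circular tube; verify against the original reference before use.
\[
  M_{\max} \;=\; \frac{2\sqrt{2}}{9}\,\frac{\pi E\, r\, t^{2}}{\sqrt{1-\nu^{2}}}
           \;\approx\; \frac{0.987\, E\, r\, t^{2}}{\sqrt{1-\nu^{2}}}
\]
% (about 1.035 E r t^2 for nu = 0.3)
```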
https://en.wikipedia.org/wiki/Social%20software%20engineering
Social software engineering (SSE) is a branch of software engineering that is concerned with the social aspects of software development and the developed software. SSE focuses on the socialness of both software engineering and developed software. On the one hand, the consideration of social factors in software engineering activities, processes and CASE tools is deemed to be useful to improve the quality of both development process and produced software. Examples include the role of situational awareness and multi-cultural factors in collaborative software development. On the other hand, the dynamicity of the social contexts in which software could operate (e.g., in a cloud environment) calls for engineering social adaptability as a runtime iterative activity. Examples include approaches which enable software to gather users' quality feedback and use it to adapt autonomously or semi-autonomously. SSE studies and builds socially-oriented tools to support collaboration and knowledge sharing in software engineering. SSE also investigates the adaptability of software to the dynamic social contexts in which it could operate and the involvement of clients and end-users in shaping software adaptation decisions at runtime. Social context includes norms, culture, roles and responsibilities, stakeholder's goals and interdependencies, end-users perception of the quality and appropriateness of each software behaviour, etc. The participants of the 1st International Workshop on Social Software Engineering and Applications (SoSEA 2008) proposed the following characterization: Community-centered: Software is produced and consumed by and/or for a community rather than focusing on individuals Collaboration/collectiveness: Exploiting the collaborative and collective capacity of human beings Companionship/relationship: Making explicit the various associations among people Human/social activities: Software is designed consciously to support human activities and to address social p
https://en.wikipedia.org/wiki/Stieltjes%20constants
In mathematics, the Stieltjes constants are the numbers $\gamma_n$ that occur in the Laurent series expansion of the Riemann zeta function: $\zeta(s) = \frac{1}{s-1} + \sum_{n=0}^{\infty} \frac{(-1)^n}{n!}\,\gamma_n\,(s-1)^n$. The constant $\gamma_0 = \gamma$ is known as the Euler–Mascheroni constant. Representations The Stieltjes constants are given by the limit $\gamma_n = \lim_{m\to\infty} \left( \sum_{k=1}^{m} \frac{(\ln k)^n}{k} - \frac{(\ln m)^{n+1}}{n+1} \right)$. (In the case n = 0, the first summand requires evaluation of $0^0$, which is taken to be 1.) Cauchy's differentiation formula leads to the integral representation Various representations in terms of integrals and infinite series are given in works of Jensen, Franel, Hermite, Hardy, Ramanujan, Ainsworth, Howell, Coppo, Connon, Coffey, Choi, Blagouchine and some other authors. In particular, Jensen-Franel's integral formula, often erroneously attributed to Ainsworth and Howell, states that where δn,k is the Kronecker symbol (Kronecker delta). Among other formulae, we find further integral representations (see the cited works). As concerns series representations, a famous series implying an integer part of a logarithm was given by Hardy in 1912 Israilov gave semi-convergent series in terms of Bernoulli numbers Connon, Blagouchine and Coppo gave several series with the binomial coefficients where Gn are Gregory's coefficients, also known as reciprocal logarithmic numbers (G1=+1/2, G2=−1/12, G3=+1/24, G4=−19/720,... ). More general series of the same nature include these examples and or where are the Bernoulli polynomials of the second kind and are the polynomials given by the generating equation respectively (note that ). Oloa and Tauraso showed that series with harmonic numbers may lead to Stieltjes constants Blagouchine obtained slowly-convergent series involving unsigned Stirling numbers of the first kind as well as semi-convergent series with rational terms only where m=0,1,2,... In particular, the series for the first Stieltjes constant has a surprisingly simple form where Hn is the nth harmonic number. More complicated series for Stieltjes constants are given in works of Lehmer, Liang, Todd, Lavrik, Israilov, Stankus, Keiper, Nan-
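A quick numerical illustration in Python: the mpmath library computes Stieltjes constants directly via stieltjes(n), and a truncation of the limit formula above can be compared against it. The truncation point m and the working precision are arbitrary illustrative choices; the truncated sums agree only to a few digits because the limit converges slowly.

```python
# Compare mpmath's stieltjes(n) with a truncation of the defining limit
#   gamma_n = lim_{m->inf} [ sum_{k=1}^m (ln k)^n / k - (ln m)^(n+1)/(n+1) ].
from mpmath import mp, stieltjes, log

mp.dps = 30                                   # working precision (decimal digits)

def gamma_n_truncated(n, m=10000):
    s = mp.mpf(0)
    for k in range(1, m + 1):
        s += log(k)**n / k                    # for n = 0, k = 1 this is 0**0 = 1
    return s - log(m)**(n + 1) / (n + 1)

for n in range(3):
    print(n, stieltjes(n), gamma_n_truncated(n))
# gamma_0 is the Euler-Mascheroni constant ~ 0.5772...
```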
https://en.wikipedia.org/wiki/List%20of%20numerical%20libraries
This is a list of numerical libraries, which are libraries used in software development for performing numerical calculations. It is not a complete listing but is instead a list of numerical libraries with articles on Wikipedia, with few exceptions. The choice of a typical library depends on a range of requirements such as: desired features (e.g. large dimensional linear algebra, parallel computation, partial differential equations), licensing, readability of API, portability or platform/compiler dependence (e.g. Linux, Windows, Visual C++, GCC), performance, ease-of-use, continued support from developers, standard compliance, specialized optimization in code for specific application scenarios or even the size of the code-base to be installed. Multi-language C C++ Delphi ALGLIB - an open source numerical analysis library. .NET Framework languages C#, F#, VB.NET and PowerShell Fortran Java Perl Perl Data Language gives standard Perl the ability to compactly store and speedily manipulate the large N-dimensional data arrays. It can perform complex and matrix maths, and has interfaces for the GNU Scientific Library, LINPACK, PROJ, and plotting with PGPLOT. There are libraries on CPAN adding support for the linear algebra library LAPACK, the Fourier transform library FFTW, and plotting with gnuplot, and PLplot. Python Others XNUMBERS – multi-precision floating-Point computing and numerical methods for Microsoft Excel. INTLAB – interval arithmetic library for MATLAB. See also List of computer algebra systems Comparison of numerical-analysis software List of information graphics software List of numerical-analysis software List of optimization software List of statistical software
https://en.wikipedia.org/wiki/TRANZ%20330
The TRANZ 330 is a popular point-of-sale device manufactured by VeriFone in 1985. The most common application for these units is bank and credit card processing; however, as general-purpose computers, they can also perform other novel functions. Other applications include gift/benefit card processing, prepaid phone cards, payroll and employee timekeeping, and even debit and ATM cards. The units are programmed in VeriFone's proprietary Terminal Control Language (TCL), which is unrelated to the Tool Command Language used in UNIX environments. Point of sale companies Embedded systems Payment systems Banking equipment
https://en.wikipedia.org/wiki/Slot%20%28computer%20architecture%29
A slot comprises the operation-issue and data-path machinery surrounding a set of one or more execution units (also called functional units (FUs)) which share these resources. The term slot is common for this purpose in very long instruction word (VLIW) computers, where the relationship between an operation in an instruction and the pipeline that executes it is explicit. In dynamically scheduled machines, the concept is more commonly called an execute pipeline. Modern conventional central processing units (CPUs) have several compute pipelines, for example: two arithmetic logic units (ALU), one floating-point unit (FPU), one Streaming SIMD Extensions (SSE) unit (or a similar unit such as MMX), and one branch unit. Each of them can issue one instruction per basic instruction cycle but can have several instructions in process. These are what correspond to slots. A pipeline may contain several FUs, such as an adder and a multiplier, but only one FU in a pipeline can be issued to in a given cycle. The FU population of a pipeline (slot) is a design option in a CPU.
https://en.wikipedia.org/wiki/Sum%20of%20squares
In mathematics, statistics and elsewhere, sums of squares occur in a number of contexts: Statistics For partitioning of variance, see Partition of sums of squares For the "sum of squared deviations", see Least squares For the "sum of squared differences", see Mean squared error For the "sum of squared error", see Residual sum of squares For the "sum of squares due to lack of fit", see Lack-of-fit sum of squares For sums of squares relating to model predictions, see Explained sum of squares For sums of squares relating to observations, see Total sum of squares For sums of squared deviations, see Squared deviations from the mean For modelling involving sums of squares, see Analysis of variance For modelling involving the multivariate generalisation of sums of squares, see Multivariate analysis of variance Number theory For the sum of squares of consecutive integers, see Square pyramidal number For representing an integer as a sum of squares of 4 integers, see Lagrange's four-square theorem Legendre's three-square theorem states which numbers can be expressed as the sum of three squares Jacobi's four-square theorem gives the number of ways that a number can be represented as the sum of four squares. For the number of representations of a positive integer as a sum of squares of k integers, see Sum of squares function. Fermat's theorem on sums of two squares says which primes are sums of two squares. The sum of two squares theorem generalizes Fermat's theorem to specify which composite numbers are the sums of two squares. Pythagorean triples are sets of three integers such that the sum of the squares of the first two equals the square of the third. A Pythagorean prime is a prime that is the sum of two squares; Fermat's theorem on sums of two squares states which primes are Pythagorean primes. Pythagorean triangles with integer altitude from the hypotenuse have the sum of squares of inverses of the integer legs equal to the square of the inverse of t
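A small, illustrative Python sketch (brute force, not from the article) of two of the number-theoretic facts listed above — representations as sums of two squares and Lagrange's four-square theorem:

from itertools import product

def two_square_reps(n):
    """Return all (a, b) with 0 <= a <= b and a^2 + b^2 == n."""
    reps = []
    a = 0
    while a * a * 2 <= n:
        b2 = n - a * a
        b = int(b2 ** 0.5)
        if b * b == b2:
            reps.append((a, b))
        a += 1
    return reps

def four_square_rep(n):
    """Return one (a, b, c, d) with a^2+b^2+c^2+d^2 == n (Lagrange guarantees one exists)."""
    bound = int(n ** 0.5) + 1
    for a, b, c in product(range(bound), repeat=3):
        d2 = n - a*a - b*b - c*c
        if d2 >= 0:
            d = int(d2 ** 0.5)
            if d * d == d2:
                return (a, b, c, d)
    return None

print(two_square_reps(25))   # [(0, 5), (3, 4)] -- 25 = 0^2+5^2 = 3^2+4^2
print(four_square_rep(7))    # (1, 1, 1, 2) since 7 = 1+1+1+4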
https://en.wikipedia.org/wiki/Proofreading%20%28biology%29
The term proofreading is used in genetics to refer to the error-correcting processes, first proposed by John Hopfield and Jacques Ninio, involved in DNA replication, immune system specificity, enzyme-substrate recognition among many other processes that require enhanced specificity. The proofreading mechanisms of Hopfield and Ninio are non-equilibrium active processes that consume ATP to enhance specificity of various biochemical reactions. In bacteria, all three DNA polymerases (I, II and III) have the ability to proofread, using 3’ → 5’ exonuclease activity. When an incorrect base pair is recognized, DNA polymerase reverses its direction by one base pair of DNA and excises the mismatched base. Following base excision, the polymerase can re-insert the correct base and replication can continue. In eukaryotes, only the polymerases that deal with the elongation (delta and epsilon) have proofreading ability (3’ → 5’ exonuclease activity). Proofreading also occurs in mRNA translation for protein synthesis. In this case, one mechanism is the release of any incorrect aminoacyl-tRNA before peptide bond formation. The extent of proofreading in DNA replication determines the mutation rate, and is different in different species. For example, loss of proofreading due to mutations in the DNA polymerase epsilon gene results in a hyper-mutated genotype with >100 mutations per Mbase of DNA in human colorectal cancers. The extent of proofreading in other molecular processes can depend on the effective population size of the species and the number of genes affected by the same proofreading mechanism. Bacteriophage T4 DNA polymerase Bacteriophage (phage) T4 gene 43 encodes the phage’s DNA polymerase replicative enzyme. Temperature-sensitive (ts) gene 43 mutants have been identified that have an antimutator phenotype, that is a lower rate of spontaneous mutation than wild type. Studies of one of these mutants, tsB120, showed that the DNA polymerase specified by this mutant c
https://en.wikipedia.org/wiki/List%20of%20PPAD-complete%20problems
This is a list of PPAD-complete problems. Fixed-point theorems Sperner's lemma Brouwer fixed-point theorem Kakutani fixed-point theorem Game theory Nash equilibrium Core of Balanced Games Equilibria in game theory and economics Fisher market equilibria Arrow-Debreu equilibria Approximate Competitive Equilibrium from Equal Incomes Finding clearing payments in financial networks Graph theory Fractional stable paths problems Fractional hypergraph matching (see also the NP-complete Hypergraph matching) Fractional strong kernel Miscellaneous Scarf's lemma Fractional bounded budget connection games
https://en.wikipedia.org/wiki/List%20of%20mathematical%20functions
In mathematics, some functions or groups of functions are important enough to deserve their own names. This is a listing of articles which explain some of these functions in more detail. There is a large theory of special functions which developed out of statistics and mathematical physics. A modern, abstract point of view contrasts large function spaces, which are infinite-dimensional and within which most functions are 'anonymous', with special functions picked out by properties such as symmetry, or relationship to harmonic analysis and group representations. See also List of types of functions Elementary functions Elementary functions are functions built from basic operations (e.g. addition, exponentials, logarithms...) Algebraic functions Algebraic functions are functions that can be expressed as the solution of a polynomial equation with integer coefficients. Polynomials: Can be generated solely by addition, multiplication, and raising to the power of a positive integer. Constant function: polynomial of degree zero, graph is a horizontal straight line Linear function: First degree polynomial, graph is a straight line. Quadratic function: Second degree polynomial, graph is a parabola. Cubic function: Third degree polynomial. Quartic function: Fourth degree polynomial. Quintic function: Fifth degree polynomial. Sextic function: Sixth degree polynomial. Rational functions: A ratio of two polynomials. nth root Square root: Yields a number whose square is the given one. Cube root: Yields a number whose cube is the given one. Elementary transcendental functions Transcendental functions are functions that are not algebraic. Exponential function: raises a fixed number to a variable power. Hyperbolic functions: formally similar to the trigonometric functions. Logarithms: the inverses of exponential functions; useful to solve equations involving exponentials. Natural logarithm Common logarithm Binary logarithm Power functions: raise a variable numb
https://en.wikipedia.org/wiki/Photonically%20Optimized%20Embedded%20Microprocessors
Photonically Optimized Embedded Microprocessors (POEM) is a DARPA program intended to demonstrate photonic technologies that can be integrated within embedded microprocessors and enable energy-efficient, high-capacity communication between the microprocessor and DRAM. To realize POEM technology, CMOS- and DRAM-compatible photonic links must operate at high bit rates with very low power dissipation. Current research Research in this field is currently carried out at the University of Colorado, the University of California, Berkeley, and the Nanophotonic Systems Laboratory (Ultra-Efficient CMOS-Compatible Grating Coupler Design).
https://en.wikipedia.org/wiki/Reduction%20%28mathematics%29
In mathematics, reduction refers to the rewriting of an expression into a simpler form. For example, the process of rewriting a fraction into one with the smallest whole-number denominator possible (while keeping the numerator a whole number) is called "reducing a fraction". Rewriting a radical (or "root") expression with the smallest possible whole number under the radical symbol is called "reducing a radical". Minimizing the number of radicals that appear underneath other radicals in an expression is called denesting radicals. Algebra In linear algebra, reduction refers to applying simple rules to a series of equations or matrices to change them into a simpler form. In the case of matrices, the process involves manipulating either the rows or the columns of the matrix and so is usually referred to as row-reduction or column-reduction, respectively. Often the aim of reduction is to transform a matrix into its "row-reduced echelon form" or "row-echelon form"; this is the goal of Gaussian elimination. Calculus In calculus, reduction refers to using the technique of integration by parts to evaluate integrals by reducing them to simpler forms. Static (Guyan) reduction In dynamic analysis, static reduction refers to reducing the number of degrees of freedom. Static reduction can also be used in finite element analysis to refer to simplification of a linear algebraic problem. Since a static reduction requires several inversion steps, it is an expensive matrix operation and is prone to some error in the solution. Consider a system of linear equations Kx = F arising in an FEA problem, where K and F are known and K, x and F are partitioned into submatrices (K11, K12, K21, K22; x1, x2; F1, F2). If F2 contains only zeros and only x1 is desired, K can be reduced as follows: the second block row is solved for x2 (assuming the submatrix K22 is invertible), and substituting the result into the first block row yields a reduced (condensed) system in x1 alone; the equations are sketched below. In a similar fashion, any row or c
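The block equations referred to above were stripped during extraction; a standard reconstruction of the Guyan (static condensation) steps, under the stated assumption that K22 is invertible, is:

% Partitioned FEA system K x = F with F_2 = 0:
\begin{bmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
=
\begin{bmatrix} F_1 \\ 0 \end{bmatrix}

% Second block row, solved for x_2 (assuming K_{22} is invertible):
x_2 = -K_{22}^{-1} K_{21} x_1

% Substituting into the first block row gives the reduced (condensed) system:
\left( K_{11} - K_{12} K_{22}^{-1} K_{21} \right) x_1 = F_1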
https://en.wikipedia.org/wiki/Convia
Convia, Inc., based in Buffalo Grove, Illinois, is an American manufacturer of components which provide an integrated energy management platform that allows for the control and metering of lighting, plug-loads and HVAC. It is notable as one of the first companies to deliver and control power while at the same time monitoring energy and adapting its use in real-time. History In the late 1990s, Herman Miller, Inc., Convia's parent company, realized that they could not create truly flexible environments until the infrastructure of the building became more flexible. They decided that if a building infrastructure embraced technology then the applications, including systems furniture, could also take advantage of that infrastructure and become more intelligent. The need for intelligent infrastructure led Herman Miller to partner with a leading technology think tank called Applied Minds in Glendale, California and their founder Danny Hillis. Danny Hillis is considered a pioneer of the parallel computing industry and is the lead designer of Convia. Convia was launched in 2004. Partners In 2009, Herman Miller, Inc., and Legrand North America, an innovative manufacturer of electrical and network infrastructure solutions, announced a strategic alliance designed to broaden the reach of energy management strategies to fuel the adoption of flexible, sustainable spaces, ultimately reducing real estate and building operating costs while improving worker productivity. Under the terms of the agreement, technology from Herman Miller's Convia, Inc. subsidiary is embedded into Wiremold wire and cable management systems from Legrand. These include modular power and lighting distribution systems, floor boxes, poke-thru devices and architectural columns, which provide flexible, accessible power distribution to building owners and managers. Convia technology integrates a facility's power delivery and other infrastructure and technology applications, including lighting, HVAC, and occupancy
https://en.wikipedia.org/wiki/Flux%20%28biology%29
In general, flux in biology relates to movement of a substance between compartments. There are several cases where the concept of flux is important. The movement of molecules across a membrane: in this case, flux is defined by the rate of diffusion or transport of a substance across a permeable membrane. Except in the case of active transport, net flux is directly proportional to the concentration difference across the membrane, the surface area of the membrane, and the membrane permeability constant. In ecology, flux is often considered at the ecosystem level – for instance, accurate determination of carbon fluxes using techniques like eddy covariance (at a regional and global level) is essential for modeling the causes and consequences of global warming. Metabolic flux refers to the rate of flow of metabolites through a biochemical network, along a linear metabolic pathway, or through a single enzyme. A calculation may also be made of carbon flux or flux of other elemental components of biomolecules (e.g. nitrogen). The general unit of flux is chemical mass /time (e.g., micromole/minute; mg/kg/minute). Flux rates are dependent on a number of factors, including: enzyme concentration; the concentration of precursor, product, and intermediate metabolites; post-translational modification of enzymes; and the presence of metabolic activators or repressors. Metabolic flux in biologic systems can refer to biosynthesis rates of polymers or other macromolecules, such as proteins, lipids, polynucleotides, or complex carbohydrates, as well as the flow of intermediary metabolites through pathways. Metabolic control analysis and flux balance analysis provide frameworks for understanding metabolic fluxes and their constraints. Measuring movement Flux is the net movement of particles across a specified area in a specified period of time. The particles may be ions or molecules, or they may be larger, like insects, muskrats or cars. The units of time can be anything from milli
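A hedged sketch of the proportionality described above, in the standard simplified form for passive diffusion (the symbol names are generic, not taken from the article):

% Net passive flux J (amount per unit time) across a membrane of area A,
% permeability constant P, and concentrations C_out, C_in on either side:
J = P \, A \, (C_{\mathrm{out}} - C_{\mathrm{in}})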
https://en.wikipedia.org/wiki/Electronic%20engineering
Electronic engineering is a sub-discipline of electrical engineering which emerged in the early 20th century and is distinguished by the additional use of active components such as semiconductor devices to amplify and control electric current flow. Previously electrical engineering only used passive devices such as mechanical switches, resistors, inductors, and capacitors. It covers fields such as: analog electronics, digital electronics, consumer electronics, embedded systems and power electronics. It is also involved in many related fields, for example solid-state physics, radio engineering, telecommunications, control systems, signal processing, systems engineering, computer engineering, instrumentation engineering, electric power control, photonics and robotics. The Institute of Electrical and Electronics Engineers (IEEE) is one of the most important professional bodies for electronics engineers in the US; the equivalent body in the UK is the Institution of Engineering and Technology (IET). The International Electrotechnical Commission (IEC) publishes electrical standards including those for electronics engineering. History and development Electronics engineering as a profession emerged following the identification of the electron in 1897 and the subsequent invention of the vacuum tube which could amplify and rectify small electrical signals, that inaugurated the field of electronics. Practical applications started with the invention of the diode by Ambrose Fleming and the triode by Lee De Forest in the early 1900s, which made the detection of small electrical voltages such as radio signals from a radio antenna possible with a non-mechanical device. The growth of electronics was rapid. By the early 1920s, commercial radio broadcasting and communications were becoming widespread and electronic amplifiers were being used in such diverse applications as long-distance telephony and the music recording industry. The discipline was further enhanced by the large a
https://en.wikipedia.org/wiki/Center%20of%20curvature
In geometry, the center of curvature of a curve is found at a point that is at a distance from the curve equal to the radius of curvature lying on the normal vector. It is the point at infinity if the curvature is zero. The osculating circle to the curve is centered at the centre of curvature. Cauchy defined the center of curvature C as the intersection point of two infinitely close normal lines to the curve. The locus of centers of curvature for each point on the curve comprise the evolute of the curve. This term is generally used in physics regarding the study of lenses and mirrors (see radius of curvature (optics)). It can also be defined as the spherical distance between the point at which all the rays falling on a lens or mirror either seems to converge to (in the case of convex lenses and concave mirrors) or diverge from (in the case of concave lenses or convex mirrors) and the lens/mirror itself. See also Curvature Differential geometry of curves
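A hedged LaTeX sketch of the standard formulas consistent with the definition above (plane-curve case only):

% For a plane curve \gamma(t) with curvature \kappa(t) \neq 0 and unit normal N(t),
% the center of curvature lies on the normal at distance R = 1/\kappa (the radius of curvature):
C(t) = \gamma(t) + \frac{1}{\kappa(t)}\, N(t)

% For a graph y = f(x) with f''(x) \neq 0, its coordinates are
x_c = x - \frac{f'(x)\left(1 + f'(x)^2\right)}{f''(x)}, \qquad
y_c = y + \frac{1 + f'(x)^2}{f''(x)}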
https://en.wikipedia.org/wiki/Path%20integral%20formulation
The path integral formulation is a description in quantum mechanics that generalizes the action principle of classical mechanics. It replaces the classical notion of a single, unique classical trajectory for a system with a sum, or functional integral, over an infinity of quantum-mechanically possible trajectories to compute a quantum amplitude. This formulation has proven crucial to the subsequent development of theoretical physics, because manifest Lorentz covariance (time and space components of quantities enter equations in the same way) is easier to achieve than in the operator formalism of canonical quantization. Unlike previous methods, the path integral allows one to easily change coordinates between very different canonical descriptions of the same quantum system. Another advantage is that it is in practice easier to guess the correct form of the Lagrangian of a theory, which naturally enters the path integrals (for interactions of a certain type, these are coordinate space or Feynman path integrals), than the Hamiltonian. Possible downsides of the approach include that unitarity (this is related to conservation of probability; the probabilities of all physically possible outcomes must add up to one) of the S-matrix is obscure in the formulation. The path-integral approach has proven to be equivalent to the other formalisms of quantum mechanics and quantum field theory. Thus, by deriving either approach from the other, problems associated with one or the other approach (as exemplified by Lorentz covariance or unitarity) go away. The path integral also relates quantum and stochastic processes, and this provided the basis for the grand synthesis of the 1970s, which unified quantum field theory with the statistical field theory of a fluctuating field near a second-order phase transition. The Schrödinger equation is a diffusion equation with an imaginary diffusion constant, and the path integral is an analytic continuation of a method for summing up all poss
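A hedged LaTeX sketch of the central formula this description refers to (one common normalization; the path-integral measure is written symbolically):

% Transition amplitude as a functional integral over all paths x(t)
% from x_a at time t_a to x_b at time t_b, weighted by the classical
% action S[x] = \int_{t_a}^{t_b} L(x, \dot{x}, t)\, dt:
K(x_b, t_b;\, x_a, t_a) = \int \mathcal{D}[x(t)] \; e^{\, i S[x] / \hbar}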
https://en.wikipedia.org/wiki/Immunoglobulin%20class%20switching
Immunoglobulin class switching, also known as isotype switching, isotypic commutation or class-switch recombination (CSR), is a biological mechanism that changes a B cell's production of immunoglobulin from one type to another, such as from the isotype IgM to the isotype IgG. During this process, the constant-region portion of the antibody heavy chain is changed, but the variable region of the heavy chain stays the same (the terms variable and constant refer to changes or lack thereof between antibodies that target different epitopes). Since the variable region does not change, class switching does not affect antigen specificity. Instead, the antibody retains affinity for the same antigens, but can interact with different effector molecules. Mechanism Class switching occurs after activation of a mature B cell via its membrane-bound antibody molecule (or B cell receptor) to generate the different classes of antibody, all with the same variable domains as the original antibody generated in the immature B cell during the process of V(D)J recombination, but possessing distinct constant domains in their heavy chains. Naïve mature B cells produce both IgM and IgD, which are the first two heavy chain segments in the immunoglobulin locus. After activation by antigen, these B cells proliferate. If these activated B cells encounter specific signaling molecules via their CD40 and cytokine receptors (both modulated by T helper cells), they undergo antibody class switching to produce IgG, IgA or IgE antibodies. During class switching, the constant region of the immunoglobulin heavy chain changes but the variable regions do not, and therefore antigenic specificity, remains the same. This allows different daughter cells from the same activated B cell to produce antibodies of different isotypes or subtypes (e.g. IgG1, IgG2 etc.). In humans, the order of the heavy chain exons is as follows: μ - IgM δ - IgD γ3 - IgG3 γ1 - IgG1 α1 - IgA1 γ2 - IgG2 γ4 - IgG4 ε - IgE α2
https://en.wikipedia.org/wiki/Mother%20of%20vinegar
Mother of vinegar is a biofilm composed of a form of cellulose, yeast, and bacteria that sometimes develops on fermenting alcoholic liquids during the process that turns alcohol into acetic acid with the help of oxygen from the air and acetic acid bacteria (AAB). It is similar to the symbiotic culture of bacteria and yeast (SCOBY) mostly known from production of kombucha, but develops to a much lesser extent due to lesser availability of yeast, which is often no longer present in wine/cider at this stage, and a different population of bacteria. Mother of vinegar is often added to wine, cider, or other alcoholic liquids to produce vinegar at home, although only the bacteria is required, but historically has also been used in large scale production. Discovery Hermann Boerhaave was one of the first scientists to study vinegar. In the early 1700s, he showed the importance of the mother of vinegar in the acetification process, and how having an increased oxidation surface allowed for better vinegar production. He called the mother a "vegetal substance" or "flower." In 1822, South African botanist, Christian Hendrik Persoon named the mother of vinegar Mycoderma, which he believed was a fungus. He attributed the vinegar production to the Mycoderma, since it formed on the surface of wine when it has been left open to air. In 1861, Louis Pasteur made the conclusion that vinegar is made by a "plant" that belonged to the group Mycoderma, and not made purely by chemical oxidation of ethanol. He named the plant Mycoderma aceti. Mycoderma aceti, is a Neo-Latin expression, from the Greek μύκης ("fungus") plus δέρμα ("skin"), and the Latin aceti ("of the acid"). Martinus Willem Beijerinck, who was a founder of modern microbiology, identified acetic acid bacteria in the mother of vinegar. He named the bacteria Acetobacter aceti in 1898. In 1935, Toshinobu Asai, a Japanese microbiologst, discovered a new genus of bacteria in the mother of vinegar, Gluconobacter. After this disc
https://en.wikipedia.org/wiki/Spectral%20density
The power spectrum of a time series describes the distribution of power into frequency components composing that signal. According to Fourier analysis, any physical signal can be decomposed into a number of discrete frequencies, or a spectrum of frequencies over a continuous range. The statistical average of any sort of signal (including noise) as analyzed in terms of its frequency content, is called its spectrum. When the energy of the signal is concentrated around a finite time interval, especially if its total energy is finite, one may compute the energy spectral density. More commonly used is the power spectral density (or simply power spectrum), which applies to signals existing over all time, or over a time period large enough (especially in relation to the duration of a measurement) that it could as well have been over an infinite time interval. The power spectral density (PSD) then refers to the spectral energy distribution that would be found per unit time, since the total energy of such a signal over all time would generally be infinite. Summation or integration of the spectral components yields the total power (for a physical process) or variance (in a statistical process), identical to what would be obtained by integrating over the time domain, as dictated by Parseval's theorem. The spectrum of a physical process often contains essential information about the nature of . For instance, the pitch and timbre of a musical instrument are immediately determined from a spectral analysis. The color of a light source is determined by the spectrum of the electromagnetic wave's electric field as it fluctuates at an extremely high frequency. Obtaining a spectrum from time series such as these involves the Fourier transform, and generalizations based on Fourier analysis. In many cases the time domain is not specifically employed in practice, such as when a dispersive prism is used to obtain a spectrum of light in a spectrograph, or when a sound is perceived
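A minimal Python/NumPy sketch (illustrative only; the synthetic signal and variable names are assumptions, not from the article) of estimating a one-sided power spectral density from a time series and checking the Parseval-type consistency described above:

import numpy as np

# Synthetic signal: a 50 Hz tone plus white noise, sampled at 1 kHz.
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)

# One-sided periodogram (a simple power-spectral-density estimate).
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
psd = (np.abs(X) ** 2) / (fs * x.size)
psd[1:-1] *= 2  # fold negative frequencies into the one-sided estimate

# Parseval-type check: integrated PSD should match the mean power in the time domain.
print(freqs[np.argmax(psd)])                        # close to 50.0
print(np.sum(psd) * fs / x.size, np.mean(x ** 2))   # approximately equal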
https://en.wikipedia.org/wiki/Unified%20Diagnostic%20Services
Unified Diagnostic Services (UDS) is a diagnostic communication protocol used in electronic control units (ECUs) within automotive electronics, which is specified in the ISO 14229-1. It is derived from ISO 14230-3 (KWP2000) and the now obsolete ISO 15765-3 (Diagnostic Communication over Controller Area Network (DoCAN)). 'Unified' in this context means that it is an international and not a company-specific standard. By now this communication protocol is used in all new ECUs made by Tier 1 suppliers of Original Equipment Manufacturer (OEM), and is incorporated into other standards, such as AUTOSAR. The ECUs in modern vehicles control nearly all functions, including electronic fuel injection (EFI), engine control, the transmission, anti-lock braking system, door locks, braking, window operation, and more. Diagnostic tools are able to contact all ECUs installed in a vehicle, which has UDS services enabled. In contrast to the CAN bus protocol, which only uses the first and second layers of the OSI model, UDS utilizes the fifth and seventh layers of the OSI model. The Service ID (SID) and the parameters associated with the services are contained in the payload of a message frame. Modern vehicles have a diagnostic interface for off-board diagnostics, which makes it possible to connect a computer (client) or diagnostics tool, which is referred to as tester, to the communication system of the vehicle. Thus, UDS requests can be sent to the controllers which must provide a response (this may be positive or negative). This makes it possible to interrogate the fault memory of the individual control units, to update them with new firmware, have low-level interaction with their hardware (e.g. to turn a specific output on or off), or to make use of special functions (referred to as routines) to attempt to understand the environment and operating conditions of an ECU to be able to diagnose faulty or otherwise undesirable behavior. Services SID (Service Identifier) See also On-
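A hedged Python sketch of the byte-level layout described above. The ReadDataByIdentifier service (SID 0x22), the SID + 0x40 positive-response convention, and the 0x7F negative-response marker are standard ISO 14229 conventions; the VIN payload, the chosen negative response code, and the omitted transport framing are illustrative assumptions only:

# Illustrative sketch only: how a UDS request and its responses are laid out
# at the byte level. Transport framing (e.g. ISO-TP over CAN) is omitted.
READ_DATA_BY_IDENTIFIER = 0x22      # a standard ISO 14229 service ID (SID)
DID_VIN = 0xF190                    # data identifier commonly used for the VIN

def build_read_did_request(did: int) -> bytes:
    """SID followed by the 16-bit data identifier, big-endian."""
    return bytes([READ_DATA_BY_IDENTIFIER, (did >> 8) & 0xFF, did & 0xFF])

def parse_response(payload: bytes, request_sid: int):
    """A positive response echoes request SID + 0x40; 0x7F signals a negative response."""
    if payload and payload[0] == request_sid + 0x40:
        return ("positive", payload[1:])
    if len(payload) >= 3 and payload[0] == 0x7F and payload[1] == request_sid:
        return ("negative", payload[2])  # third byte is the negative response code
    return ("unknown", payload)

req = build_read_did_request(DID_VIN)
print(req.hex())                                  # '22f190'
print(parse_response(bytes([0x62, 0xF1, 0x90]) + b"WDB123...", READ_DATA_BY_IDENTIFIER))
print(parse_response(bytes([0x7F, 0x22, 0x31]), READ_DATA_BY_IDENTIFIER))  # hypothetical NRC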
https://en.wikipedia.org/wiki/IPv4%20shared%20address%20space
In order to ensure the proper working of carrier-grade NAT (CGN), and, by doing so, to alleviate the demand for the last remaining IPv4 addresses, a /10-sized IPv4 address block was assigned by the Internet Assigned Numbers Authority (IANA) to be used as shared address space. This block of addresses is specifically meant to be used by Internet service providers (ISPs) that implement carrier-grade NAT, to connect their customer-premises equipment (CPE) to their core routers. Instead of using unique addresses from the rapidly depleting pool of available globally unique IPv4 addresses, ISPs use addresses in 100.64.0.0/10 for this purpose. Because the network between CPEs and the ISP's routers is private to each ISP, all ISPs may share this block of addresses. Background If an ISP deploys a CGN and uses private Internet address space (networks 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) to connect their customers, there is a risk that customer equipment using an internal network in the same range will stop working. The reason is that routing will not work if the same address ranges are used on both the private and public sides of a customer's network address translation (NAT) equipment. Normal packet flow can therefore be disrupted and the customer effectively cut off from the Internet, unless the customer chooses another private address range that does not conflict with the range selected by their ISP. This prompted some ISPs to develop policy within the American Registry for Internet Numbers (ARIN) to allocate new private address space for CGNs. ARIN, however, deferred to the Internet Engineering Task Force (IETF) before implementing the policy, indicating that the matter was not a typical allocation but a reservation for technical purposes. In 2012, the IETF defined a Shared Address Space for use in ISP CGN deployments and NAT devices that can handle the same addresses occurring both on inbound and outbound interfaces. ARIN returned space to the IANA as needed for this allocation and "The allocated address block is 100.64.0.0/10". Transition to
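A small Python sketch (illustrative; uses only the standard-library ipaddress module) showing how an address can be classified against the shared block and the RFC 1918 private ranges discussed above:

import ipaddress

SHARED = ipaddress.ip_network("100.64.0.0/10")       # RFC 6598 shared address space (CGN)
RFC1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def classify(addr: str) -> str:
    ip = ipaddress.ip_address(addr)
    if ip in SHARED:
        return "shared address space (ISP CGN side)"
    if any(ip in net for net in RFC1918):
        return "RFC 1918 private (customer side)"
    return "other"

for a in ("100.64.1.1", "192.168.0.10", "8.8.8.8"):
    print(a, "->", classify(a))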
https://en.wikipedia.org/wiki/HD-PLC
HD-PLC (short for High Definition Power Line Communication) is a wired communication technology. It uses a high-frequency band (2 MHz~28 MHz) over media such as power lines, phone lines, twisted-pair, and coaxial cables, and is an IEEE 1901-based standard. Specification and features There are essentially two different, mutually incompatible types of HD-PLC: HD-PLC Complete and HD-PLC Multi-hop. HD-PLC Complete This is for high-speed applications such as TV, AV, and surveillance cameras. The major technical features include: full IEEE 1901 compliance QoS by priority control support for CSMA/CA and DVTP (Dynamic Virtual Token Passing) concurrent multi-AV streaming, VoIP, and file transfer using IP packet classification multi-network access with prioritized CSMA/CA and network synchronization HD-PLC Multi-hop This is for long-distance applications such as smart meters, building networks, factories, energy management, and IoT devices. The major technical features include: ITU-T G.9905 multihop technology Common features uplink/downlink over 432 subcarriers in the roughly 26 MHz of bandwidth between 1.8 MHz and 28 MHz, using Wavelet OFDM maximum 240 Mbit/s PHY rate multilevel modulation for each subcarrier, which suits the properties of the power-line transmission channel and allows the best transmission speed subcarrier masking of an arbitrary number of subcarriers, which allows compliance with the rules in each country forward error correction (FEC), which enables effective frame transmission a channel-estimation launch system with a change detector for the cycle and the transmission channel HD-PLC network bridging compatible with the Ethernet address system advanced encryption with 128-bit AES 4th-generation HD-PLC (HD-PLC Quatro Core technology) For applications that need higher communication speeds, such as high-definition (4K/8K) video, or where multi-hop technology alone is not enough to reach an isolated and distant PLC terminal, HD-PLC Quatro Core has
https://en.wikipedia.org/wiki/Dell%20Networking%20Operating%20System
DNOS or Dell Networking Operating System is a network operating system running on switches from Dell Networking. It is derived from either the PowerConnect OS (DNOS 6.x) or the Force10 OS/FTOS (DNOS 9.x); DNOS 9 will be made available for the 10G and faster Dell Networking S-series switches and the Z-series 40G core switches, while DNOS 6 is available for the N-series switches. Two version families The DNOS network operating system family comes in a few main versions: DNOS3 DNOS 3.x: This is a family of firmware for the campus access switches that can only be managed using a web-based GUI or run as an unmanaged device. DNOS6 DNOS 6.x: This is the operating system running on the Dell Networking N-series (campus) networking switches. It is the latest version of the 'PowerConnect' operating system, running on a Linux kernel. It is available as an upgrade for the PowerConnect 8100 series switches (which then become Dell Networking N40xx switches) and it is also installed on all DN N1000, N2000 and N3000 series switches. It has a full web-based GUI together with a full CLI (command line interface), and the CLI will be very similar to the original PowerConnect CLI, though with a range of new features like PVSTP (per-VLAN spanning tree), Policy Based Routing and MLAG. DNOS9 DNOS 9.x: This is the continuation of the Force10 OS/FTOS, running on NetBSD. Only the PowerConnect 8100 will be able to run DNOS 6.x: all other PowerConnect ethernet switches will continue to run their own PowerConnect OS (on top of VxWorks), while the PowerConnect W-series run on a Dell-specific version of ArubaOS. The Dell Networking S-xxxx and Z9x00 series will run on DNOS 9.x, while the other Dell Networking switches will continue to run FTOS 8.x firmware. OS10 OS10 is a Linux-based open networking OS that can run on all Open Network Install Environment (ONIE) switches. As it runs directly in a Linux environment, network admins can highly automate the network platform and manage the switches in a similar way to (Linux) servers. Hardware Abstraction Layer Three
https://en.wikipedia.org/wiki/Outline%20of%20linear%20algebra
<noinclude>This is an outline of topics related to linear algebra, the branch of mathematics concerning linear equations and linear maps and their representations in vector spaces and through matrices. Linear equations Linear equation System of linear equations Determinant Minor Cauchy–Binet formula Cramer's rule Gaussian elimination Gauss–Jordan elimination Overcompleteness Strassen algorithm Matrices Matrix Matrix addition Matrix multiplication Basis transformation matrix Characteristic polynomial Trace Eigenvalue, eigenvector and eigenspace Cayley–Hamilton theorem Spread of a matrix Jordan normal form Weyr canonical form Rank Matrix inversion, invertible matrix Pseudoinverse Adjugate Transpose Dot product Symmetric matrix Orthogonal matrix Skew-symmetric matrix Conjugate transpose Unitary matrix Hermitian matrix, Antihermitian matrix Positive-definite, positive-semidefinite matrix Pfaffian Projection Spectral theorem Perron–Frobenius theorem List of matrices Diagonal matrix, main diagonal Diagonalizable matrix Triangular matrix Tridiagonal matrix Block matrix Sparse matrix Hessenberg matrix Hessian matrix Vandermonde matrix Stochastic matrix Toeplitz matrix Circulant matrix Hankel matrix (0,1)-matrix Matrix decompositions Matrix decomposition Cholesky decomposition LU decomposition QR decomposition Polar decomposition Reducing subspace Spectral theorem Singular value decomposition Higher-order singular value decomposition Schur decomposition Schur complement Haynsworth inertia additivity formula Relations Matrix equivalence Matrix congruence Matrix similarity Matrix consimilarity Row equivalence Computations Elementary row operations Householder transformation Least squares, linear least squares Gram–Schmidt process Woodbury matrix identity Vector spaces Vector space Linear combination Linear span Linear independence Scalar multiplication Basis Change of basis Hamel basis Cyclic decomposition theorem Dimension theorem for vector spaces Hamel dimension Examp
https://en.wikipedia.org/wiki/Interconnect%20bottleneck
The interconnect bottleneck comprises limits on integrated circuit (IC) performance due to connections between components instead of their internal speed. In 2006 it was predicted to be a "looming crisis" by 2010. Improved performance of computer systems has been achieved, in large part, by downscaling the IC minimum feature size. This allows the basic IC building block, the transistor, to operate at a higher frequency, performing more computations per second. However, downscaling of the minimum feature size also results in tighter packing of the wires on a microprocessor, which increases parasitic capacitance and signal propagation delay. Consequently, the delay due to the communication between the parts of a chip becomes comparable to the computation delay itself. This phenomenon, known as an “interconnect bottleneck”, is becoming a major problem in high-performance computer systems. This interconnect bottleneck can be solved by utilizing optical interconnects to replace the long metallic interconnects. Such hybrid optical/electronic interconnects promise better performance even with larger designs. Optics has widespread use in long-distance communications; still it has not yet been widely used in chip-to-chip or on-chip interconnections because they (in centimeter or micrometer range) are not yet industry-manufacturable owing to costlier technology and lack of fully mature technologies. As optical interconnections move from computer network applications to chip level interconnections, new requirements for high connection density and alignment reliability have become as critical for the effective utilization of these links. There are still many materials, fabrication, and packaging challenges in integrating optic and electronic technologies. See also Bus (computing) Interconnects (integrated circuits) Network-on-chip Optical network on chip Optical interconnect Photonics Von Neumann architecture
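As a hedged first-order sketch of why wire delay comes to dominate (a textbook distributed-RC estimate, not a figure from this article):

% Elmore-style delay of an on-chip wire of length L with resistance r and
% capacitance c per unit length (distributed RC line); the quadratic growth
% in L is why long metallic interconnects become the bottleneck:
\tau_{\mathrm{wire}} \approx \tfrac{1}{2}\, r\, c\, L^{2}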
https://en.wikipedia.org/wiki/Arc%20fault
An arc fault is a high power discharge of electricity between two or more conductors. This discharge generates heat, which can break down the wire's insulation and trigger an electrical fire. Arc faults can range in current from a few amps up to thousands of amps, and are highly variable in strength and duration. Some common causes of arc fault are loose wire connections, over heated wires, or wires pinched by furniture. Location and detection Two types of wiring protection are standard thermal breakers and arc fault circuit breakers. Thermal breakers require an overload condition long enough that a heating element in the breaker trips the breaker off. In contrast, arc fault circuit breakers use magnetic or other means to detect increases in current draw much more quickly. Without such protection, visually detecting arc faults in defective wiring is very difficult, as the arc fault occurs in a very small area. A problem with arc fault circuit breaker is they are more likely to produce false positives due to normal circuit behaviors appearing to be arc faults. For instance, lightning strikes on the outside of an aircraft mimic arc faults in their voltage and current profiles. Research has been able to largely eliminate such false positives, however, providing the ability to quickly identify and locate repairs that need to be done. In simple wiring systems visual inspection can lead to finding the fault location, but in complex wiring systems, for instance aircraft wiring, devices such as a time-domain reflectometer are helpful, even on live wires. See also Arc flash Arc-fault circuit interrupter Time-domain reflectometer
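As a hedged note on the time-domain reflectometry mentioned above, the fault location follows from the standard round-trip relation (the symbols are generic, not from this article):

% Distance d from the reflectometer to the fault, given the measured
% round-trip time \Delta t of the reflected pulse and the propagation
% velocity v_p of the signal in the wire (a known fraction of the speed of light):
d = \frac{v_p \, \Delta t}{2}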
https://en.wikipedia.org/wiki/Low%20Frequency%20Analyzer%20and%20Recorder
Two closely related terms, Low Frequency Analyzer and Recorder and Low Frequency Analysis and Recording, both bearing the acronym LOFAR, refer respectively to the equipment and the process for presenting a visual spectrum representation of low-frequency sounds in a time–frequency analysis. The process was originally applied to fixed surveillance passive antisubmarine sonar systems and later to sonobuoy and other systems. Originally the analysis was electromechanical and the display was produced on electrostatic recording paper, a Lofargram, with stronger frequencies presented as lines against background noise. The analysis migrated to digital, and both analysis and display were digital after a major system consolidation into centralized processing centers during the 1990s. Both the equipment and the process had specific and classified application to fixed surveillance sonar systems and were the basis for the United States Navy's ocean-wide Sound Surveillance System (SOSUS) established in the early 1950s. The research and development of systems utilizing LOFAR was given the code name Project Jezebel. The installation and maintenance of SOSUS was under the unclassified code name Project Caesar. The principle was later applied to air, surface and submarine tactical sonar systems, with some incorporating the name "Jezebel". Origin In 1949 the US Navy approached the Committee for Undersea Warfare, an academic advisory group formed in 1946 under the National Academy of Sciences, to research antisubmarine warfare. As a result, the Navy formed a study group designated Project Hartwell under Massachusetts Institute of Technology (MIT) leadership. The Hartwell panel recommended annual spending to develop systems to counter the Soviet submarine threat, which consisted primarily of a large fleet of diesel submarines. One recommendation was a system to monitor low-frequency sound in the SOFAR channel using multiple listening sites equipped with hydrophones and a processing facility
https://en.wikipedia.org/wiki/Relay%20network
A relay network is a broad class of network topology commonly used in wireless networks, where the source and destination are interconnected by means of some nodes. In such a network the source and destination cannot communicate to each other directly because the distance between the source and destination is greater than the transmission range of both of them, hence the need for intermediate node(s) to relay. A relay network is a type of network used to send information between two devices, for e.g. server and computer, that are too far away to send the information to each other directly. Thus the network must send or "relay" the information to different devices, referred to as nodes, that pass on the information to its destination. A well-known example of a relay network is the Internet. A user can view a web page from a server halfway around the world by sending and receiving the information through a series of connected nodes. In many ways, a relay network resembles a chain of people standing together. One person has a note he needs to pass to the girl at the end of the line. He is the sender, she is the recipient, and the people in between them are the messengers, or the nodes. He passes the message to the first node, or person, who passes it to the second and so on until it reaches the girl and she reads it. The people might stand in a circle, however, instead of a line. Each person is close enough to reach the person on either side of him and across from him. Together the people represent a network and several messages can now pass around or through the network in different directions at once, as opposed to the straight line that could only run messages in a specific direction. This concept, the way a network is laid out and how it shares data, is known as network topology. Relay networks can use many different topologies, from a line to a ring to a tree shape, to pass along information in the fastest and most efficient way possible. Often the relay net
https://en.wikipedia.org/wiki/%E2%88%82
The character ∂ (Unicode: U+2202) is a stylized cursive d mainly used as a mathematical symbol, usually to denote a partial derivative such as (read as "the partial derivative of z with respect to x"). It is also used for boundary of a set, the boundary operator in a chain complex, and the conjugate of the Dolbeault operator on smooth differential forms over a complex manifold. It should be distinguished from other similar-looking symbols such as lowercase Greek letter delta (δ) or the lowercase Latin letter eth (ð). History The symbol was originally introduced in 1770 by Nicolas de Condorcet, who used it for a partial differential, and adopted for the partial derivative by Adrien-Marie Legendre in 1786. It represents a specialized cursive type of the letter d, just as the integral sign originates as a specialized type of a long s (first used in print by Leibniz in 1686). Use of the symbol was discontinued by Legendre, but it was taken up again by Carl Gustav Jacob Jacobi in 1841, whose usage became widely adopted. Names and coding The symbol is variously referred to as "partial", "curly d", "funky d", "rounded d", "curved d", "dabba", "number 6 mirrored", or "Jacobi's delta", or as "del" (but this name is also used for the "nabla" symbol ∇). It may also be pronounced simply "dee", "partial dee", "doh", or "die". The Unicode character is accessed by HTML entities &#8706; or &part;, and the equivalent LaTeX symbol (Computer Modern glyph: ) is accessed by \partial. Uses ∂ is also used to denote the following: The Jacobian . The boundary of a set in topology. The boundary operator on a chain complex in homological algebra. The boundary operator of a differential graded algebra. The conjugate of the Dolbeault operator on complex differential forms. The boundary ∂(S) of a set of vertices S in a graph is the set of edges leaving S, which defines a cut. See also d'Alembert operator Differentiable programming List of mathematical symbols Notation for diff
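As a small illustration of the commands named above (assuming a standard LaTeX math context), the symbol is typically typeset as follows:

% Typical uses of \partial, matching the notations described above:
\frac{\partial z}{\partial x}   % partial derivative of z with respect to x
\partial M                      % boundary of a set or manifold M
\bar{\partial}                  % conjugate of the Dolbeault operator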
https://en.wikipedia.org/wiki/Table%20of%20prime%20factors
The tables contain the prime factorization of the natural numbers from 1 to 1000. When n is a prime number, the prime factorization is just n itself, written in bold below. The number 1 is called a unit. It has no prime factors and is neither prime nor composite. Properties Many properties of a natural number n can be seen or directly computed from the prime factorization of n. The multiplicity of a prime factor p of n is the largest exponent m for which pm divides n. The tables show the multiplicity for each prime factor. If no exponent is written then the multiplicity is 1 (since p = p1). The multiplicity of a prime which does not divide n may be called 0 or may be considered undefined. Ω(n), the big Omega function, is the number of prime factors of n counted with multiplicity (so it is the sum of all prime factor multiplicities). A prime number has Ω(n) = 1. The first: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37 . There are many special types of prime numbers. A composite number has Ω(n) > 1. The first: 4, 6, 8, 9, 10, 12, 14, 15, 16, 18, 20, 21 . All numbers above 1 are either prime or composite. 1 is neither. A semiprime has Ω(n) = 2 (so it is composite). The first: 4, 6, 9, 10, 14, 15, 21, 22, 25, 26, 33, 34 . A k-almost prime (for a natural number k) has Ω(n) = k (so it is composite if k > 1). An even number has the prime factor 2. The first: 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24 . An odd number does not have the prime factor 2. The first: 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23 . All integers are either even or odd. A square has even multiplicity for all prime factors (it is of the form a2 for some a). The first: 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144 . A cube has all multiplicities divisible by 3 (it is of the form a3 for some a). The first: 1, 8, 27, 64, 125, 216, 343, 512, 729, 1000, 1331, 1728 . A perfect power has a common divisor m > 1 for all multiplicities (it is of the form am for some a > 1 and m > 1). The first: 4, 8, 9, 16, 25,
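A minimal Python sketch (illustrative, not part of the article) of the quantities defined above — the prime factorization with multiplicities and the big Omega function Ω(n):

def prime_factorization(n: int) -> dict:
    """Return {prime: multiplicity} for n >= 2 by trial division."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:                      # whatever remains is prime
        factors[n] = factors.get(n, 0) + 1
    return factors

def big_omega(n: int) -> int:
    """Omega(n): number of prime factors counted with multiplicity."""
    return sum(prime_factorization(n).values())

print(prime_factorization(720))    # {2: 4, 3: 2, 5: 1}  (720 = 2^4 * 3^2 * 5)
print(big_omega(6), big_omega(9))  # 2 2  -> both are semiprimes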
https://en.wikipedia.org/wiki/Assembly%20language
In computer programming, assembly language (alternatively assembler language or symbolic machine code), often referred to simply as assembly and commonly abbreviated as ASM or asm, is any low-level programming language with a very strong correspondence between the instructions in the language and the architecture's machine code instructions. Assembly language usually has one statement per machine instruction (1:1), but constants, comments, assembler directives, symbolic labels of, e.g., memory locations, registers, and macros are generally also supported. The first assembly code in which a language is used to represent machine code instructions is found in Kathleen and Andrew Donald Booth's 1947 work, Coding for A.R.C.. Assembly code is converted into executable machine code by a utility program referred to as an assembler. The term "assembler" is generally attributed to Wilkes, Wheeler and Gill in their 1951 book The Preparation of Programs for an Electronic Digital Computer, who, however, used the term to mean "a program that assembles another program consisting of several sections into a single program". The conversion process is referred to as assembly, as in assembling the source code. The computational step when an assembler is processing a program is called assembly time. Because assembly depends on the machine code instructions, each assembly language is specific to a particular computer architecture. Sometimes there is more than one assembler for the same architecture, and sometimes an assembler is specific to an operating system or to particular operating systems. Most assembly languages do not provide specific syntax for operating system calls, and most assembly languages can be used universally with any operating system, as the language provides access to all the real capabilities of the processor, upon which all system call mechanisms ultimately rest. In contrast to assembly languages, most high-level programming languages are generally portable ac
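A deliberately toy Python sketch of the one-statement-per-instruction translation an assembler performs; the opcode table and two-byte encoding are invented for illustration and do not correspond to any real architecture:

# Toy illustration of the assembly process described above: each source
# statement maps 1:1 to one machine instruction.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03, "HALT": 0xFF}

def assemble(source: str) -> bytes:
    machine_code = bytearray()
    for line in source.strip().splitlines():
        line = line.split(";")[0].strip()          # strip comments and whitespace
        if not line:
            continue                               # skip blank/comment-only lines
        mnemonic, *operands = line.split()
        operand = int(operands[0]) if operands else 0
        machine_code += bytes([OPCODES[mnemonic], operand])   # one statement -> one instruction
    return bytes(machine_code)

program = """
    LOAD 10     ; load the value at address 10
    ADD 11      ; add the value at address 11
    STORE 12    ; store the result at address 12
    HALT
"""
print(assemble(program).hex())   # '010a020b030cff00'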
https://en.wikipedia.org/wiki/LGM-35%20Sentinel
The LGM-35 Sentinel, also known as the Ground Based Strategic Deterrent (GBSD), is a future American land-based intercontinental ballistic missile system (ICBM) currently in the early stages of development. It is slated to replace Minuteman III missiles, currently stationed in North Dakota, Wyoming, Montana, and Nebraska from 2029 through 2075. In 2020 the Department of the Air Force awarded defense contractor Northrop Grumman a $13.3 billion sole-source contract for development of the LGM-35 after Boeing withdrew its proposal. Northrop Grumman's subcontractors on the LGM-35 include Lockheed Martin, General Dynamics, Bechtel, Honeywell, Aerojet Rocketdyne, Parsons, Textron, and others. Name According to the United States Air Force website, the L in LGM is the Department of Defense designation for silo-launched; G means surface attack; and "M" stands for guided missile. History In 2010, the ICBM Coalition, legislators from states that house nuclear missiles, told President Obama they would not support ratification of the New START treaty with Russia unless Obama agreed to revamp the US nuclear triad: nuclear weapons that could be launched from land, sea, and air. In a written statement, President Obama agreed to "modernize or replace" all three legs of the triad. A request for proposal for development and maintenance of a next-generation nuclear ICBM was made by the US Air Force Nuclear Weapons Center in July 2016. The GBSD would replace the Minuteman III, which was first deployed in 1970, in the land-based portion of the US nuclear triad. The new missiles, to be phased in over a decade from the late 2020s, are estimated over a fifty-year life cycle to cost around $264 billion. Boeing and Northrop Grumman competed for the contract. In August 2017, the Air Force awarded three-year development contracts to Boeing and Northrop Grumman for $349 million and $329 million, respectively. One of these companies was to be selected to produce a ground-based nuclear ICBM i
https://en.wikipedia.org/wiki/Mesowear
Mesowear is a method, used in different branches and fields of biology. This method can apply to both extant and extinct animals, according to the scope of the study. Mesowear is based on studying an animal's tooth wearing fingerprint. In brief, each animal has special feeding habits, which cause unique tooth wearing. Rough feeds cause serious tooth abrasion, while smooth one triggers moderate abrasion, so browsers have teeth with moderate abrasion and grazers have teeth with rough abrasion. Scoring systems can quantify tooth abrasion observations and ease comparisons between individuals. Mesowear definition The mesowear method or tooth wear scoring method is a quick and inexpensive process of determining the lifelong diet of a taxon (grazer or browser) and was first introduced in the year 2000. The mesowear technique can be extended to extinct and also extant animals. Mesowear analyses require large sample populations (>20), which can be problematic for some localities, but the method yields an accurate depiction of an animal's average lifelong diet. Mesowear analysis is based on the physical properties of ungulate foods as reflected in the relative amounts of attritive and abrasive wear that they cause on the dental enamel of the occlusal surfaces. Mesowear was recorded by examining the buccal apices of molar tooth cusps. Apices were characterized as sharp, rounded, or blunt, and the valleys between them either high or low. The method has been developed only for selenodont and trilophodont molars, but the principle is readily extendable to other crown types. In collecting the data the teeth are inspected at close range, a hand lens will be used. Mesowear analysis is insensitive to wear stage as long as the very early and very late stages are excluded. Mesowear analysis follows standard protocols. Specimens are digitally photographed in labial view so that cusp shape and occlusal relief can be scored. this method helps zoologists and nutritionists to prepare pr
https://en.wikipedia.org/wiki/List%20of%20general%20topology%20topics
This is a list of general topology topics. Basic concepts Topological space Topological property Open set, closed set Clopen set Closure (topology) Boundary (topology) Dense (topology) G-delta set, F-sigma set closeness (mathematics) neighbourhood (mathematics) Continuity (topology) Homeomorphism Local homeomorphism Open and closed maps Germ (mathematics) Base (topology), subbase Open cover Covering space Atlas (topology) Limits Limit point Net (topology) Filter (topology) Ultrafilter Topological properties Baire category theorem Nowhere dense Baire space Banach–Mazur game Meagre set Comeagre set Compactness and countability Compact space Relatively compact subspace Heine–Borel theorem Tychonoff's theorem Finite intersection property Compactification Measure of non-compactness Paracompact space Locally compact space Compactly generated space Axiom of countability Sequential space First-countable space Second-countable space Separable space Lindelöf space Sigma-compact space Connectedness Connected space Separation axioms T0 space T1 space Hausdorff space Completely Hausdorff space Regular space Tychonoff space Normal space Urysohn's lemma Tietze extension theorem Paracompact Separated sets Topological constructions Direct sum and the dual construction product Subspace and the dual construction quotient Topological tensor product Examples Discrete space Locally constant function Trivial topology Cofinite topology Finer topology Product topology Restricted product Quotient space Unit interval Continuum (topology) Extended real number line Long line (topology) Sierpinski space Cantor set, Cantor space, Cantor cube Space-filling curve Topologist's sine curve Uniform norm Weak topology Strong topology Hilbert cube Lower limit topology Sorgenfrey plane Real tree Compact-open topology Zariski topology Kuratowski closure axioms Unicoherent Solenoid (mathematics) Uniform spaces Uniform continuity Lipschitz continuity Uniform isomorphism Uniform property Uni
https://en.wikipedia.org/wiki/7400-series%20integrated%20circuits
The 7400 series is a popular logic family of transistor–transistor logic (TTL) integrated circuits (ICs). In 1964, Texas Instruments introduced the SN5400 series of logic chips in a ceramic semiconductor package. A low-cost plastic-package SN7400 series was introduced in 1966; it quickly gained over 50% of the logic chip market, and its parts eventually became de facto standard electronic components. Over the decades, many generations of pin-compatible descendant families evolved to include support for low-power CMOS technology, lower supply voltages, and surface-mount packages. Overview The 7400 series contains hundreds of devices that provide everything from basic logic gates, flip-flops, and counters, to special-purpose bus transceivers and arithmetic logic units (ALU). Specific functions are described in a list of 7400 series integrated circuits. Some TTL logic parts were made with an extended military-specification temperature range. These parts are prefixed with 54 instead of 74 in the part number. The less-common 64 and 84 prefixes on Texas Instruments parts indicated an industrial temperature range. Since the 1970s, new product families have been released to replace the original 7400 series, with more recent families manufactured using CMOS or BiCMOS technology rather than TTL. Today, surface-mounted CMOS versions of the 7400 series are used in various applications in electronics and for glue logic in computers and industrial electronics. The original through-hole devices in dual in-line packages (DIP/DIL) were the mainstay of the industry for many decades. They are useful for rapid breadboard prototyping and for education and remain available from most manufacturers. The fastest types and very low voltage versions are typically surface-mount only, however. The first part number in the series, the 7400, is a 14-pin IC containing four two-input NAND gates. Each gate uses two input pins and one output pin, with the remaining two pins being po
https://en.wikipedia.org/wiki/Euler%27s%20constant
Euler's constant (sometimes called the Euler–Mascheroni constant) is a mathematical constant, usually denoted by the lowercase Greek letter gamma (γ), defined as the limiting difference between the harmonic series and the natural logarithm, denoted here by log: γ = lim_(n→∞) (−log n + Σ_(k=1..n) 1/k) = ∫_1^∞ (1/⌊x⌋ − 1/x) dx. Here, ⌊x⌋ represents the floor function. The numerical value of Euler's constant, to 50 decimal places, is: History The constant first appeared in a 1734 paper by the Swiss mathematician Leonhard Euler, titled De Progressionibus harmonicis observationes (Eneström Index 43). Euler used the notations C and O for the constant. In 1790, the Italian mathematician Lorenzo Mascheroni used the notations A and a for the constant. The notation γ appears nowhere in the writings of either Euler or Mascheroni, and was chosen at a later time perhaps because of the constant's connection to the gamma function. For example, the German mathematician Carl Anton Bretschneider used the notation γ in 1835 and Augustus De Morgan used it in a textbook published in parts from 1836 to 1842. Appearances Euler's constant appears, among other places, in the following (where '*' means that this entry contains an explicit equation): Expressions involving the exponential integral* The Laplace transform* of the natural logarithm The first term of the Laurent series expansion for the Riemann zeta function*, where it is the first of the Stieltjes constants* Calculations of the digamma function A product formula for the gamma function The asymptotic expansion of the gamma function for small arguments. An inequality for Euler's totient function The growth rate of the divisor function In dimensional regularization of Feynman diagrams in quantum field theory The calculation of the Meissel–Mertens constant The third of Mertens' theorems* Solution of the second kind to Bessel's equation In the regularization/renormalization of the harmonic series as a finite value The mean of the Gumbel distribution The information entropy of the Weibull and
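A minimal sketch of computing γ directly from the defining limit. The raw estimate H_n − ln n converges slowly (its error is roughly 1/(2n)); subtracting that term is a standard asymptotic correction added here only for illustration, not part of the definition.

```python
import math

def euler_gamma_estimate(n: int) -> float:
    """Approximate Euler's constant as H_n - ln(n), from the defining limit."""
    harmonic = sum(1.0 / k for k in range(1, n + 1))
    return harmonic - math.log(n)

n = 10**6
raw = euler_gamma_estimate(n)
# The error of the raw estimate is about 1/(2n); subtracting it sharpens the result.
corrected = raw - 1.0 / (2 * n)
print(raw)        # ~0.5772161..., still ~5e-7 above gamma
print(corrected)  # ~0.5772156649..., very close to Euler's constant
```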
https://en.wikipedia.org/wiki/List%20of%20combinatorial%20computational%20geometry%20topics
List of combinatorial computational geometry topics enumerates the topics of computational geometry that states problems in terms of geometric objects as discrete entities and hence the methods of their solution are mostly theories and algorithms of combinatorial character. See List of numerical computational geometry topics for another flavor of computational geometry that deals with geometric objects as continuous entities and applies methods and algorithms of nature characteristic to numerical analysis. Construction/representation Boolean operations on polygons Convex hull Hyperplane arrangement Polygon decomposition Polygon triangulation Minimal convex decomposition Minimal convex cover problem (NP-hard) Minimal rectangular decomposition Tessellation problems Shape dissection problems Straight skeleton Stabbing line problem Triangulation Delaunay triangulation Point-set triangulation Polygon triangulation Voronoi diagram Extremal shapes Minimum bounding box (Smallest enclosing box, Smallest bounding box) 2-D case: Smallest bounding rectangle (Smallest enclosing rectangle) There are two common variants of this problem. In many areas of computer graphics, the bounding box (often abbreviated to bbox) is understood to be the smallest box delimited by sides parallel to coordinate axes which encloses the objects in question. In other applications, such as packaging, the problem is to find the smallest box the object (or objects) may fit in ("packaged"). Here the box may assume an arbitrary orientation with respect to the "packaged" objects. Smallest bounding sphere (Smallest enclosing sphere) 2-D case: Smallest bounding circle Largest empty rectangle (Maximum empty rectangle) Largest empty sphere 2-D case: Maximum empty circle (largest empty circle) Interaction/search Collision detection Line segment intersection Point location Point in polygon Polygon intersection Range searching Orthogonal range searching Simplex range searchi
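Of the two bounding-box variants described above, the axis-aligned one admits a simple one-pass computation. This is a minimal sketch for 2-D points; the harder arbitrary-orientation variant (e.g. via rotating calipers) is not shown.

```python
def axis_aligned_bounding_box(points):
    """Smallest axis-aligned rectangle enclosing a set of 2-D points.

    Returns ((xmin, ymin), (xmax, ymax)). The arbitrarily oriented minimum
    bounding rectangle is a different, harder problem and is not handled here.
    """
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys)), (max(xs), max(ys))

pts = [(1.0, 2.0), (3.5, -1.0), (2.0, 4.0), (-0.5, 0.0)]
print(axis_aligned_bounding_box(pts))  # ((-0.5, -1.0), (3.5, 4.0))
```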
https://en.wikipedia.org/wiki/Instantaneous%20phase%20and%20frequency
Instantaneous phase and frequency are important concepts in signal processing that occur in the context of the representation and analysis of time-varying functions. The instantaneous phase (also known as local phase or simply phase) of a complex-valued function s(t), is the real-valued function: where arg is the complex argument function. The instantaneous frequency is the temporal rate of change of the instantaneous phase. And for a real-valued function s(t), it is determined from the function's analytic representation, sa(t): where represents the Hilbert transform of s(t). When φ(t) is constrained to its principal value, either the interval (−π, π] or [0, 2π), it is called wrapped phase. Otherwise it is called unwrapped phase, which is a continuous function of argument t, assuming sa(t) is a continuous function of t. Unless otherwise indicated, the continuous form should be inferred. Examples Example 1 where ω > 0. In this simple sinusoidal example, the constant θ is also commonly referred to as phase or phase offset. φ(t) is a function of time; θ is not. In the next example, we also see that the phase offset of a real-valued sinusoid is ambiguous unless a reference (sin or cos) is specified. φ(t) is unambiguously defined. Example 2 where ω > 0. In both examples the local maxima of s(t) correspond to φ(t) = 2πN for integer values of N. This has applications in the field of computer vision. Formulations Instantaneous angular frequency is defined as: and instantaneous (ordinary) frequency is defined as: where φ(t) must be the unwrapped phase; otherwise, if φ(t) is wrapped, discontinuities in φ(t) will result in Dirac delta impulses in f(t). The inverse operation, which always unwraps phase, is: This instantaneous frequency, ω(t), can be derived directly from the real and imaginary parts of sa(t), instead of the complex arg without concern of phase unwrapping. m1 and m2 are the integer multiples of 2π necessary to add to unwrap the phase. At values of time, t, whe
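A minimal sketch of the procedure for a real signal: form the analytic representation, take its argument, unwrap it, and differentiate to get instantaneous frequency. It assumes NumPy/SciPy (note that scipy.signal.hilbert returns the analytic signal, not just the Hilbert transform), and the sample rate and test signal are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                               # assumed sample rate (Hz)
t = np.arange(0, 1, 1 / fs)
s = np.cos(2 * np.pi * 50 * t + 0.3)      # real test signal: 50 Hz cosine

sa = hilbert(s)                           # analytic representation s_a(t) = s + j*H{s}
phase = np.unwrap(np.angle(sa))           # unwrapped instantaneous phase (radians)
freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous (ordinary) frequency (Hz)

print(freq[100:105])                      # ~50 Hz, away from the edges
```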
https://en.wikipedia.org/wiki/List%20of%20CERN%20Scientific%20Committees
Proposals for experiments are made at CERN and have to go through the correct channels in order to be approved. One of the last steps in the process is to submit the proposal to an appropriate CERN Scientific Committee. The committees will discuss the proposal and then pass on their recommendations to the Research Board (previously the Nuclear Physics Research Committee) for the final decision. Proposals approved become part of the CERN experimental programme. In 1960, John Adams, the Director General, created three committees to manage experiments for each bubble chamber experimental technique used at CERN. These replaced the previous Advisory and Bubble Chamber committees. At the end of the bubble chamber period, the system was again changed and based on machine, rather than experimental technique. The committees were changed and merged in order to accommodate to this. Since then, the committees have changed based on the creation and decommissioning of facilities and accelerators. Current committees Past committees
https://en.wikipedia.org/wiki/Y-factor
The Y-factor method is a widely used technique for measuring the gain and noise temperature of an amplifier. It is based on the Johnson–Nyquist noise of a resistor at two different, known temperatures. Consider a microwave amplifier with a 50-ohm impedance with a 50-ohm resistor connected to the amplifier input. If the resistor is at a physical temperature TR, then the Johnson–Nyquist noise power coupled to the amplifier input is PJ = kBTRB, where kB is Boltzmann’s constant, and B is the bandwidth. The noise power at the output of the amplifier (i.e. the noise power coupled to an impedance-matched load that is connected to the amplifier output) is Pout = GkB(TR + Tamp)B, where G is the amplifier power gain, and Tamp is the amplifier noise temperature. In the Y-factor technique, Pout is measured for two different, known values of TR. Pout is then converted to an effective temperature Tout (in units of kelvin) by dividing by kB and the measurement bandwidth B. The two values of Tout are then plotted as a function of TR (also in units of kelvin), and a line is fit to these points (see figure). The slope of this line is equal to the amplifier power gain. The x intercept of the line is equal to the negative of the amplifier noise temperature −Tamp in kelvins. The amplifier noise temperature can also be determined from the y intercept, which is equal to Tamp multiplied by the gain.
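A minimal numerical sketch of the reduction described above: the two output powers are converted to effective temperatures, a line is fit through the two (TR, Tout) points, the slope gives the gain, and the x-intercept gives −Tamp. The bandwidth, source temperatures, and measured powers are illustrative values chosen to be consistent with a gain of 100 and Tamp ≈ 30 K.

```python
import math

k_B = 1.380649e-23            # Boltzmann's constant (J/K)
B = 1e6                       # assumed measurement bandwidth (Hz)
T_R = [77.0, 295.0]           # cold and hot resistor temperatures (K)
P_out = [1.477e-13, 4.487e-13]  # illustrative measured output noise powers (W)

# Convert each output power to an effective output temperature T_out = P / (k_B * B).
T_out = [p / (k_B * B) for p in P_out]

# Line through the two points: T_out = G * T_R + G * T_amp.
G = (T_out[1] - T_out[0]) / (T_R[1] - T_R[0])   # slope = power gain
T_amp = T_out[0] / G - T_R[0]                   # minus the x-intercept = noise temperature

print(f"gain = {G:.1f} ({10 * math.log10(G):.1f} dB), T_amp = {T_amp:.1f} K")
```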
https://en.wikipedia.org/wiki/Spread-spectrum%20time-domain%20reflectometry
Spread-spectrum time-domain reflectometry (SSTDR) is a measurement technique to identify faults, usually in electrical wires, by observing reflected spread-spectrum signals. This type of time-domain reflectometry can be used in various high-noise and live environments. Additionally, SSTDR systems are able to precisely locate the position of the fault. Specifically, SSTDR is accurate to within a few centimeters for wires carrying 400 Hz aircraft signals as well as MIL-STD-1553 data bus signals. An SSTDR system can be run on a live wire because the spread-spectrum signals can be isolated from the system noise and activity. At the most basic level, the system works by sending spread-spectrum signals down a wire and waiting for those signals to be reflected back to the SSTDR system. The reflected signal is then correlated with a copy of the sent signal. Mathematical algorithms are applied to both the shape and timing of the signals to locate either the short or the end of an open circuit. Detecting intermittent faults in live wires Spread-spectrum time-domain reflectometry is used in detecting intermittent faults in live wires. From buildings and homes to aircraft and naval ships, this technology can discover irregular shorts on live wires running 400 Hz, 115 V. For accurate location of a wiring system's fault, the SSTDR correlates the PN code with the signal on the line, then stores the exact location of the correlation before the arc dissipates. Present SSTDR can collect a complete data set in under 5 ms. SSTDR technology allows for analysis of a network of wires. One SSTDR sensor can measure up to 4 junctions in a branched wire system. See also Spread spectrum Time-domain reflectometry
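A minimal sketch of the correlation step described above: a PN sequence is launched, a delayed and attenuated copy stands in for the reflection, and the lag of the cross-correlation peak gives the round-trip delay. The PN code, noise level, delay, propagation velocity, and sample rate are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
pn = rng.choice([-1.0, 1.0], size=127)        # assumed PN (spread-spectrum) code

delay = 40                                     # assumed round-trip delay, in samples
received = np.zeros(512)
received[delay:delay + pn.size] += 0.5 * pn    # reflection: attenuated, delayed copy
received += 0.2 * rng.standard_normal(received.size)  # live-wire noise/activity

# Correlate the received signal with a copy of the transmitted code.
corr = np.correlate(received, pn, mode="valid")
est_delay = int(np.argmax(np.abs(corr)))       # lag of the correlation peak

v = 2.0e8                                      # assumed propagation velocity (m/s)
fs = 1.0e8                                     # assumed sample rate (Hz)
distance = v * est_delay / (2 * fs)            # one-way distance to the fault (m)
print(est_delay, distance)                     # 40 samples -> 40 m (illustrative)
```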
https://en.wikipedia.org/wiki/Mandelstam%20variables
In theoretical physics, the Mandelstam variables are numerical quantities that encode the energy, momentum, and angles of particles in a scattering process in a Lorentz-invariant fashion. They are used for scattering processes of two particles to two particles. The Mandelstam variables were first introduced by physicist Stanley Mandelstam in 1958. If the Minkowski metric is chosen to be , the Mandelstam variables are then defined by , where p1 and p2 are the four-momenta of the incoming particles and p3 and p4 are the four-momenta of the outgoing particles. is also known as the square of the center-of-mass energy (invariant mass) and as the square of the four-momentum transfer. Feynman diagrams The letters s,t,u are also used in the terms s-channel (timelike channel), t-channel, and u-channel (both spacelike channels). These channels represent different Feynman diagrams or different possible scattering events where the interaction involves the exchange of an intermediate particle whose squared four-momentum equals s,t,u, respectively. {|cellpadding="10" | | | |- |align="center"|s-channel |align="center"|t-channel |align="center"|u-channel |} For example, the s-channel corresponds to the particles 1,2 joining into an intermediate particle that eventually splits into 3,4: The t-channel represents the process in which the particle 1 emits the intermediate particle and becomes the final particle 3, while the particle 2 absorbs the intermediate particle and becomes 4. The u-channel is the t-channel with the role of the particles 3,4 interchanged. When evaluating a Feynman amplitude one often finds scalar products of the external four momenta. One can use the Mandelstam variables to simplify these: Where is the mass of the particle with corresponding momentum . Sum Note that where mi is the mass of particle i. To prove this, we need to use two facts: The square of a particle's four momentum is the square of its mass, And conservation of four-momentum,
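A minimal sketch computing the Mandelstam variables with the (+,−,−,−) metric mentioned above, using the standard definitions s = (p1+p2)², t = (p1−p3)², u = (p1−p4)², and checking the sum rule s + t + u = Σ mi². The example kinematics are illustrative.

```python
import numpy as np

ETA = np.diag([1.0, -1.0, -1.0, -1.0])    # Minkowski metric, signature (+,-,-,-)

def dot(p, q):
    """Lorentz-invariant product p.q with the (+,-,-,-) metric."""
    return p @ ETA @ q

def mandelstam(p1, p2, p3, p4):
    s = dot(p1 + p2, p1 + p2)
    t = dot(p1 - p3, p1 - p3)
    u = dot(p1 - p4, p1 - p4)
    return s, t, u

# Illustrative 2->2 elastic scattering of two unit-mass particles in the CM frame.
p = 1.0
E = np.sqrt(1.0 + p**2)
p1 = np.array([E,  0.0, 0.0,  p])
p2 = np.array([E,  0.0, 0.0, -p])
p3 = np.array([E,  p,   0.0, 0.0])         # scattered through 90 degrees
p4 = np.array([E, -p,   0.0, 0.0])

s, t, u = mandelstam(p1, p2, p3, p4)
masses_sq = sum(dot(q, q) for q in (p1, p2, p3, p4))
print(s, t, u, s + t + u, masses_sq)       # s + t + u equals the sum of squared masses
```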
https://en.wikipedia.org/wiki/Digital%20signal%20controller
A digital signal controller (DSC) is a hybrid of microcontrollers and digital signal processors (DSPs). Like microcontrollers, DSCs have fast interrupt responses, offer control-oriented peripherals like PWMs and watchdog timers, and are usually programmed using the C programming language, although they can be programmed using the device's native assembly language. On the DSP side, they incorporate features found on most DSPs such as single-cycle multiply–accumulate (MAC) units, barrel shifters, and large accumulators. Not all vendors have adopted the term DSC. The term was first introduced by Microchip Technology in 2002 with the launch of their 6000 series DSCs and subsequently adopted by most, but not all DSC vendors. For example, Infineon and Renesas refer to their DSCs as microcontrollers. DSCs are used in a wide range of applications, but the majority go into motor control, power conversion, and sensor processing applications. Currently, DSCs are being marketed as green technologies for their potential to reduce power consumption in electric motors and power supplies. In order of market share, the top three DSC vendors are Texas Instruments, Freescale, and Microchip Technology, according to market research firm Forward Concepts (2007). These three companies dominate the DSC market, with other vendors such as Infineon and Renesas taking a smaller slice of the pie. DSC chips NOTE: Data is from 2012 (Microchip and TI) and table currently only includes offering from the top 3 DSC vendors. DSC software DSCs, like microcontrollers and DSPs, require software support. There are a growing number of software packages that offer the features required by both DSP applications and microcontroller applications. With a broader set of requirements, software solutions are more rare. They require: development tools, DSP libraries, optimization for DSP processing, fast interrupt handling, multi-threading, and a tiny footprint.
https://en.wikipedia.org/wiki/System%20administrator
A system administrator, sysadmin, or admin is a person who is responsible for the upkeep, configuration, and reliable operation of computer systems, especially multi-user computers, such as servers. The system administrator seeks to ensure that the uptime, performance, resources, and security of the computers they manage meet the needs of the users, without exceeding a set budget when doing so. To meet these needs, a system administrator may acquire, install, or upgrade computer components and software; provide routine automation; maintain security policies; troubleshoot; train or supervise staff; or offer technical support for projects. Related fields Many organizations staff offer jobs related to system administration. In a larger company, these may all be separate positions within a computer support or Information Services (IS) department. In a smaller group they may be shared by a few sysadmins, or even a single person. A database administrator (DBA) maintains a database system, and is responsible for the integrity of the data and the efficiency and performance of the system. A network administrator maintains network infrastructure such as switches and routers, and diagnoses problems with these or with the behavior of network-attached computers. A security administrator is a specialist in computer and network security, including the administration of security devices such as firewalls, as well as consulting on general security measures. A web administrator maintains web server services (such as Apache or IIS) that allow for internal or external access to web sites. Tasks include managing multiple sites, administering security, and configuring necessary components and software. Responsibilities may also include software change management. A computer operator performs routine maintenance and upkeep, such as changing backup tapes or replacing failed drives in a redundant array of independent disks (RAID). Such tasks usually require physical presence in the
https://en.wikipedia.org/wiki/Pairing%20%28computing%29
Pairing, sometimes known as bonding, is a process used in computer networking that helps set up an initial linkage between computing devices to allow communications between them. The most common example is used in Bluetooth, where the pairing process is used to link devices like a Bluetooth headset with a mobile phone. Computer networking Computing terminology 2 (number)
https://en.wikipedia.org/wiki/Time-driven%20switching
In telecommunication and computer networking, time-driven switching (TDS) is a node by node time variant implementation of circuit switching, where the propagating datagram is shorter in space than the distance between source and destination. With TDS it is no longer necessary to own a complete circuit between source and destination, but only the fraction of circuit where the propagating datagram is temporarily located. TDS adds flexibility and capacity to circuit-switched networks but requires precise synchronization among nodes and propagating datagrams. Datagrams are formatted according to schedules that depend on quality of service and availability of switching nodes and physical links. In respect to circuit switching, the added time dimension introduces additional complexity to network management. Like circuit switching, TDS operates without buffers and header processing according to the pipeline forwarding principle; therefore an all optical implementation with optical fibers and optical switches is possible with low cost. The TDS concept itself pervades and is applicable with advantage to existing data switching technologies, including packet switching, where packets, or sets of packets become the datagrams that are routed through the network. TDS has been invented in 2002 by Prof. Mario Baldi and Prof. Yoram Ofek of Synchrodyne Networks that is the assignee of several patents issued by both the United States Patent and Trademark Office and the European Patent Office.
https://en.wikipedia.org/wiki/Four-vector
In special relativity, a four-vector (or 4-vector) is an object with four components, which transform in a specific way under Lorentz transformations. Specifically, a four-vector is an element of a four-dimensional vector space considered as a representation space of the standard representation of the Lorentz group, the (,) representation. It differs from a Euclidean vector in how its magnitude is determined. The transformations that preserve this magnitude are the Lorentz transformations, which include spatial rotations and boosts (a change by a constant velocity to another inertial reference frame). Four-vectors describe, for instance, position in spacetime modeled as Minkowski space, a particle's four-momentum , the amplitude of the electromagnetic four-potential at a point in spacetime, and the elements of the subspace spanned by the gamma matrices inside the Dirac algebra. The Lorentz group may be represented by 4×4 matrices . The action of a Lorentz transformation on a general contravariant four-vector (like the examples above), regarded as a column vector with Cartesian coordinates with respect to an inertial frame in the entries, is given by (matrix multiplication) where the components of the primed object refer to the new frame. Related to the examples above that are given as contravariant vectors, there are also the corresponding covariant vectors , and . These transform according to the rule where denotes the matrix transpose. This rule is different from the above rule. It corresponds to the dual representation of the standard representation. However, for the Lorentz group the dual of any representation is equivalent to the original representation. Thus the objects with covariant indices are four-vectors as well. For an example of a well-behaved four-component object in special relativity that is not a four-vector, see bispinor. It is similarly defined, the difference being that the transformation rule under Lorentz transformations is given by
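A minimal sketch of the transformation rule quoted above: a contravariant four-vector, written as a column of Cartesian components, is multiplied by a 4×4 Lorentz matrix, here a boost along z with an assumed velocity. The squared interval is checked before and after to confirm that the magnitude is preserved.

```python
import numpy as np

def boost_z(beta: float) -> np.ndarray:
    """4x4 Lorentz boost along the z-axis with velocity beta = v/c."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[3, 3] = gamma
    L[0, 3] = L[3, 0] = -gamma * beta
    return L

ETA = np.diag([1.0, -1.0, -1.0, -1.0])    # Minkowski metric, signature (+,-,-,-)

X = np.array([2.0, 1.0, 0.0, 3.0])        # contravariant components (ct, x, y, z)
L = boost_z(0.6)                          # assumed boost velocity v = 0.6 c
X_prime = L @ X                           # X' = Lambda X (matrix multiplication)

# The squared interval is invariant under the transformation.
print(X @ ETA @ X, X_prime @ ETA @ X_prime)   # both ~ -6.0
```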
https://en.wikipedia.org/wiki/Landscape%20limnology
Landscape limnology is the spatially explicit study of lakes, streams, and wetlands as they interact with freshwater, terrestrial, and human landscapes to determine the effects of pattern on ecosystem processes across temporal and spatial scales. Limnology is the study of inland water bodies inclusive of rivers, lakes, and wetlands; landscape limnology seeks to integrate all of these ecosystem types. The terrestrial component represents spatial hierarchies of landscape features that influence which materials, whether solutes or organisms, are transported to aquatic systems; aquatic connections represent how these materials are transported; and human activities reflect features that influence how these materials are transported as well as their quantity and temporal dynamics. Foundation The core principles or themes of landscape ecology provide the foundation for landscape limnology. These ideas can be synthesized into a set of four landscape ecology themes that are broadly applicable to any aquatic ecosystem type, and that consider the unique features of such ecosystems. A landscape limnology framework begins with the premise of Thienemann (1925). Wiens (2002): freshwater ecosystems can be considered patches. As such, the location of these patches and their placement relative to other elements of the landscape is important to the ecosystems and their processes. Therefore, the four main themes of landscape limnology are: Patch characteristics: The characteristics of a freshwater ecosystem include its physical morphometry, chemical, and biological features, as well as its boundaries. These boundaries are often more easily defined for aquatic ecosystems than for terrestrial ecosystems (e.g., shoreline, riparian zones, and emergent vegetation zone) and are often a focal-point for important ecosystem processes linking terrestrial and aquatic components. Patch context: The freshwater ecosystem is embedded in a complex terrestrial mosaic (e.g., soils, geology, and
https://en.wikipedia.org/wiki/Bracket
A bracket, as used in British English, is either of two tall fore- or back-facing punctuation marks commonly used to isolate a segment of text or data from its surroundings. Typically deployed in symmetric pairs, an individual bracket may be identified as a 'left' or 'right' bracket or, alternatively, an "opening bracket" or "closing bracket", respectively, depending on the directionality of the context. There are four primary types of brackets. In British usage they are known as round brackets (or simply brackets), square brackets, curly brackets, and angle brackets; in American usage they are respectively known as parentheses, brackets, braces, and chevrons. There are also various less common symbols considered brackets. Various forms of brackets are used in mathematics, with specific mathematical meanings, often for denoting specific mathematical functions and subformulas. History Angle brackets or chevrons ⟨ ⟩ were the earliest type of bracket to appear in written English. Erasmus coined the term to refer to the round brackets or parentheses () recalling the shape of the crescent moon (). Most typewriters only had the left and right parentheses. Square brackets appeared with some teleprinters. Braces (curly brackets) first became part of a character set with the 8-bit code of the IBM 7030 Stretch. In 1961, ASCII contained parenthesis, square, and curly brackets, and also less-than and greater-than signs that could be used as angle brackets. Typography In English, typographers mostly prefer not to set brackets in italics, even when the enclosed text is italic. However, in other languages like German, if brackets enclose text in italics, they are usually also set in italics. Parentheses or (round) brackets ( and ) are called parentheses (singular parenthesis ) in American English, and "brackets" informally in the UK, India, Ireland, Canada, the West Indies, New Zealand, South Africa, and Australia; they are also known as "round brackets", "parens" ,
https://en.wikipedia.org/wiki/Positional%20notation
Positional notation (or place-value notation, or positional numeral system) usually denotes the extension to any base of the Hindu–Arabic numeral system (or decimal system). More generally, a positional system is a numeral system in which the contribution of a digit to the value of a number is the value of the digit multiplied by a factor determined by the position of the digit. In early numeral systems, such as Roman numerals, a digit has only one value: I means one, X means ten and C a hundred (however, the value may be negated if placed before another digit). In modern positional systems, such as the decimal system, the position of the digit means that its value must be multiplied by some value: in 555, the three identical symbols represent five hundreds, five tens, and five units, respectively, due to their different positions in the digit string. The Babylonian numeral system, base 60, was the first positional system to be developed, and its influence is present today in the way time and angles are counted in tallies related to 60, such as 60 minutes in an hour and 360 degrees in a circle. Today, the Hindu–Arabic numeral system (base ten) is the most commonly used system globally. However, the binary numeral system (base two) is used in almost all computers and electronic devices because it is easier to implement efficiently in electronic circuits. Systems with negative base, complex base or negative digits have been described. Most of them do not require a minus sign for designating negative numbers. The use of a radix point (decimal point in base ten), extends to include fractions and allows representing any real number with arbitrary accuracy. With positional notation, arithmetical computations are much simpler than with any older numeral system; this led to the rapid spread of the notation when it was introduced in western Europe. History Today, the base-10 (decimal) system, which is presumably motivated by counting with the ten fingers, is ubiquitous.
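A minimal sketch of the place-value rule described above, in which each digit contributes digit × base^position, shown in both directions (digits to value and value to digits) for an arbitrary integer base.

```python
def digits_to_value(digits, base):
    """Interpret a list of digits (most significant first) in the given base."""
    value = 0
    for d in digits:
        value = value * base + d   # shift previous digits one place left, add the new one
    return value

def value_to_digits(value, base):
    """Write a non-negative integer as digits (most significant first) in the given base."""
    if value == 0:
        return [0]
    digits = []
    while value > 0:
        value, d = divmod(value, base)
        digits.append(d)
    return digits[::-1]

print(digits_to_value([5, 5, 5], 10))    # 555 = 5*100 + 5*10 + 5*1
print(value_to_digits(555, 2))           # [1, 0, 0, 0, 1, 0, 1, 0, 1, 1]
print(digits_to_value([1, 0, 1, 1], 2))  # 11
```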
https://en.wikipedia.org/wiki/Branch%20Queue
In computer architecture, a branch queue is used alongside branch prediction. When the branch predictor predicts whether a branch is taken or not, the branch queue stores the predictions so that they can be used later. Each entry in a branch queue holds one of only two values: taken or not taken. The branch queue helps other mechanisms increase parallelism and enables further optimization. It is implemented neither purely in software nor purely in hardware; it falls under hardware/software co-design.
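A minimal, illustrative sketch of a branch queue as a FIFO of one-bit predictions: the predictor pushes a prediction when a branch is issued, and the oldest entry is popped and compared when that branch resolves. The class and method names are hypothetical and do not correspond to any particular design.

```python
from collections import deque

class BranchQueue:
    """FIFO of one-bit branch predictions (True = taken, False = not taken)."""

    def __init__(self):
        self._q = deque()

    def push_prediction(self, taken: bool) -> None:
        """Called when the branch predictor issues a prediction for a new branch."""
        self._q.append(taken)

    def pop_on_resolve(self, actual_taken: bool) -> bool:
        """Called when the oldest in-flight branch resolves.

        Returns True if the stored prediction matched the actual outcome.
        """
        predicted = self._q.popleft()
        return predicted == actual_taken

bq = BranchQueue()
bq.push_prediction(True)
bq.push_prediction(False)
print(bq.pop_on_resolve(True))    # True  (prediction was correct)
print(bq.pop_on_resolve(True))    # False (misprediction)
```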
https://en.wikipedia.org/wiki/Lanthanide%20probes
Lanthanide probes are a non-invasive analytical tool commonly used for biological and chemical applications. Lanthanides are metal ions which have their 4f energy level filled and generally refer to elements cerium to lutetium in the periodic table. The fluorescence of lanthanide salts is weak because the energy absorption of the metallic ion is low; hence chelated complexes of lanthanides are most commonly used. The term chelate derives from the Greek word for “claw,” and is applied to name ligands, which attach to a metal ion with two or more donor atoms through dative bonds. The fluorescence is most intense when the metal ion has the oxidation state of 3+. Not all lanthanide metals can be used and the most common are: Sm(III), Eu(III), Tb(III), and Dy(III). History It has been known since the early 1930s that the salts of certain lanthanides are fluorescent. The reaction of lanthanide salts with nucleic acids was discussed in a number of publications during the 1930s and the 1940s where lanthanum-containing reagents were employed for the fixation of nucleic acid structures. In 1942 complexes of europium, terbium, and samarium were discovered to exhibit unusual luminescence properties when excited by UV light. However, the first staining of biological cells with lanthanides occurred twenty years later when bacterial smears of E. coli were treated with aqueous solutions of a europium complex, which under mercury lamp illumination appeared as bright red spots. Attention to lanthanide probes increased greatly in the mid-1970s when Finnish researchers proposed Eu(III), Sm(III), Tb(III), and Dy(III) polyaminocarboxylates as luminescent sensors in time-resolved luminescent (TRL) immunoassays. Optimization of analytical methods from the 1970s onward for lanthanide chelates and time-resolved luminescence microscopy (TRLM) resulted in the use of lanthanide probes in many scientific, medical and commercial fields. Techniques There are two main assaying techniques: heter
https://en.wikipedia.org/wiki/Homogeneity%20%28physics%29
In physics, a homogeneous material or system has the same properties at every point; it is uniform without irregularities. A uniform electric field (which has the same strength and the same direction at each point) would be compatible with homogeneity (all points experience the same physics). A material constructed with different constituents can be described as effectively homogeneous in the electromagnetic materials domain, when interacting with a directed radiation field (light, microwave frequencies, etc.). Mathematically, homogeneity has the connotation of invariance, as all components of the equation have the same degree of value whether or not each of these components are scaled to different values, for example, by multiplication or addition. Cumulative distribution fits this description. "The state of having identical cumulative distribution function or values". Context The definition of homogeneous strongly depends on the context used. For example, a composite material is made up of different individual materials, known as "constituents" of the material, but may be defined as a homogeneous material when assigned a function. For example, asphalt paves our roads, but is a composite material consisting of asphalt binder and mineral aggregate, and then laid down in layers and compacted. However, homogeneity of materials does not necessarily mean isotropy. In the previous example, a composite material may not be isotropic. In another context, a material is not homogeneous in so far as it is composed of atoms and molecules. However, at the normal level of our everyday world, a pane of glass, or a sheet of metal is described as glass, or stainless steel. In other words, these are each described as a homogeneous material. A few other instances of context are: dimensional homogeneity (see below) is the quality of an equation having quantities of same units on both sides; homogeneity (in space) implies conservation of momentum; and homogeneity in time implies co
https://en.wikipedia.org/wiki/Flash%20memory%20emulator
A flash emulator or flash memory emulator is a tool that is used to temporarily replace flash memory or ROM chips in an embedded device for the purpose of debugging embedded software. Such tools contain Dual-ported RAM, one port of which is connected to a target system (i.e. system, that is being debugged), and second is connected to a host (i.e. PC, which runs debugger). This allows the programmer to change executable code while it is running, set break points, and use other advanced debugging techniques on an embedded system, where such operations would not be possible otherwise. This type of tool appeared in 1980s-1990s, when most embedded systems were using discrete ROM (or later flash memory) chip, containing executable code. This allowed for easy replacing of ROM/flash chip with emulator. Together with excellent productivity of this tool this had driven an almost universal use of it among embedded developers. Later, when most embedded systems started to include both processor and flash on a single chip for cost and IP protection reasons, thus making external flash emulator tool impossible, search for a replacement tool started. And as often happens when a direct replacement is being searched for, many replacement techniques contain words "flash emulation" in them, for example, TI's "Flash Emulation Tool" debugging interface (FET) for its MSP430 chips, or more generic in-circuit emulators, even though none of two above had anything to do with flash or emulation as it is. Flash emulator could also be retrofitted to an embedded system to facilitate reverse engineering. For example, that was main hardware instrument in reverse engineering Wii gaming console bootloader. See also In-circuit emulator
https://en.wikipedia.org/wiki/Molybdenum%20in%20biology
Molybdenum is an essential element in most organisms. It is most notably present in nitrogenase which is an essential part of nitrogen fixation. Mo-containing enzymes Molybdenum is an essential element in most organisms; a 2008 research paper speculated that a scarcity of molybdenum in the Earth's early oceans may have strongly influenced the evolution of eukaryotic life (which includes all plants and animals). At least 50 molybdenum-containing enzymes have been identified, mostly in bacteria. Those enzymes include aldehyde oxidase, sulfite oxidase and xanthine oxidase. With one exception, Mo in proteins is bound by molybdopterin to give the molybdenum cofactor. The only known exception is nitrogenase, which uses the FeMoco cofactor, which has the formula Fe7MoS9C. In terms of function, molybdoenzymes catalyze the oxidation and sometimes reduction of certain small molecules in the process of regulating nitrogen, sulfur, and carbon. In some animals, and in humans, the oxidation of xanthine to uric acid, a process of purine catabolism, is catalyzed by xanthine oxidase, a molybdenum-containing enzyme. The activity of xanthine oxidase is directly proportional to the amount of molybdenum in the body. An extremely high concentration of molybdenum reverses the trend and can inhibit purine catabolism and other processes. Molybdenum concentration also affects protein synthesis, metabolism, and growth. Mo is a component in most nitrogenases. Among molybdoenzymes, nitrogenases are unique in lacking the molybdopterin. Nitrogenases catalyze the production of ammonia from atmospheric nitrogen: The biosynthesis of the FeMoco active site is highly complex. Molybdate is transported in the body as MoO42−. Human metabolism and deficiency Molybdenum is an essential trace dietary element. Four mammalian Mo-dependent enzymes are known, all of them harboring a pterin-based molybdenum cofactor (Moco) in their active site: sulfite oxidase, xanthine oxidoreductase, aldehyde oxida
https://en.wikipedia.org/wiki/Yupana
A yupana (from Quechua: yupay 'count') is a counting board used to perform arithmetic operations, dating back to the time of the Incas. Very little documentation exists concerning its precise physical form or how it was used. Types The term yupana refers to two distinct classes of objects: Table Yupana (or archaeological yupana): a system of geometric boxes of different sizes and materials. The first example of this type was found in 1869 in the Ecuadorian province of Azuay and prompted searches for more of these objects. All examples of the archaeological yupana vary greatly from each other. Some archaeological yupanas found in Manchán (an archaeological site in Casma) and Huacones-Vilcahuasi (in Cañete) were embedded into the floor. Poma de Ayala Yupana: a picture on page 360 of El primer nueva corónica y buen gobierno, written by the Amerindian chronicler Felipe Guaman Poma de Ayala shows a 5x4 chessboard (shown right). The chessboard, though resembling a table yupana, differs from this style in most notably in each of its rectangular trays have the same dimensions, while table yupanas have trays of other polygonal shapes of differing sizes. Although very different from each other, most scholars who have dealt with table yupanas have extended reasoning and theories to the Poma de Ayala yupana and vice versa, perhaps in an attempt to find a unifying thread or a common method of creation. For example, the Nueva coronica (New Chronicle) discovered in 1916 in the library of Copenhagen contained evidence that a portion of the studies on the Poma de Ayala yupana were based on previous studies and theories regarding table yupanas. History Several chroniclers of the Indies described, in brief, this Incan abacus and its operation. Felipe Guaman Poma de Ayala The first was Guaman Poma de Ayala around the year 1615 who wrote: In addition to providing this brief description, Poma de Ayala drew a picture of the yupana: a board of five rows and four columns with e
https://en.wikipedia.org/wiki/Chamfer%20%28geometry%29
In geometry, chamfering or edge-truncation is a topological operator that modifies one polyhedron into another. It is similar to expansion, moving faces apart and outward, but also maintains the original vertices. For polyhedra, this operation adds a new hexagonal face in place of each original edge. In Conway polyhedron notation it is represented by the letter c. A polyhedron with e edges will have a chamfered form containing 2e new vertices, 3e new edges, and e new hexagonal faces. Chamfered Platonic solids In the chapters below the chamfers of the five Platonic solids are described in detail. Each is shown in a version with edges of equal length and in a canonical version where all edges touch the same midsphere. (They only look noticeably different for solids containing triangles.) The shown duals are dual to the canonical versions. Chamfered tetrahedron The chamfered tetrahedron (or alternate truncated cube) is a convex polyhedron constructed as an alternately truncated cube or chamfer operation on a tetrahedron, replacing its 6 edges with hexagons. It is the Goldberg polyhedron GIII(2,0), containing triangular and hexagonal faces. Chamfered cube The chamfered cube is a convex polyhedron with 32 vertices, 48 edges, and 18 faces: 12 hexagons and 6 squares. It is constructed as a chamfer of a cube. The squares are reduced in size and new hexagonal faces are added in place of all the original edges. Its dual is the tetrakis cuboctahedron. It is also inaccurately called a truncated rhombic dodecahedron, although that name rather suggests a rhombicuboctahedron. It can more accurately be called a tetratruncated rhombic dodecahedron because only the order-4 vertices are truncated. The hexagonal faces are equilateral but not regular. They are formed by a truncated rhombus, have 2 internal angles of about 109.47°, or arccos(−1/3), and 4 internal angles of about 125.26°, while a regular hexagon would have all 120° angles. Because all its faces have an even number of sides with
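A minimal sketch applying the counting rule above (2e new vertices, 3e new edges, e new hexagonal faces) to the tetrahedron and the cube, and checking Euler's formula; the chamfered-cube output matches the 32/48/18 counts quoted in the text.

```python
def chamfer_counts(V, E, F):
    """Vertex/edge/face counts of the chamfered polyhedron.

    The chamfer keeps the original V vertices and F faces, adds 2E new
    vertices and 3E new edges, and adds one new hexagonal face per edge.
    """
    return V + 2 * E, E + 3 * E, F + E

for name, vef in {"tetrahedron": (4, 6, 4), "cube": (8, 12, 6)}.items():
    Vc, Ec, Fc = chamfer_counts(*vef)
    assert Vc - Ec + Fc == 2          # Euler's formula still holds
    print(name, (Vc, Ec, Fc))         # cube -> (32, 48, 18), matching the text
```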
https://en.wikipedia.org/wiki/Time-varied%20gain
Time varied gain (TVG) is signal compensation that is applied by the receiver electronics through analog or digital signal processing. The desired result is that targets of the same size produce echoes of the same size, regardless of target range. See also Automatic gain control
https://en.wikipedia.org/wiki/SOS%20chromotest
The SOS chromotest is a biological assay to assess the genotoxic potential of chemical compounds. The test is a colorimetric assay which measures the expression of genes induced by genotoxic agents in Escherichia coli, by means of a fusion with the structural gene for β-galactosidase. The test is performed over a few hours in columns of a 96-well microplate with increasing concentrations of test samples. This test was developed as a practical complement or alternative to the traditional Ames test assay for genotoxicity, which involves growing bacteria on agar plates and comparing natural mutation rates to mutation rates of bacteria exposed to potentially mutagenic compounds or samples. The SOS chromotest is comparable in accuracy and sensitivity to established methods such as the Ames test and is a useful tool to screen genotoxic compounds, which could prove carcinogenic in humans, in order to single out chemicals for further in-depth analysis. As with other bacterial genotoxicity and mutagenicity assays, compounds requiring metabolic activation for activity can be investigated with the addition of S9 microsomal rat liver extract. Mechanism The SOS response plays a central role in the response of E. coli to genotoxic compounds because it responds to a wide array of chemical agents. Triggering of this system can be, and has been, used as an early sign of DNA damage. Two genes play a key role in the SOS response: lexA encodes a repressor for all the genes in the system, and recA encodes a protein able to cleave the LexA repressor upon activation by an SOS inducing signal (caused in this case by the presence of a genotoxic compound). Although the exact mechanism of the SOS response is still unknown, it is induced when DNA lesions perturb or stop DNA replication. Various end-points are possible indicators of the triggering of the SOS system: activation of the RecA protein, cleavage of the LexA repressor, expression of any of the SOS genes, etc. One of the simplest assays
https://en.wikipedia.org/wiki/Echo%20removal
Echo removal is the process of removing echo and reverberation artifacts from audio signals. The reverberation is typically modeled as the convolution of a (sometimes time-varying) impulse response with a hypothetical clean input signal, where both the clean input signal (which is to be recovered) and the impulse response are unknown. This is an example of an inverse problem. In almost all cases, there is insufficient information in the input signal to uniquely determine a plausible original signal, making it an ill-posed problem. This is generally addressed by the use of a regularization term that attempts to eliminate implausible solutions. This problem is analogous to deblurring in the image processing domain. See also Echo suppression and cancellation Digital room correction Noise reduction Linear prediction coder Signal processing
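A minimal sketch of the regularized inverse-filtering idea for the simpler case where the impulse response is known (the blind case described above, with an unknown impulse response, is much harder). The impulse response, test signal, and regularization weight are all assumed for illustration, and circular convolution is used so that everything stays in the frequency domain.

```python
import numpy as np

rng = np.random.default_rng(1)
clean = rng.standard_normal(256)               # stand-in for the clean signal
h = np.zeros(256); h[0] = 1.0; h[40] = 0.6     # assumed impulse response: direct path + one echo
echoed = np.real(np.fft.ifft(np.fft.fft(clean) * np.fft.fft(h)))

# Tikhonov/Wiener-style regularized inverse filter: dividing by H alone amplifies
# noise wherever |H| is small, so a small eps keeps the inverse bounded.
# This eps plays the role of the regularization term mentioned above.
H = np.fft.fft(h)
eps = 1e-3                                     # assumed regularization weight
inverse = np.conj(H) / (np.abs(H) ** 2 + eps)
estimate = np.real(np.fft.ifft(np.fft.fft(echoed) * inverse))

print(np.max(np.abs(estimate - clean)))        # small residual, growing with eps
```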
https://en.wikipedia.org/wiki/Hilbert%20spectrum
The Hilbert spectrum (sometimes referred to as the Hilbert amplitude spectrum), named after David Hilbert, is a statistical tool that can help in distinguishing among a mixture of moving signals. The spectrum itself is decomposed into its component sources using independent component analysis. The separation of the combined effects of unidentified sources (blind signal separation) has applications in climatology, seismology, and biomedical imaging. Conceptual summary The Hilbert spectrum is computed by way of a 2-step process consisting of: Preprocessing a signal separate it into intrinsic mode functions using a mathematical decomposition such as singular value decomposition (SVD) or empirical mode decomposition (EMD); Applying the Hilbert transform to the results of the above step to obtain the instantaneous frequency spectrum of each of the components. The Hilbert transform defines the imaginary part of the function to make it an analytic function (sometimes referred to as a progressive function), i.e. a function whose signal strength is zero for all frequency components less than zero. With the Hilbert transform, the singular vectors give instantaneous frequencies that are functions of time, so that the result is an energy distribution over time and frequency. The result is an ability to capture time-frequency localization to make the concept of instantaneous frequency and time relevant (the concept of instantaneous frequency is otherwise abstract or difficult to define for all but monocomponent signals). Definition For a given signal decomposed (with for example Empirical Mode Decomposition) to where is the number of intrinsic mode functions that consists of and The instantaneous angle frequency is then defined as From this, we can define the Hilbert Spectrum for as The Hilbert Spectrum of is then given by Marginal Hilbert Spectrum A two dimensional representation of a Hilbert Spectrum, called Marginal Hilbert Spectrum, is defined as where
https://en.wikipedia.org/wiki/GP5%20chip
The GP5 is a co-processor accelerator built to accelerate discrete belief propagation on factor graphs and other large-scale tensor product operations for machine learning. It is related to, and anticipated by a number of years, the Google Tensor Processing Unit It is designed to run as a co-processor with another controller (such as a CPU (x86) or an ARM/MIPS/Tensilica core). It was developed as the culmination of DARPA's Analog Logic program The GP5 has a fairly exotic architecture, resembling neither a GPU nor a DSP, and leverages massive fine-grained and coarse-grained parallelism. It is deeply pipelined. The different algorithmic tasks involved in performing belief propagation updates are performed by independent, heterogeneous compute units. The performance of the chip is governed by the structure of the machine learning workload being evaluated. In typical cases, the GP5 is roughly 100 times faster and 100 times more energy efficient than a single core of a modern core i7 performing a comparable task. It is roughly 10 times faster and 1000 times more energy efficient than a state-of-the art GPU. It is roughly 1000 times faster and 10 times more energy efficient than a state-of-the-art ARM processor. It was benchmarked on typical machine learning and inference workloads that included protein side-chain folding, turbo error correction decoding, stereo vision, signal noise reduction, and others. Analog Devices, Inc. acquired the intellectual property for the GP5 when it acquired Lyric Semiconductor, Inc. in 2011.
https://en.wikipedia.org/wiki/Hot-carrier%20injection
Hot carrier injection (HCI) is a phenomenon in solid-state electronic devices where an electron or a “hole” gains sufficient kinetic energy to overcome a potential barrier necessary to break an interface state. The term "hot" refers to the effective temperature used to model carrier density, not to the overall temperature of the device. Since the charge carriers can become trapped in the gate dielectric of a MOS transistor, the switching characteristics of the transistor can be permanently changed. Hot-carrier injection is one of the mechanisms that adversely affects the reliability of semiconductors of solid-state devices. Physics The term “hot carrier injection” usually refers to the effect in MOSFETs, where a carrier is injected from the conducting channel in the silicon substrate to the gate dielectric, which usually is made of silicon dioxide (SiO2). To become “hot” and enter the conduction band of SiO2, an electron must gain a kinetic energy of ~3.2 eV. For holes, the valence band offset in this case dictates they must have a kinetic energy of 4.6 eV. The term "hot electron" comes from the effective temperature term used when modelling carrier density (i.e., with a Fermi-Dirac function) and does not refer to the bulk temperature of the semiconductor (which can be physically cold, although the warmer it is, the higher the population of hot electrons it will contain all else being equal). The term “hot electron” was originally introduced to describe non-equilibrium electrons (or holes) in semiconductors. More broadly, the term describes electron distributions describable by the Fermi function, but with an elevated effective temperature. This greater energy affects the mobility of charge carriers and as a consequence affects how they travel through a semiconductor device. Hot electrons can tunnel out of the semiconductor material, instead of recombining with a hole or being conducted through the material to a collector. Consequent effects include increa
https://en.wikipedia.org/wiki/Multiseat%20configuration
A multiseat, multi-station or multiterminal system is a single computer which supports multiple independent local users at the same time. A "seat" consists of all hardware devices assigned to a specific workplace at which one user sits at and interacts with the computer. It consists of at least one graphics device (graphics card or just an output (e.g. HDMI/VGA/DisplayPort port) and the attached monitor/video projector) for the output and a keyboard and a mouse for the input. It can also include video cameras, sound cards and more. Motivation Since the 1960s computers have been shared between users. Especially in the early days of computing when computers were extremely expensive the usual paradigm was a central mainframe computer connected to numerous terminals. With the advent of personal computing this paradigm has been largely replaced by personal computers (or one computer per user). Multiseat setups are a return to this multiuser paradigm but based around a PC which supports a number of zero-clients usually consisting of a terminal per user (screen, keyboard, mouse). In some situations a multiseat setup is more cost-effective because it is not necessary to buy separate motherboards, microprocessors, RAM, hard disks and other components for each user. For example, buying one high speed CPU, usually costs less than buying several slower CPUs. History In the 1970s, it was very commonplace to connect multiple computer terminals to a single mainframe computer, even graphical terminals. Early terminals were connected with RS-232 type serial connections, either directly, or through modems. With the advent of Internet Protocol based networking, it became possible for multiple users to log into a host using telnet or – for a graphic environment – an X Window System "server". These systems would retain a physically secure "root console" for system administration and direct access to the host machine. Support for multiple consoles in a PC running the X interface w
https://en.wikipedia.org/wiki/Molecular%20risk%20assessment
Molecular risk assessment is a procedure in which biomarkers (for example, biological molecules or changes in tumor cell DNA) are used to estimate a person's risk for developing cancer. Specific biomarkers may be linked to particular types of cancer. Sources External links Molecular risk assessment entry in the public domain NCI Dictionary of Cancer Terms Biological techniques and tools Cancer screening
https://en.wikipedia.org/wiki/Copeland%E2%80%93Erd%C5%91s%20constant
The Copeland–Erdős constant is the concatenation of "0." with the base 10 representations of the prime numbers in order. Its value, using the modern definition of prime, is approximately 0.235711131719232931374143… . The constant is irrational; this can be proven with Dirichlet's theorem on arithmetic progressions or Bertrand's postulate (Hardy and Wright, p. 113) or Ramare's theorem that every even integer is a sum of at most six primes. It also follows directly from its normality (see below). By a similar argument, any constant created by concatenating "0." with all primes in an arithmetic progression dn + a, where a is coprime to d and to 10, will be irrational; for example, primes of the form 4n + 1 or 8n + 1. By Dirichlet's theorem, the arithmetic progression dn · 10m + a contains primes for all m, and those primes are also in cd + a, so the concatenated primes contain arbitrarily long sequences of the digit zero. In base 10, the constant is a normal number, a fact proven by Arthur Herbert Copeland and Paul Erdős in 1946 (hence the name of the constant). The constant is given by where pn is the nth prime number. Its continued fraction is [0; 4, 4, 8, 16, 18, 5, 1, …] (). Related constants Copeland and Erdős's proof that their constant is normal relies only on the fact that is strictly increasing and , where is the nth prime number. More generally, if is any strictly increasing sequence of natural numbers such that and is any natural number greater than or equal to 2, then the constant obtained by concatenating "0." with the base- representations of the 's is normal in base . For example, the sequence satisfies these conditions, so the constant 0.003712192634435363748597110122136… is normal in base 10, and 0.003101525354661104…7 is normal in base 7. In any given base b the number which can be written in base b as 0.0110101000101000101…b where the nth digit is 1 if and only if n is prime, is irrational. See also Smarandache–Wellin numbers:
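A minimal sketch generating the decimal expansion exactly as described above, by concatenating the base-10 representations of the primes in order. The trial-division primality test is for illustration only, not an efficient way to enumerate primes.

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def copeland_erdos_digits(num_digits: int) -> str:
    """First num_digits decimal digits after '0.' of the Copeland-Erdos constant."""
    digits, n = "", 2
    while len(digits) < num_digits:
        if is_prime(n):
            digits += str(n)       # concatenate the base-10 representation of the prime
        n += 1
    return digits[:num_digits]

print("0." + copeland_erdos_digits(24))   # 0.235711131719232931374143
```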
https://en.wikipedia.org/wiki/Connectedness
In mathematics, connectedness is used to refer to various properties meaning, in some sense, "all one piece". When a mathematical object has such a property, we say it is connected; otherwise it is disconnected. When a disconnected object can be split naturally into connected pieces, each piece is usually called a component (or connected component). Connectedness in topology A topological space is said to be connected if it is not the union of two disjoint nonempty open sets. A set is open if it contains no point lying on its boundary; thus, in an informal, intuitive sense, the fact that a space can be partitioned into disjoint open sets suggests that the boundary between the two sets is not part of the space, and thus splits it into two separate pieces. Other notions of connectedness Fields of mathematics are typically concerned with special kinds of objects. Often such an object is said to be connected if, when it is considered as a topological space, it is a connected space. Thus, manifolds, Lie groups, and graphs are all called connected if they are connected as topological spaces, and their components are the topological components. Sometimes it is convenient to restate the definition of connectedness in such fields. For example, a graph is said to be connected if each pair of vertices in the graph is joined by a path. This definition is equivalent to the topological one, as applied to graphs, but it is easier to deal with in the context of graph theory. Graph theory also offers a context-free measure of connectedness, called the clustering coefficient. Other fields of mathematics are concerned with objects that are rarely considered as topological spaces. Nonetheless, definitions of connectedness often reflect the topological meaning in some way. For example, in category theory, a category is said to be connected if each pair of objects in it is joined by a sequence of morphisms. Thus, a category is connected if it is, intuitively, all one piece. There ma
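The graph-theoretic definition above (every pair of vertices joined by a path) can be checked with a breadth-first search from any one vertex. A minimal sketch with an illustrative adjacency-list representation:

```python
from collections import deque

def is_connected(graph):
    """graph: dict mapping each vertex to a list of neighbours (undirected)."""
    if not graph:
        return True
    start = next(iter(graph))
    seen = {start}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in graph[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(graph)   # connected iff every vertex is reachable from 'start'

g1 = {1: [2], 2: [1, 3], 3: [2]}           # a path: connected
g2 = {1: [2], 2: [1], 3: [4], 4: [3]}      # two components: disconnected
print(is_connected(g1), is_connected(g2))  # True False
```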
https://en.wikipedia.org/wiki/Pulsed-field%20gel%20electrophoresis
Pulsed-field gel electrophoresis (PFGE) is a technique used for the separation of large DNA molecules by applying to a gel matrix an electric field that periodically changes direction (an alternating, crossed field). In a uniform (constant) electric field, DNA molecules larger than about 50 kb migrate together and cannot be resolved; the periodically reorienting field forces such molecules to move through the gel in a zigzag pattern, allowing more effective separation of large DNA molecules. This method is commonly used in microbiology for typing bacteria and is a valuable tool for epidemiological studies and gene mapping in microbes and mammalian cells. It also played a role in the development of large-insert cloning systems such as bacterial and yeast artificial chromosomes. PFGE can be used to determine the genetic similarity between bacteria, as close and similar species will have similar profiles while dissimilar ones will have different profiles. This feature is useful in identifying the prevalent agent of a disease. Additionally, it can be used to monitor and evaluate micro-organisms in clinical samples, soil and water. It is also considered a reliable and standard method in vaccine preparation. In recent years, PFGE has been widely used as a powerful tool for controlling, preventing and monitoring diseases in different populations. Discovery The discovery of PFGE can be traced back to the late 1970s and early 1980s. One of the earliest references to the use of PFGE for DNA analysis is a 1977 paper by Dr. David Burke and colleagues at the University of Colorado, where they described a method of separating DNA molecules based on their size using conventional gel electrophoresis. The first reference to the use of the term "pulsed-field gel electrophoresis" appears in a 1983 paper by Dr. Richard L. Sweeley and colleagues at the DuPont Company, where they described a method of separating large DNA molecules (over 50 kb) by applying a series of alternating electric fields to a gel matrix. In the
https://en.wikipedia.org/wiki/Circumscriptional%20name
In biological classification, circumscriptional names are taxon names that are not ruled by ICZN and are defined by the particular set of members included. Circumscriptional names are used mainly for taxa above family-group level (e. g. order or class), but can be also used for taxa of any ranks, as well as for rank-less taxa. Non-typified names other than those of the genus- or species-group constitute the majority of generally accepted names of taxa higher than superfamily. The ICZN regulates names of taxa up to family group rank (i. e. superfamily). There are no generally accepted rules of naming higher taxa (orders, classes, phyla, etc.). Under the approach of circumscription-based (circumscriptional) nomenclatures, a circumscriptional name is associated with a certain circumscription of a taxon without regard of its rank or position. Some authors advocate introducing a mandatory standardized typified nomenclature of higher taxa. They suggest all names of higher taxa to be derived in the same manner as family-group names, i.e. by modifying names of type genera with endings to reflect the rank. There is no consensus on what such higher rank endings should be. A number of established practices exist as to the use of typified names of higher taxa, depending on animal group. See also Descriptive botanical name, optional forms still used in botany for ranks above family and for a few family names