Columns: source (article URL), text (article excerpt)
https://en.wikipedia.org/wiki/First-order%20hold
First-order hold (FOH) is a mathematical model of the practical reconstruction of sampled signals that could be done by a conventional digital-to-analog converter (DAC) and an analog circuit called an integrator. For FOH, the signal is reconstructed as a piecewise linear approximation to the original signal that was sampled. A mathematical model such as FOH (or, more commonly, the zero-order hold) is necessary because, in the sampling and reconstruction theorem, a sequence of Dirac impulses, xs(t), representing the discrete samples, x(nT), is low-pass filtered to recover the original signal that was sampled, x(t). However, outputting a sequence of Dirac impulses is impractical. Devices can be implemented, using a conventional DAC and some linear analog circuitry, to reconstruct the piecewise linear output for either predictive or delayed FOH. Even though this is not what is physically done, an identical output can be generated by applying the hypothetical sequence of Dirac impulses, xs(t), to a linear time-invariant system, otherwise known as a linear filter with such characteristics (which, for an LTI system, are fully described by the impulse response) so that each input impulse results in the correct piecewise linear function in the output. Basic first-order hold First-order hold is the hypothetical filter or LTI system that converts the ideally sampled signal $x_s(t) = \sum_{n=-\infty}^{\infty} x(nT)\,\delta(t - nT)$ to the piecewise linear signal $x_{\mathrm{FOH}}(t) = \sum_{n=-\infty}^{\infty} x(nT)\,\mathrm{tri}\!\left(\frac{t - nT}{T}\right)$, resulting in an effective impulse response of $h_{\mathrm{FOH}}(t) = \mathrm{tri}\!\left(\frac{t}{T}\right)$, where $\mathrm{tri}(\cdot)$ is the triangular function. The effective frequency response is the continuous Fourier transform of the impulse response: $H_{\mathrm{FOH}}(f) = \mathcal{F}\{h_{\mathrm{FOH}}(t)\} = T\,\mathrm{sinc}^2(fT)$, where $\mathrm{sinc}(x) = \sin(\pi x)/(\pi x)$ is the normalized sinc function. The Laplace transform transfer function of FOH is found by substituting s = i 2 π f: $H_{\mathrm{FOH}}(s) = \mathcal{L}\{h_{\mathrm{FOH}}(t)\} = e^{sT}\,\frac{\left(1 - e^{-sT}\right)^2}{T s^2}$. This is an acausal system in that the linear interpolation function moves toward the value of the next sample before such sample is applied to the hypothetical FOH filter. Delayed first-order hold
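To make the piecewise-linear reconstruction concrete, here is a minimal NumPy sketch (not from the article; the function names and sample values are illustrative) that forms the basic FOH output as a sum of sample-weighted triangular pulses, which is equivalent to linearly interpolating between sample instants.

```python
import numpy as np

def tri(x):
    """Unit triangular function: 1 - |x| on [-1, 1], zero elsewhere."""
    return np.maximum(1.0 - np.abs(x), 0.0)

def foh_reconstruct(samples, T, t):
    """Basic (acausal) first-order hold: a sum of sample-weighted triangles.

    samples[n] is x(nT); t is an array of evaluation times.
    """
    out = np.zeros_like(t, dtype=float)
    for n, xn in enumerate(samples):
        out += xn * tri((t - n * T) / T)
    return out

# Example: sample a 1 Hz sine wave and rebuild it with FOH.
T = 0.1                                     # sampling interval
n = np.arange(20)
samples = np.sin(2 * np.pi * 1.0 * n * T)   # x(nT)
t = np.linspace(0, 1.9, 1000)
x_foh = foh_reconstruct(samples, T, t)      # piecewise linear approximation
```

At the sample instants t = nT the sum passes exactly through x(nT); between instants the overlapping triangles produce the straight-line segments described above.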
https://en.wikipedia.org/wiki/Magnetofection
Magnetofection is a transfection method that uses magnetic fields to concentrate particles containing vectors to target cells in the body. Magnetofection has been adapted to a variety of vectors, including nucleic acids, non-viral transfection systems, and viruses. This method offers advantages such as high transfection efficiency and biocompatibility, which are balanced against its limitations. Mechanism Principle The term magnetofection, currently trademarked by the company OZ Biosciences, combines the words magnetic and transfection. Magnetofection uses nucleic acids associated with magnetic nanoparticles. These molecular complexes are then concentrated and transported into cells using an applied magnetic field. Synthesis The magnetic nanoparticles are typically made from iron oxide, which is fully biodegradable, using methods such as coprecipitation or microemulsion. The nanoparticles are then combined with gene vectors (DNA, siRNA, ODN, virus, etc.). One method involves linking viral particles to magnetic particles using an avidin-biotin interaction. Viruses can also bind to the nanoparticles via hydrophobic interaction. Another synthesis method involves coating magnetic nanoparticles with cationic lipids or polymers via salt-induced aggregation. For example, nanoparticles may be conjugated with polyethylenimine (PEI), a positively charged polymer commonly used as a transfection agent. The PEI solution must have a high pH during synthesis to encourage high gene expression. The positively charged nanoparticles can then associate with negatively charged nucleic acids via electrostatic interaction. Cellular uptake Magnetic particles loaded with vectors are concentrated on the target cells by the influence of an external magnetic field. The cells then take up genetic material naturally via endocytosis and pinocytosis. Consequently, membrane architecture and structure stay intact, in contrast to other physical transfection methods such as electroporation or ge
https://en.wikipedia.org/wiki/Rolled%20oats
Rolled oats are a type of lightly processed whole-grain food. They are made from oat groats that have been dehusked and steamed, before being rolled into flat flakes under heavy rollers and then stabilized by being lightly toasted. Thick-rolled oats usually remain unbroken during processing, while thin-rolled oats often become fragmented. Rolled whole oats, without further processing, can be cooked into a porridge and eaten as old-fashioned oats or Scottish oats; when the oats are rolled thinner and steam-cooked more in the factory, they will later absorb water much more easily and cook faster into a porridge, and when processed this way are sometimes called "quick" or "instant" oats. Rolled oats are most often the main ingredient in granola and muesli. They can be further processed into a coarse powder, which breaks down to nearly a liquid consistency when boiled. Cooked oatmeal powder is often used as baby food. Process The oat, like other cereals, has a hard, inedible outer husk that must be removed before the grain can be eaten. After the outer husk (or chaff) has been removed from the still bran-covered oat grains, the remainder is called oat groats. Since the bran layer, though nutritious, makes the grains tougher to chew and contains an enzyme that can cause the oats to go rancid, raw oat groats are often further steam-treated to soften them for a quicker cooking time and to denature the enzymes for a longer shelf life. Steel-cut or pinhead oats Steel-cut oats (sometimes called "pinhead oats", especially if cut small) are oat groats that have been chopped by a sharp-bladed machine before any steaming, and thus retain bits of the bran layer. Preparation Rolled oats can be eaten without further heating or cooking, if they are soaked for 1–6 hours in water-based liquid, such as water, milk, or plant-based dairy substitutes. The required soaking duration depends on shape, size and pre-processing technique. Whole oat groats can be cooked as a breakfast ce
https://en.wikipedia.org/wiki/Mathematics%20Genealogy%20Project
The Mathematics Genealogy Project (MGP) is a web-based database for the academic genealogy of mathematicians. At last count, it contained information on 274,575 mathematical scientists who contributed to research-level mathematics. For a typical mathematician, the project entry includes graduation year, thesis title (in its Mathematics Subject Classification), alma mater, doctoral advisor, and doctoral students. Origin of the database The project grew out of founder Harry Coonce's desire to know the name of his advisor's advisor. Coonce was Professor of Mathematics at Minnesota State University, Mankato, at the time of the project's founding, and the project went online there in fall 1997. Coonce retired from Mankato in 1999, and in fall 2002 the university decided that it would no longer support the project. The project relocated at that time to North Dakota State University. Since 2003, the project has also operated under the auspices of the American Mathematical Society and in 2005 it received a grant from the Clay Mathematics Institute. Harry Coonce has been assisted by Mitchel T. Keller, Assistant Professor at Morningside College. Keller is currently the Managing Director of the project. Mission and scope The Mathematics Genealogy Project's mission statement reads: "Throughout this project when we use the word 'mathematics' or 'mathematician' we mean that word in a very inclusive sense. Thus, all relevant data from statistics, computer science, philosophy or operations research is welcome." Scope The genealogy information is obtained from sources such as Dissertation Abstracts International and Notices of the American Mathematical Society, but may be supplied by anyone via the project's website. The searchable database contains the name of the mathematician, university which awarded the degree, year when the degree was awarded, title of the dissertation, names of the advisor and second advisor, a flag of the country where the degree was awarded, a listing of doctoral students, and a cou
https://en.wikipedia.org/wiki/Dissection
Dissection (from Latin "to cut to pieces"; also called anatomization) is the dismembering of the body of a deceased animal or plant to study its anatomical structure. Autopsy is used in pathology and forensic medicine to determine the cause of death in humans. Less extensive dissection of plants and smaller animals preserved in a formaldehyde solution is typically carried out or demonstrated in biology and natural science classes in middle school and high school, while extensive dissections of cadavers of adults and children, both fresh and preserved, are carried out by medical students in medical schools as a part of the teaching in subjects such as anatomy, pathology and forensic medicine. Consequently, dissection is typically conducted in a morgue or in an anatomy lab. Dissection has been used for centuries to explore anatomy. Objections to the use of cadavers have led to the use of alternatives, including virtual dissection of computer models. In the field of surgery, the term "dissection" or "dissecting" refers more specifically to the practice of separating an anatomical structure (an organ, nerve or blood vessel) from its surrounding connective tissue in order to minimize unwanted damage during a surgical procedure. Overview Plant and animal bodies are dissected to analyze the structure and function of their components. Dissection is practised by students in courses of biology, botany, zoology, and veterinary science, and sometimes in arts studies. In medical schools, students dissect human cadavers to learn anatomy. Zoötomy is sometimes used to describe "dissection of an animal". Human dissection A key principle in the dissection of human cadavers (sometimes called androtomy) is the prevention of the transmission of human disease to the dissector. Prevention of transmission includes the wearing of protective gear, ensuring the environment is clean, careful dissection technique, and pre-dissection testing of specimens for the presence of HIV and hepatitis viruses. Specimens are dissected
https://en.wikipedia.org/wiki/Data%20acquisition
Data acquisition is the process of sampling signals that measure real-world physical conditions and converting the resulting samples into digital numeric values that can be manipulated by a computer. Data acquisition systems, abbreviated by the acronyms DAS, DAQ, or DAU, typically convert analog waveforms into digital values for processing. The components of data acquisition systems include: Sensors, to convert physical parameters to electrical signals. Signal conditioning circuitry, to convert sensor signals into a form that can be converted to digital values. Analog-to-digital converters, to convert conditioned sensor signals to digital values. Data acquisition applications are usually controlled by software programs developed using various general purpose programming languages such as Assembly, BASIC, C, C++, C#, Fortran, Java, LabVIEW, Lisp, Pascal, etc. Stand-alone data acquisition systems are often called data loggers. There are also open-source software packages providing all the necessary tools to acquire data from different, typically specific, hardware equipment. These tools come from the scientific community, where complex experiments require fast, flexible, and adaptable software. Those packages are usually custom-fit but more general DAQ packages like the Maximum Integrated Data Acquisition System can be easily tailored and are used in several physics experiments. History In 1963, IBM produced computers that specialized in data acquisition. These include the IBM 7700 Data Acquisition System, and its successor, the IBM 1800 Data Acquisition and Control System. These expensive specialized systems were surpassed in 1974 by general-purpose S-100 computers and data acquisition cards produced by Tecmar/Scientific Solutions Inc. In 1981 IBM introduced the IBM Personal Computer and Scientific Solutions introduced the first PC data acquisition products. Methodology Sources and systems Data acquisition begins with the physical phenomenon or physical prop
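As a rough illustration of the three components listed above (sensor, signal conditioning, analog-to-digital conversion), the following Python sketch simulates the chain with made-up values; the gain, offset, reference voltage and resolution are assumptions for the example, not taken from the article.

```python
import numpy as np

def condition(sensor_volts, gain=100.0, offset=2.5):
    """Signal conditioning: amplify a small sensor voltage into the ADC input range."""
    return gain * sensor_volts + offset

def adc(conditioned_volts, v_ref=5.0, n_bits=12):
    """Ideal ADC: quantize a 0..v_ref input into n_bits digital codes."""
    codes = np.round(conditioned_volts / v_ref * (2 ** n_bits - 1))
    return np.clip(codes, 0, 2 ** n_bits - 1).astype(int)

# A hypothetical millivolt-level sensor measuring a 5 Hz physical signal.
t = np.linspace(0, 1, 1000)                      # 1000 samples over one second
sensor_volts = 0.010 * np.sin(2 * np.pi * 5 * t)  # +/-10 mV sensor output
digital_values = adc(condition(sensor_volts))     # numeric values a computer can process
```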
https://en.wikipedia.org/wiki/Service%20Data%20Objects
Service Data Objects is a technology that allows heterogeneous data to be accessed in a uniform way. The SDO specification was originally developed in 2004 as a joint collaboration between Oracle (BEA) and IBM and approved by the Java Community Process in JSR 235. Version 2.0 of the specification was introduced in November 2005 as a key part of the Service Component Architecture. Relation to other technologies Originally, the technology was known as Web Data Objects, or WDO, and was shipped in IBM WebSphere Application Server 5.1 and IBM WebSphere Studio Application Developer 5.1.2. Other similar technologies are JDO, EMF, JAXB and ADO.NET. Design Service Data Objects denote the use of language-agnostic data structures that facilitate communication between structural tiers and various service-providing entities. They require the use of a tree structure with a root node and provide traversal mechanisms (breadth/depth-first) that allow client programs to navigate the elements. Objects can be static (fixed number of fields) or dynamic with a map-like structure allowing for unlimited fields. The specification defines meta-data for all fields and each object graph can also be provided with change summaries that can allow receiving programs to act more efficiently on them. Developers The specification is now being developed by IBM, Rogue Wave, Oracle, SAP, Siebel, Sybase, Xcalia, Software AG within the OASIS Member Section Open CSA since April 2007. Collaborative work and materials remain on the collaboration platform of Open SOA, an informal group of actors of the industry. Implementations The following SDO products are available: Rogue Wave Software HydraSDO Xcalia (for Java and .Net) Oracle (Data Service Integrator) IBM (Virtual XML Garden) IBM (WebSphere Process Server) There are open source implementations of SDO from: The Eclipse Persistence Services Project (EclipseLink) The Apache Tuscany project for Java and C++ The fcl-sdo library included with
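The specification itself targets Java (JSR 235), so the sketch below is only a language-neutral toy illustrating the concepts named above (a tree of data objects reachable from a root, dynamic map-like fields, and a change summary recording modifications); none of these class or method names come from the SDO API.

```python
# Illustrative only: a toy dynamic data-object tree with a change summary.
# This is NOT the SDO (JSR 235) API, just a sketch of the ideas described above.

class DataObject:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.properties = {}      # dynamic, map-like fields
        self.children = []

    def create_child(self, name):
        child = DataObject(name, parent=self)
        self.children.append(child)
        return child

    def set(self, prop, value, change_summary=None):
        old = self.properties.get(prop)
        self.properties[prop] = value
        if change_summary is not None:
            change_summary.append((self.name, prop, old, value))

    def walk(self):
        """Depth-first traversal starting at this node."""
        yield self
        for child in self.children:
            yield from child.walk()

changes = []                                   # the "change summary"
root = DataObject("customer")
order = root.create_child("order")
order.set("total", 99.95, change_summary=changes)
print([node.name for node in root.walk()])     # ['customer', 'order']
print(changes)                                 # [('order', 'total', None, 99.95)]
```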
https://en.wikipedia.org/wiki/Mathematics%20and%20architecture
Mathematics and architecture are related, since, as with other arts, architects use mathematics for several reasons. Apart from the mathematics needed when engineering buildings, architects use geometry: to define the spatial form of a building; from the Pythagoreans of the sixth century BC onwards, to create forms considered harmonious, and thus to lay out buildings and their surroundings according to mathematical, aesthetic and sometimes religious principles; to decorate buildings with mathematical objects such as tessellations; and to meet environmental goals, such as to minimise wind speeds around the bases of tall buildings. In ancient Egypt, ancient Greece, India, and the Islamic world, buildings including pyramids, temples, mosques, palaces and mausoleums were laid out with specific proportions for religious reasons. In Islamic architecture, geometric shapes and geometric tiling patterns are used to decorate buildings, both inside and outside. Some Hindu temples have a fractal-like structure where parts resemble the whole, conveying a message about the infinite in Hindu cosmology. In Chinese architecture, the tulou of Fujian province are circular, communal defensive structures. In the twenty-first century, mathematical ornamentation is again being used to cover public buildings. In Renaissance architecture, symmetry and proportion were deliberately emphasized by architects such as Leon Battista Alberti, Sebastiano Serlio and Andrea Palladio, influenced by Vitruvius's De architectura from ancient Rome and the arithmetic of the Pythagoreans from ancient Greece. At the end of the nineteenth century, Vladimir Shukhov in Russia and Antoni Gaudí in Barcelona pioneered the use of hyperboloid structures; in the Sagrada Família, Gaudí also incorporated hyperbolic paraboloids, tessellations, catenary arches, catenoids, helicoids, and ruled surfaces. In the twentieth century, styles such as modern architecture and Deconstructivism explored different geometries to achi
https://en.wikipedia.org/wiki/Gracility
Gracility is slenderness, the condition of being gracile, which means slender. It derives from the Latin adjective gracilis (masculine or feminine), or gracile (neuter), which in either form means slender, and when transferred for example to discourse takes the sense of "without ornament", "simple" or various similar connotations. In Glossary of Botanic Terms, B. D. Jackson speaks dismissively of an entry in earlier dictionary of A. A. Crozier as follows: "Gracilis (Lat.), slender. Crozier has the needless word 'gracile'". However, his objection would be hard to sustain in current usage; apart from the fact that gracile is a natural and convenient term, it is hardly a neologism. The Shorter Oxford English Dictionary gives the source date for that usage as 1623 and indicates the word is misused (through association with grace) for "gracefully slender". This misuse is unfortunate at least, because the terms gracile and grace are unrelated: the etymological root of grace is the Latin word gratia from gratus, meaning 'pleasing', and has nothing to do with slenderness or thinness. In biology In biology, the term is in common use, whether as English or Latin: The term gracile—and its opposite, robust—occur in discussion of the morphology of various hominids for example. The gracile fasciculus is a particular bundle of axon fibres in the spinal cord The gracile nucleus is a particular structure of neurons in the medulla oblongata "GRACILE syndrome", is associated with a BCS1L mutation In biological taxonomy, gracile is the specific name or specific epithet for various species. Where the gender is appropriate, the form is gracilis. Examples include: Campylobacter gracilis, a species of bacterium implicated in foodborne disease Ctenochasma gracile, a late Jurassic pterosaur Eriophorum gracile, a species of sedge, Cyperaceae Euglena gracilis, a unicellular flagellate protist Hydrophis gracilis, a species of sea snakes Melampodium gracile, a flowering plant s
https://en.wikipedia.org/wiki/List%20of%20laser%20articles
This is a list of laser topics. A 3D printing, additive manufacturing Abnormal reflection Above-threshold ionization Absorption spectroscopy Accelerator physics Acoustic microscopy Acousto-optic deflector Acousto-optic modulator Acousto-optical spectrometer Acousto-optics Active laser medium Active optics Advanced Precision Kill Weapon System Advanced Tactical Laser Afocal system Airborne laser Airborne wind turbine Airy beam ALKA All gas-phase iodine laser Ambient ionization Amplified spontaneous emission Analytical chemistry Aneutronic fusion Antiproton Decelerator Apache Arrowhead Apache Point Observatory Lunar Laser-ranging Operation Arago spot Argon fluoride laser Argus laser Asterix IV laser Astrophysical maser Atmospheric-pressure laser ionization Atom interferometer Atom laser Atom probe Atomic clock Atomic coherence Atomic fountain Atomic line filter Atomic ratio Atomic spectroscopy Atomic vapor laser isotope separation Audience scanning Autler–Townes effect Autologous patient-specific tumor antigen response Automated guided vehicle Autonomous cruise control system Avalanche photodiode Axicon B Babinet's principle Ballistic photon Bandwidth-limited pulse Bandwidth (signal processing) Barcode reader Basir Beam-powered propulsion Beam diameter Beam dump Beam expander Beam homogenizer Beam parameter product Beamz Big Bang Observer Biophotonics Biosensor Black silicon Blood irradiation therapy Blu-ray Disc Blue laser Boeing Laser Avenger Boeing NC-135 Boeing YAL-1 Bubblegram C CLidar CALIPSO, Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations Calligraphic projection Calutron Carbon dioxide laser Carrier generation and recombination Catastrophic optical damage Cauterization Cavity ring-down laser absorption spectroscopy Ceilometer Chaos in optical systems Chemical laser Chemical oxygen iodine laser Chirped mirror Chirped pulse amplification Clementine (sp
https://en.wikipedia.org/wiki/Memory%20disambiguation
Memory disambiguation is a set of techniques employed by high-performance out-of-order execution microprocessors that execute memory access instructions (loads and stores) out of program order. The mechanisms for performing memory disambiguation, implemented using digital logic inside the microprocessor core, detect true dependencies between memory operations at execution time and allow the processor to recover when a dependence has been violated. They also eliminate spurious memory dependencies and allow for greater instruction-level parallelism by allowing safe out-of-order execution of loads and stores. Background Dependencies When attempting to execute instructions out of order, a microprocessor must respect true dependencies between instructions. For example, consider a simple true dependence: 1: add $1, $2, $3 # R1 <= R2 + R3 2: add $5, $1, $4 # R5 <= R1 + R4 (dependent on 1) In this example, the add instruction on line 2 is dependent on the add instruction on line 1 because the register R1 is a source operand of the addition operation on line 2. The add on line 2 cannot execute until the add on line 1 completes. In this case, the dependence is static and easily determined by a microprocessor, because the sources and destinations are registers. The destination register of the add instruction on line 1 (R1) is part of the instruction encoding, and so can be determined by the microprocessor early on, during the decode stage of the pipeline. Similarly, the source registers of the add instruction on line 2 (R1 and R4) are also encoded into the instruction itself and are determined in decode. To respect this true dependence, the microprocessor's scheduler logic will issue these instructions in the correct order (instruction 1 first, followed by instruction 2) so that the results of 1 are available when instruction 2 needs them. Complications arise when the dependence is not statically determinable. Such non-static dependencies arise with mem
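The register example above can be decided at decode time; the sketch below is a toy model (not any real microarchitecture) of why memory dependencies cannot be, since addresses only become known at execution time and a load must be checked against older in-flight stores before it can safely execute out of order.

```python
# Toy illustration only: a load is compared against older, in-flight stores.
# Real processors do this with store queues and CAM logic, not Python lists.

def may_bypass(load_addr, older_stores):
    """Return the youngest conflicting older store's value, or a status string.

    older_stores: list of (addr_or_None, value) for stores earlier in program
    order; addr is None if that store's address has not been computed yet.
    """
    for addr, value in reversed(older_stores):
        if addr is None:
            return "unknown"      # ambiguous: must wait or speculate
        if addr == load_addr:
            return value          # true dependence: forward the store's data
    return "no conflict"          # safe to execute the load out of order

# store r1 -> [r2]; load r3 <- [r4]  -- whether they alias depends on r2 and r4.
print(may_bypass(0x1000, [(0x1000, 42)]))  # 42: the load must see the store's value
print(may_bypass(0x2000, [(0x1000, 42)]))  # 'no conflict': reordering is safe
print(may_bypass(0x2000, [(None, 42)]))    # 'unknown': store address not yet resolved
```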
https://en.wikipedia.org/wiki/Sister%20group
In phylogenetics, a sister group or sister taxon, also called an adelphotaxon, comprises the closest relative(s) of another given unit in an evolutionary tree. Definition The expression is most easily illustrated by a cladogram: Taxon A and taxon B are sister groups to each other. Taxa A and B, together with any other extant or extinct descendants of their most recent common ancestor (MRCA), form a monophyletic group, the clade AB. Clade AB and taxon C are also sister groups. Taxa A, B, and C, together with all other descendants of their MRCA form the clade ABC. The whole clade ABC is itself a subtree of a larger tree which offers yet more sister group relationships, both among the leaves and among larger, more deeply rooted clades. The tree structure shown connects through its root to the rest of the universal tree of life. In cladistic standards, taxa A, B, and C may represent specimens, species, genera, or any other taxonomic units. If A and B are at the same taxonomic level, terminology such as sister species or sister genera can be used. Example The term sister group is used in phylogenetic analysis, however, only groups identified in the analysis are labeled as "sister groups". An example is birds, whose commonly cited living sister group is the crocodiles, but that is true only when discussing extant organisms; when other, extinct groups are considered, the relationship between birds and crocodiles appears distant. Although the bird family tree is rooted in the dinosaurs, there were a number of other, earlier groups, such as the pterosaurs, that branched off of the line leading to the dinosaurs after the last common ancestor of birds and crocodiles. The term sister group must thus be seen as a relative term, with the caveat that the sister group is only the closest relative among the groups/species/specimens that are included in the analysis. Notes
https://en.wikipedia.org/wiki/List%20of%20Fourier-related%20transforms
This is a list of linear transformations of functions related to Fourier analysis. Such transformations map a function to a set of coefficients of basis functions, where the basis functions are sinusoidal and are therefore strongly localized in the frequency spectrum. (These transforms are generally designed to be invertible.) In the case of the Fourier transform, each basis function corresponds to a single frequency component. Continuous transforms Applied to functions of continuous arguments, Fourier-related transforms include: Two-sided Laplace transform Mellin transform, another closely related integral transform Laplace transform Fourier transform, with special cases: Fourier series When the input function/waveform is periodic, the Fourier transform output is a Dirac comb function, modulated by a discrete sequence of finite-valued coefficients that are complex-valued in general. These are called Fourier series coefficients. The term Fourier series actually refers to the inverse Fourier transform, which is a sum of sinusoids at discrete frequencies, weighted by the Fourier series coefficients. When the non-zero portion of the input function has finite duration, the Fourier transform is continuous and finite-valued. But a discrete subset of its values is sufficient to reconstruct/represent the portion that was analyzed. The same discrete set is obtained by treating the duration of the segment as one period of a periodic function and computing the Fourier series coefficients. Sine and cosine transforms: When the input function has odd or even symmetry around the origin, the Fourier transform reduces to a sine or cosine transform. Hartley transform Short-time Fourier transform (or short-term Fourier transform) (STFT) Rectangular mask short-time Fourier transform Chirplet transform Fractional Fourier transform (FRFT) Hankel transform: related to the Fourier Transform of radial functions. Fourier–Bros–Iagolnitzer transform Linear canonical t
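As a small numerical illustration of the Fourier series case described above (an assumed example, not from the article): sampling one period of a periodic waveform and applying the DFT recovers its Fourier series coefficients.

```python
import numpy as np

# x(t) = 3*cos(2*pi*2*t) has Fourier series coefficients 1.5 at k = +2 and k = -2.
# The DFT of one period, divided by the number of samples, reproduces them.
N = 64
t = np.arange(N) / N                    # one period, T = 1
x = 3.0 * np.cos(2 * np.pi * 2 * t)
c = np.fft.fft(x) / N                   # Fourier series coefficients c_k
print(np.round(c[2], 6), np.round(c[-2], 6))      # both approximately 1.5
print(np.max(np.abs(np.delete(c, [2, N - 2]))))   # all other bins are ~0
```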
https://en.wikipedia.org/wiki/N-philes
N-philes are a group of radical molecules that are specifically attracted to C=N bonds, often defying the selectivity rules of electrophilic attack. N-philes can often masquerade as electrophiles; acyl radicals, which interact with the pi electrons of aryl groups, are excellent examples.
https://en.wikipedia.org/wiki/Trustworthy%20computing
The term Trustworthy Computing (TwC) has been applied to computing systems that are inherently secure, available, and reliable. It is particularly associated with the Microsoft initiative of the same name, launched in 2002. History Until 1995, there were restrictions on commercial traffic over the Internet. On May 26, 1995, Bill Gates sent the "Internet Tidal Wave" memorandum to Microsoft executives assigning "...the Internet this highest level of importance...", but Microsoft's Windows 95 was released without a web browser as Microsoft had not yet developed one. The success of the web had caught them by surprise, but by mid-1995 they were testing their own web server, and on August 24, 1995, launched a major online service, MSN. The National Research Council recognized that the rise of the Internet simultaneously increased societal reliance on computer systems while increasing the vulnerability of such systems to failure and produced an important report in 1999, "Trust in Cyberspace". This report reviews the cost of un-trustworthy systems and identifies actions required for improvement. Microsoft and Trustworthy Computing Bill Gates launched Microsoft's "Trustworthy Computing" initiative with a January 15, 2002 memo, referencing an internal whitepaper by Microsoft CTO and Senior Vice President Craig Mundie. The move was reportedly prompted by the fact that they "...had been under fire from some of its larger customers–government agencies, financial companies and others–about the security problems in Windows, issues that were being brought front and center by a series of self-replicating worms and embarrassing attacks", such as Code Red, Nimda, Klez and Slammer. Four areas were identified as the initiative's key areas: Security, Privacy, Reliability, and Business Integrity, and despite some initial scepticism, at its 10-year anniversary it was generally accepted as having "...made a positive impact on the industry...". The Trustworthy Computing campaign was t
https://en.wikipedia.org/wiki/Logic%20optimization
Logic optimization is a process of finding an equivalent representation of the specified logic circuit under one or more specified constraints. This process is a part of a logic synthesis applied in digital electronics and integrated circuit design. Generally, the circuit is constrained to a minimum chip area meeting a predefined response delay. The goal of logic optimization of a given circuit is to obtain the smallest logic circuit that evaluates to the same values as the original one. Usually, the smaller circuit with the same function is cheaper, takes less space, consumes less power, has shorter latency, and minimizes risks of unexpected cross-talk, hazard of delayed signal processing, and other issues present at the nano-scale level of metallic structures on an integrated circuit. In terms of Boolean algebra, the optimization of a complex boolean expression is a process of finding a simpler one, which would upon evaluation ultimately produce the same results as the original one. Motivation The problem with having a complicated circuit (i.e. one with many elements, such as logic gates) is that each element takes up physical space and costs time and money to produce. Circuit minimization may be one form of logic optimization used to reduce the area of complex logic in integrated circuits. With the advent of logic synthesis, one of the biggest challenges faced by the electronic design automation (EDA) industry was to find the most simple circuit representation of the given design description. While two-level logic optimization had long existed in the form of the Quine–McCluskey algorithm, later followed by the Espresso heuristic logic minimizer, the rapidly improving chip densities, and the wide adoption of Hardware description languages for circuit description, formalized the logic optimization domain as it exists today, including Logic Friday (graphical interface), Minilog, and ESPRESSO-IISOJS (many-valued logic). Methods The methods of logic circuit sim
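As a concrete illustration of two-level minimization, here is a short SymPy sketch; SymPy is used only because it bundles an exact Quine-McCluskey-style minimizer, and it is not one of the tools named above. The truth-table values are arbitrary examples.

```python
from sympy import symbols
from sympy.logic import SOPform, simplify_logic

w, x, y, z = symbols('w x y z')

# A function given by its minterms (truth-table rows where f = 1), plus
# don't-care rows the minimizer may assign either way.
minterms = [[0, 0, 0, 1], [0, 0, 1, 1], [0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 1, 1]]
dontcares = [[0, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 1]]
print(SOPform([w, x, y, z], minterms, dontcares))   # minimal sum-of-products form

# The same idea applied to an already-written expression with a redundant term:
expr = (x & ~y) | (x & y)
print(simplify_logic(expr))                          # x
```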
https://en.wikipedia.org/wiki/List%20of%20spherical%20symmetry%20groups
Finite spherical symmetry groups are also called point groups in three dimensions. There are five fundamental symmetry classes which have triangular fundamental domains: dihedral, cyclic, tetrahedral, octahedral, and icosahedral symmetry. This article lists the groups by Schoenflies notation, Coxeter notation, orbifold notation, and order. John Conway uses a variation of the Schoenflies notation, based on the groups' quaternion algebraic structure, labeled by one or two upper case letters, and whole number subscripts. The group order is defined as the subscript, unless the order is doubled for symbols with a plus or minus, "±", prefix, which implies a central inversion. Hermann–Mauguin notation (International notation) is also given. The crystallography groups, 32 in total, are a subset with element orders 2, 3, 4 and 6. Involutional symmetry There are four involutional groups: no symmetry (C1), reflection symmetry (Cs), 2-fold rotational symmetry (C2), and central point symmetry (Ci). Cyclic symmetry There are four infinite cyclic symmetry families, with n = 2 or higher. (n may be 1 as a special case as no symmetry) Dihedral symmetry There are three infinite dihedral symmetry families, with n = 2 or higher (n may be 1 as a special case). Polyhedral symmetry There are three types of polyhedral symmetry: tetrahedral symmetry, octahedral symmetry, and icosahedral symmetry, named after the triangle-faced regular polyhedra with these symmetries. Continuous symmetries All of the discrete point symmetries are subgroups of certain continuous symmetries. They can be classified as products of orthogonal groups O(n) or special orthogonal groups SO(n). O(1) is a single orthogonal reflection, dihedral symmetry order 2, Dih1. SO(1) is just the identity. Half turns, C2, are needed to complete. See also Crystallographic point group Triangle group List of planar symmetry groups Point groups in two dimensions
https://en.wikipedia.org/wiki/ONAP
ONAP (Open Network Automation Platform), is an open-source, orchestration and automation framework. It is hosted by The Linux Foundation. History On February 23, 2017, ONAP was announced as a result of a merger of the OpenECOMP and Open-Orchestrator (Open-O) projects. The goal of the project is to develop a widely used platform for orchestrating and automating physical and virtual network elements, with full lifecycle management. ONAP was formed as a merger of OpenECOMP, the open source version of AT&T's ECOMP project, and the Open-Orchestrator project, a project begun under the aegis of the Linux Foundation with China Mobile, Huawei and ZTE as lead contributors. The merger brought together both sets of source code and their developer communities, who then elaborated a common architecture for the new project. The first release of the combined ONAP architecture, code named "Amsterdam", was announced on November 20, 2017. The next release ("Beijing") was released on June 12, 2018. As of January, 2018, ONAP became a project within the LF Networking Fund, which consolidated membership across multiple projects into a common governance structure. Most ONAP members became members of the new LF Networking fund. Overview ONAP provides a platform for real-time, policy-driven orchestration and automation of physical and virtual network functions that will enable software, network, IT and cloud providers and developers to rapidly automate new services and support complete lifecycle management. ONAP incorporates or collaborates with other open-source projects, including OpenDaylight, FD.io, OPNFV and others. Contributing organizations include AT&T, Samsung, Nokia, Ericsson, Orange, Huawei, Intel, IBM and more. Architecture
https://en.wikipedia.org/wiki/Symbolic%20language%20%28programming%29
In computer science, a symbolic language is a language that uses characters or symbols to represent concepts, such as mathematical operations and the entities (or operands) on which these operations are performed. Modern programming languages use symbols to represent concepts and/or data and are therefore examples of symbolic languages. Some programming languages (such as Lisp and Mathematica) make it easy to represent higher-level abstractions as expressions in the language, enabling symbolic programming. See also Mathematical notation Notation (general) Programming language specification Symbol table Symbolic language (other)
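A brief sketch of what "representing higher-level abstractions as expressions" can look like; the article names Lisp and Mathematica, and Python with SymPy is used here only as an analogous illustration.

```python
# Symbolic programming in miniature: expressions are data structures that can
# be built, inspected, and transformed, not just evaluated to numbers.
from sympy import symbols, diff, expand

x, y = symbols('x y')
expr = (x + y) ** 2        # a symbolic expression, not a numeric value
print(expand(expr))        # x**2 + 2*x*y + y**2
print(diff(expr, x))       # the derivative, itself another symbolic expression
print(expr.args)           # (x + y, 2): the expression tree is inspectable
```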
https://en.wikipedia.org/wiki/Cross-recurrence%20quantification
Cross-recurrence quantification (CRQ) is a non-linear method that quantifies how similarly two observed data series unfold over time. CRQ produces measures reflecting coordination, such as how often two data series have similar values or reflect similar system states (called percentage recurrence, or %REC), among other measures.
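A minimal sketch of the %REC idea, assuming the simplest possible recurrence criterion (two values count as recurrent when they lie within a chosen radius); real CRQ implementations typically add phase-space embedding, normalization, and further measures, and the function and parameter names here are illustrative.

```python
import numpy as np

def percent_recurrence(x, y, radius):
    """%REC: percentage of (i, j) pairs where series x and y lie within `radius`.

    Minimal sketch of cross-recurrence; no embedding or normalization applied.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Cross-recurrence matrix: R[i, j] is True when |x[i] - y[j]| <= radius.
    R = np.abs(x[:, None] - y[None, :]) <= radius
    return 100.0 * R.mean()

t = np.linspace(0, 4 * np.pi, 200)
a = np.sin(t)
b = np.sin(t + 0.3)                                   # a slightly lagged copy
print(percent_recurrence(a, b, radius=0.1))            # higher %REC: coordinated series
print(percent_recurrence(a, np.random.uniform(-1, 1, 200), radius=0.1))  # lower %REC
```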
https://en.wikipedia.org/wiki/Signal%20analyzer
A signal analyzer is an instrument that measures the magnitude and phase of the input signal at a single frequency within the IF bandwidth of the instrument. It employs digital techniques to extract useful information that is carried by an electrical signal. In common usage the term is related to both spectrum analyzers and vector signal analyzers. While spectrum analyzers measure the amplitude or magnitude of signals, a signal analyzer with appropriate software or programming can measure any aspect of the signal such as modulation. Today’s high-frequency signal analyzers achieve good performance by optimizing both the analog front end and the digital back end. Theory of operation Modern signal analyzers use a superheterodyne receiver to downconvert a portion of the signal spectrum for analysis. As shown in the figure to the right, the signal is first converted to an intermediate frequency and then filtered in order to band-limit the signal and prevent aliasing. The downconversion can operate in a swept-tuned mode similar to a traditional spectrum analyzer, or in a fixed-tuned mode. In the fixed-tuned mode the range of frequencies downconverted does not change and the downconverter output is then digitized for further analysis. The digitizing process typically involves in-phase/quadrature (I/Q) or complex sampling so that all characteristics of the signal are preserved, as opposed to the magnitude-only processing of a spectrum analyzer. The sampling rate of the digitizing process may be varied in relation to the frequency span under consideration or (more typically) the signal may be digitally resampled. Typical usage Signal analyzers can perform the operations of both spectrum analyzers and vector signal analyzers. A signal analyzer can be viewed as a measurement platform, with operations such as spectrum analysis (including phase noise, power, and distortion) and vector signal analysis (including demodulation or modulation quality analysis) performed as m
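The fixed-tuned I/Q path described above can be sketched numerically as follows; the centre frequency, span, and filter order are arbitrary assumptions, and a real instrument performs the first stages in analog hardware before digitizing.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 1_000_000                       # sample rate, 1 MS/s
t = np.arange(0, 0.01, 1 / fs)
f_center = 200_000                   # analyzer tuned to 200 kHz
signal = np.cos(2 * np.pi * (f_center + 5_000) * t + 0.7)  # tone 5 kHz above center

# Mix to baseband with a complex local oscillator: unlike magnitude-only
# processing, the complex (I/Q) samples preserve both amplitude and phase.
baseband = signal * np.exp(-2j * np.pi * f_center * t)

# Band-limit to the analysis span (50 kHz here) to prevent aliasing before
# any further resampling or demodulation analysis.
b, a = butter(5, 50_000 / (fs / 2))
iq = lfilter(b, a, baseband)         # complex I/Q samples

print(np.round(np.abs(iq[len(iq) // 2]), 3))   # ~0.5: magnitude of the complex envelope
```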
https://en.wikipedia.org/wiki/Ambiguity%20function
In pulsed radar and sonar signal processing, an ambiguity function is a two-dimensional function of propagation delay $\tau$ and Doppler frequency $f$, $\chi(\tau, f)$. It represents the distortion of a returned pulse due to the receiver matched filter (commonly, but not exclusively, used in pulse compression radar) of the return from a moving target. The ambiguity function is defined by the properties of the pulse and of the filter, and not any particular target scenario. Many definitions of the ambiguity function exist; some are restricted to narrowband signals and others are suitable to describe the delay and Doppler relationship of wideband signals. Often the definition of the ambiguity function is given as the magnitude squared of other definitions (Weiss). For a given complex baseband pulse $s(t)$, the narrowband ambiguity function is given by $\chi(\tau, f) = \int_{-\infty}^{\infty} s(t)\, s^*(t - \tau)\, e^{i 2 \pi f t}\, dt$, where $s^*$ denotes the complex conjugate of $s$ and $i$ is the imaginary unit. Note that for zero Doppler shift ($f = 0$), this reduces to the autocorrelation of $s(t)$. A more concise way of representing the ambiguity function consists of examining the one-dimensional zero-delay and zero-Doppler "cuts"; that is, $\chi(0, f)$ and $\chi(\tau, 0)$, respectively. The matched filter output as a function of time (the signal one would observe in a radar system) is a Doppler cut, with the constant frequency given by the target's Doppler shift: $\chi(\tau, f_D)$. Background and motivation Pulse-Doppler radar equipment sends out a series of radio frequency pulses. Each pulse has a certain shape (waveform): how long the pulse is, what its frequency is, whether the frequency changes during the pulse, and so on. If the waves reflect off a single object, the detector will see a signal which, in the simplest case, is a copy of the original pulse but delayed by a certain time $\tau$ (related to the object's distance) and shifted by a certain frequency $f_D$ (related to the object's velocity, the Doppler shift). If the original emitted pulse waveform is $s(t)$, then the detected signal (neglecting noise, attenuation, and distortion, and wideband correctio
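A small numerical sketch of the definition above (illustrative only): it evaluates the discretized integral directly, uses a circular shift as a shortcut for the delay, and in practice would be replaced by an FFT-based evaluation.

```python
import numpy as np

def ambiguity(s, fs, delays, dopplers):
    """Discrete approximation of chi(tau, f) = integral s(t) s*(t - tau) e^{i 2 pi f t} dt.

    s: complex baseband pulse samples, fs: sample rate (Hz),
    delays: lags in samples, dopplers: Doppler shifts in Hz.
    """
    t = np.arange(len(s)) / fs
    chi = np.zeros((len(delays), len(dopplers)), dtype=complex)
    for i, d in enumerate(delays):
        shifted = np.roll(np.conj(s), d)        # s*(t - tau), circular shift for brevity
        for j, f in enumerate(dopplers):
            chi[i, j] = np.sum(s * shifted * np.exp(2j * np.pi * f * t)) / fs
    return chi

fs = 1e6
t = np.arange(0, 100e-6, 1 / fs)
pulse = np.exp(2j * np.pi * 1e8 * t ** 2)       # a linear FM (chirp) pulse
chi = ambiguity(pulse, fs, delays=range(-20, 21), dopplers=np.linspace(-2e4, 2e4, 41))
print(np.abs(chi).max())                        # |chi| peaks at zero delay, zero Doppler
```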
https://en.wikipedia.org/wiki/User-in-the-loop
User-in-the-Loop (UIL) refers to the notion that a technology (e.g., a network) can improve a performance objective by engaging its human users (Layer 8). The idea can be applied in various technological fields. UIL assumes that human users of a network are among the smartest but also most unpredictable units of that network. Furthermore, human users often have a certain set of (input) values that they sense (mostly by observation, but acoustic or haptic feedback is also imaginable: imagine a gas pedal in a car giving some resistance, as with a speed-limiting assistant). Both elements, smart decision-making and observed values, can help towards improving the bigger objective. The input values are meant to encourage/discourage human users to behave in certain ways that improve the overall performance of the system. One example of a historic implementation related to UIL appeared in electric power networks, where a price chart is introduced to users of electrical power. This price chart differentiates the values of electricity based on off-peak, mid-peak and on-peak periods, for instance. Faced with a non-homogeneous pattern of pricing, human users respond by changing their power consumption accordingly, which eventually leads to an overall improvement in access to electrical power (reduced peak-hour consumption). Recently, UIL has also been introduced for wireless telecommunications (cellular networks). Wireless resources, including bandwidth (frequency), are increasingly scarce, and while current demand on wireless networks is below the supply most of the time (the potential capacity of the wireless links given technology limitations), the rapid and exponential increase in demand will render wireless access an increasingly expensive resource in a matter of a few years. While the usual technological responses to this prospect, such as innovative new generations of cellular systems, more efficient resource allocations, cognitive radio and machine learning, are certa
https://en.wikipedia.org/wiki/List%20of%20uniform%20polyhedra
In geometry, a uniform polyhedron is a polyhedron which has regular polygons as faces and is vertex-transitive (transitive on its vertices, isogonal, i.e. there is an isometry mapping any vertex onto any other). It follows that all vertices are congruent, and the polyhedron has a high degree of reflectional and rotational symmetry. Uniform polyhedra can be divided between convex forms with convex regular polygon faces and star forms. Star forms have either regular star polygon faces or vertex figures or both. This list includes these: all 75 nonprismatic uniform polyhedra; a few representatives of the infinite sets of prisms and antiprisms; one degenerate polyhedron, Skilling's figure with overlapping edges. It was proven in that there are only 75 uniform polyhedra other than the infinite families of prisms and antiprisms. John Skilling discovered an overlooked degenerate example, by relaxing the condition that only two faces may meet at an edge. This is a degenerate uniform polyhedron rather than a uniform polyhedron, because some pairs of edges coincide. Not included are: The uniform polyhedron compounds. 40 potential uniform polyhedra with degenerate vertex figures which have overlapping edges (not counted by Coxeter); The uniform tilings (infinite polyhedra) 11 Euclidean convex uniform tilings; 28 Euclidean nonconvex or apeirogonal uniform tilings; Infinite number of uniform tilings in hyperbolic plane. Any polygons or 4-polytopes Indexing Four numbering schemes for the uniform polyhedra are in common use, distinguished by letters: [C] Coxeter et al., 1954, showed the convex forms as figures 15 through 32; three prismatic forms, figures 33–35; and the nonconvex forms, figures 36–92. [W] Wenninger, 1974, has 119 figures: 1–5 for the Platonic solids, 6–18 for the Archimedean solids, 19–66 for stellated forms including the 4 regular nonconvex polyhedra, and ended with 67–119 for the nonconvex uniform polyhedra. [K] Kaleido, 1993: The 80 figure
https://en.wikipedia.org/wiki/Abstract%20nonsense
In mathematics, abstract nonsense, general abstract nonsense, generalized abstract nonsense, and general nonsense are nonderogatory terms used by mathematicians to describe long, theoretical parts of a proof they skip over when readers are expected to be familiar with them. These terms are mainly used for abstract methods related to category theory and homological algebra. More generally, "abstract nonsense" may refer to a proof that relies on category-theoretic methods, or even to the study of category theory itself. Background Roughly speaking, category theory is the study of the general form, that is, categories of mathematical theories, without regard to their content. As a result, mathematical proofs that rely on category-theoretic ideas often seem out-of-context, somewhat akin to a non sequitur. Authors sometimes dub these proofs "abstract nonsense" as a light-hearted way of alerting readers to their abstract nature. Labeling an argument "abstract nonsense" is usually not intended to be derogatory, and is instead used jokingly, in a self-deprecating way, affectionately, or even as a compliment to the generality of the argument. Certain ideas and constructions in mathematics share a uniformity throughout many domains, unified by category theory. Typical methods include the use of classifying spaces and universal properties, use of the Yoneda lemma, natural transformations between functors, and diagram chasing. When an audience can be assumed to be familiar with the general form of such arguments, mathematicians will use the expression "Such and such is true by abstract nonsense" rather than provide an elaborate explanation of particulars. For example, one might say that "By abstract nonsense, products are unique up to isomorphism when they exist", instead of arguing about how these isomorphisms can be derived from the universal property that defines the product. This allows one to skip proof details that can be considered trivial or not providing much insi
https://en.wikipedia.org/wiki/PC/104
PC/104 (or PC104) is a family of embedded computer standards which define both form factors and computer buses by the PC/104 Consortium. Its name derives from the 104 pins on the interboard connector (ISA) in the original PC/104 specification and has been retained in subsequent revisions, despite changes to connectors. PC/104 is intended for specialized environments where a small, rugged computer system is required. The standard is modular, and allows consumers to stack together boards from a variety of COTS manufacturers to produce a customized embedded system. The original PC/104 form factor is somewhat smaller than a desktop PC motherboard at . Unlike other popular computer form factors such as ATX, which rely on a motherboard or backplane, PC/104 boards are stacked on top of each other like building blocks. The PC/104 specification defines four mounting holes at the corners of each module, which allow the boards to be fastened to each other using standoffs. The stackable bus connectors and use of standoffs provides a more rugged mounting than slot boards found in desktop PCs. The compact board size further contributes to the ruggedness of the form factor by reducing the possibility of PCB flexing under shock and vibration. A typical PC/104 system (commonly referred to as a "stack") will include a CPU board, power supply board, and one or more peripheral boards, such as a data acquisition module, GPS receiver, or Wireless LAN controller. A wide array of peripheral boards are available from various vendors. Users may design a stack that incorporates boards from multiple vendors. The overall height, weight, and power consumption of the stack can vary depending on the number of boards that are used. PC/104 is sometimes referred to as a "stackable PC", as most of the architecture derives from the desktop PC. The majority of PC/104 CPU boards are x86 compatible and include standard PC interfaces such as Serial Ports, USB, Ethernet, and VGA. A x86 PC/104 s
https://en.wikipedia.org/wiki/Error%20concealment
Error concealment is a technique used in signal processing that aims to minimize the deterioration of signals caused by missing data, called packet loss. A signal is a message sent from a transmitter to a receiver in multiple small packets. Packet loss occurs when these packets are misdirected, delayed, resequenced, or corrupted. Receiver-Based Techniques When error recovery occurs at the receiving end of the signal, it is receiver-based. These techniques focus on correcting corrupted or missing data. Waveform substitution Preliminary attempts at receiver-based error concealment involved packet repetition, replacing lost packets with copies of previously received packets. This function is computationally simple and is performed by a device on the receiver end called a "drop-out compensator". Zero Insertion When this technique is used, if a packet is lost, its entries are replaced with 0s. Interpolation Interpolation involves making educated guesses about the nature of a missing packet. For example, by following speech patterns in audio or faces in video. Buffer Data buffers are used for temporarily storing data while waiting for delayed packets to arrive. They are common in internet browser loading bars and video applications, like YouTube. Transmitter-Based Techniques Rather than attempting to recover lost packets, other techniques involve anticipating data loss, manipulating the data prior to transmission. Retransmission The simplest transmitter-based technique is retransmission, sending the message multiple times. Although this idea is simple, because of the extra time required to send multiple signals, this technique is incapable of supporting real-time applications. Packet Repetition Packet repetition, also called forward error correction (FEC), adds redundant data, which the receiver can use to recover lost packets. This minimizes loss, but increases the size of the packet. Interleaving Interleaving involves scrambling the data before transmi
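A toy sketch of the receiver-based techniques described above, applied to a stream of equal-sized packets; the packet size, test signal, and function names are illustrative assumptions, not part of any standard.

```python
import numpy as np

def conceal(packets, lost_index, method="interpolate"):
    """Return a copy of `packets` with the lost packet replaced by a concealment."""
    out = [np.array(p, dtype=float) for p in packets]
    if method == "zero":
        out[lost_index] = np.zeros_like(out[lost_index])     # zero insertion
    elif method == "repeat":
        out[lost_index] = out[lost_index - 1].copy()         # packet repetition
    elif method == "interpolate":
        # Educated guess: average the neighbouring packets sample by sample.
        out[lost_index] = 0.5 * (out[lost_index - 1] + out[lost_index + 1])
    return out

t = np.linspace(0, 1, 400)
signal = np.sin(2 * np.pi * 1 * t)
packets = np.split(signal, 20)                     # 20 packets of 20 samples each
repaired = conceal(packets, lost_index=3, method="interpolate")
error = np.abs(np.concatenate(repaired) - signal).max()
print(round(error, 3))                             # ~0.05: small error for a slowly varying signal
```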
https://en.wikipedia.org/wiki/List%20of%20differential%20geometry%20topics
This is a list of differential geometry topics. See also glossary of differential and metric geometry and list of Lie group topics. Differential geometry of curves and surfaces Differential geometry of curves List of curves topics Frenet–Serret formulas Curves in differential geometry Line element Curvature Radius of curvature Osculating circle Curve Fenchel's theorem Differential geometry of surfaces Theorema egregium Gauss–Bonnet theorem First fundamental form Second fundamental form Gauss–Codazzi–Mainardi equations Dupin indicatrix Asymptotic curve Curvature Principal curvatures Mean curvature Gauss curvature Elliptic point Types of surfaces Minimal surface Ruled surface Conical surface Developable surface Nadirashvili surface Foundations Calculus on manifolds See also multivariable calculus, list of multivariable calculus topics Manifold Differentiable manifold Smooth manifold Banach manifold Fréchet manifold Tensor analysis Tangent vector Tangent space Tangent bundle Cotangent space Cotangent bundle Tensor Tensor bundle Vector field Tensor field Differential form Exterior derivative Lie derivative pullback (differential geometry) pushforward (differential) jet (mathematics) Contact (mathematics) jet bundle Frobenius theorem (differential topology) Integral curve Differential topology Diffeomorphism Large diffeomorphism Orientability characteristic class Chern class Pontrjagin class spin structure differentiable map submersion immersion Embedding Whitney embedding theorem Critical value Sard's theorem Saddle point Morse theory Lie derivative Hairy ball theorem Poincaré–Hopf theorem Stokes' theorem De Rham cohomology Sphere eversion Frobenius theorem (differential topology) Distribution (differential geometry) integral curve foliation integrability conditions for differential systems Fiber bundles Fiber bundle Principal bundle Frame bundle Hopf bundle Associated bundle Vector bundle Tangent bundle Cotangent bundle Line bundle Jet bundle Fundamental st
https://en.wikipedia.org/wiki/Hierarchical%20internetworking%20model
The Hierarchical internetworking model is a three-layer model for network design first proposed by Cisco. It divides enterprise networks into three layers: core, distribution, and access layer. Access layer End-stations and servers connect to the enterprise at the access layer. Access layer devices are usually commodity switching platforms, and may or may not provide layer 3 switching services. The traditional focus at the access layer is minimizing "cost-per-port": the amount of investment the enterprise must make for each provisioned Ethernet port. This layer is also called the desktop layer because it focuses on connecting client nodes, such as workstations, to the network. Distribution layer The distribution layer is the smart layer in the three-layer model. Routing, filtering, and QoS policies are managed at the distribution layer. Distribution layer devices also often manage individual branch-office WAN connections. This layer is also called the Workgroup layer. Core layer The core is the backbone of the network, where the internet (internetwork) gateways are located. The core network provides high-speed, highly redundant forwarding services to move packets between distribution-layer devices in different regions of the network. Core switches and routers are usually the most powerful, in terms of raw forwarding power, in the enterprise; core network devices manage the highest-speed connections, such as 10 Gigabit Ethernet or 100 Gigabit Ethernet. See also Multi-tier architecture Service layer
https://en.wikipedia.org/wiki/Standard%20Test%20and%20Programming%20Language
JAM / STAPL ("Standard Test and Programming Language") is an Altera-developed standard for JTAG in-circuit programming of programmable logic devices which is defined by JEDEC standard JESD-71. STAPL defines a standard .jam file format which supports in-system programmability or configuration of programmable devices. A JTAG device programmer implements a JAM player which reads the file as a set of instructions directing it to program a PLD. The standard is supported by multiple PLD and device programmer manufacturers.
https://en.wikipedia.org/wiki/Detection%20theory
Detection theory or signal detection theory is a means to measure the ability to differentiate between information-bearing patterns (called stimulus in living organisms, signal in machines) and random patterns that distract from the information (called noise, consisting of background stimuli and random activity of the detection machine and of the nervous system of the operator). In the field of electronics, signal recovery is the separation of such patterns from a disguising background. According to the theory, there are a number of determiners of how a detecting system will detect a signal, and where its threshold levels will be. The theory can explain how changing the threshold will affect the ability to discern, often exposing how adapted the system is to the task, purpose or goal at which it is aimed. When the detecting system is a human being, characteristics such as experience, expectations, physiological state (e.g., fatigue) and other factors can affect the threshold applied. For instance, a sentry in wartime might be likely to detect fainter stimuli than the same sentry in peacetime due to a lower criterion, however they might also be more likely to treat innocuous stimuli as a threat. Much of the early work in detection theory was done by radar researchers. By 1954, the theory was fully developed on the theoretical side as described by Peterson, Birdsall and Fox and the foundation for the psychological theory was made by Wilson P. Tanner, David M. Green, and John A. Swets, also in 1954. Detection theory was used in 1966 by John A. Swets and David M. Green for psychophysics. Green and Swets criticized the traditional methods of psychophysics for their inability to discriminate between the real sensitivity of subjects and their (potential) response biases. Detection theory has applications in many fields such as diagnostics of any kind, quality control, telecommunications, and psychology. The concept is similar to the signal-to-noise ratio used in the
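To make the sensitivity-versus-bias distinction concrete (for example, the wartime sentry above), here is a small sketch using the standard equal-variance Gaussian model, which the article does not spell out; the hit and false-alarm rates are invented for illustration.

```python
from scipy.stats import norm

def dprime_and_criterion(hit_rate, false_alarm_rate):
    """Equal-variance Gaussian SDT: d' = z(H) - z(FA) is sensitivity,
    c = -(z(H) + z(FA)) / 2 is the response criterion (bias)."""
    z_h = norm.ppf(hit_rate)
    z_fa = norm.ppf(false_alarm_rate)
    return z_h - z_fa, -(z_h + z_fa) / 2

# A wartime sentry: more hits, but also more false alarms (liberal criterion).
print(dprime_and_criterion(0.90, 0.30))   # d' ~ 1.81, c ~ -0.38
# A peacetime sentry: the same sensitivity, but a conservative criterion.
print(dprime_and_criterion(0.70, 0.10))   # d' ~ 1.81, c ~ +0.38
```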
https://en.wikipedia.org/wiki/Global%20Digital%20Mathematics%20Library
The Global Digital Mathematics Library (GDML) is a project organized under the auspices of the International Mathematical Union (IMU) to establish a digital library focused on mathematics. A working group was convened in September 2014, following the 2014 International Congress of Mathematicians, by former IMU President Ingrid Daubechies and Chair Peter J. Olver of the IMU’s Committee on Electronic Information and Communication (CEIC). Currently the working group has eight members, namely: Thierry Bouche, Institut Fourier & Cellule MathDoc, Grenoble, France Bruno Buchberger, RISC, Hagenberg/Linz, Austria Patrick Ion, Mathematical Reviews/AMS, Ann Arbor, MI, US Michael Kohlhase, Jacobs University, Bremen, Germany Jim Pitman, University of California, Berkeley, CA, US Olaf Teschke, zbMATH/FIZ, Berlin, Germany Stephen M. Watt, University of Waterloo, Waterloo, ON, Canada Eric Weisstein, Wolfram Research, McAllen, TX, US Background In the spring of 2014, the Committee on Planning a Global Library of the Mathematical Sciences released a comprehensive study entitled “Developing a 21st Century Global Library for Mathematics Research.” This report states in its Strategic Plan section, “There is a compelling argument that through a combination of machine learning methods and editorial effort by both paid and volunteer editors, a significant portion of the information and knowledge in the global mathematical corpus could be made available to researchers as linked open data through the GDML." Workshop A workshop titled "Semantic Representation of Mathematical Knowledge" was held at the Fields Institute in Toronto during February 3–5, 2016. The goal of the workshop was to lay down the foundations of a prototype semantic representation language for the GDML. The workshop's organizers recognized that the extremely wide scope of mathematics as a whole made it unrealistic to map out the detailed concepts, structures, and operations needed and used in individual mathema
https://en.wikipedia.org/wiki/University%20of%20Chicago%20School%20Mathematics%20Project
The University of Chicago School Mathematics Project (UCSMP) is a multi-faceted project of the University of Chicago in the United States, intended to improve competency in mathematics in the United States by elevating educational standards for children in elementary and secondary schools. Overview The UCSMP supports educators by supplying training materials to them and offering a comprehensive mathematics curriculum at all levels of primary and secondary education. It seeks to bring international strengths into the United States, translating non-English mathematics textbooks for English-speaking students and sponsoring international conferences on the subject of math education. Launched in 1983 with the aid of a six-year grant from Amoco, the UCSMP is used throughout the United States. UCSMP developed Everyday Mathematics, a pre-K and elementary school mathematics curriculum. UCSMP publishers Wright Group-McGraw-Hill (K-6 Materials) Wright Group-McGraw-Hill (6-12 Materials) American Mathematical Society (Translations of Foreign Texts) See also Zalman Usiskin
https://en.wikipedia.org/wiki/Compilospecies
A compilospecies is a genetically aggressive species which acquires the heredities of a closely related sympatric species by means of hybridisation and comprehensive introgression. The target species may be incorporated to the point of despeciation, rendering it extinct. This type of genetic aggression is associated with species in newly disturbed habitats (such as pioneering species), weed species and domestication. They can be diploid or polyploid, as well as sexual or primarily asexual. The term compilospecies derives from the Latin word compilo, which means to seize, to collect, to rob or to plunder. A proposed explanation for the existence of such a species with weak reproductive barriers and frequent introgression is that it allows for genetic variation. An increase in the gene pool through viable hybrids can facilitate new phenotypes and the colonisation of novel habitats. The concept of compilospecies is not frequent in scientific literature and may not be fully regarded by the biological community as a true evolutionary concept, especially due to low supporting evidence. History Compilospecies were first described by Harlan and de Wet in 1962, who examined a wide range of grasses and other species such as Bothriochloa intermedia, otherwise known as Australian bluestem grass. B. intermedia was found to introgress heavily with neighboring sympatric grass species and even genera, particularly in geographically restricted areas. The species itself is of hybrid origin, containing genetic material from five or more different grass species. Harlan and de Wet examined the interactions between the genera Bothriochloa, Dichanthium and Capillipedium - an apomictic complex of grasses from the tribe Andropogoneae - and used the cytogenetic model of these as a basis for the compilospecies concept. Species within these genera exhibit both sexual and asexual reproduction, high heterozygosity, ploidies from 2x to 6x, and gene flow between bordering populations as evid
https://en.wikipedia.org/wiki/List%20of%20superlative%20trees
The world's superlative trees can be ranked by any factor. Records have been kept for trees with superlative height, trunk diameter or girth, canopy coverage, airspace volume, wood volume, estimated mass, and age. Tallest The heights of the tallest trees in the world have been the subject of considerable dispute and much exaggeration. Modern verified measurements with laser rangefinders or with tape drop measurements made by tree climbers (such as those carried out by canopy researchers) have shown that some older tree height measurement methods are often unreliable, sometimes producing exaggerations of 5% to 15% or more above the real height. Historical claims of trees growing to extraordinary heights are now largely disregarded as unreliable and attributed to human error. The following are the tallest reliably measured specimens from the top 10 species. This table shows only currently standing specimens: Tallest historically Despite the great heights attained by trees today, records exist of much greater heights in the past, before widespread logging took place. Some, if not most, of these records are without a doubt greatly exaggerated, but some have been reportedly measured with semi-reliable instruments when cut down and on the ground. Some of the heights recorded in this way exceed the maximum possible height of a tree as calculated by theorists, lending some limited credibility to speculation that some superlative trees are able to 'reverse' transpiration streams and absorb water through needles in foggy environments. All three of the tallest tree species continue to be Coast redwoods, Douglas fir and Giant mountain ash. Stoutest The girth of a tree is usually much easier to measure than the height, as it is a simple matter of stretching a tape round the trunk, and pulling it taut to find the circumference. Despite this, UK tree author Alan Mitchell made the following comment about measurements of yew trees: As a general standard, tree girth is taken at "b
https://en.wikipedia.org/wiki/Alert%20correlation
Alert correlation is a type of log analysis. It focuses on the process of clustering alerts (events), generated by NIDS and HIDS computer systems, to form higher-level pieces of information. An example of simple alert correlation is grouping invalid login attempts to report a single incident such as "10000 invalid login attempts on host X". See also ACARM ACARM-ng OSSIM Prelude Hybrid IDS Snort
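A minimal sketch of the grouping described above: many low-level alerts are collapsed into one higher-level incident per host and alert type. The dictionary keys and event values are hypothetical, not a real NIDS/HIDS schema.

```python
# Minimal illustration of alert correlation by grouping: many low-level alerts
# are collapsed into one higher-level incident per (host, alert type).
# The field names used here are hypothetical, not a real NIDS/HIDS format.
from collections import Counter

alerts = [
    {"host": "X", "type": "invalid_login"},
    {"host": "X", "type": "invalid_login"},
    {"host": "Y", "type": "port_scan"},
    {"host": "X", "type": "invalid_login"},
]

incidents = Counter((a["host"], a["type"]) for a in alerts)
for (host, alert_type), count in incidents.items():
    print(f"{count} {alert_type} alerts on host {host}")
```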
https://en.wikipedia.org/wiki/Proof%20by%20intimidation
Proof by intimidation (or argumentum verbosum) is a jocular phrase used mainly in mathematics to refer to a specific form of hand-waving, whereby one attempts to advance an argument by marking it as obvious or trivial, or by giving an argument loaded with jargon and obscure results. It attempts to intimidate the audience into simply accepting the result without evidence, by appealing to their ignorance and lack of understanding. The phrase is often used when the author is an authority in their field, presenting their proof to people who respect a priori the author's insistence of the validity of the proof, while in other cases, the author might simply claim that their statement is true because it is trivial or because they say so. Usage of this phrase is for the most part in good humour, though it can also appear in serious criticism. A proof by intimidation is often associated with phrases such as: "Clearly..." "It is self-evident that..." "It can be easily shown that..." "... does not warrant a proof." "The proof is left as an exercise for the reader." Outside mathematics, "proof by intimidation" is also cited by critics of junk science, to describe cases in which scientific evidence is thrown aside in favour of dubious arguments—such as those presented to the public by articulate advocates who pose as experts in their field. Proof by intimidation may also back valid assertions. Ronald A. Fisher claimed in the book credited with the new evolutionary synthesis, "...by the analogy of compound interest the present value of the future offspring of persons aged x is easily seen to be...", thence presenting a novel integral-laden definition of reproductive value. At this, Hal Caswell remarked, "With all due respect to Fisher, I have yet to meet anyone who finds this equation 'easily seen.'" Valid proofs were provided by subsequent researchers such as Leo A. Goodman (1968). In a memoir, Gian-Carlo Rota claimed that the expression "proof by intimidation" was coi
https://en.wikipedia.org/wiki/Ap%C3%A9ry%27s%20constant
In mathematics, Apéry's constant is the sum of the reciprocals of the positive cubes. That is, it is defined as the number ζ(3) = 1/1³ + 1/2³ + 1/3³ + ⋯, where ζ is the Riemann zeta function. It has an approximate value of 1.2020569. The constant is named after Roger Apéry. It arises naturally in a number of physical problems, including in the second- and third-order terms of the electron's gyromagnetic ratio using quantum electrodynamics. It also arises in the analysis of random minimum spanning trees and in conjunction with the gamma function when solving certain integrals involving exponential functions in a quotient, which appear occasionally in physics, for instance, when evaluating the two-dimensional case of the Debye model and the Stefan–Boltzmann law. Irrational number ζ(3) was named Apéry's constant after the French mathematician Roger Apéry, who proved in 1978 that it is an irrational number. This result is known as Apéry's theorem. The original proof is complex and hard to grasp, and simpler proofs were found later. Beukers's simplified irrationality proof involves approximating the integrand of the known triple integral for ζ(3) by the Legendre polynomials. In particular, van der Poorten's article chronicles this approach, noting that the quantities appearing in it can be written in terms of the Legendre polynomials and that the relevant subsequences are integers or almost integers. It is still not known whether Apéry's constant is transcendental. Series representations Classical In addition to the fundamental series above, Leonhard Euler gave a further series representation in 1772, which was subsequently rediscovered several times. Fast convergence Since the 19th century, a number of mathematicians have found convergence acceleration series for calculating decimal places of ζ(3). Since the 1990s, this search has focused on computationally efficient series with fast convergence rates (see section "Known digits"). The following series representation was found by A. A. Markov in 1890, rediscovered by Hjortnaes in 1953, and rediscovered once more
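As a quick numerical illustration of the defining series (a plain partial sum, not one of the accelerated series just mentioned), the value 1.2020569… already emerges from direct summation:

```python
# Direct partial sum of the defining series for Apery's constant, zeta(3).
# A plain numerical check, not one of the fast-convergence series above.
def zeta3_partial(n_terms):
    return sum(1.0 / n**3 for n in range(1, n_terms + 1))

print(zeta3_partial(1000))      # ~1.2020564 (tail error about 5e-7)
print(zeta3_partial(1000000))   # ~1.2020569031...
```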
https://en.wikipedia.org/wiki/No-analog%20%28ecology%29
In paleoecology and ecological forecasting, a no-analog community or climate is one that is compositionally different from a (typically modern) baseline for measurement. Alternative naming conventions to describe no-analog communities and climates may include novel, emerging, mosaic, disharmonious and intermingled. Modern climates, communities and ecosystems are often studied in an attempt to understand no-analogs that have happened in the past and those that may occur in the future. This use of a modern analog to study the past draws on the concept of uniformitarianism. Along with the use of these modern analogs, actualistic studies and taphonomy are additional tools that are used in understanding no-analogs. Statistical tools are also used to identify no-analogs and their baselines, often through the use of dissimilarity analyses or analog matching. Studies of no-analog fossil remains are often carefully evaluated to rule out mixing of fossils in an assemblage due to erosion, animal activity or other processes. No-analog climates Conditions that are considered no-analog climates are those that have no modern analog, such as the climate during the last glaciation. Glacial climates varied from current climates in seasonality and temperature, having an overall more steady climate without as many extreme temperatures as today's climate. Climates with no modern analog may be used to infer species range shifts, biodiversity changes, ecosystem arrangements and help in understanding species fundamental niche space. Past climates are often studied to understand how changes in a species' fundamental niche may lead to the formation of no-analog communities. Seasonality and temperatures that are outside the climates at present provide an opportunity for no-analog communities to arise, as is seen in the late Holocene plant communities. Evidence of deglacial temperature controls having significant effects on the formation of no-analog communities in the midwestern United State
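Analog matching of the kind mentioned above generally reduces to a dissimilarity score between a fossil assemblage and candidate modern assemblages. The sketch below uses the squared-chord distance, one metric commonly applied to taxon-proportion data; the assemblages and the cut-off value are invented for illustration and are not drawn from any particular study.

```python
# Squared-chord distance, a dissimilarity metric often used for comparing
# taxon assemblages expressed as proportions that sum to 1.
# The assemblages and the no-analog cut-off below are invented for illustration.
import numpy as np

def squared_chord(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

fossil = [0.50, 0.30, 0.15, 0.05]          # fossil assemblage (proportions by taxon)
modern = [
    [0.45, 0.35, 0.15, 0.05],              # candidate modern analogs
    [0.10, 0.10, 0.40, 0.40],
]

distances = [squared_chord(fossil, m) for m in modern]
best = min(distances)
print(distances, "no-analog" if best > 0.3 else "analog found")
```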
https://en.wikipedia.org/wiki/Devicetree
In computing, a devicetree (also written device tree) is a data structure describing the hardware components of a particular computer so that the operating system's kernel can use and manage those components, including the CPU or CPUs, the memory, the buses and the integrated peripherals. The device tree was derived from SPARC-based computers via the Open Firmware project. The current Devicetree specification is targeted at smaller systems, but is still used with some server-class systems (for instance, those described by the Power Architecture Platform Reference). Personal computers with the x86 architecture generally do not use device trees, relying instead on various auto configuration protocols (e.g. ACPI) to discover hardware. Systems which use device trees usually pass a static device tree (perhaps stored in EEPROM, or stored in NAND device like eUFS) to the operating system, but can also generate a device tree in the early stages of booting. As an example, Das U-Boot and kexec can pass a device tree when launching a new operating system. On systems with a boot loader that does not support device trees, a static device tree may be installed along with the operating system; the Linux kernel supports this approach. The Devicetree specification is currently managed by a community named devicetree.org, which is associated with, among others, Linaro and Arm. Formats A device tree can hold any kind of data as internally it is a tree of named nodes and properties. Nodes contain properties and child nodes, while properties are name–value pairs. Device trees have both a binary format for operating systems to use and a textual format for convenient editing and management. Usage Linux Given the correct device tree, the same compiled kernel can support different hardware configurations within a wider architecture family. The Linux kernel for the ARC, ARM, C6x, H8/300, MicroBlaze, MIPS, NDS32, Nios II, OpenRISC, PowerPC, RISC-V, SuperH, and Xtensa architectures rea
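The node-and-property model described above can be pictured with a small sketch: nodes hold name–value properties plus child nodes. This is only an illustration of the data model in Python; it is not the DTS text format or the flattened DTB binary, and the node and property values are made up.

```python
# Tiny illustration of the devicetree data model described above: a tree of
# named nodes, each holding name-value properties and child nodes.
# This mirrors the structure only; it is not the DTS text or DTB binary format,
# and the example board/property values are invented.
class Node:
    def __init__(self, name, properties=None, children=None):
        self.name = name
        self.properties = properties or {}   # name -> value pairs
        self.children = children or []       # child Node objects

def walk(node, path=""):
    full = f"{path}/{node.name}".replace("//", "/")
    for prop, value in node.properties.items():
        print(f"{full}: {prop} = {value!r}")
    for child in node.children:
        walk(child, full)

root = Node("/", {"compatible": "example,board"}, [
    Node("cpus", children=[Node("cpu@0", {"device_type": "cpu"})]),
    Node("memory@80000000", {"reg": (0x80000000, 0x10000000)}),
])
walk(root)
```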
https://en.wikipedia.org/wiki/De%20Bruijn%E2%80%93Newman%20constant
The de Bruijn–Newman constant, denoted by Λ and named after Nicolaas Govert de Bruijn and Charles Michael Newman, is a mathematical constant defined via the zeros of a certain function H(λ,z), where λ is a real parameter and z is a complex variable. More precisely, H(λ, z) = ∫_0^∞ e^(λu²) Φ(u) cos(zu) du, where Φ(u) = Σ_{n≥1} (2π²n⁴ e^(9u) − 3πn² e^(5u)) e^(−πn² e^(4u)) is the super-exponentially decaying function, and Λ is the unique real number with the property that H has only real zeros if and only if λ≥Λ. The constant is closely connected with Riemann's hypothesis concerning the zeros of the Riemann zeta-function: since the Riemann hypothesis is equivalent to the claim that all the zeroes of H(0, z) are real, the Riemann hypothesis is equivalent to the conjecture that Λ≤0. Brad Rodgers and Terence Tao proved that Λ<0 cannot be true, so Riemann's hypothesis is equivalent to Λ = 0. A simplified proof of the Rodgers–Tao result was later given by Alexander Dobner. History De Bruijn showed in 1950 that H has only real zeros if λ ≥ 1/2, and moreover, that if H has only real zeros for some λ, H also has only real zeros if λ is replaced by any larger value. Newman proved in 1976 the existence of a constant Λ for which the "if and only if" claim holds, and this then implies that Λ is unique. Newman also conjectured that Λ ≥ 0, which was then proven by Brad Rodgers and Terence Tao in 2018. Upper bounds De Bruijn's upper bound of Λ ≤ 1/2 was not improved until 2008, when Ki, Kim and Lee proved Λ < 1/2, making the inequality strict. In December 2018, the 15th Polymath project improved the bound to Λ ≤ 0.22. A manuscript of the Polymath work was submitted to arXiv in late April 2019, and was published in the journal Research in the Mathematical Sciences in August 2019. This bound was further slightly improved in April 2020 by Platt and Trudgian to Λ ≤ 0.2. Historical bounds
https://en.wikipedia.org/wiki/Computer%20science%20and%20engineering
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just like in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with specializations. Academic courses Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithm design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism. Example universities with CSE majors and departments APJ Abdul Kalam Technological University American International University-B
https://en.wikipedia.org/wiki/Eventually%20%28mathematics%29
In the mathematical areas of number theory and analysis, an infinite sequence or a function is said to eventually have a certain property if it does not have that property across all its ordered instances, but does after some instances have passed. The use of the term "eventually" can often be rephrased as "for sufficiently large numbers", and can also be extended to the class of properties that apply to elements of any ordered set (such as sequences and subsets of ℝ). Notation The general form where the phrase eventually (or sufficiently large) is found appears as follows: P is eventually true for x (P is true for sufficiently large x), where ∀ and ∃ are the universal and existential quantifiers, which is actually a shorthand for: ∃a ∈ ℝ such that P is true ∀x ≥ a, or somewhat more formally: ∃a ∈ ℝ: ∀x ∈ ℝ: x ≥ a ⇒ P(x). This does not necessarily mean that any particular value for a is known, but only that such an a exists. The phrase "sufficiently large" should not be confused with the phrases "arbitrarily large" or "infinitely large". For more, see Arbitrarily large#Arbitrarily large vs. sufficiently large vs. infinitely large. Motivation and definition For an infinite sequence, one is often more interested in the long-term behaviors of the sequence than the behaviors it exhibits early on. In which case, one way to formally capture this concept is to say that the sequence possesses a certain property eventually, or equivalently, that the property is satisfied by one of its subsequences a_N, a_{N+1}, a_{N+2}, …, for some N. For example, the definition of a sequence (a_n) of real numbers converging to some limit L is: For each positive number ε, there exists a natural number N such that for all n > N, |a_n − L| < ε. When the term "eventually" is used as a shorthand for "there exists a natural number N such that for all n > N", the convergence definition can be restated more simply as: For each positive number ε, eventually |a_n − L| < ε. Here, notice that the set of natural numbers that do not satisfy this property is a finite set; that is, the set is empty or has
https://en.wikipedia.org/wiki/CEN/XFS
CEN/XFS or XFS (extensions for financial services) provides a client-server architecture for financial applications on the Microsoft Windows platform, especially peripheral devices such as EFTPOS terminals and ATMs which are unique to the financial industry. It is an international standard promoted by the European Committee for Standardization (known by the acronym CEN, hence CEN/XFS). The standard is based on the WOSA Extensions for Financial Services or WOSA/XFS developed by Microsoft. With the move to a more standardized software base, financial institutions have been increasingly interested in the ability to pick and choose the application programs that drive their equipment. XFS provides a common API for accessing and manipulating various financial services devices regardless of the manufacturer. History Chronology: 1991 - Microsoft forms "Banking Solutions Vendor Council" 1995 - WOSA/XFS 1.11 released 1997 - WOSA/XFS 2.0 released - additional support for 24 hours-a-day unattended operation 1998 - adopted by European Committee for Standardization as an international standard. 2000 - XFS 3.0 released by CEN 2008 - XFS 3.10 released by CEN 2011 - XFS 3.20 released by CEN 2015 - XFS 3.30 released by CEN 2020 - XFS 3.40 released by CEN WOSA/XFS changed name to simply XFS when the standard was adopted by the international CEN/ISSS standards body. However, it is most commonly called CEN/XFS by the industry participants. XFS middleware While the perceived benefit of XFS is similar to Java's "write once, run anywhere" mantra, often different hardware vendors have different interpretations of the XFS standard. The result of these differences in interpretation means that applications typically use a middleware to even out the differences between various platforms implementation of XFS. Notable XFS middleware platforms include: F1 Solutions - F1 TPS (multi-vendor ATM & POS solution) Serquo - Dwide (REST API middleware for XFS) Nexus Software LLC - Nexu
https://en.wikipedia.org/wiki/Human%E2%80%93robot%20interaction
Human–robot interaction (HRI) is the study of interactions between humans and robots. Human–robot interaction is a multidisciplinary field with contributions from human–computer interaction, artificial intelligence, robotics, natural language processing, design, and psychology. A subfield known as physical human–robot interaction (pHRI) has tended to focus on device design to enable people to safely interact with robotic systems. Origins Human–robot interaction has been a topic of both science fiction and academic speculation even before any robots existed. Because much of active HRI development depends on natural language processing, many aspects of HRI are continuations of human communications, a field of research which is much older than robotics. The origin of HRI as a discrete problem was stated by 20th-century author Isaac Asimov in 1941, in his novel I, Robot. Asimov coined Three Laws of Robotics, namely: A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. These three laws provide an overview of the goals engineers and researchers hold for safety in the HRI field, although the fields of robot ethics and machine ethics are more complex than these three principles. However, generally human–robot interaction prioritizes the safety of humans that interact with potentially dangerous robotics equipment. Solutions to this problem range from the philosophical approach of treating robots as ethical agents (individuals with moral agency), to the practical approach of creating safety zones. These safety zones use technologies such as lidar to detect human presence or physical barriers to protect humans by preventing any contact between machine and operator. Although initially robots in the human–robot
https://en.wikipedia.org/wiki/Pulse%20compression
Pulse compression is a signal processing technique commonly used by radar, sonar and echography to either increase the range resolution when pulse length is constrained or increase the signal to noise ratio when the peak power and the bandwidth (or equivalently range resolution) of the transmitted signal are constrained. This is achieved by modulating the transmitted pulse and then correlating the received signal with the transmitted pulse. Simple pulse Signal description The ideal model for the simplest, and historically first type of signals a pulse radar or sonar can transmit is a truncated sinusoidal pulse (also called a CW --carrier wave-- pulse), of amplitude and carrier frequency, , truncated by a rectangular function of width, . The pulse is transmitted periodically, but that is not the main topic of this article; we will consider only a single pulse, . If we assume the pulse to start at time , the signal can be written the following way, using the complex notation: Range resolution Let us determine the range resolution which can be obtained with such a signal. The return signal, written , is an attenuated and time-shifted copy of the original transmitted signal (in reality, Doppler effect can play a role too, but this is not important here.) There is also noise in the incoming signal, both on the imaginary and the real channel. The noise is assumed to be band-limited, that is to have frequencies only in (this generally holds in reality, where a bandpass filter is generally used as one of the first stages in the reception chain); we write to denote that noise. To detect the incoming signal, a matched filter is commonly used. This method is optimal when a known signal is to be detected among additive noise having a normal distribution. In other words, the cross-correlation of the received signal with the transmitted signal is computed. This is achieved by convolving the incoming signal with a conjugated and time-reversed version of the transmitted
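The matched-filter step described above, correlating the received signal with a conjugated, time-reversed copy of the transmitted pulse, can be illustrated numerically. The sketch below uses a plain rectangular CW pulse in white noise; the sample rate, carrier frequency, pulse width, delay and noise level are all made up for the illustration.

```python
# Matched filtering of a simple rectangular CW pulse in noise, as described
# above: correlate the received signal with the conjugated, time-reversed
# transmitted pulse. All parameter values below are made up.
import numpy as np

fs, f0, T = 1000.0, 50.0, 0.2                      # sample rate, carrier, pulse width
t = np.arange(0, T, 1 / fs)
pulse = np.exp(2j * np.pi * f0 * t)                # complex (analytic) CW pulse

delay = 300                                        # true echo delay, in samples
received = np.zeros(1000, dtype=complex)
received[delay:delay + len(pulse)] = 0.5 * pulse   # attenuated, time-shifted copy
rng = np.random.default_rng(0)
received += 0.3 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))

# Convolution with the conjugated, time-reversed pulse == cross-correlation.
matched = np.convolve(received, np.conj(pulse[::-1]), mode="valid")
print("estimated delay:", int(np.argmax(np.abs(matched))))   # ~300
```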
https://en.wikipedia.org/wiki/Dextrose%20equivalent
Dextrose equivalent (DE) is a measure of the amount of reducing sugars present in a sugar product, expressed as a percentage on a dry basis relative to dextrose. The dextrose equivalent gives an indication of the average degree of polymerisation (DP) for starch sugars. As a rule of thumb, DE × DP = 120. In all glucose polymers, from the native starch to glucose syrup, the molecular chain begins with a reducing sugar, containing a free aldehyde. As the starch is hydrolysed, the molecules become shorter and more reducing sugars are present. Therefore, the dextrose equivalent describes the degree of conversion of starch to dextrose. The standard method of determining the dextrose equivalent is the Lane-Eynon titration, based on the reduction of copper(II) sulfate in an alkaline tartrate solution, an application of Fehling's test. Examples: A maltodextrin with a DE of 10 would have 10% of the reducing power of dextrose which has a DE of 100. Maltose, a disaccharide made of two glucose (dextrose) molecules, has a DE of 52, correcting for the water loss in molecular weight when the two molecules are combined. Glucose (dextrose) has a molecular mass of 180, while water has a molecular mass of 18. For each 2 glucose monomers binding, a water molecule is removed. Therefore, the molecular mass of a glucose polymer can be calculated by using the formula (180*n - 18*(n-1)) with n the DP (degree of polymerisation) of the glucose polymer. The DE can be calculated as 100*(180 / Molecular mass( glucose polymer)). In this example the DE is calculated as 100*(180/(180*2-18*1)) = 52. Sucrose actually has a DE of zero even though it is a disaccharide, because both reducing groups of the monosaccharides that make it are connected, so there are no remaining reducing groups. Because different reducing sugars (e.g. fructose and glucose) have different sweetness, it is incorrect to assume that there is any direct relationship between dextrose equivalent and sweetness.
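The worked arithmetic above generalizes directly to any degree of polymerisation; a small sketch of the same formulas:

```python
# Dextrose equivalent from degree of polymerisation, using the formulas in the
# text: molecular mass = 180*n - 18*(n-1), and DE = 100 * 180 / molecular mass.
def dextrose_equivalent(dp):
    molecular_mass = 180 * dp - 18 * (dp - 1)
    return 100 * 180 / molecular_mass

print(dextrose_equivalent(1))   # glucose (dextrose): 100.0
print(dextrose_equivalent(2))   # maltose: ~52.6, quoted as 52 in the text above
```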
https://en.wikipedia.org/wiki/Table%20of%20Lie%20groups
This article gives a table of some common Lie groups and their associated Lie algebras. The following are noted: the topological properties of the group (dimension; connectedness; compactness; the nature of the fundamental group; and whether or not they are simply connected) as well as their algebraic properties (abelian; simple; semisimple). For more examples of Lie groups and other related topics see the list of simple Lie groups; the Bianchi classification of groups of up to three dimensions; see classification of low-dimensional real Lie algebras for up to four dimensions; and the list of Lie group topics. Real Lie groups and their algebras Column legend Cpt: Is this group G compact? (Yes or No) π₀: Gives the group of components of G. The order of the component group gives the number of connected components. The group is connected if and only if the component group is trivial (denoted by 0). π₁: Gives the fundamental group of G whenever G is connected. The group is simply connected if and only if the fundamental group is trivial (denoted by 0). UC: If G is not simply connected, gives the universal cover of G. Real Lie algebras Complex Lie groups and their algebras Note that a "complex Lie group" is defined as a complex analytic manifold that is also a group whose multiplication and inversion are each given by a holomorphic map. The dimensions in the table below are dimensions over C. Note that every complex Lie group/algebra can also be viewed as a real Lie group/algebra of twice the dimension. Complex Lie algebras The dimensions given are dimensions over C. Note that every complex Lie algebra can also be viewed as a real Lie algebra of twice the dimension. The Lie algebra of affine transformations of dimension two, in fact, exists for any field. An instance has already been listed in the first table for real Lie algebras. See also Classification of low-dimensional real Lie algebras Simple Lie group#Full classification
https://en.wikipedia.org/wiki/Compositional%20domain
A compositional domain in genetics is a region of DNA with a distinct guanine (G) and cytosine (C) G-C and C-G content (collectively GC content). The homogeneity of compositional domains is compared to that of the chromosome on which they reside. As such, compositional domains can be homogeneous or nonhomogeneous domains. Compositionally homogeneous domains that are sufficiently long (≥ 300 kb) are termed isochores or isochoric domains. The compositional domain model was proposed as an alternative to the isochoric model. The isochore model was proposed by Bernardi and colleagues to explain the observed non-uniformity of genomic fragments in the genome. However, recent sequencing of complete genomic data refuted the isochoric model. Its main predictions were: GC content of the third codon position (GC3) of protein coding genes is correlated with the GC content of the isochores embedding the corresponding genes. This prediction was found to be incorrect. GC3 could not predict the GC content of nearby sequences. The genome organization of warm-blooded vertebrates is a mosaic of isochores. This prediction was rejected by many studies that used the complete human genome data. The genome organization of cold-blooded vertebrates is characterized by low GC content levels and lower compositional heterogeneity. This prediction was disproved by finding high and low GC content domains in fish genomes. The compositional domain model describes the genome as a mosaic of short and long homogeneous and nonhomogeneous domains. The composition and organization of the domains were shaped by different evolutionary processes that either fused or broke down the domains. This genomic organization model was confirmed in many new genomic studies of cow, honeybee, sea urchin, body louse, Nasonia, beetle, and ant genomes. The human genome was described as consisting of a mixture of compositionally nonhomogeneous domains with numerous short compositionally homogeneous domains and relativ
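Delineating candidate domains starts from GC content measured along the sequence. The sketch below computes a simple sliding-window GC profile; the window size and toy sequence are arbitrary, and this is only a first step, not the segmentation method used in the studies cited above.

```python
# Sliding-window GC content along a DNA sequence -- the raw signal from which
# compositional domains are delineated. Window size and the toy sequence are
# arbitrary; this is not the segmentation algorithm of the cited studies.
def gc_profile(seq, window=10):
    seq = seq.upper()
    profile = []
    for start in range(0, len(seq) - window + 1, window):
        chunk = seq[start:start + window]
        gc = (chunk.count("G") + chunk.count("C")) / window
        profile.append(gc)
    return profile

print(gc_profile("ATATATATTAGCGCGCGCCGATATATATTA", window=10))
# [0.0, 1.0, 0.0] -- an AT-rich / GC-rich / AT-rich mosaic in miniature
```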
https://en.wikipedia.org/wiki/List%20of%20polygons
In geometry, a polygon is traditionally a plane figure that is bounded by a finite chain of straight line segments closing in a loop to form a closed chain. These segments are called its edges or sides, and the points where two of the edges meet are the polygon's vertices (singular: vertex) or corners. The word polygon comes from Late Latin polygōnum (a noun), from Greek πολύγωνον (polygōnon/polugōnon), noun use of neuter of πολύγωνος (polygōnos/polugōnos, the masculine adjective), meaning "many-angled". Individual polygons are named (and sometimes classified) according to the number of sides, combining a Greek-derived numerical prefix with the suffix -gon, e.g. pentagon, dodecagon. The triangle, quadrilateral and nonagon are exceptions, although the regular forms trigon, tetragon, and enneagon are sometimes encountered as well. Greek numbers Polygons are primarily named by prefixes from Ancient Greek numbers. Systematic polygon names To construct the name of a polygon with more than 20 and fewer than 100 edges, combine the prefixes as follows. The "kai" connector is not included by some authors. Extending the system up to 999 is expressed with these prefixes; the names over 99 no longer correspond to how they are actually expressed in Greek. List of n-gons by Greek numerical prefixes See also Platonic solid Dice List of polygons, polyhedra and polytopes Circle Ellipses Shapes
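A sketch of the systematic construction for 21 to 99 sides described above, combining a tens prefix, the optional "kai" connector, a units prefix and "-gon". The prefix spellings used here are the usual Greek-derived ones, but spellings vary between authors, and some omit "kai", as noted.

```python
# Systematic polygon names for 21-99 sides, as described above. Prefix
# spellings vary between authors; some omit the "kai" connector.
TENS = {20: "icosi", 30: "triaconta", 40: "tetraconta", 50: "pentaconta",
        60: "hexaconta", 70: "heptaconta", 80: "octaconta", 90: "enneaconta"}
UNITS = {1: "hena", 2: "di", 3: "tri", 4: "tetra", 5: "penta",
         6: "hexa", 7: "hepta", 8: "octa", 9: "ennea"}

def polygon_name(n, kai=True):
    assert 21 <= n <= 99, "this sketch covers 21-99 sides"
    tens, units = (n // 10) * 10, n % 10
    if units == 0:
        return TENS[tens] + "gon"            # e.g. 30 -> triacontagon
    connector = "kai" if kai else ""
    return TENS[tens] + connector + UNITS[units] + "gon"

print(polygon_name(21))              # icosikaihenagon
print(polygon_name(42, kai=False))   # tetracontadigon
```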
https://en.wikipedia.org/wiki/List%20of%20International%20Mathematical%20Olympiads
The first of the International Mathematical Olympiads (IMOs) was held in Romania in 1959. The oldest of the International Science Olympiads, the IMO has since been held annually, except in 1980. That year, the competition initially planned to be held in Mongolia was cancelled due to the Soviet invasion of Afghanistan. Because the competition was initially founded for Eastern European countries participating in the Warsaw Pact, under the influence of the Eastern Bloc, the earlier IMOs were hosted only in Eastern European countries, gradually spreading to other nations. Sources differ about the cities hosting some of the early IMOs and the exact dates when they took place. The first IMO was held in Romania in 1959. Seven countries entered – Bulgaria, Czechoslovakia, East Germany, Hungary, Poland, Romania and the Soviet Union – with the hosts finishing as the top-ranked nation. The number of participating countries has since risen: 14 countries took part in 1969, 50 in 1989, and 104 in 2009. North Korea is the only country to have been caught cheating, resulting in its disqualification at the 32nd IMO in 1991 and the 51st IMO in 2010. In January 2011, Google gave €1 million to the IMO organization to help cover the costs of the events from 2011 to 2015. List of Olympiads See also Asian Pacific Mathematics Olympiad Provincial Mathematical Olympiad List of mathematics competitions List of International Mathematical Olympiad participants Notes
https://en.wikipedia.org/wiki/Existence%20theorem
In mathematics, an existence theorem is a theorem which asserts the existence of a certain object. It might be a statement which begins with the phrase "there exist(s)", or it might be a universal statement whose last quantifier is existential (e.g., "for all x, y, ... there exist(s) ..."). In the formal terms of symbolic logic, an existence theorem is a theorem with a prenex normal form involving the existential quantifier, even though in practice, such theorems are usually stated in standard mathematical language. For example, the statement that the sine function is continuous everywhere, or any theorem written in big O notation, can be considered as theorems which are existential by nature—since the quantification can be found in the definitions of the concepts used. A controversy that goes back to the early twentieth century concerns the issue of purely theoretic existence theorems, that is, theorems which depend on non-constructive foundational material such as the axiom of infinity, the axiom of choice or the law of excluded middle. Such theorems provide no indication as to how to construct (or exhibit) the object whose existence is being claimed. From a constructivist viewpoint, such approaches are not viable as it leads to mathematics losing its concrete applicability, while the opposing viewpoint is that abstract methods are far-reaching, in a way that numerical analysis cannot be. 'Pure' existence results In mathematics, an existence theorem is purely theoretical if the proof given for it does not indicate a construction of the object whose existence is asserted. Such a proof is non-constructive, since the whole approach may not lend itself to construction. In terms of algorithms, purely theoretical existence theorems bypass all algorithms for finding what is asserted to exist. These are to be contrasted with the so-called "constructive" existence theorems, which many constructivist mathematicians working in extended logics (such as intuitionistic logic) b
https://en.wikipedia.org/wiki/Standard%20Model
The Standard Model of particle physics is the theory describing three of the four known fundamental forces (electromagnetic, weak and strong interactions – excluding gravity) in the universe and classifying all known elementary particles. It was developed in stages throughout the latter half of the 20th century, through the work of many scientists worldwide, with the current formulation being finalized in the mid-1970s upon experimental confirmation of the existence of quarks. Since then, proof of the top quark (1995), the tau neutrino (2000), and the Higgs boson (2012) have added further credence to the Standard Model. In addition, the Standard Model has predicted various properties of weak neutral currents and the W and Z bosons with great accuracy. Although the Standard Model is believed to be theoretically self-consistent and has demonstrated some success in providing experimental predictions, it leaves some physical phenomena unexplained and so falls short of being a complete theory of fundamental interactions. For example, it does not fully explain baryon asymmetry, incorporate the full theory of gravitation as described by general relativity, or account for the universe's accelerating expansion as possibly described by dark energy. The model does not contain any viable dark matter particle that possesses all of the required properties deduced from observational cosmology. It also does not incorporate neutrino oscillations and their non-zero masses. The development of the Standard Model was driven by theoretical and experimental particle physicists alike. The Standard Model is a paradigm of a quantum field theory for theorists, exhibiting a wide range of phenomena, including spontaneous symmetry breaking, anomalies, and non-perturbative behavior. It is used as a basis for building more exotic models that incorporate hypothetical particles, extra dimensions, and elaborate symmetries (such as supersymmetry) to explain experimental results at variance with the
https://en.wikipedia.org/wiki/SCSI%20RDMA%20Protocol
In computing the SCSI RDMA Protocol (SRP) is a protocol that allows one computer to access SCSI devices attached to another computer via remote direct memory access (RDMA). The SRP protocol is also known as the SCSI Remote Protocol. The use of RDMA makes higher throughput and lower latency possible than what is generally available through e.g. the TCP/IP communication protocol. Though the SRP protocol has been designed to use RDMA networks efficiently, it is also possible to implement the SRP protocol over networks that do not support RDMA. History SRP was published as an ANSI standard (ANSI INCITS 365-2002) in 2002 and renewed in 2007 and 2019. Related Protocols As with the ISCSI Extensions for RDMA (iSER) communication protocol, there is the notion of a target (a system that stores the data) and an initiator (a client accessing the target) with the target initiating data transfers. In other words, when an initiator writes data to a target, the target executes an RDMA read to fetch the data from the initiator and when a user issues a SCSI read command, the target sends an RDMA write to the initiator. While the SRP protocol is easier to implement than the iSER protocol, iSER offers more management functionality, e.g. the target discovery infrastructure enabled by the iSCSI protocol. Performance Bandwidth and latency of storage targets supporting the SRP or the iSER protocol should be similar. On Linux, there are two SRP and two iSER storage target implementations available that run inside the kernel (SCST and LIO) and an iSER storage target implementation that runs in user space (STGT). Measurements have shown that the SCST SRP target has a lower latency and a higher bandwidth than the STGT iSER target. This is probably because the RDMA communication overhead is lower for a component implemented in the Linux kernel than for a user space Linux process, and not because of protocol differences. Implementations In order to use the SRP protocol, an SRP initiato
https://en.wikipedia.org/wiki/BrainChip
BrainChip (ASX:BRN, OTCQX:BRCHF) is an Australia-based technology company, founded in 2004 by Peter Van Der Made, that specializes in developing advanced artificial intelligence (AI) and machine learning (ML) hardware. The company's primary products are the MetaTF development environment, which allows the training and deployment of spiking neural networks (SNN), and the AKD1000 neuromorphic processor, a hardware implementation of their spiking neural network system. BrainChip's technology is based on a neuromorphic computing architecture, which attempts to mimic the way the human brain works. The company is a part of Intel Foundry Services and Arm AI partnership. History Australian mining company Aziana acquired BrainChip in March 2015. Later, via a reverse merger of the now dormant Aziana in September 2015 BrainChip was put on the Australian Stock Exchange (ASX), and van der Made started commercializing his original idea for artificial intelligence processor hardware. In 2016, the company appointed former Exar CEO Louis Di Nardo as CEO; Van Der Made then took the position of CTO. In October 2021, the company announced that it was taking orders for its Akida AI Processor Development Kits, and in January 2022, that it was taking orders for its Akida AI Processor PCIe boards. In April 2022, BrainChip partnered with NVISO to provide collaboration with applications and technologies. In November 2022, BrainChip added the Rochester Institute of Technology to its University AI accelerator program. The next month, BrainChip was a part of Intel Foundry Services. In January 2023, Edge Impulse announced support for BrainChip's AKD processor. MetaTF The MetaTF software is designed to work with a variety of image, video, and sensor data, and is intended to be implemented in a range of applications, including security, surveillance, autonomous vehicles, and industrial automation. The software uses Python to create spiking neural networks (or convert other neural networks to
https://en.wikipedia.org/wiki/Google%20Silicon%20Initiative
The Google Open Silicon Initiative is an initiative launched by the Google Hardware Toolchains team to democratize access to custom silicon design. Google has partnered with SkyWater Technology and GlobalFoundries to open-source their Process Design Kits for 180 nm, 130 nm and 90 nm processes. This initiative provides free software tools for chip designers to create, verify and test virtual chip circuit designs before they are physically produced in factories. The aim of the initiative is to reduce the cost of chip design and production, which will benefit DIY enthusiasts, researchers, universities, and chip startups. The program has gained more partners, including the US Department of Defense, which injected $15 million in funding into SkyWater, one of the manufacturers supporting the program.
https://en.wikipedia.org/wiki/Food%20quality
Food quality is a concept often based on the organoleptic characteristics (e.g., taste, aroma, appearance) and nutritional value of food. Producers reducing potential pathogens and other hazards through food safety practices is another important factor in gauging standards. A food's origin, and even its branding, can play a role in how consumers perceive the quality of products. Sensory Consumer acceptability of foods is typically based upon flavor and texture, as well as their color and smell. Safety The International Organization for Standardization identifies requirements for a producer's food safety management system, including the processes and procedures a company must follow to control hazards and promote safe products, through ISO 22000. Federal and state level departments, specifically The Food and Drug Administration, are responsible for promoting public health by, among other things, ensuring food safety. The European Food Safety Authority provides scientific advice and communicates on risks associated with the food chain on the continent. There are many existing international quality institutes testing food products in order to indicate to all consumers which are higher quality products. Founded in 1961 in Brussels, the international Monde Selection quality award is the oldest in evaluating food quality. The judgements are based on the following areas: taste, health, convenience, labelling, packaging, environmental friendliness and innovation. As many consumers rely on manufacturing and processing standards, the Institute Monde Selection takes into account the European Food Law. Food quality in the United Kingdom is enforced by the Food Safety Act 1990. Members of the public complain to trading standards professionals, who submit complaint samples and also samples used to routinely monitor the food marketplace to public analysts. Public analysts carry out scientific ana
https://en.wikipedia.org/wiki/Probability%20and%20statistics
Probability and statistics are two closely related fields in mathematics, sometimes combined for academic purposes. They are covered in several articles: Probability Statistics Glossary of probability and statistics Notation in probability and statistics Timeline of probability and statistics
https://en.wikipedia.org/wiki/Polynomial%20Wigner%E2%80%93Ville%20distribution
In signal processing, the polynomial Wigner–Ville distribution is a quasiprobability distribution that generalizes the Wigner distribution function. It was proposed by Boualem Boashash and Peter O'Shea in 1994. Introduction Many signals in nature and in engineering applications can be modeled as , where is a polynomial phase and . For example, it is important to detect signals of an arbitrary high-order polynomial phase. However, the conventional Wigner–Ville distribution has the limitation of being based on second-order statistics. Hence, the polynomial Wigner–Ville distribution was proposed as a generalized form of the conventional Wigner–Ville distribution, which is able to deal with signals with nonlinear phase. Definition The polynomial Wigner–Ville distribution is defined as where denotes the Fourier transform with respect to , and is the polynomial kernel given by where is the input signal and is an even number. The above expression for the kernel may be rewritten in symmetric form as The discrete-time version of the polynomial Wigner–Ville distribution is given by the discrete Fourier transform of where and is the sampling frequency. The conventional Wigner–Ville distribution is a special case of the polynomial Wigner–Ville distribution with Example One of the simplest generalizations of the usual Wigner–Ville distribution kernel can be achieved by taking . The set of coefficients and must be found to completely specify the new kernel. For example, we set The resulting discrete-time kernel is then given by Design of a Practical Polynomial Kernel Given a signal , where is a polynomial function, its instantaneous frequency (IF) is . For a practical polynomial kernel , the set of coefficients and should be chosen properly such that When , When Applications Nonlinear FM signals are common both in nature and in engineering applications. For example, the sonar systems of some bats use hyperbolic FM and quadratic FM signals for e
https://en.wikipedia.org/wiki/Viral%20eukaryogenesis
Viral eukaryogenesis is the hypothesis that the cell nucleus of eukaryotic life forms evolved from a large DNA virus in a form of endosymbiosis within a methanogenic archaeon or a bacterium. The virus later evolved into the eukaryotic nucleus by acquiring genes from the host genome and eventually usurping its role. The hypothesis was first proposed by Philip Bell in 2001 and was further popularized with the discovery of large, complex DNA viruses (such as Mimivirus) that are capable of protein biosynthesis. Viral eukaryogenesis has been controversial for several reasons. For one, it is sometimes argued that the posited evidence for the viral origins of the nucleus can be conversely used to suggest the nuclear origins of some viruses. Secondly, this hypothesis has further inflamed the longstanding debate over whether viruses are living organisms. Hypothesis The viral eukaryogenesis hypothesis posits that eukaryotes are composed of three ancestral elements: a viral component that became the modern nucleus; a prokaryotic cell (an archaeon according to the eocyte hypothesis) which donated the cytoplasm and cell membrane of modern cells; and another prokaryotic cell (here bacterium) that, by endocytosis, became the modern mitochondrion or chloroplast. In 2006, researchers suggested that the transition from RNA to DNA genomes first occurred in the viral world. A DNA-based virus may have provided storage for an ancient host that had previously used RNA to store its genetic information (such host is called ribocell or ribocyte). Viruses may initially have adopted DNA as a way to resist RNA-degrading enzymes in the host cells. Hence, the contribution from such a new component may have been as significant as the contribution from chloroplasts or mitochondria. Following this hypothesis, archaea, bacteria, and eukaryotes each obtained their DNA informational system from a different virus. In the original paper it was also an RNA cell at the origin of eukaryotes, but eventu
https://en.wikipedia.org/wiki/Chronux
Chronux is an open-source software package developed for the loading, visualization and analysis of a variety of modalities / formats of neurobiological time series data. Usage of this tool enables neuroscientists to perform a variety of analyses on multichannel electrophysiological data such as LFP (local field potentials), EEG, MEG, neuronal spike times and also on spatiotemporal data such as fMRI and dynamic optical imaging data. The software consists of a set of MATLAB routines interfaced with C libraries that can be used to perform the tasks that constitute a typical study of neurobiological data. These include local regression and smoothing, spike sorting and spectral analysis - including multitaper spectral analysis, a powerful nonparametric method to estimate the power spectrum. The package also includes some GUIs for time series visualization and analysis. Chronux is GNU GPL v2 licensed (and MATLAB is proprietary). The most recent version of Chronux is version 2.12. History From 1996 to 2001, the Marine Biological Laboratory (MBL) at Woods Hole, Massachusetts, USA hosted a workshop on the analysis of neural data. This workshop then evolved into the special topics course on neuroinformatics which is held at the MBL in the last two weeks of August every year. The popularity of these pedagogical efforts and the need for wider dissemination of sophisticated time-series analysis tools in the wider neuroscience community led the Mitra Lab at Cold Spring Harbor Laboratory to initiate an NIH funded effort to develop software tools for neural data analysis in the form of the Chronux package. Chronux is the result of efforts of a number of people, the chief among whom are Hemant Bokil, Peter Andrews, Samar Mehta, Ken Harris, Catherine Loader, Partha Mitra, Hiren Maniar, Ravi Shukla, Ramesh Yadav, Hariharan Nalatore and Sumanjit Kaur. Important contributions were also made by Murray Jarvis, Bijan Pesaran and S.Gopinath. Chronux welcomes contributions from interested ind
https://en.wikipedia.org/wiki/Quantization%20%28signal%20processing%29
Quantization, in mathematics and digital signal processing, is the process of mapping input values from a large set (often a continuous set) to output values in a (countable) smaller set, often with a finite number of elements. Rounding and truncation are typical examples of quantization processes. Quantization is involved to some degree in nearly all digital signal processing, as the process of representing a signal in digital form ordinarily involves rounding. Quantization also forms the core of essentially all lossy compression algorithms. The difference between an input value and its quantized value (such as round-off error) is referred to as quantization error. A device or algorithmic function that performs quantization is called a quantizer. An analog-to-digital converter is an example of a quantizer. Example For example, rounding a real number to the nearest integer value forms a very basic type of quantizer – a uniform one. A typical (mid-tread) uniform quantizer with a quantization step size equal to some value Δ can be expressed as Q(x) = Δ · ⌊x/Δ + 1/2⌋, where the notation ⌊·⌋ denotes the floor function. Alternatively, the same quantizer may be expressed in terms of the ceiling function, as Q(x) = Δ · ⌈x/Δ − 1/2⌉. (The notation ⌈·⌉ denotes the ceiling function.) The essential property of a quantizer is that it has a countable set of possible output values that is smaller than the set of possible input values. The members of the set of output values may have integer, rational, or real values. For simple rounding to the nearest integer, the step size is equal to 1. With Δ = 1, or with Δ equal to any other integer value, this quantizer has real-valued inputs and integer-valued outputs. When the quantization step size (Δ) is small relative to the variation in the signal being quantized, it is relatively simple to show that the mean squared error produced by such a rounding operation will be approximately Δ²/12. Mean squared error is also called the quantization noise power. Adding one bit to the quantizer ha
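A short numerical sketch of the mid-tread uniform quantizer and the Δ²/12 noise-power approximation given above; the step size and test signal are arbitrary.

```python
# Mid-tread uniform quantizer Q(x) = delta * floor(x/delta + 1/2), and an
# empirical check of the quantization noise power approximation delta^2 / 12.
# Step size and test signal are arbitrary.
import numpy as np

def quantize(x, delta):
    return delta * np.floor(x / delta + 0.5)

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 100000)     # signal whose variation is much larger than delta
delta = 0.05
error = x - quantize(x, delta)

print(np.mean(error ** 2))             # ~2.08e-4, the measured noise power
print(delta ** 2 / 12)                 # 2.083e-4, the approximation above
```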
https://en.wikipedia.org/wiki/Magnetoreception
Magnetoreception is a sense which allows an organism to detect the Earth's magnetic field. Animals with this sense include some arthropods, molluscs, and vertebrates (fish, amphibians, reptiles, birds, and mammals). The sense is mainly used for orientation and navigation, but it may help some animals to form regional maps. Experiments on migratory birds provide evidence that they make use of a cryptochrome protein in the eye, relying on the quantum radical pair mechanism to perceive magnetic fields. This effect is extremely sensitive to weak magnetic fields, and readily disturbed by radio-frequency interference, unlike a conventional iron compass. Birds have iron-containing materials in their upper beaks. There is some evidence that this provides a magnetic sense, mediated by the trigeminal nerve, but the mechanism is unknown. Cartilaginous fish including sharks and stingrays can detect small variations in electric potential with their electroreceptive organs, the ampullae of Lorenzini. These appear to be able to detect magnetic fields by induction. There is some evidence that these fish use magnetic fields in navigation. History Biologists have long wondered whether migrating animals such as birds and sea turtles have an inbuilt magnetic compass, enabling them to navigate using the Earth's magnetic field. Until late in the 20th century, evidence for this was essentially only behavioural: many experiments demonstrated that animals could indeed derive information from the magnetic field around them, but gave no indication of the mechanism. In 1972, Roswitha and Wolfgang Wiltschko showed that migratory birds responded to the direction and inclination (dip) of the magnetic field. In 1977, M. M. Walker and colleagues identified iron-based (magnetite) magnetoreceptors in the snouts of rainbow trout. In 2003, G. Fleissner and colleagues found iron-based receptors in the upper beaks of homing pigeons, both seemingly connected to the animal's trigeminal nerve. Resear
https://en.wikipedia.org/wiki/Audio%20signal%20processing
Audio signal processing is a subfield of signal processing that is concerned with the electronic manipulation of audio signals. Audio signals are electronic representations of sound waves—longitudinal waves which travel through air, consisting of compressions and rarefactions. The energy contained in audio signals or sound power level is typically measured in decibels. As audio signals may be represented in either digital or analog format, processing may occur in either domain. Analog processors operate directly on the electrical signal, while digital processors operate mathematically on its digital representation. History The motivation for audio signal processing began at the beginning of the 20th century with inventions like the telephone, phonograph, and radio that allowed for the transmission and storage of audio signals. Audio processing was necessary for early radio broadcasting, as there were many problems with studio-to-transmitter links. The theory of signal processing and its application to audio was largely developed at Bell Labs in the mid 20th century. Claude Shannon and Harry Nyquist's early work on communication theory, sampling theory and pulse-code modulation (PCM) laid the foundations for the field. In 1957, Max Mathews became the first person to synthesize audio from a computer, giving birth to computer music. Major developments in digital audio coding and audio data compression include differential pulse-code modulation (DPCM) by C. Chapin Cutler at Bell Labs in 1950, linear predictive coding (LPC) by Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) in 1966, adaptive DPCM (ADPCM) by P. Cummiskey, Nikil S. Jayant and James L. Flanagan at Bell Labs in 1973, discrete cosine transform (DCT) coding by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974, and modified discrete cosine transform (MDCT) coding by J. P. Princen, A. W. Johnson and A. B. Bradley at the University of Surrey in 1987. LPC is the basis for p
https://en.wikipedia.org/wiki/InfinityDB
InfinityDB is an all-Java embedded database engine and client/server DBMS with an extended java.util.concurrent.ConcurrentNavigableMap interface (a subinterface of java.util.Map) that is deployed in handheld devices, on servers, on workstations, and in distributed settings. The design is based on a proprietary lockless, concurrent, B-tree architecture that enables client programmers to reach high levels of performance without risk of failures. A new Client/Server version 5.0 is in alpha testing, wrapping the established embedded version to provide shared access via a secure, remote server. In the embedded system, data is stored to and retrieved from a single embedded database file using the InfinityDB API, which allows direct access to the variable-length item spaces. Database client programmers can construct traditional relations as well as specialized models that directly satisfy the needs of the dependent application. There is no limit to the number of items, the database size, or the JVM size, so InfinityDB can function both in the smallest environments that provide random-access storage and in large-scale settings. Traditional relations and specialized models can be directed to the same database file. InfinityDB can be optimized for standard relations as well as all other types of data, allowing client applications to perform at a minimum of one million operations per second on a virtual, 8-core system. AirConcurrentMap is an in-memory map that implements the Java ConcurrentMap interface, but internally it uses a multi-core design so that its performance and memory use make it the fastest Java Map when ordering is performed and it holds medium to large numbers of entries. AirConcurrentMap iteration is faster than that of any other Java Map, regardless of the specific map type. Map API InfinityDB can be accessed as an extended standard java.util.concurrent.ConcurrentNavigableMap, or via a low-level 'ItemSpace' API. The ConcurrentNavigableMap interface is a subinterf
https://en.wikipedia.org/wiki/Autotroph
An autotroph is an organism that produces complex organic compounds (such as carbohydrates, fats, and proteins) using carbon from simple substances such as carbon dioxide, generally using energy from light (photosynthesis) or inorganic chemical reactions (chemosynthesis). They convert an abiotic source of energy (e.g. light) into energy stored in organic compounds, which can be used by other organisms (e.g. heterotrophs). Autotrophs do not need a living source of carbon or energy and are the producers in a food chain, such as plants on land or algae in water (in contrast to heterotrophs as consumers of autotrophs or other heterotrophs). Autotrophs can reduce carbon dioxide to make organic compounds for biosynthesis and as stored chemical fuel. Most autotrophs use water as the reducing agent, but some can use other hydrogen compounds such as hydrogen sulfide. The primary producers can convert the energy in the light (phototroph and photoautotroph) or the energy in inorganic chemical compounds (chemotrophs or chemolithotrophs) to build organic molecules, which is usually accumulated in the form of biomass and will be used as carbon and energy source by other organisms (e.g. heterotrophs and mixotrophs). The photoautotrophs are the main primary producers, converting the energy of the light into chemical energy through photosynthesis, ultimately building organic molecules from carbon dioxide, an inorganic carbon source. Examples of chemolithotrophs are some archaea and bacteria (unicellular organisms) that produce biomass from the oxidation of inorganic chemical compounds, these organisms are called chemoautotrophs, and are frequently found in hydrothermal vents in the deep ocean. Primary producers are at the lowest trophic level, and are the reasons why Earth sustains life to this day. Most chemoautotrophs are lithotrophs, using inorganic electron donors such as hydrogen sulfide, hydrogen gas, elemental sulfur, ammonium and ferrous oxide as reducing agents and hyd
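As a concrete illustration of the carbon-dioxide reduction described above, the simplified overall equation for oxygenic photosynthesis can be written as follows; this is the standard textbook form, not an equation quoted from the excerpt:

```latex
% Simplified overall reaction for oxygenic photosynthesis,
% with water as the reducing agent and light as the energy source:
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\text{light}} \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
```

Chemoautotrophs carry out an analogous reduction but draw the required electrons from inorganic compounds such as hydrogen sulfide rather than from water.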
https://en.wikipedia.org/wiki/Arbitrarily%20large
In mathematics, the phrases arbitrarily large, arbitrarily small and arbitrarily long are used in statements to make clear that an object is large, small or long with little limitation or restraint, respectively. The use of "arbitrarily" often occurs in the context of real numbers (and their subsets), though its meaning can differ from that of "sufficiently" and "infinitely". Examples The statement "f(x) is non-negative for arbitrarily large x" is a shorthand for: "For every real number n, f(x) is non-negative for some value of x greater than n." In the common parlance, the term "arbitrarily long" is often used in the context of sequences of numbers. For example, to say that there are "arbitrarily long arithmetic progressions of prime numbers" does not mean that there exists any infinitely long arithmetic progression of prime numbers (there is not), nor that there exists any particular arithmetic progression of prime numbers that is in some sense "arbitrarily long". Rather, the phrase is used to refer to the fact that no matter how large a number n is, there exists some arithmetic progression of prime numbers of length at least n. Similar to arbitrarily large, one can also define the phrase "P(x) holds for arbitrarily small real numbers", as follows: for every ε > 0, there exists some x with |x| < ε such that P(x) holds. In other words: however small a number, there will be a number x smaller than it such that P(x) holds. Arbitrarily large vs. sufficiently large vs. infinitely large While similar, "arbitrarily large" is not equivalent to "sufficiently large". For instance, while it is true that prime numbers can be arbitrarily large (since there are infinitely many of them due to Euclid's theorem), it is not true that all sufficiently large numbers are prime. As another example, the statement "f(x) is non-negative for arbitrarily large x" could be rewritten as: for every n, there exists some x > n such that f(x) ≥ 0. However, using "sufficiently large", the same phrase becomes: there exists n such that f(x) ≥ 0 for every x > n. Furthermore, "arbitrarily large" also does not mean "infinitely large". For example, although prime number
https://en.wikipedia.org/wiki/Harmonic%20%28mathematics%29
In mathematics, a number of concepts employ the word harmonic. The similarity of this terminology to that of music is not accidental: the equations of motion of vibrating strings, drums and columns of air are given by formulas involving Laplacians; the solutions to which are given by eigenvalues corresponding to their modes of vibration. Thus, the term "harmonic" is applied when one is considering functions with sinusoidal variations, or solutions of Laplace's equation and related concepts. Mathematical terms whose names include "harmonic" include: Projective harmonic conjugate Cross-ratio Harmonic analysis Harmonic conjugate Harmonic form Harmonic function Harmonic mean Harmonic mode Harmonic number Harmonic series Alternating harmonic series Harmonic tremor Spherical harmonics Mathematical terminology Harmonic analysis
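For orientation, two of the objects behind the terms listed above can be written out explicitly; these are the standard definitions, assuming the usual notation:

```latex
% Laplace's equation, whose solutions are the harmonic functions:
\Delta f = \sum_{i=1}^{d} \frac{\partial^2 f}{\partial x_i^2} = 0

% The harmonic series, and the harmonic mean of positive numbers x_1, ..., x_n:
\sum_{k=1}^{\infty} \frac{1}{k},
\qquad
H = \frac{n}{\dfrac{1}{x_1} + \cdots + \dfrac{1}{x_n}}
```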
https://en.wikipedia.org/wiki/Integrated%20fluidic%20circuit
Integrated fluidic circuit (IFC) is a type of integrated circuit based on fluidics. See also Microfluidics Biotechnology Fluid mechanics Integrated circuits
https://en.wikipedia.org/wiki/Stein%27s%20example
In decision theory and estimation theory, Stein's example (also known as Stein's phenomenon or Stein's paradox) is the observation that when three or more parameters are estimated simultaneously, there exist combined estimators more accurate on average (that is, having lower expected mean squared error) than any method that handles the parameters separately. It is named after Charles Stein of Stanford University, who discovered the phenomenon in 1955. An intuitive explanation is that optimizing for the mean-squared error of a combined estimator is not the same as optimizing for the errors of separate estimators of the individual parameters. In practical terms, if the combined error is in fact of interest, then a combined estimator should be used, even if the underlying parameters are independent. If one is instead interested in estimating an individual parameter, then using a combined estimator does not help and is in fact worse. Formal statement The following is the simplest form of the paradox, the special case in which the number of observations is equal to the number of parameters to be estimated. Let be a vector consisting of unknown parameters. To estimate these parameters, a single measurement is performed for each parameter , resulting in a vector of length . Suppose the measurements are known to be independent, Gaussian random variables, with mean and variance 1, i.e., . Thus, each parameter is estimated using a single noisy measurement, and each measurement is equally inaccurate. Under these conditions, it is intuitive and common to use each measurement as an estimate of its corresponding parameter. This so-called "ordinary" decision rule can be written as , which is the maximum likelihood estimator (MLE). The quality of such an estimator is measured by its risk function. A commonly used risk function is the mean squared error, defined as . Surprisingly, it turns out that the "ordinary" decision rule is suboptimal (inadmissible) in terms of mean
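The dominance claimed above can be checked numerically. The sketch below is a minimal Monte Carlo illustration (ours, not from the article) using the James–Stein estimator, the classic combined rule associated with Stein's example; the dimension, random seed and true parameter vector are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                       # number of parameters; the effect requires n >= 3
theta = rng.normal(size=n)   # an arbitrary "true" parameter vector
trials = 200_000

X = theta + rng.normal(size=(trials, n))   # one unit-variance measurement per parameter
mle = X                                    # "ordinary" rule: estimate theta_i by X_i alone

# James-Stein estimator: shrink every measurement toward the origin by a common factor.
shrink = 1.0 - (n - 2) / np.sum(X**2, axis=1, keepdims=True)
js = shrink * X

risk_mle = np.mean(np.sum((mle - theta) ** 2, axis=1))
risk_js = np.mean(np.sum((js - theta) ** 2, axis=1))
print(f"ordinary (MLE) risk ~ {risk_mle:.3f}   (theoretical value: {n})")
print(f"James-Stein risk    ~ {risk_js:.3f}   (smaller, for every choice of theta)")
```

Note that the shrinkage couples the coordinates: each estimate depends on all n measurements, which is exactly what the combined estimator in the text refers to.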
https://en.wikipedia.org/wiki/Lists%20of%20physics%20equations
In physics, there are equations in every field to relate physical quantities to each other and perform calculations. Entire handbooks of equations can only summarize most of the full subject, or else are highly specialized within a certain field. Much of physics is expressed through such formulae. General scope Variables commonly used in physics Continuity equation Constitutive equation Specific scope Defining equation (physical chemistry) List of equations in classical mechanics Table of thermodynamic equations List of equations in wave theory List of relativistic equations List of equations in fluid mechanics List of electromagnetism equations List of equations in gravitation List of photonics equations List of equations in quantum mechanics List of equations in nuclear and particle physics See also List of equations Operator (physics) Laws of science Units and nomenclature Physical constant Physical quantity SI units SI derived unit SI electromagnetism units List of common physics notations
https://en.wikipedia.org/wiki/Node%20%28physics%29
A node is a point along a standing wave where the wave has minimum amplitude. For instance, in a vibrating guitar string, the ends of the string are nodes. By changing the position of the end node through frets, the guitarist changes the effective length of the vibrating string and thereby the note played. The opposite of a node is an anti-node, a point where the amplitude of the standing wave is at maximum. These occur midway between the nodes. Explanation Standing waves result when two sinusoidal wave trains of the same frequency are moving in opposite directions in the same space and interfere with each other. They occur when waves are reflected at a boundary, such as sound waves reflected from a wall or electromagnetic waves reflected from the end of a transmission line, and particularly when waves are confined in a resonator at resonance, bouncing back and forth between two boundaries, such as in an organ pipe or guitar string. In a standing wave the nodes are a series of locations at equally spaced intervals where the wave amplitude (motion) is zero (see animation above). At these points the two waves add with opposite phase and cancel each other out. They occur at intervals of half a wavelength (λ/2). Midway between each pair of nodes are locations where the amplitude is maximum. These are called the antinodes. At these points the two waves add with the same phase and reinforce each other. In cases where the two opposite wave trains are not the same amplitude, they do not cancel perfectly, so the amplitude of the standing wave at the nodes is not zero but merely a minimum. This occurs when the reflection at the boundary is imperfect. This is indicated by a finite standing wave ratio (SWR), the ratio of the amplitude of the wave at the antinode to the amplitude at the node. In resonance of a two dimensional surface or membrane, such as a drumhead or vibrating metal plate, the nodes become nodal lines, lines on the surface where the surface is
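As a worked example of the λ/2 spacing described above, consider a string of length L fixed at both ends and vibrating in its n-th mode (a standard setting, assumed here only for illustration):

```latex
\lambda_n = \frac{2L}{n},
\qquad
\text{nodes at } x_k = k\,\frac{\lambda_n}{2} = \frac{kL}{n}, \quad k = 0, 1, \dots, n,
\qquad
\text{antinodes at } x = \frac{(2k+1)L}{2n}.
```

The fixed ends are the nodes k = 0 and k = n, and each antinode lies midway between two adjacent nodes, as stated in the text.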
https://en.wikipedia.org/wiki/Heartbeat%20%28computing%29
In computer science, a heartbeat is a periodic signal generated by hardware or software to indicate normal operation or to synchronize other parts of a computer system. The heartbeat mechanism is a common technique in mission-critical systems for providing high availability and fault tolerance of network services: it detects failures of the network, or of nodes or daemons belonging to a network cluster administered by a master server, so that the system can adapt and rebalance automatically, using the remaining redundant nodes of the cluster to take over the load of failed nodes and keep services running. Usually a heartbeat message is sent between machines at a regular interval on the order of seconds. If the endpoint does not receive a heartbeat for a time—usually a few heartbeat intervals—the machine that should have sent the heartbeat is assumed to have failed. Heartbeat messages are typically sent continually from the originator's start-up until the originator's shutdown. When the destination identifies a lack of heartbeat messages during an anticipated arrival period, the destination may determine that the originator has failed, shut down, or is generally no longer available. Heartbeat protocol A heartbeat protocol is generally used to negotiate and monitor the availability of a resource, such as a floating IP address; the procedure involves sending network packets to all the nodes in the cluster to verify their reachability. Typically when a heartbeat starts on a machine, it will perform an election process with other machines on the heartbeat network to determine which machine, if any, owns the resource. On heartbeat networks of more than two machines, it is important to take into account partitioning, where two halves of the network could be functioning but not able to communicate with each other. In a situation such as this, it is important that the resource is only owned by o
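A minimal sketch of the failure-detection logic described above is given below. It is illustrative only; the interval, threshold and class names are assumptions, not part of any particular heartbeat implementation.

```python
import time

HEARTBEAT_INTERVAL = 1.0                      # seconds between heartbeats (assumed)
FAILURE_THRESHOLD = 3 * HEARTBEAT_INTERVAL    # "a few heartbeat intervals"

class HeartbeatMonitor:
    """Tracks the last heartbeat seen from each originator (hypothetical sketch)."""

    def __init__(self):
        self.last_seen = {}

    def record(self, node_id):
        # Called whenever a heartbeat message arrives from node_id.
        self.last_seen[node_id] = time.monotonic()

    def failed_nodes(self):
        # Any node not heard from within the threshold is presumed to have failed.
        now = time.monotonic()
        return [node for node, t in self.last_seen.items()
                if now - t > FAILURE_THRESHOLD]
```

In use, each originator would send a small message every HEARTBEAT_INTERVAL seconds; the receiver calls record() for each arriving message and periodically checks failed_nodes() to decide when to shift load onto the remaining nodes.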
https://en.wikipedia.org/wiki/Switched%20Multi-megabit%20Data%20Service
Switched Multi-megabit Data Service (SMDS) was a connectionless service used to connect LANs, MANs and WANs to exchange data, in early 1990s. In Europe, the service was known as Connectionless Broadband Data Service (CBDS). SMDS was specified by Bellcore, and was based on the IEEE 802.6 metropolitan area network (MAN) standard, as implemented by Bellcore, and used cell relay transport, Distributed Queue Dual Bus layer-2 switching arbitrator, and standard SONET or G.703 as access interfaces. It is a switching service that provides data transmission in the range between 1.544 Mbit/s (T1 or DS1) to 45 Mbit/s (T3 or DS3). SMDS was developed by Bellcore as an interim service until Asynchronous Transfer Mode matured. SMDS was notable for its initial introduction of the 53-byte cell and cell switching approaches, as well as the method of inserting 53-byte cells onto G.703 and SONET. In the mid-1990s, SMDS was replaced, largely by Frame Relay.
https://en.wikipedia.org/wiki/Lipidology
Lipidology is the scientific study of lipids. Lipids are a group of biological macromolecules that have a multitude of functions in the body. Clinical studies on lipid metabolism in the body have led to developments in therapeutic lipidology for disorders such as cardiovascular disease. History Compared to other biomedical fields, lipidology was long-neglected as the handling of oils, smears, and greases was unappealing to scientists and lipid separation was difficult. It was not until 2002 that lipidomics, the study of lipid networks and their interaction with other molecules, appeared in the scientific literature. Attention to the field was bolstered by the introduction of chromatography, spectrometry, and various forms of spectroscopy to the field, allowing lipids to be isolated and analyzed. The field was further popularized following the cytologic application of the electron microscope, which led scientists to find that many metabolic pathways take place within, along, and through the cell membrane - the properties of which are strongly influenced by lipid composition. Clinical lipidology The Framingham Heart Study and other epidemiological studies have found a correlation between lipoproteins and cardiovascular disease (CVD). Lipoproteins are generally a major target of study in lipidology since lipids are transported throughout the body in the form of lipoproteins. A class of lipids known as phospholipids help make up what is known as lipoproteins, and a type of lipoprotein is called high density lipoprotein (HDL). A high concentration of high density lipoproteins-cholesterols (HDL-C) have what is known as a vasoprotective effect on the body, a finding that correlates with an enhanced cardiovascular effect. There is also a correlation between those with diseases such as chronic kidney disease, coronary artery disease, or diabetes mellitus and the possibility of low vasoprotective effect from HDL. Another factor of CVD that is often overlooked involves the
https://en.wikipedia.org/wiki/Glossary%20of%20mathematical%20symbols
A mathematical symbol is a figure or a combination of figures that is used to represent a mathematical object, an action on mathematical objects, a relation between mathematical objects, or for structuring the other symbols that occur in a formula. As formulas are entirely constituted with symbols of various types, many symbols are needed for expressing all mathematics. The most basic symbols are the decimal digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9), and the letters of the Latin alphabet. The decimal digits are used for representing numbers through the Hindu–Arabic numeral system. Historically, upper-case letters were used for representing points in geometry, and lower-case letters were used for variables and constants. Letters are used for representing many other sorts of mathematical objects. As the number of these sorts has remarkably increased in modern mathematics, the Greek alphabet and some Hebrew letters are also used. In mathematical formulas, the standard typeface is italic type for Latin letters and lower-case Greek letters, and upright type for upper case Greek letters. For having more symbols, other typefaces are also used, mainly boldface , script typeface (the lower-case script face is rarely used because of the possible confusion with the standard face), German fraktur , and blackboard bold (the other letters are rarely used in this face, or their use is unconventional). The use of Latin and Greek letters as symbols for denoting mathematical objects is not described in this article. For such uses, see Variable (mathematics) and List of mathematical constants. However, some symbols that are described here have the same shape as the letter from which they are derived, such as and . These letters alone are not sufficient for the needs of mathematicians, and many other symbols are used. Some take their origin in punctuation marks and diacritics traditionally used in typography; others by deforming letter forms, as in the cases of and . Others, such as
https://en.wikipedia.org/wiki/Jorge%20Luis%20Borges%20and%20mathematics
Jorge Luis Borges and mathematics concerns several modern mathematical concepts found in certain essays and short stories of Argentinian author Jorge Luis Borges (1899-1986), including concepts such as set theory, recursion, chaos theory, and infinite sequences, although Borges' strongest links to mathematics are through Georg Cantor's theory of infinite sets, outlined in "The Doctrine of Cycles" (La doctrina de los ciclos). Some of Borges' most popular works such as "The Library of Babel" (La Biblioteca de Babel), "The Garden of Forking Paths" (El Jardín de Senderos que se Bifurcan), "The Aleph" (El Aleph), an allusion to Cantor's use of the Hebrew letter aleph () to denote cardinality of transfinite sets, and "The Approach to Al-Mu'tasim" (El acercamiento a Almotásim) illustrate his use of mathematics. According to Argentinian mathematician Guillermo Martínez, Borges at least had a knowledge of mathematics at the level of first courses in algebra and analysis at a university – covering logic, paradoxes, infinity, topology and probability theory. He was also aware of the contemporary debates on the foundations of mathematics. Infinity and cardinality His 1939 essay "Avatars of the Tortoise" (Avatares de la Tortuga) is about infinity, and he opens by describing the book he would like to write on infinity: “five or seven years of metaphysical, theological, and mathematical training would prepare me (perhaps) for properly planning that book.” In Borges' 1941 story, "The Library of Babel", the narrator declares that the collection of books of a fixed number of orthographic symbols and pages is unending. However, since the permutations of twenty-five orthographic symbols is finite, the library has to be periodic and self-repeating. In his 1975 short story "The Book of Sand" (El Libro de Arena), he deals with another form of infinity; one whose elements are a dense set, that is, for any two elements, we can always find another between them. This concept was a
https://en.wikipedia.org/wiki/BasicX
BasicX is a free programming language designed specifically for NetMedia's BX-24 microcontroller and based on the BASIC programming language. It is used in the design of robotics projects such as the Robodyssey Systems Mouse robot. Further reading Odom, Chris D. BasicX and Robotics. Robodyssey Systems LLC, External links NetMedia Home Page BasicX Free Downloads Sample Code , programmed in BasicX Videos, Sample Code, and Tutorials from the author of BasicX and Robotics BASIC compilers Embedded systems
https://en.wikipedia.org/wiki/List%20of%20works%20by%20Nikolay%20Bogolyubov
List of some published works of Nikolay Bogolyubov in chronological order: 1924 N. N. Bogolyubov (1924). On the behavior of solutions of linear differential equations at infinity (). 1934 1937 N. N. Bogoliubov and N. M. Krylov (1937). "La theorie generalie de la mesure dans son application a l'etude de systemes dynamiques de la mecanique non-lineaire" (in French). Ann. Math. II 38: 65–113. Zbl. 16.86. 1945 1946 N. N. Bogoliubov (1946). "Kinetic Equations" (in Russian). Journal of Experimental and Theoretical Physics 16 (8): 691–702. N. N. Bogoliubov (1946). "Kinetic Equations" (in English). Journal of Physics 10 (3): 265–274. 1947 N. N. Bogoliubov, K. P. Gurov (1947). "Kinetic Equations in Quantum Mechanics" (in Russian). Journal of Experimental and Theoretical Physics 17 (7): 614–628. N. N. Bogoliubov (1947). "К теории сверхтекучести" (in Russian). Известия АН СССР, физика, 1947, 11, № 1, 77. N. N. Bogoliubov (1947). "On the Theory of Superfluidity" (in English). Journal of Physics 11 (1): 23–32. 1948 N. N. Bogoliubov (1948). "Equations of Hydrodynamics in Statistical Mechanics" (in Ukrainian). Sbornik Trudov Instituta Matematiki AN USSR 10: 41—59. 1949 N. N. Bogoliubov (1967—1970): Lectures on Quantum Statistics. Problems of Statistical Mechanics of Quantum Systems. New York, Gordon and Breach. 1955 1957 (1st edition) (3rd edition) N. N. Bogoliubov, O. S. Parasyuk (1957). "Uber die Multiplikation der Kausalfunktionen in der Quantentheorie der Felder" (in German). Acta Mathematica 97: 227–266. . 1958 N. N. Bogoliubov (1958). On a New Method in the Theory of Superconductivity. Journal of Experimental and Theoretical Physics 34 (1): 58. 1965 N. N. Bogolubov, B. V. Struminsky, A. N. Tavkhelidze (1965). On composite models in the theory of elementary particles. JINR Preprint D-1968, Dubna. External links Complete list Mathematics-related lists Bibliographies by writer
https://en.wikipedia.org/wiki/Agarose%20gel%20electrophoresis
Agarose gel electrophoresis is a method of gel electrophoresis used in biochemistry, molecular biology, genetics, and clinical chemistry to separate a mixed population of macromolecules such as DNA or proteins in a matrix of agarose, one of the two main components of agar. The proteins may be separated by charge and/or size (isoelectric focusing agarose electrophoresis is essentially size independent), and the DNA and RNA fragments by length. Biomolecules are separated by applying an electric field to move the charged molecules through an agarose matrix, and the biomolecules are separated by size in the agarose gel matrix. Agarose gel is easy to cast, has relatively fewer charged groups, and is particularly suitable for separating DNA of size range most often encountered in laboratories, which accounts for the popularity of its use. The separated DNA may be viewed with stain, most commonly under UV light, and the DNA fragments can be extracted from the gel with relative ease. Most agarose gels used are between 0.7–2% dissolved in a suitable electrophoresis buffer. Properties of agarose gel Agarose gel is a three-dimensional matrix formed of helical agarose molecules in supercoiled bundles that are aggregated into three-dimensional structures with channels and pores through which biomolecules can pass. The 3-D structure is held together with hydrogen bonds and can therefore be disrupted by heating back to a liquid state. The melting temperature is different from the gelling temperature, depending on the sources, agarose gel has a gelling temperature of 35–42 °C and a melting temperature of 85–95 °C. Low-melting and low-gelling agaroses made through chemical modifications are also available. Agarose gel has large pore size and good gel strength, making it suitable as an anticonvection medium for the electrophoresis of DNA and large protein molecules. The pore size of a 1% gel has been estimated from 100 nm to 200–500 nm, and its gel strength allows gels as dilute
https://en.wikipedia.org/wiki/Ishango%20bone
The Ishango bone, discovered at the "Fisherman Settlement" of Ishango in the Democratic Republic of Congo, is a bone tool and possible mathematical device that dates to the Upper Paleolithic era. The curved bone is dark brown in color, about 10 centimeters in length, and features a sharp piece of quartz affixed to one end, perhaps for engraving. Because the bone has been narrowed, scraped, polished, and engraved to a certain extent, it is no longer possible to determine what animal the bone belonged to, although it is assumed to belong to a mammal. The ordered engravings have led many to speculate the meaning behind these marks, including interpretations like mathematical significance or astrological relevance. It is thought by some to be a tally stick, as it features a series of what has been interpreted as tally marks carved in three columns running the length of the tool, though it has also been suggested that the scratches might have been to create a better grip on the handle or for some other non-mathematical reason. Others argue that the marks on the object are non-random and that it was likely a kind of counting tool and used to perform simple mathematical procedures. Other speculations include the engravings on the bone serving as a lunar calendar. Dating to 20,000 years before present, it is regarded as the oldest mathematical tool to humankind, with the possible exception of the approximately 40,000-year-old Lebombo bone from southern Africa. History Archaeological discovery The Ishango bone was found in 1950 by Belgian Jean de Heinzelin de Braucourt while exploring what was then the Belgian Congo. It was discovered in the area of Ishango near the Semliki River. Lake Edward empties into the Semliki which forms part of the headwaters of the Nile River (now on the border between modern-day Uganda and D.R. Congo). Some archaeologists believe the prior inhabitants of Ishango were a "pre-sapiens species". However, the most recent inhabitants, who gave the a
https://en.wikipedia.org/wiki/Millennium%20Mathematics%20Project
The Millennium Mathematics Project (MMP) was set up within the University of Cambridge in England as a joint project between the Faculties of Mathematics and Education in 1999. The MMP aims to support maths education for pupils of all abilities from ages 5 to 19 and promote the development of mathematical skills and understanding, particularly through enrichment and extension activities beyond the school curriculum, and to enhance the mathematical understanding of the general public. The project was directed by John Barrow from 1999 until September 2020. Programmes The MMP includes a range of complementary programmes: The NRICH website publishes free mathematics education enrichment material for ages 5 to 19. NRICH material focuses on problem-solving, building core mathematical reasoning and strategic thinking skills. In the academic year 2004/5 the website attracted over 1.7 million site visits (more than 49 million hits). Plus Magazine is a free online maths magazine for age 15+ and the general public. In 2004/5, Plus attracted over 1.3 million website visits (more than 31 million hits). The website won the Webby award in 2001 for the best Science site on the Internet. The Motivate video-conferencing project links university mathematicians and scientists to primary and secondary schools in areas of the UK from Jersey and Belfast to Glasgow and inner-city London, with international links to Pakistan, South Africa, India and Singapore. The project has also developed a Hands On Maths Roadshow presenting creative methods of exploring mathematics, and in 2004 took on the running of Simon Singh's Enigma schools workshops, exploring maths through cryptography and codebreaking. Both are taken to primary and secondary schools and public venues such as shopping centres across the UK and Ireland. James Grime is the Enigma Project Officer and gives talks in schools and to the general public about the history and mathematics of code breaking - including the demonstration of
https://en.wikipedia.org/wiki/Bond%20Bridge
Bond Bridge is a Wi-Fi device that communicates with infrared- or RF-controlled devices, such as ceiling fans, shades, and fireplaces. These devices often come with a battery-powered remote control. The Bond Bridge receives commands from a network port and forwards them to the remote-controlled device by simulating the signals the remote control would produce. One example installation is a Bond Bridge in the dining room of Baywood Court, a senior community, where it controls the ceiling fans. Broadlink RM4 is a competing product.
https://en.wikipedia.org/wiki/Ecological%20facilitation
Ecological facilitation or probiosis describes species interactions that benefit at least one of the participants and cause harm to neither. Facilitations can be categorized as mutualisms, in which both species benefit, or commensalisms, in which one species benefits and the other is unaffected. This article addresses both the mechanisms of facilitation and the increasing information available concerning the impacts of facilitation on community ecology. Categories There are two basic categories of facilitative interactions: Mutualism is an interaction between species that is beneficial to both. A familiar example of a mutualism is the relationship between flowering plants and their pollinators. The plant benefits from the spread of pollen between flowers, while the pollinator receives some form of nourishment, either from nectar or the pollen itself. Commensalism is an interaction in which one species benefits and the other species is unaffected. Epiphytes (plants growing on other plants, usually trees) have a commensal relationship with their host plant because the epiphyte benefits in some way (e.g., by escaping competition with terrestrial plants or by gaining greater access to sunlight) while the host plant is apparently unaffected. Strict categorization, however, is not possible for some complex species interactions. For example, seed germination and survival in harsh environments is often higher under so-called nurse plants than on open ground. A nurse plant is one with an established canopy, beneath which germination and survival are more likely due to increased shade, soil moisture, and nutrients. Thus, the relationship between seedlings and their nurse plants is commensal. However, as the seedlings grow into established plants, they are likely to compete with their former benefactors for resources. Mechanisms The beneficial effects of species on one another are realized in various ways, including refuge from physical stress, predation, and competi
https://en.wikipedia.org/wiki/Chemosynthesis
In biochemistry, chemosynthesis is the biological conversion of one or more carbon-containing molecules (usually carbon dioxide or methane) and nutrients into organic matter using the oxidation of inorganic compounds (e.g., hydrogen gas, hydrogen sulfide) or ferrous ions as a source of energy, rather than sunlight, as in photosynthesis. Chemoautotrophs, organisms that obtain carbon from carbon dioxide through chemosynthesis, are phylogenetically diverse. Groups that include conspicuous or biogeochemically important taxa include the sulfur-oxidizing Gammaproteobacteria, the Campylobacterota, the Aquificota, the methanogenic archaea, and the neutrophilic iron-oxidizing bacteria. Many microorganisms in dark regions of the oceans use chemosynthesis to produce biomass from single-carbon molecules. Two categories can be distinguished. In the rare sites where hydrogen molecules (H2) are available, the energy available from the reaction between CO2 and H2 (leading to production of methane, CH4) can be large enough to drive the production of biomass. Alternatively, in most oceanic environments, energy for chemosynthesis derives from reactions in which substances such as hydrogen sulfide or ammonia are oxidized. This may occur with or without the presence of oxygen. Many chemosynthetic microorganisms are consumed by other organisms in the ocean, and symbiotic associations between chemosynthesizers and respiring heterotrophs are quite common. Large populations of animals can be supported by chemosynthetic secondary production at hydrothermal vents, methane clathrates, cold seeps, whale falls, and isolated cave water. It has been hypothesized that anaerobic chemosynthesis may support life below the surface of Mars, Jupiter's moon Europa, and other planets. Chemosynthesis may have also been the first type of metabolism that evolved on Earth, leading the way for cellular respiration and photosynthesis to develop later. Hydrogen sulfide chemosynthesis process Giant tube worms
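For illustration, the two energy pathways mentioned above can be summarized by the following overall reactions; these are standard textbook forms rather than equations taken from this excerpt:

```latex
% Methanogenesis: energy from the reaction of carbon dioxide with hydrogen
\mathrm{CO_2} + 4\,\mathrm{H_2} \longrightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O}

% Sulfide-driven chemosynthesis, as commonly written for vent symbioses
12\,\mathrm{H_2S} + 6\,\mathrm{CO_2} \longrightarrow \mathrm{C_6H_{12}O_6} + 6\,\mathrm{H_2O} + 12\,\mathrm{S}
```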
https://en.wikipedia.org/wiki/Whitewater%20Interactive%20System%20Development%20with%20Object%20Models
Wisdom (Whitewater Interactive System Development with Object Models) is a software development process and method for designing software-intensive interactive systems. It is based on object modelling and focuses on human-computer interaction (HCI) in order to model the software architecture of the system, i.e. it is architecture-centric. The focus on HCI while being architecture-centric makes Wisdom a pioneering method within human-centered software engineering. Wisdom was conceived by Nuno Nunes and first published in 1999–2000 in order to close the gaps in existing software engineering methods regarding user interface design. Notably, the Wisdom method identifies, for each use case, the tasks of the user, the interaction spaces of the user interface, and the system responsibilities that support that user activity; these are complemented with the data entities used in each case, yielding a usable software architecture in the form of an MVC model. The Wisdom model clarifies the relation between the human and the computer-based system and allows reasoning about the software artifacts that must be implemented, thereby facilitating effort estimation for a software development team. From Wisdom, other contributions were derived that target the enhancement of software development based on the Wisdom model, such as CanonSketch and Cruz's Hydra Framework. Another relevant contribution concerns effort estimation of software development: the iUCP method, which is based on the traditional UCP method but adjusts the estimate according to the predicted user interface design. A comparison study carried out using both methods revealed a positive effect of using iUCP rather than UCP when the user interface design is taken into account, a recurrent situation in today's software systems development.
https://en.wikipedia.org/wiki/Conway%20polyhedron%20notation
In geometry, Conway polyhedron notation, invented by John Horton Conway and promoted by George W. Hart, is used to describe polyhedra based on a seed polyhedron modified by various prefix operations. Conway and Hart extended the idea of using operators, like truncation as defined by Kepler, to build related polyhedra of the same symmetry. For example, represents a truncated cube, and , parsed as , is (topologically) a truncated cuboctahedron. The simplest operator dual swaps vertex and face elements; e.g., a dual cube is an octahedron: . Applied in a series, these operators allow many higher order polyhedra to be generated. Conway defined the operators (ambo), (bevel), (dual), (expand), (gyro), (join), (kis), (meta), (ortho), (snub), and (truncate), while Hart added (reflect) and (propellor). Later implementations named further operators, sometimes referred to as "extended" operators. Conway's basic operations are sufficient to generate the Archimedean and Catalan solids from the Platonic solids. Some basic operations can be made as composites of others: for instance, ambo applied twice is the expand operation (), while a truncation after ambo produces bevel (). Polyhedra can be studied topologically, in terms of how their vertices, edges, and faces connect together, or geometrically, in terms of the placement of those elements in space. Different implementations of these operators may create polyhedra that are geometrically different but topologically equivalent. These topologically equivalent polyhedra can be thought of as one of many embeddings of a polyhedral graph on the sphere. Unless otherwise specified, in this article (and in the literature on Conway operators in general) topology is the primary concern. Polyhedra with genus 0 (i.e. topologically equivalent to a sphere) are often put into canonical form to avoid ambiguity. Operators In Conway's notation, operations on polyhedra are applied like functions, from right to left. For example, a
https://en.wikipedia.org/wiki/MPLAB%20devices
The MPLAB series of devices are programmers and debuggers for Microchip PIC and dsPIC microcontrollers, developed by Microchip Technology. The ICD family of debuggers has been produced since the release of the first Flash-based PIC microcontrollers, and the latest ICD 3 currently supports all current PIC and dsPIC devices. It is the most popular combination debugging/programming tool from Microchip. The REAL ICE emulator is similar to the ICD, with the addition of better debugging features, and various add-on modules that expand its usage scope. The ICE is a family of discontinued in-circuit emulators for PIC and dsPIC devices, and is currently superseded by the REAL ICE. MPLAB ICD The MPLAB ICD is the first in-circuit debugger product by Microchip, and is currently discontinued and superseded by ICD 2. The ICD connected to the engineer's PC via RS-232, and connected to the device via ICSP. The ICD supported devices within the PIC16C and PIC16F families, and supported full speed execution, or single step interactive debugging. Only one hardware breakpoint was supported by the ICD. MPLAB ICD 2 The MPLAB ICD 2 is a discontinued in-circuit debugger and programmer by Microchip, and is currently superseded by ICD 3. The ICD 2 connects to the engineer's PC via USB or RS-232, and connects to the device via ICSP. The ICD 2 supports most PIC and dsPIC devices within the PIC10, PIC12, PIC16, PIC18, dsPIC, rfPIC and PIC32 families, and supports full speed execution, or single step interactive debugging. At breakpoints, data and program memory can be read and modified using the MPLAB IDE. The ICD 2 firmware is field upgradeable using the MPLAB IDE. The ICD 2 can be used to erase, program or reprogram PIC MCU program memory, while the device is installed on target hardware, using ICSP. Target device voltages from 2.0V to 6.0V are supported. MPLAB ICD 3 The MPLAB ICD 3 is an in-circuit debugger and programmer by Microchip, and is the latest in the ICD series. The ICD 3
https://en.wikipedia.org/wiki/Broadcast%2C%20unknown-unicast%20and%20multicast%20traffic
Broadcast, unknown-unicast and multicast traffic (BUM traffic) is network traffic transmitted using one of three methods of sending data link layer network traffic to a destination of which the sender does not know the network address. This is achieved by sending the network traffic to multiple destinations on an Ethernet network. As a concept related to computer networking, it includes three types of Ethernet modes: broadcast, unicast and multicast Ethernet. BUM traffic refers to that kind of network traffic that will be forwarded to multiple destinations or that cannot be addressed to the intended destination only. Overview Broadcast traffic is used to transmit a message to any reachable destination in the network without the need to know any information about the receiving party. When broadcast traffic is received by a network switch it is replicated to all ports within the respective VLAN except the one from which the traffic comes from. Unknown-unicast traffic happens when a switch receives unicast traffic intended to be delivered to a destination that is not in its forwarding information base. In this case the switch marks the frame for flooding and sends it to all forwarding ports within the respective VLAN. Forwarding this type of traffic can create unnecessary traffic that leads to poor network performance or even a complete loss of network service. This flooding of packets is known as a unicast flooding. Multicast traffic allows a host to contact a subset of hosts or devices joined into a group. This causes the message to be broadcast when no group management mechanism is present. Flooding BUM frames is required in transparent bridging and in a data center context this does not scale well causing poor performance. BUM traffic control Throttling One issue that may arise is that some network devices cannot handle high rates of broadcast, unknown-unicast or multicast traffic. In such cases, it is possible to limit the BUM traffic for specific ports in
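The flooding behaviour described above can be sketched with a toy model of a learning switch. This is an illustration of the concept only; the class, method names and the simplified multicast test are assumptions, not the behaviour of any particular switch implementation.

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"

class LearningSwitch:
    """Toy model of how a switch forwards or floods frames within a VLAN."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.fib = {}                 # (vlan, mac) -> port: the forwarding information base

    def handle_frame(self, in_port, vlan, src_mac, dst_mac):
        # Learn where the source lives, so later frames addressed to it are not flooded.
        self.fib[(vlan, src_mac)] = in_port

        if dst_mac == BROADCAST or dst_mac.startswith("01:"):    # simplified multicast test
            return self.ports - {in_port}                        # broadcast/multicast: flood
        if (vlan, dst_mac) in self.fib:
            return {self.fib[(vlan, dst_mac)]}                   # known unicast: one port
        return self.ports - {in_port}                            # unknown unicast: flood (BUM)
```

Throttling, in these terms, simply caps how many frames per second are allowed to take the two flooding branches on a given port.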
https://en.wikipedia.org/wiki/Active%20and%20passive%20transformation
Geometric transformations can be distinguished into two types: active or alibi transformations which change the physical position of a set of points relative to a fixed frame of reference or coordinate system (alibi meaning "being somewhere else at the same time"); and passive or alias transformations which leave points fixed but change the frame of reference or coordinate system relative to which they are described (alias meaning "going under a different name"). By transformation, mathematicians usually refer to active transformations, while physicists and engineers could mean either. For instance, active transformations are useful to describe successive positions of a rigid body. On the other hand, passive transformations may be useful in human motion analysis to observe the motion of the tibia relative to the femur, that is, its motion relative to a (local) coordinate system which moves together with the femur, rather than a (global) coordinate system which is fixed to the floor. In three-dimensional Euclidean space, any proper rigid transformation, whether active or passive, can be represented as a screw displacement, the composition of a translation along an axis and a rotation about that axis. The terms active transformation and passive transformation were first introduced in 1957 by Valentine Bargmann for describing Lorentz transformations in special relativity. Example As an example, let the vector , be a vector in the plane. A rotation of the vector through an angle θ in counterclockwise direction is given by the rotation matrix: which can be viewed either as an active transformation or a passive transformation (where the above matrix will be inverted), as described below. Spatial transformations in the Euclidean space R3 In general a spatial transformation may consist of a translation and a linear transformation. In the following, the translation will be omitted, and the linear transformation will be represented by a 3×3 matrix . Active transfo
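The rotation matrix referred to in the example above, and the two readings of it, can be written explicitly (standard formulas; here v = (x, y) is treated as a column vector):

```latex
R(\theta) =
\begin{pmatrix}
\cos\theta & -\sin\theta \\
\sin\theta & \;\;\cos\theta
\end{pmatrix},
\qquad
v' = R(\theta)\,v \quad \text{(active: the vector itself is rotated)},
\qquad
v'' = R(\theta)^{-1}\,v = R(-\theta)\,v \quad \text{(passive: the axes are rotated and the components re-expressed)}.
```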
https://en.wikipedia.org/wiki/List%20of%20triangle%20inequalities
In geometry, triangle inequalities are inequalities involving the parameters of triangles, that hold for every triangle, or for every triangle meeting certain conditions. The inequalities give an ordering of two different values: they are of the form "less than", "less than or equal to", "greater than", or "greater than or equal to". The parameters in a triangle inequality can be the side lengths, the semiperimeter, the angle measures, the values of trigonometric functions of those angles, the area of the triangle, the medians of the sides, the altitudes, the lengths of the internal angle bisectors from each angle to the opposite side, the perpendicular bisectors of the sides, the distance from an arbitrary point to another point, the inradius, the exradii, the circumradius, and/or other quantities. Unless otherwise specified, this article deals with triangles in the Euclidean plane. Main parameters and notation The parameters most commonly appearing in triangle inequalities are: the side lengths a, b, and c; the semiperimeter s = (a + b + c) / 2 (half the perimeter p); the angle measures A, B, and C of the angles of the vertices opposite the respective sides a, b, and c (with the vertices denoted with the same symbols as their angle measures); the values of trigonometric functions of the angles; the area T of the triangle; the medians ma, mb, and mc of the sides (each being the length of the line segment from the midpoint of the side to the opposite vertex); the altitudes ha, hb, and hc (each being the length of a segment perpendicular to one side and reaching from that side (or possibly the extension of that side) to the opposite vertex); the lengths of the internal angle bisectors ta, tb, and tc (each being a segment from a vertex to the opposite side and bisecting the vertex's angle); the perpendicular bisectors pa, pb, and pc of the sides (each being the length of a segment perpendicular to one side at its midpoint and reaching to one of the other sides); t
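Two representative entries of the kind catalogued by such a list, stated here for orientation (standard results, not quoted from the truncated excerpt above):

```latex
% The basic triangle inequality, in terms of the side lengths:
a < b + c, \qquad b < c + a, \qquad c < a + b

% Euler's inequality between the circumradius R and the inradius r,
% with equality exactly for the equilateral triangle:
R \ge 2r
```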
https://en.wikipedia.org/wiki/Commutative%20property
In mathematics, a binary operation is commutative if changing the order of the operands does not change the result. It is a fundamental property of many binary operations, and many mathematical proofs depend on it. Most familiar as the name of the property that says something like "3 + 4 = 4 + 3" or "2 × 5 = 5 × 2", the property can also be used in more advanced settings. The name is needed because there are operations, such as division and subtraction, that do not have it (for example, 3 − 5 ≠ 5 − 3); such operations are not commutative, and so are referred to as noncommutative operations. The idea that simple operations, such as the multiplication and addition of numbers, are commutative was for many years implicitly assumed. Thus, this property was not named until the 19th century, when mathematics started to become formalized. A similar property exists for binary relations; a binary relation is said to be symmetric if the relation applies regardless of the order of its operands; for example, equality is symmetric as two equal mathematical objects are equal regardless of their order. Mathematical definitions A binary operation ∗ on a set S is called commutative if x ∗ y = y ∗ x for all x, y in S. In other words, an operation is commutative if every two elements commute. An operation that does not satisfy the above property is called noncommutative. One says that x commutes with y, or that x and y commute under ∗, if x ∗ y = y ∗ x. That is, a specific pair of elements may commute even if the operation is (strictly) noncommutative. Examples Commutative operations Addition and multiplication are commutative in most number systems, and, in particular, between natural numbers, integers, rational numbers, real numbers and complex numbers. This is also true in every field. Addition is commutative in every vector space and in every algebra. Union and intersection are commutative operations on sets. "And" and "or" are commutative logical operations. Noncommutative operations Some noncommutative binary operations: Division, subtraction, and exponentiat
https://en.wikipedia.org/wiki/Stochastic%20resonance
Stochastic resonance (SR) is a phenomenon in which a signal that is normally too weak to be detected by a sensor, can be boosted by adding white noise to the signal, which contains a wide spectrum of frequencies. The frequencies in the white noise corresponding to the original signal's frequencies will resonate with each other, amplifying the original signal while not amplifying the rest of the white noise – thereby increasing the signal-to-noise ratio, which makes the original signal more prominent. Further, the added white noise can be enough to be detectable by the sensor, which can then filter it out to effectively detect the original, previously undetectable signal. This phenomenon of boosting undetectable signals by resonating with added white noise extends to many other systems – whether electromagnetic, physical or biological – and is an active area of research. Stochastic resonance was first proposed by the Italian physicists Roberto Benzi, Alfonso Sutera and Angelo Vulpiani in 1981, and the first application they proposed (together with Giorgio Parisi) was in the context of climate dynamics. Technical description Stochastic resonance (SR) is observed when noise added to a system changes the system's behaviour in some fashion. More technically, SR occurs if the signal-to-noise ratio of a nonlinear system or device increases for moderate values of noise intensity. It often occurs in bistable systems or in systems with a sensory threshold and when the input signal to the system is "sub-threshold." For lower noise intensities, the signal does not cause the device to cross threshold, so little signal is passed through it. For large noise intensities, the output is dominated by the noise, also leading to a low signal-to-noise ratio. For moderate intensities, the noise allows the signal to reach threshold, but the noise intensity is not so large as to swamp it. Thus, a plot of signal-to-noise ratio as a function of noise intensity contains a peak. Strictly sp
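The threshold picture described above can be reproduced with a short numerical sketch (ours, not from the article). A sub-threshold sinusoid is hidden behind a hard threshold; the correlation between the thresholded output and the hidden signal is small for very weak noise, largest at a moderate noise intensity, and falls again when the noise dominates. All parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 40 * np.pi, 50_000)
signal = 0.4 * np.sin(t)        # sub-threshold periodic signal
threshold = 1.0                 # the sensor only "fires" above this level

for sigma in (0.3, 0.6, 1.0, 3.0):               # increasing noise intensity
    noise = rng.normal(scale=sigma, size=t.size)
    fired = (signal + noise) > threshold         # binary output of the threshold sensor
    corr = np.corrcoef(fired.astype(float), signal)[0, 1]
    print(f"noise sigma = {sigma:>3}: output/signal correlation = {corr:.3f}")
```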
https://en.wikipedia.org/wiki/Sociology%20of%20food
The sociology of food is the study of food as it relates to the history, progression, and future development of society, encompassing its production, preparation, consumption, and distribution, its medical, ritual, spiritual, ethical and cultural applications, and related environmental and labor issues. The aspect of food distribution in our society can be examined through the analysis of the changes in the food supply chain. Globalization in particular, has significant effects on the food supply chain by enabling scale effect in the food distribution industry. Food distribution Impact from scale effects Scale effects resulting from centralized acquisition purchase centres in the food supply chain favor large players such as big retailers or distributors in the food distribution market. This is due to the fact that they can utilize their strong market power and financial advantage over smaller players. Having both strong market power and greater access to the financial credit market meant that they can impose barriers to entry and cement their position in the food distribution market. This would result in a food distribution chain that is characterized by large players on one end and small players choosing niche markets to operate in on the other end. The existence of smaller players in specialized food distribution markets could be attributed to their shrinking market share and their inability to compete with the larger players due to the scale effects. Through this mechanism, globalization has displaced smaller role players. Another mechanism troubling the specialized food distribution markets is the ability of distribution chains to possess their own brand. Stores with their own brand are able to combat price wars between competitors by lowering the price of their own brand, thus making consumers more likely to purchase goods from them. Early history and culture Since the beginning of mankind, food was important simply for the purpose of nourishment. As prim
https://en.wikipedia.org/wiki/Variable%20structure%20system
A variable structure system, or VSS, is a discontinuous nonlinear system of the form \dot{x} = \varphi(x, t), where x \in \mathbb{R}^n is the state vector, t is the time variable, and \varphi \colon \mathbb{R}^n \times \mathbb{R} \to \mathbb{R}^n is a piecewise continuous function. Due to the piecewise continuity of these systems, they behave like different continuous nonlinear systems in different regions of their state space. At the boundaries of these regions, their dynamics switch abruptly. Hence, their structure varies over different parts of their state space. The development of variable structure control depends upon methods of analyzing variable structure systems, which are special cases of hybrid dynamical systems. See also Variable structure control Sliding mode control Hybrid system Nonlinear control Robust control Optimal control H-bridge – A topology that combines four switches forming the four legs of an "H". Can be used to drive a motor (or other electrical device) forward or backward when only a single supply is available. Often used in actuator sliding-mode control systems. Switching amplifier – Uses switching-mode control to drive continuous outputs Delta-sigma modulation – Another (feedback) method of encoding a continuous range of values in a signal that rapidly switches between two states (i.e., a kind of specialized sliding-mode control) Pulse-density modulation – A generalized form of delta-sigma modulation Pulse-width modulation – Another modulation scheme that produces continuous motion through discontinuous switching
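A minimal concrete example of such a system (ours, not from the article) is the one-dimensional relay system

```latex
\dot{x} = -k\,\operatorname{sgn}(x) =
\begin{cases}
-k, & x > 0,\\
+k, & x < 0,
\end{cases}
\qquad k > 0,
```

whose structure switches at the boundary x = 0: the trajectory reaches that boundary in the finite time |x(0)|/k and then remains on it, which is the simplest setting for the sliding mode control listed above.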
https://en.wikipedia.org/wiki/Zero-forcing%20precoding
Zero-forcing (or null-steering) precoding is a method of spatial signal processing by which a multiple antenna transmitter can null the multiuser interference in a multi-user MIMO wireless communication system. When the channel state information is perfectly known at the transmitter, the zero-forcing precoder is given by the pseudo-inverse of the channel matrix. Mathematical description In a multiple antenna downlink system which comprises transmit antenna access points and single receive antenna users, such that , the received signal of user is described as where is the vector of transmitted symbols, is the noise signal, is the channel vector and is some linear precoding vector. Here is the matrix transpose, is the square root of transmit power, and is the message signal with zero mean and variance . The above signal model can be more compactly re-written as where is the received signal vector, is channel matrix, is the precoding matrix, is a diagonal power matrix, and is the transmit signal. A zero-forcing precoder is defined as a precoder where intended for user is orthogonal to every channel vector associated with users where . That is, Thus the interference caused by the signal meant for one user is effectively nullified for rest of the users via zero-forcing precoder. From the fact that each beam generated by zero-forcing precoder is orthogonal to all the other user channel vectors, one can rewrite the received signal as The orthogonality condition can be expressed in matrix form as where is some diagonal matrix. Typically, is selected to be an identity matrix. This makes the right Moore-Penrose pseudo-inverse of given by Given this zero-forcing precoder design, the received signal at each user is decoupled from each other as Quantify the feedback amount Quantify the amount of the feedback resource required to maintain at least a given throughput performance gap between zero-forcing with perfect feedback and wi
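A short numerical sketch of the zero-forcing construction described above, using the right Moore–Penrose pseudo-inverse of the channel matrix; the dimensions and the Rayleigh-fading channel model are illustrative assumptions, and power normalization is omitted for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 3, 4        # 3 single-antenna users, 4 transmit antennas (requires N >= K)

# Channel matrix: row k is user k's channel vector h_k^T (i.i.d. Rayleigh fading).
H = (rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))) / np.sqrt(2)

# Zero-forcing precoder: the right Moore-Penrose pseudo-inverse of H.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)    # equivalently np.linalg.pinv(H)

print(np.round(H @ W, 10))                        # ~ identity: each beam is orthogonal
                                                  #   to the other users' channel vectors

s = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=K) / np.sqrt(2)  # QPSK symbols
y = H @ (W @ s)                                   # noiseless received signals
print(np.allclose(y, s))                          # True: multiuser interference is nulled
```

Each user therefore sees only its own stream; in practice the columns of W are rescaled to satisfy a transmit power constraint, and imperfect channel state information leaves residual interference, which is what the feedback analysis mentioned above quantifies.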
https://en.wikipedia.org/wiki/Adult
An adult is a human or other animal that has reached full growth. The biological definition of the word means an animal reaching sexual maturity and thus capable of reproduction. In the human context, the term adult has meanings associated with social and legal concepts. In contrast to a non-adult or "minor", a legal adult is a person who has attained the age of majority and is therefore regarded as independent, self-sufficient, and responsible. They may also be regarded as a "major". The typical age of attaining legal adulthood is 18 to 21, although definition may vary by legal rights, country, and psychological development. Human adulthood encompasses psychological adult development. Definitions of adulthood are often inconsistent and contradictory; a person may be biologically an adult, and have adult behavior, but still be treated as a child if they are under the legal age of majority. Conversely, one may legally be an adult but possess none of the maturity and responsibility that may define an adult character. In different cultures there are events that relate passing from being a child to becoming an adult or coming of age. This often encompasses the passing a series of tests to demonstrate that a person is prepared for adulthood, or reaching a specified age, sometimes in conjunction with demonstrating preparation. Most modern societies determine legal adulthood based on reaching a legally specified age without requiring a demonstration of physical maturity or preparation for adulthood. Biological adulthood Historically and cross-culturally, adulthood has been determined primarily by the start of puberty (the appearance of secondary sex characteristics such as menstruation and the development of breasts in women, ejaculation, the development of facial hair, and a deeper voice in men, and pubic hair in both sexes). In the past, a person usually moved from the status of child directly to the status of adult, often with this shift being marked by some type of
https://en.wikipedia.org/wiki/Vagrancy%20%28biology%29
Vagrancy is a phenomenon in biology whereby an individual animal (usually a bird) appears well outside its normal range; such animals are known as vagrants. The term accidental is sometimes also used. There are a number of poorly understood factors which might cause an animal to become a vagrant, including internal causes such as navigatory errors (endogenous vagrancy) and external causes such as severe weather (exogenous vagrancy). Vagrancy events may lead to colonisation and eventually to speciation. Birds In the Northern Hemisphere, adult birds (possibly inexperienced younger adults) of many species are known to continue past their normal breeding range during their spring migration and end up in areas further north (such birds are termed spring overshoots). In autumn, some young birds, instead of heading to their usual wintering grounds, take "incorrect" courses and migrate through areas which are not on their normal migration path. For example, Siberian passerines which normally winter in Southeast Asia are commonly found in Northwest Europe, e.g. Arctic warblers in Britain. This is reverse migration, where the birds migrate in the opposite direction to that expected (say, flying north-west instead of south-east). The causes of this are unknown, but genetic mutations or other anomalies relating to the birds' magnetic sensibilities are suspected. Other birds are sent off course by storms, such as some North American birds blown across the Atlantic Ocean to Europe. Birds can also be blown out to sea, become physically exhausted, land on a ship and end up being carried to the ship's destination. While many vagrant birds do not survive, if sufficient numbers wander to a new area they can establish new populations. Many isolated oceanic islands are home to species that are descended from landbirds blown out to sea, Hawaiian honeycreepers and Darwin's finches being prominent examples. Insects Vagrancy in insects is recorded from many groups—it is particularly well-stu
https://en.wikipedia.org/wiki/Test%20vector
In computer science and engineering, a test vector is a set of inputs provided to a system in order to test that system. In software development, test vectors are a methodology of software testing and software verification and validation. Rationale In computer science and engineering, a system acts as a computable function. An example of a specific function could be y = f(x), where y is the output of the system and x is the input; however, most systems' inputs are not one-dimensional. When the inputs are multi-dimensional, we could say that the system takes the form y = f(x_1, x_2, ..., x_n); however, we can generalize this equation to a general form y = F(X), where y is the result of the system's execution, F belongs to the set of computable functions, and X is an input vector. While testing the system, various test vectors must be used to examine the system's behavior with differing inputs. Example For example, consider a login page with two input fields: a username field and a password field. In that case, the login system can be described as y = f(x_1, x_2), with x_1 the username and x_2 the password, and y ∈ {0, 1}, with 1 designating login successful and 0 designating login failure, respectively. Making things more generic, we can suggest that the function f takes input as a 2-dimensional vector and outputs a one-dimensional vector (scalar). This can be written in the following way: y = f(X), with X = [x_1, x_2]. In this case, X is called the input vector, and y is called the output vector. In order to test the login page, it is necessary to pass some sample input vectors X_i. In this context, X_i is called a test vector. See also Automatic test pattern generation
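A short illustrative sketch of the login example (not from the original article); the login function, its credential store, and the chosen input/expected-output pairs are all hypothetical:

```python
# Test vectors for a toy login function: each vector X = [x1, x2] is paired
# with the expected output y (1 = login successful, 0 = login failure).
def login(username: str, password: str) -> int:
    """Return 1 on successful login, 0 on failure."""
    valid = {"alice": "secret123"}   # hypothetical credential store
    return 1 if valid.get(username) == password else 0

test_vectors = [
    (("alice", "secret123"), 1),   # correct credentials -> success
    (("alice", "wrongpass"), 0),   # wrong password -> failure
    (("bob",   "secret123"), 0),   # unknown user -> failure
]

for (username, password), expected in test_vectors:
    assert login(username, password) == expected
print("all test vectors passed")
```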
https://en.wikipedia.org/wiki/IBM%20System/360%20architecture
The IBM System/360 architecture is the model-independent architecture for the entire S/360 line of mainframe computers, including but not limited to the instruction set architecture. The elements of the architecture are documented in the IBM System/360 Principles of Operation and the IBM System/360 I/O Interface Channel to Control Unit Original Equipment Manufacturers' Information manuals. Features The System/360 architecture provides the following features: 16 32-bit general-purpose registers; 4 64-bit floating-point registers; a 64-bit program status word (PSW), which includes a 24-bit instruction address; a 24-bit (16 MB) byte-addressable memory space; big-endian byte/word order; and a standard instruction set, including fixed-point binary arithmetic and logical instructions, present on all System/360 models (except the Model 20, see below). A commercial instruction set, adding decimal arithmetic instructions, is optional on some models, as is a scientific instruction set, which adds floating-point instructions. The universal instruction set includes all of the above plus the storage protection instructions and is standard for some models. The Model 44 provides a few unique instructions for data acquisition and real-time processing and is missing the storage-to-storage instructions. However, IBM offered a "Commercial Instruction Set" feature that ran in bump storage and simulated the missing instructions. The Model 20 offers a stripped-down version of the standard instruction set, limited to eight general registers with halfword (16-bit) instructions only, plus the commercial instruction set, and unique instructions for input/output. The Model 67 includes some instructions to handle 32-bit addresses and "dynamic address translation", with additional privileged instructions to provide virtual memory. Memory Memory (storage) in System/360 is addressed in terms of 8-bit bytes. Various instructions operate on larger units called halfword (2 bytes), fullword (4
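A brief illustrative sketch (not from the original article) of the big-endian halfword/fullword layout and the 24-bit address space mentioned above, using Python's struct module; the packed values are arbitrary:

```python
# Big-endian storage units: most significant byte comes first in memory.
import struct

halfword = struct.pack(">H", 0x1234)        # 2-byte halfword
fullword = struct.pack(">I", 0x12345678)    # 4-byte fullword

print(halfword.hex())    # '1234'
print(fullword.hex())    # '12345678'

# A 24-bit address (16 MB space) fits in the low-order 3 bytes of a fullword.
address = 0x00FFFFFF                         # highest address in a 24-bit space
print(struct.pack(">I", address).hex())      # '00ffffff'
```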