https://en.wikipedia.org/wiki/Equivalence%20partitioning
Equivalence partitioning or equivalence class partitioning (ECP) is a software testing technique that divides the input data of a software unit into partitions of equivalent data from which test cases can be derived. In principle, test cases are designed to cover each partition at least once. This technique tries to define test cases that uncover classes of errors, thereby reducing the total number of test cases that must be developed. An advantage of this approach is a reduction in the time required for testing software, owing to the smaller number of test cases. Equivalence partitioning is typically applied to the inputs of a tested component, but may in rare cases be applied to the outputs. The equivalence partitions are usually derived from the requirements specification for input attributes that influence the processing of the test object. The fundamental concept of ECP comes from the equivalence class, which in turn comes from the equivalence relation. A software system is in effect a computable function implemented as an algorithm in some implementation programming language. Given an input test vector, some instructions of that algorithm get covered (see code coverage for details) and others do not. This gives an interesting relation between input test vectors: two test vectors are equivalent if and only if their coverage footprints are exactly the same, that is, they cover the same instructions at the same steps. This coverage relation evidently partitions the domain of test vectors into multiple equivalence classes, and the partitioning is called equivalence class partitioning of the test input. If there are n equivalence classes, n test vectors, one from each class, are sufficient to fully cover the system. The demonstration can be done using a function written in C:

#include <stdio.h>

int safe_add( int a, int b )
{
    int c = a + b;
    if ( a > 0 && b > 0 && c <= 0 )
    {
        fprintf ( stderr, "Overflow (positive)!\n" );
    }
    if ( a < 0 && b < 0 && c >= 0 )   /* mirror check for negative overflow */
    {
        fprintf ( stderr, "Overflow (negative)!\n" );
    }
    return c;
}
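This function induces three equivalence classes on its input pairs: additions that do not overflow, additions that overflow positively, and additions that overflow negatively. A minimal harness picking one representative vector per class might look as follows (a sketch; the specific vectors are illustrative choices, and note that signed overflow is formally undefined behavior in C, so this example, like safe_add itself, assumes the common wraparound behavior):

#include <limits.h>

int safe_add( int a, int b );   /* the function under test, shown above */

int main( void )
{
    /* One representative test vector per equivalence class. */
    safe_add( 1, 2 );            /* class 1: no overflow        */
    safe_add( INT_MAX, 1 );      /* class 2: positive overflow  */
    safe_add( INT_MIN, -1 );     /* class 3: negative overflow  */
    return 0;
}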
https://en.wikipedia.org/wiki/Boundary-value%20analysis
Boundary-value analysis is a software testing technique in which tests are designed to include representatives of boundary values in a range. The idea comes from the boundary. Given that we have a set of test vectors to test the system, a topology can be defined on that set. Those inputs which belong to the same equivalence class, as defined by the equivalence partitioning theory, would constitute the basis. Given that the basis sets are neighbors, there would exist a boundary between them. The test vectors on either side of the boundary are called boundary values. In practice this requires that the test vectors can be ordered, and that the individual parameters follow some kind of order (either partial order or total order).

Formal definition
Formally the boundary values can be defined as follows. Let the set of test vectors be X, with an ordering relation ≤ defined over it, and let C1 and C2 be two equivalence classes. Assume that test vector x1 ∈ C1 and x2 ∈ C2. If x1 ≤ x2 or x2 ≤ x1, then the classes C1 and C2 are in the same neighborhood and the values x1 and x2 are boundary values. In plainer English, values on the minimum and maximum edges of an equivalence partition are tested. The values could be input or output ranges of a software component, and can also be in the internal implementation. Since these boundaries are common locations for errors that result in software faults, they are frequently exercised in test cases.

Application
The expected input and output values for the software component should be extracted from the component specification. The values are then grouped into sets with identifiable boundaries. Each set, or partition, contains values that are expected to be processed by the component in the same way. Partitioning of test data ranges is explained in the equivalence partitioning test case design technique. It is important to consider both valid and invalid partitions when designing test cases. The demonstration can be done using a function written in Java.
https://en.wikipedia.org/wiki/Indentation%20hardness
Indentation hardness tests are used in mechanical engineering to determine the hardness of a material, that is, its resistance to deformation. Several such tests exist, wherein the examined material is indented until an impression is formed; these tests can be performed on a macroscopic or microscopic scale. When testing metals, indentation hardness correlates roughly linearly with tensile strength, but it is an imperfect correlation often limited to small ranges of strength and hardness for each indentation geometry. This relation permits economically important nondestructive testing of bulk metal deliveries with lightweight, even portable equipment, such as hand-held Rockwell hardness testers.

Material hardness
Different techniques are used to quantify material characteristics at smaller scales. Measuring mechanical properties of materials, for instance of thin films, cannot be done using conventional uniaxial tensile testing. As a result, techniques that test material "hardness" by indenting the material with a very small impression have been developed to attempt to estimate these properties. Hardness measurements quantify the resistance of a material to plastic deformation. Indentation hardness tests compose the majority of processes used to determine material hardness, and can be divided into three classes: macro-, micro- and nanoindentation tests. Microindentation tests typically involve much smaller forces than macroindentation tests. Hardness, however, cannot be considered to be a fundamental material property. Classical hardness testing usually produces a number which can be used to provide a relative idea of material properties. As such, hardness can only offer a comparative idea of the material's resistance to plastic deformation, since different hardness techniques have different scales. The equation-based definition of hardness is the pressure applied over the contact area between the indenter and the material being tested. As a result, hardness values are typically reported in units of pressure.
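Writing F for the applied indentation force and A for the contact area (symbols assumed here, restating the verbal definition above), the equation-based definition reads

\[ H = \frac{F}{A}, \]

which is why hardness values carry units of pressure.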
https://en.wikipedia.org/wiki/Interference%20fit
An interference fit, also known as a pressed fit or friction fit, is a form of fastening between two tight-fitting mating parts that produces a joint which is held together by friction after the parts are pushed together. Depending on the amount of interference, parts may be joined using a tap from a hammer or forced together using a hydraulic press. Critical components that must not sustain damage during joining may also be cooled significantly below room temperature to shrink one of the components before fitting. This method allows the components to be joined without force and produces a shrink-fit interference when the component returns to normal temperature. Interference fits are commonly used with aircraft fasteners to improve the fatigue life of a joint. These fits, though applicable to shaft-and-hole assembly, are more often used for bearing-housing or bearing-shaft assembly. This is referred to as a 'press-in' mounting.

Tightness of fit
The tightness of fit is controlled by the amount of interference: the allowance (the planned difference from nominal size). Formulas exist to compute the allowance that will result in various strengths of fit, such as loose fit, light interference fit, and interference fit. The value of the allowance depends on which material is being used, how big the parts are, and what degree of tightness is desired. Such values have already been worked out in the past for many standard applications, and they are available to engineers in the form of tables, obviating the need for re-derivation. As an example, published tables give the allowance required for a tight fit on a shaft made of 303 stainless steel. A slip fit can be formed when the bore diameter is wider than the rod, for example if the rod is made 12–20 μm under the given bore diameter. The allowance per inch of diameter usually ranges from 0.001 to 0.0025 in (0.1–0.25%), with 0.0015 in (0.15%) being a fair average. Ordinarily the allowance per inch decreases as the diameter increases.
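Since the allowance is specified per inch of diameter, the total allowance for a given shaft is a simple multiplication; for example, using the 0.15% average above (a worked example, not a value from the source):

\[ \delta = 0.0015\,\tfrac{\text{in}}{\text{in of diameter}} \times 2\ \text{in} = 0.003\ \text{in}, \]

so a 2 in shaft would be made about three thousandths of an inch over the nominal bore size.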
https://en.wikipedia.org/wiki/Fuzzing
In programming and software development, fuzzing or fuzz testing is an automated software testing technique that involves providing invalid, unexpected, or random data as inputs to a computer program. The program is then monitored for exceptions such as crashes, failing built-in code assertions, or potential memory leaks. Typically, fuzzers are used to test programs that take structured inputs. This structure is specified, e.g., in a file format or protocol, and distinguishes valid from invalid input. An effective fuzzer generates semi-valid inputs that are "valid enough" in that they are not directly rejected by the parser, but do create unexpected behaviors deeper in the program, and are "invalid enough" to expose corner cases that have not been properly dealt with. For the purpose of security, input that crosses a trust boundary is often the most useful. For example, it is more important to fuzz code that handles the upload of a file by any user than it is to fuzz the code that parses a configuration file that is accessible only to a privileged user.

History
The term "fuzz" originates from a fall 1988 class project in the graduate Advanced Operating Systems class (CS736), taught by Prof. Barton Miller at the University of Wisconsin, whose results were subsequently published in 1990. To fuzz test a UNIX utility meant to automatically generate random input and command-line parameters for the utility. The project was designed to test the reliability of UNIX command-line programs by executing a large number of random inputs in quick succession until they crashed. Miller's team was able to crash 25 to 33 percent of the utilities that they tested. They then debugged each of the crashes to determine the cause and categorized each detected failure. To allow other researchers to conduct similar experiments with other software, the source code of the tools, the test procedures, and the raw result data were made publicly available. This early fuzzing would now be called black-box fuzzing.
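A fuzzer in the spirit of Miller's experiment reduces to a loop that feeds random bytes to the program under test and watches for crashes. A minimal sketch (parse_input is a stand-in for any real target, not part of the original project):

#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Stand-in for the program under test. */
static int parse_input( const unsigned char *buf, size_t len )
{
    return len > 2 && memcmp( buf, "#!", 2 ) == 0;
}

int main( void )
{
    unsigned char buf[1024];
    srand( (unsigned) time( NULL ) );
    for ( long trial = 0; trial < 100000; trial++ )
    {
        size_t len = (size_t)( rand() % (int) sizeof buf );
        for ( size_t i = 0; i < len; i++ )
            buf[i] = (unsigned char)( rand() % 256 );
        parse_input( buf, len );   /* a crash or hang here is a finding */
    }
    return 0;
}

In Miller's setup, each crash was then debugged and the failure categorized; a modern harness would additionally save the crashing input for replay.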
https://en.wikipedia.org/wiki/Jaakko%20Hintikka
Kaarlo Jaakko Juhani Hintikka (12 January 1929 – 12 August 2015) was a Finnish philosopher and logician. Hintikka is regarded as the founder of formal epistemic logic and of game semantics for logic.

Life and career
Hintikka was born in Helsingin maalaiskunta (now Vantaa). In 1953, he received his doctorate from the University of Helsinki for a thesis entitled Distributive Normal Forms in the Calculus of Predicates. He was a student of Georg Henrik von Wright. Hintikka was a Junior Fellow at Harvard University (1956-1969), and held several professorial appointments at the University of Helsinki, the Academy of Finland, Stanford University, Florida State University and finally Boston University from 1990 until his death. The prolific author or co-author of over 30 books and over 300 scholarly articles, Hintikka contributed to mathematical logic, philosophical logic, the philosophy of mathematics, epistemology, language theory, and the philosophy of science. His works have appeared in over nine languages. Hintikka edited the academic journal Synthese from 1962 to 2002, and was a consultant editor for more than ten journals. He was the first vice-president of the Fédération Internationale des Sociétés de Philosophie, the vice-president of the Institut International de Philosophie (1993–1996), as well as a member of the American Philosophical Association, the International Union of History and Philosophy of Science, and the Association for Symbolic Logic, and a member of the governing board of the Philosophy of Science Association. In 2005, he won the Rolf Schock Prize in logic and philosophy "for his pioneering contributions to the logical analysis of modal concepts, in particular the concepts of knowledge and belief". In 1985, he was president of the Florida Philosophical Association. He was a member of the Norwegian Academy of Science and Letters. On May 26, 2000, Hintikka received an honorary doctorate from the Faculty of History and Philosophy at Uppsala University.
https://en.wikipedia.org/wiki/Aerodynamic%20heating
Aerodynamic heating is the heating of a solid body produced by its high-speed passage through air. In science and engineering, an understanding of aerodynamic heating is necessary for predicting the behaviour of meteoroids which enter the earth's atmosphere, for ensuring spacecraft safely survive atmospheric reentry, and for the design of high-speed aircraft and missiles.

Aircraft
The effects of aerodynamic heating on the temperature of the skin, and the subsequent heat transfer into the structure, the cabin, the equipment bays and the electrical, hydraulic and fuel systems, have to be incorporated in the design of supersonic and hypersonic aircraft and missiles. One of the main concerns caused by aerodynamic heating arises in the design of the wing. For subsonic speeds, two main goals of wing design are minimizing weight and maximizing strength. Aerodynamic heating, which occurs at supersonic and hypersonic speeds, adds a further consideration to wing structure analysis. An idealized wing structure is made up of spars, stringers, and skin segments. In a wing that normally experiences subsonic speeds, there must be a sufficient number of stringers to withstand the axial and bending stresses induced by the lift force acting on the wing. In addition, the distance between the stringers must be small enough that the skin panels do not buckle, and the panels must be thick enough to withstand the shear stress and shear flow present in the panels due to the lifting force on the wing. However, the weight of the wing must be made as small as possible, so the choice of material for the stringers and the skin is an important factor. At supersonic speeds, aerodynamic heating adds another element to this structural analysis. At normal speeds, spars and stringers experience a load called Delta P, which is a function of the lift force, the first and second moments of inertia, and the length of the spar. When there are more spars and stringers, the Delta P in each member is reduced.
https://en.wikipedia.org/wiki/Tristan%20Louis
Tristan Louis (born February 28, 1971) is a French-born American author, entrepreneur and internet activist.

Early work
Louis was born in Digne-les-Bains, Alpes-de-Haute-Provence. In 1994 and 1995, as publisher of iWorld, part of the Mecklermedia group of Internet online media companies, he first became involved in online politics on Usenet, particularly the newsgroup alt.internet.media-coverage, during debate over the Communications Decency Act and activism against it. In a joint effort with the EFF and the Voters Telecommunications Watch, iWorld and Mecklermedia publicly endorsed a national day of protest, turning the background of web pages around the world to black. The protest received national news coverage and was a catalyst in the planning for a lawsuit (Reno v. American Civil Liberties Union) which went to the United States Supreme Court and reaffirmed First Amendment protection for Internet publishers. After leaving iWorld, Louis contributed to many publications as a freelance writer, including a popular line of introductions to the internet, and helped co-found several start-ups, including Earthweb and Net Quotient, a consulting group. At Earthweb, Louis reprised his role of editor, hoping to reproduce the early success of iWorld, and helped launch the company on the stock market. From 1999 to early 2000, Louis worked at the short-lived dot-com Boo.com; when the company failed, he wrote a detailed analysis of the challenges the company had faced, offering some context in terms of running large-scale websites, which was circulated widely. In January 2006, Louis participated in Microsoft Search Champs v4 in Seattle. In 2011, Louis returned to startups, launching Keepskor, a branded app company which was acquired in 2014. Since 2017, Louis has served as president and CEO of Casebook PBC, an organization focused on building a SaaS platform for social services.

Wall Street career
Throughout the 2000s, Louis worked in several roles on Wall Street.
https://en.wikipedia.org/wiki/CRC-based%20framing
CRC-based framing is a kind of frame synchronization used in Asynchronous Transfer Mode (ATM) and other similar protocols. The concept of CRC-based framing was developed by StrataCom, Inc. in order to improve the efficiency of a pre-standard Asynchronous Transfer Mode (ATM) link protocol. This technology was ultimately used in the principal link protocols of ATM itself and was one of the most significant developments of StrataCom. An advanced version of CRC-based framing was used in the ITU-T SG15 G.7041 Generic Framing Procedure (GFP), which itself is used in several packet link protocols.

Overview of CRC-based framing
The method of CRC-based framing re-uses the header cyclic redundancy check (CRC), which is present in ATM and other similar protocols, to provide framing on the link with no additional overhead. In ATM, this field is known as the Header Error Control/Check (HEC) field. It consists of the remainder of the division of the 32 bits of the header (taken as the coefficients of a polynomial over the field with two elements) by the polynomial x^8 + x^2 + x + 1. The pattern 01010101 is XORed with the 8-bit remainder before being inserted in the last octet of the header. Constantly checked as data is transmitted, this scheme is able to correct single-bit errors and detect many multiple-bit errors. For a tutorial and an example of computing the CRC, see mathematics of cyclic redundancy checks. The header CRC/HEC is needed for another purpose within an ATM system: to improve the robustness in cell delivery. Using this same CRC/HEC field for the second purpose of link framing provided a significant improvement in link efficiency over other methods of framing, because no additional bits were required for this second purpose. A receiver utilizing CRC-based framing bit-shifts along the received bit stream until it finds a bit position where the header CRC is correct a number of times in a row. The receiver then declares that it has found the frame. A hysteresis function is applied.
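Because the generator polynomial and the XOR pattern are both fixed, the HEC is straightforward to compute. A sketch in C (a bitwise implementation of the scheme as described above; the example header bytes are arbitrary):

#include <stdint.h>
#include <stdio.h>

/* ATM HEC: CRC-8 with generator x^8 + x^2 + x + 1 (0x07) computed over
   the four header octets, then XORed with the pattern 01010101 (0x55). */
static uint8_t atm_hec( const uint8_t header[4] )
{
    uint8_t crc = 0;
    for ( int i = 0; i < 4; i++ )
    {
        crc ^= header[i];
        for ( int bit = 0; bit < 8; bit++ )
            crc = ( crc & 0x80 ) ? (uint8_t)(( crc << 1 ) ^ 0x07)
                                 : (uint8_t)( crc << 1 );
    }
    return crc ^ 0x55;
}

int main( void )
{
    const uint8_t header[4] = { 0x00, 0x00, 0x00, 0x01 };   /* arbitrary example */
    printf( "HEC = 0x%02X\n", atm_hec( header ) );
    return 0;
}

A receiver doing CRC-based framing would slide this check along the incoming bit stream until it locks onto a position where the recomputed HEC matches the received octet repeatedly.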
https://en.wikipedia.org/wiki/Bay%20Networks
Bay Networks, Inc., was a network hardware vendor formed through the merger of Santa Clara, California-based SynOptics Communications and Billerica, Massachusetts-based Wellfleet Communications on July 6, 1994. SynOptics was an important early innovator of Ethernet products, having developed a pre-standard twisted-pair 10 Mbit/s Ethernet product and a modular Ethernet hub product that dominated the enterprise networking market. Wellfleet was an important competitor to Cisco Systems in the router market, ultimately commanding up to a 20% market share of the network router business worldwide. The combined company was renamed Bay Networks as a nod to the legacy that SynOptics was based in the San Francisco area and Wellfleet was based in the Boston area, two cities well known for their bays.

Acquisitions
Bay Networks expanded its product line both through internal development and acquisition, acquiring the following companies during the course of its existence:
Centillion Networks, Inc. (May 1995) - Asynchronous Transfer Mode switching and Token Ring technology.
Xylogics, Inc. (December 1995) - Remote access technologies.
Performance Technology (March 1996) - LAN-to-WAN access technology.
ARMON Networking, Ltd. (April 1996) - RMON and RMON2 network management technology.
LANcity Corporation (October 1996) - Cable modem technology.
Penril Datability Networks (November 1996) - Dial-up modems and remote access products based on digital signal processing technology.
NetICs, Inc. (December 1996) - ASIC-based Fast Ethernet switching technology.
ISOTRO Network Management, Inc. (April 1997) - DNS and DHCP technologies.
Rapid City Communications (June 1997) - Gigabit Ethernet switching and routing technology.
New Oak Communications (January 1998) - VPN technology for the Bay Networks product line.
Netsation Corp. (February 1998) - Technology used to augment the Bay Networks Optivity network management system.
NetServe GmbH (July 1998)
https://en.wikipedia.org/wiki/SynOptics
SynOptics Communications was a Santa Clara, California-based early computer network equipment vendor, from 1985 until 1994. SynOptics popularized the concept of the modular Ethernet hub and high-speed Ethernet networking over copper twisted-pair and fiber optic cables.

History
SynOptics Communications was founded in 1985 by Andrew K. Ludwick and Ronald V. Schmidt, both of whom worked at Xerox's Palo Alto Research Center (PARC). The most significant product that SynOptics produced was LattisNet (originally named AstraNet) in 1987, which ran Ethernet over twisted-pair cabling in a star topology. This meant that unshielded twisted-pair cabling already installed in office buildings could be re-utilized for computer networking instead of special coaxial cables. The star network topology made the network much easier to manage and maintain. Together these two innovations directly led to the ubiquity of Ethernet networks. Before the final standard version of what is known today as the 10BASE-T protocol, there were several different methods and standards for running Ethernet over twisted-pair cabling at various speeds, such as StarLAN. LattisNet was similar to the final 10BASE-T protocol except that it had slightly different voltage and signal characteristics. SynOptics updated their product line to the 10BASE-T specification once it was published. Through the late 1980s and into the early 1990s, SynOptics produced a series of innovative products including early 10BASE-2 hubs, pre-standard twisted-pair Ethernet (LattisNet), and 100BASE-TX products. The company was the market leader in Ethernet LAN hubs over rivals 3Com and Cabletron. Despite intense competition that drove down prices, SynOptics' annual revenue grew to a high of $700 million in 1993. To move away from the rapidly commoditizing Layer 1/2 Ethernet equipment market and grow its market share in the increasingly lucrative and more profitable Layer 3 networking arena, SynOptics merged with Billerica, Massachusetts-based Wellfleet Communications on July 6, 1994, in a US$2.7 billion deal.
https://en.wikipedia.org/wiki/Multiple-effect%20evaporator
In chemical engineering, a multiple-effect evaporator is an apparatus for efficiently using the heat from steam to evaporate water. Water is boiled in a sequence of vessels, each held at a lower pressure than the last. Because the boiling temperature of water decreases as pressure decreases, the vapor boiled off in one vessel can be used to heat the next, and only the first vessel (at the highest pressure) requires an external source of heat. The multiple-effect evaporator was invented by the American (Louisiana Creole) engineer Norbert Rillieux. Although he may have designed the apparatus during the 1820s and constructed a prototype in 1834, he did not build the first industrially practical evaporator until 1845. Originally designed for concentrating sugar in sugar cane juice, it has since become widely used in all industrial applications where large volumes of water must be evaporated, such as salt production and water desalination. Multiple-effect evaporation commonly uses the sensible heat in the condensate to preheat liquor to be flashed. In practice the design liquid flow paths can be somewhat complicated in order to extract the most recoverable heat and to obtain the highest evaporation rates from the equipment. While in theory evaporators may be built with an arbitrarily large number of stages, evaporators with more than four stages are rarely practical except in certain applications. Multiple-effect evaporation plants in sugar beet factories have up to eight effects; sextuple-effect evaporators are common in the recovery of black liquor in the kraft process for making wood pulp.

See also
Marine use of compound evaporators
Multi-stage flash distillation
Multi-effect distillation
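The thermal advantage can be stated with a simple idealized balance (a textbook approximation, not a figure from the source): each kilogram of live steam gives up its latent heat once per effect, so the steam economy of an N-effect evaporator is at most about

\[ E = \frac{\dot m_{\text{water evaporated}}}{\dot m_{\text{steam supplied}}} \approx N, \]

with real plants achieving somewhat less because of heat losses and boiling point elevation.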
https://en.wikipedia.org/wiki/WTX%20%28form%20factor%29
WTX (for Workstation Technology Extended) was a motherboard form factor specification introduced by Intel at the IDF in September 1998, for use in high-end, multiprocessor, multiple-hard-disk servers and workstations. The specification had support from major OEMs (Compaq, Dell, Fujitsu, Gateway, Hewlett-Packard, IBM, Intergraph, NEC, Siemens Nixdorf, and UMAX) and motherboard manufacturers (Acer, Asus, Supermicro and Tyan) and was updated (1.1) in February 1999. The specification has since been discontinued; the URL www.wtx.org no longer hosts a website and has not been owned by Intel since at least 2004. This form factor was geared specifically towards the needs of high-end systems, and included specifications for a WTX power supply unit (PSU) using two WTX-specific 24-pin and 22-pin Molex connectors. The WTX specification was created to standardize a new motherboard and chassis form factor, fix the relative processor location, and allow for high-volume airflow through the portion of the chassis where the processors are positioned. This allowed standard form factor motherboards and chassis to be used to integrate processors with more demanding thermal management requirements. The maximum WTX motherboard size was bigger than ATX, which was intended to provide more room to accommodate higher numbers of integrated components. WTX computer cases were backwards compatible with ATX motherboards (but not vice versa), and sometimes came equipped with ATX power supplies.

See also
eATX: a larger version of ATX.
SWTX: Server Workstation Technology Extended.
https://en.wikipedia.org/wiki/Percent%20sign
The percent sign (sometimes per cent sign in British English) is the symbol used to indicate a percentage, a number or ratio as a fraction of 100. Related signs include the permille (per thousand) sign and the permyriad (per ten thousand) sign (also known as a basis point), which indicate that a number is divided by one thousand or ten thousand, respectively. Higher proportions use parts-per notation.

Correct style
Form and spacing
English style guides prescribe writing the percent sign following the number without any space between (e.g. 50%). However, the International System of Units and the ISO 31-0 standard prescribe a space between the number and percent sign, in line with the general practice of using a non-breaking space between a numerical value and its corresponding unit of measurement. Other languages have other rules for spacing in front of the percent sign:
In Czech and in Slovak, the percent sign is spaced with a non-breaking space if the number is used as a noun. In Czech, no space is inserted if the number is used as an adjective (e.g. “a 50% increase”), whereas Slovak uses a non-breaking space in this case as well.
In Finnish, the percent sign is always spaced, and a case suffix can be attached to it using the colon (e.g. 50 %:n kasvu 'an increase of 50%').
In French, the percent sign must be spaced with a non-breaking space.
According to the Real Academia Española, in Spanish the percent sign should now be spaced, despite the fact that this is not the linguistic norm. Despite that, in North American Spanish (Mexico and the US), several style guides and institutions either recommend the percent sign be written following the number without any space between, or do so in their own publications, in accordance with common usage in that region.
In Russian, the percent sign is rarely spaced, contrary to the guidelines of the GOST 8.417-2002 state standard.
In Chinese, the percent sign is almost never spaced, probably because Chinese does not use spaces between words.
https://en.wikipedia.org/wiki/Mound-building%20termites
Mound-building termites are a group of termite species that live in mounds which are made of a combination of soil, termite saliva and dung. These termites live in Africa, Australia and South America. Most of the mounds are in well-drained areas. Termite mounds usually outlive the colonies themselves. If the inner tunnels of the nest are exposed, the mound is usually dead. Sometimes other colonies, of the same or a different species, occupy a mound after the original builders' deaths.

Mound structure
The structure of the mounds can be very complicated. Inside the mound is an extensive system of tunnels and conduits that serves as a ventilation system for the underground nest. In order to get good ventilation, the termites construct several shafts leading down to the cellar located beneath the nest. The mound is built above the subterranean nest. The nest itself is a spheroidal structure consisting of numerous gallery chambers. Mounds come in a wide variety of shapes and sizes. Some termites, like Odontotermes, build open chimneys or vent holes into their mounds, while others, like Macrotermes, build completely enclosed mounds. The mounds of Amitermes (magnetic termites) are tall, thin and wedge-shaped, and usually oriented north–south.

Ventilation in mounds
The extensive system of tunnels and conduits has long been considered to help control the climate inside the mound. The termite mound is able to regulate temperature, humidity and respiratory gas distribution. An early proposition suggested a thermosiphon mechanism: the heat created by the metabolism of the termites imparts sufficient buoyancy to the nest air to push it up into the mound and eventually to the mound’s porous surface, where heat and gases exchange with the atmosphere through the porous walls. The density of the air near the surface rises due to this heat exchange, and the air is forced down below the nest and eventually through the nest again. This model was proposed for mounds with capped chimneys.
https://en.wikipedia.org/wiki/Perverse%20sheaf
The mathematical term perverse sheaves refers to a certain abelian category associated to a topological space X, which may be a real or complex manifold, or a more general topologically stratified space, usually singular. The concept was introduced in the thesis of Zoghman Mebkhout, gaining more popularity after the (independent) work of Joseph Bernstein, Alexander Beilinson, and Pierre Deligne (1982) as a formalisation of the Riemann–Hilbert correspondence, which related the topology of singular spaces (the intersection homology of Mark Goresky and Robert MacPherson) and the algebraic theory of differential equations (the microlocal calculus and holonomic D-modules of Joseph Bernstein, Masaki Kashiwara and Takahiro Kawai). It was clear from the outset that perverse sheaves are fundamental mathematical objects at the crossroads of algebraic geometry, topology, analysis and differential equations. They also play an important role in number theory, algebra, and representation theory. The properties characterizing perverse sheaves already appeared in the 1975 paper of Kashiwara on the constructibility of solutions of holonomic D-modules.

Preliminary remarks
The name perverse sheaf comes through a rough translation of the French "faisceaux pervers". The justification is that perverse sheaves are complexes of sheaves which have several features in common with sheaves: they form an abelian category, they have cohomology, and to construct one, it suffices to construct it locally everywhere. The adjective "pervers" originates in the intersection homology theory. The Beilinson–Bernstein–Deligne definition of a perverse sheaf proceeds through the machinery of triangulated categories in homological algebra and has a very strong algebraic flavour, although the main examples arising from Goresky–MacPherson theory are topological in nature, because the simple objects in the category of perverse sheaves are the intersection cohomology complexes.
https://en.wikipedia.org/wiki/Harmonic%20%28mathematics%29
In mathematics, a number of concepts employ the word harmonic. The similarity of this terminology to that of music is not accidental: the equations of motion of vibrating strings, drums and columns of air are given by formulas involving Laplacians, the solutions of which are given by eigenvalues corresponding to their modes of vibration. Thus, the term "harmonic" is applied when one is considering functions with sinusoidal variations, or solutions of Laplace's equation and related concepts. Mathematical terms whose names include "harmonic" include:
Projective harmonic conjugate
Cross-ratio
Harmonic analysis
Harmonic conjugate
Harmonic form
Harmonic function
Harmonic mean
Harmonic mode
Harmonic number
Harmonic series
Alternating harmonic series
Harmonic tremor
Spherical harmonics
https://en.wikipedia.org/wiki/Atmel%20AVR%20instruction%20set
The Atmel AVR instruction set is the machine language for the Atmel AVR, a modified Harvard architecture 8-bit RISC single-chip microcontroller which was developed by Atmel in 1996. The AVR was one of the first microcontroller families to use on-chip flash memory for program storage.

Processor registers
There are 32 general-purpose 8-bit registers, R0–R31. All arithmetic and logic operations operate on those registers; only load and store instructions access RAM. A limited number of instructions operate on 16-bit register pairs. The lower-numbered register of the pair holds the least significant bits and must be even-numbered. The last three register pairs are used as pointer registers for memory addressing. They are known as X (R27:R26), Y (R29:R28) and Z (R31:R30). Postincrement and predecrement addressing modes are supported on all three. Y and Z also support a six-bit positive displacement. Instructions which allow an immediate value are limited to registers R16–R31 (8-bit operations) or to register pairs R25:R24–R31:R30 (the 16-bit operations ADIW and SBIW). Some variants of the MUL operation are limited to eight registers, R16 through R23.

Special purpose registers
In addition to these 32 general-purpose registers, the CPU has a few special-purpose registers:
PC: 16- or 22-bit program counter
SP: 8- or 16-bit stack pointer
SREG: 8-bit status register
RAMPX, RAMPY, RAMPZ, RAMPD and EIND: 8-bit segment registers that are prepended to 16-bit addresses in order to form 24-bit addresses; only available in parts with large address spaces

Status register
The status register bits are:
C: Carry flag. This is a borrow flag on subtracts. The INC and DEC instructions do not modify the carry flag, so they may be used to loop over multi-byte arithmetic operations.
Z: Zero flag. Set to 1 when an arithmetic result is zero.
N: Negative flag. Set to a copy of the most significant bit of an arithmetic result.
V: Overflow flag. Set in case of two's complement overflow.
https://en.wikipedia.org/wiki/Shunt%20%28electrical%29
A shunt is a device that is designed to provide a low-resistance path for an electrical current in a circuit. It is typically used to divert current away from a system or component in order to prevent overcurrent. Electrical shunts are commonly used in a variety of applications, including power distribution systems, electrical measurement systems, and automotive and marine applications.

Defective device bypass
One example is in miniature Christmas lights which are wired in series. When the filament burns out in one of the incandescent light bulbs, the full line voltage appears across the burnt-out bulb. A shunt resistor, which has been connected in parallel across the filament before it burnt out, will then short out to bypass the burnt filament and allow the rest of the string to light. If too many lights burn out, however, a shunt will also burn out, requiring the use of a multimeter to find the point of failure.

Photovoltaics
In photovoltaics, the term is widely used to describe an unwanted short circuit between the front and back surface contacts of a solar cell, usually caused by wafer damage.

Lightning arrester
A gas-filled tube can also be used as a shunt, particularly in a lightning arrester. Neon and other noble gases have a high breakdown voltage, so that normally current will not flow across the tube. However, a direct lightning strike (such as on a radio tower antenna) will cause the shunt to arc and conduct the massive amount of electricity to ground, protecting transmitters and other equipment. Another, older form of lightning arrester employs a simple narrow spark gap, over which an arc will jump when a high voltage is present. While this is a low-cost solution, its high triggering voltage offers almost no protection for modern solid-state electronic devices powered by the protected circuit.

Electrical noise bypass
Capacitors are used as shunts to redirect high-frequency noise to ground before it can propagate to the load or other circuit components.
https://en.wikipedia.org/wiki/Acorn%20Online%20Media%20Set%20Top%20Box
The Acorn Online Media Set Top Box was produced by the Online Media division of Acorn Computers Ltd for the Cambridge Cable and Online Media video-on-demand trial, and launched in early 1996. Part of this trial involved a home-shopping system in partnership with Parcelforce. The hardware was also trialled by NatWest bank, as exhibited at the 1995 Acorn World trade show.

Specification
STB1
The STB1 was a customised Risc PC-based system, with a Wild Vision Movie Magic expansion card in a podule slot, and a network card based on Asynchronous Transfer Mode.
Memory: 4 MiB RAM
Processor: ARM 610 processor at 33 MHz; approx 28.7 MIPS
Operating system: RISC OS 3.50 held in 4 MiB ROM

STB20
The STB20 was a new PCB based around the ARM7500 system-on-chip.
Processor: ARM7500 processor
Operating system: RISC OS 3.61, a version specific to this STB, held in 4 MiB ROM

STB22
By this time Online Media had been restructured back into Acorn Computers, so the STB22 is branded as 'Acorn'.
Operating system: a development of RISC OS held in 4 MiB ROM
https://en.wikipedia.org/wiki/Acorn%20Network%20Computer
The Acorn Network Computer was a network computer (a type of thin client) designed and manufactured by Acorn Computers Ltd. It was the implementation of the Network Computer Reference Profile that Oracle Corporation commissioned Acorn to specify for network computers (for more detail on the history, see Acorn's Network Computer). Sophie Wilson of Acorn led the effort. It was launched in August 1996. The NCOS operating system used in this first implementation was based on RISC OS and ran on ARM hardware. Manufacturing obligations were achieved through a contract with Fujitsu subsidiary D2D. In 1997, Acorn offered its designs at no cost to licensees.

Hardware models
Original model
The NetStation was available in two versions, one with a modem for home use via a television, and a version with an Ethernet card for use in businesses and schools with VGA monitors and an on-site BSD Unix fileserver based on RiscBSD, an early ARM port of NetBSD. Both versions were upgradable, as the modem and Ethernet cards were replaceable "podules" (Acorn-format Eurocards). The home version was trialled in 1997/98 in conjunction with BT. Both variants supported PAL, NTSC and SVGA displays and had identical specifications. A later model used a StrongARM SA-110 200 MHz processor. The ARM7500-based DeskLite was launched in 1998.

StrongARM
Acorn continued to produce ARM-based designs, demonstrating its first StrongARM prototype in May 1996 and a further design six months later. This evolved into the CoNCord, launched in late 1997.

New markets
Further designs included the Set-top Box NC, among others.

Later versions
The second generation Network Computer operating system was no longer based on RISC OS. NC Desktop, from Oracle subsidiary Network Computer Inc., instead combined NetBSD and the X Window System, featuring desktop windows whose contents were typically described using HTML, reminiscent of (but not entirely equivalent to) the use of Display PostScript in NeXTStep.
https://en.wikipedia.org/wiki/CARDboard%20Illustrative%20Aid%20to%20Computation
CARDIAC (CARDboard Illustrative Aid to Computation) is a learning aid developed by David Hagelbarger and Saul Fingerman for Bell Telephone Laboratories in 1968 to teach high school students how computers work. The kit consists of an instruction manual and a die-cut cardboard "computer". The computer "operates" by means of pencil and sliding cards. Any arithmetic is done in the head of the person operating the computer. The computer operates in base 10 and has 100 memory cells which can hold signed numbers from 0 to ±999. It has an instruction set of 10 instructions which allows CARDIAC to add, subtract, test, shift, input, output and jump.

Hardware
The “CPU” of the computer consists of 4 slides that move various numbers and arrows to have the flow of the real CPU (the user's brain) move the right way. They have one flag (+/-), affected by the result in the accumulator. Memory consists of the other half of the cardboard cutout. There are 100 cells. Cell 0 is “ROM”, always containing a numeric "1"; cells 1 to 98 are “RAM”, available for instructions and data; and cell 99 can best be described as “EEPROM”. Memory cells hold signed decimal numbers from 0 to ±999 and are written with a pencil. Cells are erased with an eraser. A “bug” is provided to act as a program counter, and is placed in a hole beside the current memory cell.

Programming
CARDIAC has a 10-instruction machine language. An instruction is three decimal digits (the sign is ignored) in the form OAA. The first digit is the op code (O); the second and third digits are an address (AA). Addressing is one of accumulator to memory absolute, absolute memory to accumulator, input to absolute memory and absolute memory to output. High-level languages have never been developed for CARDIAC, as they would defeat one of the purposes of the device: to introduce concepts of assembly language programming. Programs are hand-assembled and then penciled into the appropriate memory cells.
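The OAA format is concrete enough to simulate directly. The C sketch below runs a four-instruction CARDIAC-style program; the opcode values used (CLA=1, ADD=2, STO=6, HRS=9) follow common descriptions of CARDIAC but should be treated as assumptions here, since the excerpt above does not reproduce the opcode table:

#include <stdio.h>
#include <stdlib.h>

int main( void )
{
    int mem[100] = { 0 };
    int acc = 0, pc = 10;   /* the "bug" (program counter) starts at our code */

    mem[0] = 1;             /* cell 0 is "ROM", always containing 1 */

    /* Tiny program: add cells 50 and 51, store the sum in 52, halt. */
    mem[10] = 150;          /* CLA 50: load accumulator from cell 50 */
    mem[11] = 251;          /* ADD 51: add cell 51                   */
    mem[12] = 652;          /* STO 52: store accumulator to cell 52  */
    mem[13] = 900;          /* HRS:    halt                          */
    mem[50] = 7; mem[51] = 35;

    for ( ;; )
    {
        int inst = abs( mem[pc] ) % 1000;   /* the sign is ignored, per the text */
        int op = inst / 100, aa = inst % 100;
        pc++;
        if      ( op == 1 ) acc = mem[aa];   /* CLA */
        else if ( op == 2 ) acc += mem[aa];  /* ADD */
        else if ( op == 6 ) mem[aa] = acc;   /* STO */
        else if ( op == 9 ) break;           /* HRS */
    }
    printf( "cell 52 = %d\n", mem[52] );     /* prints 42 */
    return 0;
}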
https://en.wikipedia.org/wiki/Law%20of%20specific%20nerve%20energies
The law of specific nerve energies, first proposed by Johannes Peter Müller in 1835, is that the nature of perception is defined by the pathway over which the sensory information is carried. Hence, the origin of the sensation is not important. Therefore, the difference in perception of seeing, hearing, and touch is not caused by differences in the stimuli themselves but by the different nervous structures that these stimuli excite. For example, pressing on the eye elicits sensations of flashes of light because the neurons in the retina send a signal to the occipital lobe. Despite the sensory input being mechanical, the experience is visual.

Quotation
Here is Müller's statement of the law, from Handbuch der Physiologie des Menschen für Vorlesungen, 2nd ed., translated by Edwin Clarke and Charles Donald O'Malley:

The same cause, such as electricity, can simultaneously affect all sensory organs, since they are all sensitive to it; and yet, every sensory nerve reacts to it differently; one nerve perceives it as light, another hears its sound, another one smells it; another tastes the electricity, and another one feels it as pain and shock. One nerve perceives a luminous picture through mechanical irritation, another one hears it as buzzing, another one senses it as pain. ... He who feels compelled to consider the consequences of these facts cannot but realize that the specific sensibility of nerves for certain impressions is not enough, since all nerves are sensitive to the same cause but react to the same cause in different ways. ... (S)ensation is not the conduction of a quality or state of external bodies to consciousness, but the conduction of a quality or state of our nerves to consciousness, excited by an external cause.

Clarification
As the above quotation shows, Müller's law seems to differ from the modern statement of the law in one key way: Müller attributed the quality of an experience to some specific quality of the energy in the nerves.
https://en.wikipedia.org/wiki/Molecular%20ecology
Molecular ecology is a field of evolutionary biology that is concerned with applying molecular population genetics, molecular phylogenetics, and more recently genomics to traditional ecological questions (e.g., species diagnosis, conservation and assessment of biodiversity, species-area relationships, and many questions in behavioral ecology). It is virtually synonymous with the field of "ecological genetics" as pioneered by Theodosius Dobzhansky, E. B. Ford, Godfrey M. Hewitt, and others. These fields are united in their attempt to study genetic-based questions "out in the field" as opposed to the laboratory. Molecular ecology is related to the field of conservation genetics. Methods frequently include using microsatellites to determine gene flow and hybridization between populations. The development of molecular ecology is also closely related to the use of DNA microarrays, which allow for the simultaneous analysis of the expression of thousands of different genes. Quantitative PCR may also be used to analyze gene expression as a result of changes in environmental conditions or different responses by differently adapted individuals. Molecular ecology uses molecular genetic data to answer ecological questions related to biogeography, genomics, conservation genetics, and behavioral ecology. Studies mostly use data based on deoxyribonucleic acid (DNA) sequences. This approach has been enhanced over a number of years to allow researchers to sequence thousands of genes from a small amount of starting DNA. Allele sizes are another way researchers are able to compare individuals and populations, which allows them to quantify the genetic diversity within a population and the genetic similarities among populations.

Bacterial diversity
Molecular ecological techniques are used to study in situ questions of bacterial diversity. Many microorganisms are not easily obtainable as cultured strains in the laboratory, which would allow for identification and characterization.
https://en.wikipedia.org/wiki/Killing%20horizon
In physics, a Killing horizon is a geometrical construct used in general relativity and its generalizations to delineate spacetime boundaries without reference to the dynamic Einstein field equations. Mathematically, a Killing horizon is a null hypersurface defined by the vanishing of the norm of a Killing vector field (both are named after Wilhelm Killing). It can also be defined as a null hypersurface generated by a Killing vector, which in turn is null at that surface. After Hawking showed that quantum field theory in curved spacetime (without reference to the Einstein field equations) predicted that a black hole formed by collapse will emit thermal radiation, it became clear that there is an unexpected connection between spacetime geometry (Killing horizons) and thermal effects for quantum fields. In particular, there is a very general relationship between thermal radiation and spacetimes that admit a one-parameter group of isometries possessing a bifurcate Killing horizon, which consists of a pair of intersecting null hypersurfaces that are orthogonal to the Killing field.

Flat spacetime
In Minkowski space-time, in pseudo-Cartesian coordinates $(t,x,y,z)$ with signature $(-,+,+,+)$, an example of a Killing horizon is provided by the Lorentz boost (a Killing vector of the space-time)
$$ V = x\,\partial_t + t\,\partial_x . $$
The square of the norm of $V$ is
$$ g(V,V) = -x^2 + t^2 . $$
Therefore, $V$ is null precisely on the hyperplanes $t = \pm x$, which, taken together, are the Killing horizons generated by $V$.

Black hole Killing horizons
Exact black hole metrics such as the Kerr–Newman metric contain Killing horizons, which can coincide with their ergospheres. For this spacetime, the Killing horizon of the Killing vector $\partial_t$ is located where its norm vanishes, at
$$ r = M + \sqrt{M^2 - Q^2 - a^2 \cos^2\theta} . $$
In the usual coordinates, outside the Killing horizon the Killing vector field $\partial_t$ is timelike, whilst inside it is spacelike. Furthermore, considering a particular linear combination of $\partial_t$ and $\partial_\phi$, both of which are Killing vector fields, gives rise to a Killing horizon that coincides with the event horizon.
https://en.wikipedia.org/wiki/Iverson%20bracket
In mathematics, the Iverson bracket, named after Kenneth E. Iverson, is a notation that generalises the Kronecker delta, which is the Iverson bracket of the statement $x = y$. It maps any statement to a function of the free variables in that statement. This function is defined to take the value 1 for the values of the variables for which the statement is true, and takes the value 0 otherwise. It is generally denoted by putting the statement inside square brackets:
$$ [P] = \begin{cases} 1 & \text{if } P \text{ is true,} \\ 0 & \text{otherwise.} \end{cases} $$
In other words, the Iverson bracket of a statement is the indicator function of the set of values for which the statement is true. The Iverson bracket allows using capital-sigma notation without restriction on the summation index. That is, for any property $P(k)$ of the integer $k$, one can rewrite the restricted sum $\sum_{k : P(k)} f(k)$ in the unrestricted form $\sum_k f(k)\,[P(k)]$. With this convention, $f(k)$ does not need to be defined for the values of $k$ for which the Iverson bracket equals 0; that is, a summand $f(k)\,[P(k)]$ must evaluate to 0 regardless of whether $f(k)$ is defined. The notation was originally introduced by Kenneth E. Iverson in his programming language APL, though restricted to single relational operators enclosed in parentheses, while the generalisation to arbitrary statements, the notational restriction to square brackets, and applications to summation were advocated by Donald Knuth to avoid ambiguity in parenthesized logical expressions.

Properties
There is a direct correspondence between arithmetic on Iverson brackets, logic, and set operations. For instance, let A and B be sets and $P$, $Q$ any properties of integers; then we have, for example,
$$ [P \land Q] = [P]\,[Q], \qquad [\lnot P] = 1 - [P], \qquad [k \in A \cap B] = [k \in A]\,[k \in B]. $$

Examples
The notation allows moving boundary conditions of summations (or integrals) as a separate factor into the summand, freeing up space around the summation operator, but more importantly allowing it to be manipulated algebraically.

Double-counting rule
We mechanically derive a well-known sum manipulation rule using Iverson brackets (the derivation is sketched below):
$$ \sum_{k \in A} f(k) + \sum_{k \in B} f(k) = \sum_{k \in A \cup B} f(k) + \sum_{k \in A \cap B} f(k). $$

Summation interchange
The well-known rule
$$ \sum_{j=1}^{n} \sum_{k=j}^{n} f(j,k) = \sum_{k=1}^{n} \sum_{j=1}^{k} f(j,k) $$
is likewise easily derived, since both sides equal $\sum_{j,k} f(j,k)\,[1 \le j \le k \le n]$.
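The double-counting rule above can be derived mechanically, using only the identity $[k \in A] + [k \in B] = [k \in A \cup B] + [k \in A \cap B]$ (a standard derivation in the style of Knuth, not verbatim from the source):

\begin{align*}
\sum_{k \in A} f(k) + \sum_{k \in B} f(k)
  &= \sum_k f(k)\,[k \in A] + \sum_k f(k)\,[k \in B] \\
  &= \sum_k f(k)\,\bigl([k \in A] + [k \in B]\bigr) \\
  &= \sum_k f(k)\,\bigl([k \in A \cup B] + [k \in A \cap B]\bigr) \\
  &= \sum_{k \in A \cup B} f(k) + \sum_{k \in A \cap B} f(k).
\end{align*}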
https://en.wikipedia.org/wiki/Method%20of%20exhaustion
The method of exhaustion is a method of finding the area of a shape by inscribing inside it a sequence of polygons whose areas converge to the area of the containing shape. If the sequence is correctly constructed, the difference in area between the nth polygon and the containing shape will become arbitrarily small as n becomes large. As this difference becomes arbitrarily small, the possible values for the area of the shape are systematically "exhausted" by the lower bound areas successively established by the sequence members. The method of exhaustion typically required a form of proof by contradiction, known as reductio ad absurdum. This amounts to finding the area of a region by first comparing it to the area of a second region, which can be "exhausted" so that its area becomes arbitrarily close to the true area. The proof involves assuming that the true area is greater than the second area, proving that assertion false; then assuming it is less than the second area, and proving that assertion false, too.

History
The idea originated in the late 5th century BC with Antiphon, although it is not entirely clear how well he understood it. The theory was made rigorous a few decades later by Eudoxus of Cnidus, who used it to calculate areas and volumes. It was later reinvented in China by Liu Hui in the 3rd century AD in order to find the area of a circle. The first use of the term was in 1647 by Gregory of Saint Vincent in Opus geometricum quadraturae circuli et sectionum. The method of exhaustion is seen as a precursor to the methods of calculus. The development of analytical geometry and rigorous integral calculus in the 17th–19th centuries subsumed the method of exhaustion, so that it is no longer explicitly used to solve problems. An important alternative approach was Cavalieri's principle, also termed the method of indivisibles, which eventually evolved into the infinitesimal calculus of Roberval, Torricelli, Wallis, Leibniz, and others.

Euclid
Euclid used the method of exhaustion to prove several propositions in Book XII of his Elements.
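For the circle, the convergence can be made explicit (a standard worked example, not from the excerpt above). A regular n-gon inscribed in a circle of radius r consists of n isosceles triangles with apex angle $2\pi/n$, so its area is

\[ A_n = \frac{1}{2}\, n\, r^2 \sin\frac{2\pi}{n}, \]

and $A_n \to \pi r^2$ as $n \to \infty$; the inscribed areas "exhaust" the area of the circle from below, exactly as the method prescribes.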
https://en.wikipedia.org/wiki/Soil%20food%20web
The soil food web is the community of organisms living all or part of their lives in the soil. It describes a complex living system in the soil and how it interacts with the environment, plants, and animals. Food webs describe the transfer of energy between species in an ecosystem. While a food chain examines one linear energy pathway through an ecosystem, a food web is more complex and illustrates all of the potential pathways. Much of this transferred energy comes from the sun. Plants use the sun’s energy to convert inorganic compounds into energy-rich organic compounds, turning carbon dioxide and minerals into plant material by photosynthesis. Plant flowers exude energy-rich nectar above ground, and plant roots exude acids, sugars, and ectoenzymes into the rhizosphere, adjusting the pH and feeding the food web underground. Plants are called autotrophs because they make their own energy; they are also called producers because they produce energy available for other organisms to eat. Heterotrophs are consumers that cannot make their own food. In order to obtain energy they eat plants or other heterotrophs.

Above ground food webs
In above ground food webs, energy moves from producers (plants) to primary consumers (herbivores) and then to secondary consumers (predators). The phrase trophic level refers to the different levels or steps in the energy pathway. In other words, the producers, consumers, and decomposers are the main trophic levels. This chain of energy transferring from one species to another can continue several more times, but eventually ends. At the end of the food chain, decomposers such as bacteria and fungi break down dead plant and animal material into simple nutrients.

Methodology
The nature of soil makes direct observation of food webs difficult. Since soil organisms range in size from less than 0.1 mm (nematodes) to greater than 2 mm (earthworms), there are many different ways to extract them.
https://en.wikipedia.org/wiki/Spitting
Spitting is the act of forcibly ejecting saliva or other substances from the mouth. The act is often done to get rid of unwanted or foul-tasting substances in the mouth, or to get rid of a large buildup of mucus. Spitting of small saliva droplets can also happen unintentionally during talking, especially when articulating ejective and implosive consonants. Spitting in public is considered rude and a social taboo in many parts of the world, including the West, while in some other parts of the world it is considered more socially acceptable. Spitting upon another person, especially onto the face, is a global sign of anger, hatred, disrespect or contempt. It can represent a "symbolical regurgitation" or an act of intentional contamination.

In the Western world
Social attitudes towards spitting have changed greatly in Western Europe since the Middle Ages. Then, frequent spitting was part of everyday life, and at all levels of society it was thought ill-mannered to suck back saliva to avoid spitting. By the early 1700s, spitting had come to be seen as something which should be concealed, and by 1859 it had progressed to being described by at least one etiquette guide as "at all times a disgusting habit." Sentiments against spitting gradually transitioned from being included in adult conduct books, to being so obvious as to appear only in guides for children, to not being included in conduct literature even for children, "because most [Western] children have the spitting ban internalized well before learning how to read." Spittoons (also known as cuspidors) were used openly during the 19th century to provide an acceptable outlet for spitters. Spittoons became far less common after the influenza epidemic of 1918, and their use has since virtually disappeared, though each justice of the Supreme Court of the United States continues to be provided with a personal one.
https://en.wikipedia.org/wiki/Zygospore
A zygospore is a diploid reproductive stage in the life cycle of many fungi and protists. Zygospores are created by the nuclear fusion of haploid cells. In fungi, zygospores are formed in zygosporangia after the fusion of specialized budding structures, from mycelia of the same (in homothallic fungi) or different mating types (in heterothallic fungi), and may be chlamydospores. In many eukaryotic algae, including many species of the Chlorophyta, zygospores are formed by the fusion of unicellular gametes of different mating types. A zygospore remains dormant while it waits for environmental cues, such as light, moisture, heat, or chemicals secreted by plants. When the environment is favorable, the zygospore germinates, meiosis occurs, and haploid vegetative cells are released. In fungi, a sporangium is produced at the end of a sporangiophore that sheds spores. A fungus that forms zygospores is called a zygomycete, indicating that the class is characterized by this evolutionary development. References External links Definition of zygospore A more detailed description of a zygospore Fungal morphology and anatomy Mycology
https://en.wikipedia.org/wiki/Gapless%20playback
Gapless playback is the uninterrupted playback of consecutive audio tracks, such that relative time distances in the original audio source are preserved over track boundaries on playback. For this to be useful, no other artifacts (beyond timing-related ones) should be introduced at track boundaries either. Gapless playback is common with compact discs, gramophone records, or tapes, but is not always available with other formats that employ compressed digital audio. The absence of gapless playback is a source of annoyance to listeners of music where tracks are meant to segue into each other, such as some classical music (opera in particular), progressive rock, concept albums, electronic music, and live recordings with audience noise between tracks. Causes of gaps Playback latency Various software, firmware, and hardware components may add up to a substantial delay associated with starting playback of a track. If not accounted for, the listener is left waiting in silence as the player fetches the next file (see hard disk access time), updates metadata, and decodes the whole first block before having any data to feed the hardware buffer. The gap can be as much as half a second or more, very noticeable in "continuous" music such as certain classical or dance genres. In extreme cases, the hardware is even reset between tracks, creating a very short "click". To account for the whole chain of delays, the start of the next track should ideally be readily decoded before the currently playing track finishes. The two decoded pieces of audio must be fed to the hardware continuously over the transition, as if the tracks were concatenated in software. Many older audio players on personal computers do not implement the required buffering to play gapless audio. Some of these rely on third-party gapless audio plug-ins to buffer output. Most recent players and newer versions of old players now support gapless playback directly. Compression artifacts Lossy audio compression schemes th
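A minimal sketch of the buffering idea described above, using Java's standard javax.sound.sampled API (not any particular player's implementation): the output line is opened once and kept running while successive tracks are decoded into it, so no flush or reset occurs at a track boundary. It assumes all tracks are PCM files (e.g. WAV) sharing one format, with paths supplied on the command line.

import javax.sound.sampled.*;
import java.io.File;

public class GaplessPlayer {
    public static void main(String[] args) throws Exception {
        // Open the audio hardware once, using the format of the first track,
        // and keep the line running across every track boundary.
        AudioFormat format = AudioSystem.getAudioFileFormat(new File(args[0])).getFormat();
        SourceDataLine line = AudioSystem.getSourceDataLine(format);
        line.open(format);
        line.start();

        byte[] buffer = new byte[8192];
        for (String path : args) {
            // The next track is read while the line drains its buffer;
            // writing into the same open line leaves no audible gap.
            try (AudioInputStream in = AudioSystem.getAudioInputStream(new File(path))) {
                int n;
                while ((n = in.read(buffer)) > 0) {
                    line.write(buffer, 0, n); // blocks until the device buffer has room
                }
            }
        }
        line.drain(); // let the final samples play out before closing
        line.close();
    }
}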
https://en.wikipedia.org/wiki/Marine%20ecoregion
A marine ecoregion is an ecoregion, or ecological region, of the oceans and seas identified and defined based on biogeographic characteristics. Introduction A more complete definition describes them as “Areas of relatively homogeneous species composition, clearly distinct from adjacent systems” dominated by “a small number of ecosystems and/or a distinct suite of oceanographic or topographic features”. Ecologically they “are strongly cohesive units, sufficiently large to encompass ecological or life history processes for most sedentary species.” Marine Ecoregions of the World—MEOW The global classification system Marine Ecoregions of the World—MEOW was devised by an international team, including major conservation organizations, academic institutions and intergovernmental organizations. The system covers coastal and continental shelf waters of the world, and does not include deep ocean waters. The MEOW system integrated the biogeographic regionalization systems in use at national or continental scale, like Australia's Integrated Marine and Coastal Regionalisation of Australia and the Nature Conservancy’s system in the Americas, although it often uses different names for the subdivisions. This system has a strong biogeographic basis, but was designed to aid in conservation activities for marine ecosystems. Its subdivisions include both the seafloor (benthic) and shelf pelagic (neritic) biotas of each marine region. The digital ecoregions layer is available for download as an ArcGIS Shapefile. Subdivisions Ecoregions The Marine Ecoregions of the World classification defines 232 marine ecoregions (e.g. Adriatic Sea, Cortezian, Ningaloo, Ross Sea) for the coastal and shelf waters of the world. Provinces These marine ecoregions form part of a nested system and are grouped into 62 provinces (e.g. the South China Sea, Mediterranean Sea, Central Indian Ocean Islands). Realms The provinces in turn, are grouped into 12 major realms. The latter are considered analogou
https://en.wikipedia.org/wiki/Murashige%20and%20Skoog%20medium
Murashige and Skoog medium (or MSO or MS0 (MS-zero)) is a plant growth medium used in laboratories for the cultivation of plant cell cultures. MS0 was invented by plant scientists Toshio Murashige and Folke K. Skoog in 1962 during Murashige's search for a new plant growth regulator. A number following the letters MS is used to indicate the sucrose concentration of the medium. For example, MS0 contains no sucrose and MS20 contains 20 g/L sucrose. Along with its modifications, it is the most commonly used medium in plant tissue culture experiments in the laboratory. However, according to recent scientific findings, MS medium is not suitable as a nutrient solution for deep water culture or hydroponics. As Skoog's doctoral student, Murashige originally set out to find an as-yet undiscovered growth hormone present in tobacco juice. No such component was discovered; instead, analysis of juiced tobacco and ashed tobacco revealed higher concentrations of specific minerals in plant tissues than were previously known. A series of experiments demonstrated that varying the levels of these nutrients enhanced growth substantially over existing formulations. It was determined that nitrogen in particular enhanced growth of tobacco in tissue culture. Ingredients Major salts (macronutrients) per litre Ammonium nitrate (NH4NO3) 1650 mg/l Calcium chloride (CaCl2 · 2H2O) 440 mg/l Magnesium sulfate (MgSO4 · 7H2O) 180.7 mg/l Monopotassium phosphate (KH2PO4) 170 mg/l Potassium nitrate (KNO3) 1900 mg/l Minor salts (micronutrients) per litre Boric acid (H3BO3) 6.2 mg/l Cobalt chloride (CoCl2 · 6H2O) 0.025 mg/l Ferrous sulfate (FeSO4 · 7H2O) 27.8 mg/l Manganese(II) sulfate (MnSO4 · 4H2O) 22.3 mg/l Potassium iodide (KI) 0.83 mg/l Sodium molybdate (Na2MoO4 · 2H2O) 0.25 mg/l Zinc sulfate (ZnSO4·7H2O) 8.6 mg/l Ethylenediaminetetraacetic acid ferric sodium (FeNaEDTA) 36.70 mg/l Copper sulfate (CuSO4 · 5H2O) 0.025 mg/l Vitamins and organic compounds per litre Myo-Inositol 100 mg/l Nicotini
https://en.wikipedia.org/wiki/Sensor%20web
Sensor web is a type of sensor network that heavily utilizes the World Wide Web and is especially suited for environmental monitoring. OGC's Sensor Web Enablement (SWE) framework defines a suite of web service interfaces and communication protocols abstracting from the heterogeneity of sensor (network) communication. Definition The term "sensor web" was first used by Kevin Delin of NASA in 1997, to describe a novel wireless sensor network architecture where the individual pieces could act and coordinate as a whole. In this sense, the term describes a specific type of sensor network: an amorphous network of spatially distributed sensor platforms (pods) that wirelessly communicate with each other. This amorphous architecture is unique since it is both synchronous and router-free, making it distinct from the more typical TCP/IP-like network schemes. A pod as a physical platform for a sensor can be orbital or terrestrial, fixed or mobile and might even have real time accessibility via the Internet. Pod-to-pod communication is both omni-directional and bi-directional where each pod sends out collected data to every other pod in the network. Hence, the architecture allows every pod to know what is going on with every other pod throughout the sensor web at each measurement cycle. The individual pods (nodes) were all hardware equivalent and Delin's architecture did not require special gateways or routing to have each of the individual pieces communicate with one another or with an end user. Delin's definition of a sensor web was an autonomous, stand-alone, sensing entity – capable of interpreting and reacting to the data measured – that does not necessarily require the presence of the World Wide Web to function. As a result, on-the-fly data fusion, such as false-positive identification and plume tracking, can occur within the sensor web itself and the system subsequently reacts as a coordinated, collective whole to the incoming data stream. For example, instead of
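The synchronous, router-free pod-to-pod exchange described above can be illustrated with a small simulation (a hypothetical sketch, not NASA code): on each measurement cycle every pod broadcasts its reading to every other pod, so each pod ends the cycle with a picture of the whole web.

import java.util.*;

public class SensorWebSim {
    static class Pod {
        final int id;
        final Map<Integer, Double> webState = new HashMap<>(); // what this pod knows about all pods
        Pod(int id) { this.id = id; }
        double measure() { return 20 + 5 * Math.random(); }    // stand-in for a real sensor reading
        void receive(int fromId, double value) { webState.put(fromId, value); }
    }

    public static void main(String[] args) {
        List<Pod> pods = new ArrayList<>();
        for (int i = 0; i < 4; i++) pods.add(new Pod(i));

        for (int cycle = 0; cycle < 3; cycle++) {
            // Omni-directional, bi-directional exchange: every pod sends to every pod.
            for (Pod sender : pods) {
                double v = sender.measure();
                for (Pod receiver : pods) receiver.receive(sender.id, v);
            }
            System.out.println("cycle " + cycle + ": pod 0 sees " + pods.get(0).webState);
        }
    }
}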
https://en.wikipedia.org/wiki/Unipotent
In mathematics, a unipotent element r of a ring R is one such that r − 1 is a nilpotent element; in other words, (r − 1)^n is zero for some n. In particular, a square matrix M is a unipotent matrix if and only if its characteristic polynomial P(t) is a power of t − 1. Thus all the eigenvalues of a unipotent matrix are 1. The term quasi-unipotent means that some power is unipotent, for example for a diagonalizable matrix with eigenvalues that are all roots of unity. In the theory of algebraic groups, a group element is unipotent if it acts unipotently in a certain natural group representation. A unipotent affine algebraic group is then a group with all elements unipotent. Definition Definition with matrices Consider the group U_n of upper-triangular n × n matrices with 1's along the diagonal. Then, a unipotent group can be defined as a subgroup of some U_n. Using scheme theory, the group U_n can be defined as a group scheme, and an affine group scheme is unipotent if it is a closed group scheme of such a scheme. Definition with ring theory An element x of an affine algebraic group is unipotent when its associated right translation operator, r_x, on the affine coordinate ring A[G] of G is locally unipotent as an element of the ring of linear endomorphisms of A[G]. (Locally unipotent means that its restriction to any finite-dimensional stable subspace of A[G] is unipotent in the usual ring-theoretic sense.) An affine algebraic group is called unipotent if all its elements are unipotent. Any unipotent algebraic group is isomorphic to a closed subgroup of the group of upper triangular matrices with diagonal entries 1, and conversely any such subgroup is unipotent. In particular any unipotent group is a nilpotent group, though the converse is not true (counterexample: the diagonal matrices of GL_n(k)). For example, the standard representation of U_n on k^n with standard basis e_1, ..., e_n has the fixed vector e_1. Definition with representation theory If a unipo
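A concrete instance of the matrix definition (a standard example, not taken from the article):

% The 2x2 shear matrix is unipotent:
M = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \qquad
M - I = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad (M - I)^2 = 0,
% so its characteristic polynomial is P(t) = (t - 1)^2 and its only eigenvalue is 1.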
https://en.wikipedia.org/wiki/UNIX%20System%20III
UNIX System III (or System 3) is a discontinued version of the Unix operating system released by AT&T's Unix Support Group (USG). AT&T announced System III in late 1981, and it was first released outside of Bell Labs in 1982. UNIX System III was a mix of various AT&T Unix systems: Version 7 Unix, PWB/UNIX 2.0, CB UNIX 3.0, UNIX/RT and UNIX/32V. System III supported the DEC PDP-11 and VAX computers. The system was apparently called System III because it was considered the outside release of UNIX/TS 3.0.1 and CB UNIX 3 which were internally supported Bell Labs Unices; its manual refers to it as UNIX Release 3.0 and there were no Unix versions called System I or System II. There was no official release of UNIX/TS 4.0 (which would have been System IV) either, so System III was succeeded by System V, based on UNIX/TS 5.0. System III introduced new features such as named pipes, the uname system call and command, and the run queue. It also combined various improvements to Version 7 Unix by outside organizations. However, it did not include notable additions made in BSD such as the C shell (csh) and screen editing. Third-party variants of System III include (early versions of) HP-UX, IRIX, IS/3 and PC/IX, PC-UX, PNX, SINIX, Venix and Xenix. References External links System III source code Bell Labs Unices Discontinued operating systems 1982 software
https://en.wikipedia.org/wiki/Stability%20criterion
In control theory, and especially stability theory, a stability criterion establishes when a system is stable. A number of stability criteria are in common use: Circle criterion Jury stability criterion Liénard–Chipart criterion Nyquist stability criterion Routh–Hurwitz stability criterion Vakhitov–Kolokolov stability criterion Barkhausen stability criterion Stability may also be determined by means of root locus analysis. Although the concept of stability is general, there are several narrower definitions through which it may be assessed: BIBO stability Linear stability Lyapunov stability Orbital stability Stability theory
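As an illustration, the Routh–Hurwitz criterion reduces to a one-line test for a monic cubic; the sketch below (a standard specialization, not from the article) checks whether all roots of s^3 + a s^2 + b s + c lie in the open left half-plane, which holds iff a > 0, c > 0 and ab > c.

public class RouthHurwitzCubic {
    // Routh-Hurwitz test specialized to p(s) = s^3 + a*s^2 + b*s + c
    // with real coefficients; returns true iff all roots have negative real part.
    static boolean isStable(double a, double b, double c) {
        return a > 0 && c > 0 && a * b > c;
    }

    public static void main(String[] args) {
        System.out.println(isStable(6, 11, 6));  // true:  (s+1)(s+2)(s+3), roots -1, -2, -3
        System.out.println(isStable(1, -1, -1)); // false: (s-1)(s+1)^2 has a root at +1
    }
}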
https://en.wikipedia.org/wiki/Mathieu%20function
In mathematics, Mathieu functions, sometimes called angular Mathieu functions, are solutions of Mathieu's differential equation y'' + (a − 2q cos 2x) y = 0, where a and q are real-valued parameters. Since we may add π/2 to x to change the sign of q, it is a usual convention to set q ≥ 0. They were first introduced by Émile Léonard Mathieu, who encountered them while studying vibrating elliptical drumheads. They have applications in many fields of the physical sciences, such as optics, quantum mechanics, and general relativity. They tend to occur in problems involving periodic motion, or in the analysis of partial differential equation (PDE) boundary value problems possessing elliptic symmetry. Definition Mathieu functions In some usages, Mathieu function refers to solutions of the Mathieu differential equation for arbitrary values of a and q. When no confusion can arise, other authors use the term to refer specifically to π- or 2π-periodic solutions, which exist only for special values of a and q. More precisely, for given (real) q such periodic solutions exist for an infinite number of values of a, called characteristic numbers, conventionally indexed as two separate sequences a_n(q) (n = 0, 1, 2, ...) and b_n(q) (n = 1, 2, 3, ...). The corresponding functions are denoted ce_n(x, q) and se_n(x, q), respectively. They are sometimes also referred to as cosine-elliptic and sine-elliptic, or Mathieu functions of the first kind. As a result of assuming that q is real, both the characteristic numbers and associated functions are real-valued. ce_n and se_n can be further classified by parity and periodicity (both with respect to x), as follows: {| class="wikitable" ! Function !! Parity !! Period |- | ce_{2n} | even | π |- | ce_{2n+1} | even | 2π |- | se_{2n+1} | odd | 2π |- | se_{2n+2} | odd | π |} The indexing with the integer n, besides serving to arrange the characteristic numbers in ascending order, is convenient in that ce_n and se_n become proportional to cos nx and sin nx as q → 0. With n being an integer, this gives rise to the classification of ce_n and se_n as Mathieu functions (of the first kind) of integral order. For general a and q, solutions besid
https://en.wikipedia.org/wiki/Doron%20Zeilberger
Doron Zeilberger (דורון ציילברגר, born 2 July 1950) is an Israeli mathematician, known for his work in combinatorics. Education and career He received his doctorate from the Weizmann Institute of Science in 1976, under the direction of Harry Dym, with the thesis "New Approaches and Results in the Theory of Discrete Analytic Functions." He is a Board of Governors Professor of Mathematics at Rutgers University. Contributions Zeilberger has made contributions to combinatorics, hypergeometric identities, and q-series. Zeilberger gave the first proof of the alternating sign matrix conjecture, noteworthy not only for its mathematical content, but also for the fact that Zeilberger recruited nearly a hundred volunteer checkers to "pre-referee" the paper. In 2011, together with Manuel Kauers and Christoph Koutschan, Zeilberger proved the q-TSPP conjecture, which was independently stated in 1983 by George Andrews and David P. Robbins. Zeilberger is an ultrafinitist. He is also known for crediting his computer "Shalosh B. Ekhad" as a co-author ("Shalosh" and "Ekhad" mean "Three" and "One" in Hebrew respectively, referring to his first computer, an AT&T 3B1), and for his provocative opinions. Awards and honors Zeilberger received a Lester R. Ford Award in 1990. Together with Herbert Wilf, Zeilberger was awarded the American Mathematical Society's Leroy P. Steele Prize for Seminal Contributions to Research in 1998 for their development of WZ theory, which has revolutionized the field of hypergeometric summation. In 2004, Zeilberger was awarded the Euler Medal; the citation refers to him as "a champion of using computers and algorithms to do mathematics quickly and efficiently". In 2016 he received, together with Manuel Kauers and Christoph Koutschan, the David P. Robbins Prize of the American Mathematical Society. Zeilberger was a member of the inaugural 2013 class of fellows of the American Mathematical Society. See also MacMahon Master theorem Wilf–Zeilberger pair R
https://en.wikipedia.org/wiki/Reliability%20engineering
Reliability engineering is a sub-discipline of systems engineering that emphasizes the ability of equipment to function without failure. Reliability describes the ability of a system or component to function under stated conditions for a specified period of time. Reliability is closely related to availability, which is typically described as the ability of a component or system to function at a specified moment or interval of time. The reliability function is theoretically defined as the probability of success at time t, which is denoted R(t). In practice, it is calculated using different techniques and its value ranges between 0 and 1, where 0 indicates no probability of success while 1 indicates definite success. This probability is estimated from detailed (physics of failure) analysis, previous data sets or through reliability testing and reliability modeling. Availability, testability, maintainability and maintenance are often defined as a part of "reliability engineering" in reliability programs. Reliability often plays the key role in the cost-effectiveness of systems. Reliability engineering deals with the prediction, prevention and management of high levels of "lifetime" engineering uncertainty and risks of failure. Although stochastic parameters define and affect reliability, reliability is not only achieved by mathematics and statistics. "Nearly all teaching and literature on the subject emphasize these aspects, and ignore the reality that the ranges of uncertainty involved largely invalidate quantitative methods for prediction and measurement." For example, it is easy to represent "probability of failure" as a symbol or value in an equation, but it is almost impossible to predict its true magnitude in practice, which is massively multivariate, so having the equation for reliability does not begin to equal having an accurate predictive measurement of reliability. Reliability engineering relates closely to Quality Engineering, safety engineering and syst
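In symbols, the reliability function just mentioned is the survival probability of the lifetime T; the constant-failure-rate (exponential) case below is a standard textbook model, given here only as an illustration.

% Reliability as the survival probability of the lifetime T:
R(t) = \Pr(T > t) = 1 - F(t), \qquad R(0) = 1 .
% Constant hazard rate \lambda (exponential lifetime model), with mean time to failure:
R(t) = e^{-\lambda t}, \qquad \mathrm{MTTF} = \int_0^{\infty} R(t)\,dt = \frac{1}{\lambda} .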
https://en.wikipedia.org/wiki/Cauchy%20problem
A Cauchy problem in mathematics asks for the solution of a partial differential equation that satisfies certain conditions that are given on a hypersurface in the domain. A Cauchy problem can be an initial value problem or a boundary value problem (for this case see also Cauchy boundary condition). It is named after Augustin-Louis Cauchy. Formal statement For a partial differential equation defined on Rn+1 and a smooth manifold S ⊂ Rn+1 of dimension n (S is called the Cauchy surface), the Cauchy problem consists of finding the unknown functions of the differential equation with respect to the independent variables that satisfies subject to the condition, for some value , where are given functions defined on the surface (collectively known as the Cauchy data of the problem). The derivative of order zero means that the function itself is specified. Cauchy–Kowalevski theorem The Cauchy–Kowalevski theorem states that If all the functions are analytic in some neighborhood of the point , and if all the functions are analytic in some neighborhood of the point , then the Cauchy problem has a unique analytic solution in some neighborhood of the point . See also Cauchy boundary condition Cauchy horizon References External links Cauchy problem at MathWorld. Partial differential equations Mathematical problems Boundary value problems
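A prototypical instance (an illustrative example, not from the article) is the Cauchy problem for the one-dimensional wave equation, with Cauchy data f and g prescribed on the hypersurface t = 0:

u_{tt} = c^2 u_{xx}, \qquad x \in \mathbb{R},\; t > 0,
u(x, 0) = f(x), \qquad u_t(x, 0) = g(x) .
% The zeroth-order condition fixes u itself on the Cauchy surface; the
% first-order condition fixes its normal derivative there.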
https://en.wikipedia.org/wiki/Key%20distribution
In symmetric key cryptography, both parties must possess a secret key which they must exchange prior to using any encryption. Distribution of secret keys has been problematic until recently, because it involved face-to-face meeting, use of a trusted courier, or sending the key through an existing encryption channel. The first two are often impractical and always unsafe, while the third depends on the security of a previous key exchange. In public key cryptography, the key distribution of public keys is done through public key servers. When a person creates a key-pair, they keep one key private and the other, known as the public-key, is uploaded to a server where it can be accessed by anyone to send the user a private, encrypted, message. Secure Sockets Layer (SSL) uses Diffie–Hellman key exchange if the client does not have a public-private key pair and a published certificate in the public key infrastructure, and Public Key Cryptography if the user does have both the keys and the credential. Key distribution is an important issue in wireless sensor network (WSN) design. There are many key distribution schemes in the literature that are designed to maintain an easy and at the same time secure communication among sensor nodes. The most accepted method of key distribution in WSNs is key predistribution, where secret keys are placed in sensor nodes before deployment. When the nodes are deployed over the target area, the secret keys are used to create the network. For more info see: key distribution in wireless sensor networks. Storage of keys in the cloud Key distribution and key storage are more problematic in the cloud due to the transitory nature of the agents on it. Secret sharing can be used to store keys at many different servers on the cloud. In secret sharing, a secret is used as a seed to generate a number of distinct secrets, and the pieces are distributed so that some subset of the recipients can jointly authenticate themselves and use the secret info
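A minimal sketch of the Diffie–Hellman key exchange mentioned above, using Java's standard JCA classes (KeyPairGenerator and KeyAgreement); error handling and the derivation of an actual encryption key from the raw shared secret are omitted for brevity.

import javax.crypto.KeyAgreement;
import javax.crypto.interfaces.DHPublicKey;
import java.security.*;
import java.util.Arrays;

public class DhExample {
    public static void main(String[] args) throws Exception {
        // Alice generates a fresh 2048-bit Diffie-Hellman key pair.
        KeyPairGenerator aliceGen = KeyPairGenerator.getInstance("DH");
        aliceGen.initialize(2048);
        KeyPair alice = aliceGen.generateKeyPair();

        // Bob reuses the group parameters from Alice's public key.
        KeyPairGenerator bobGen = KeyPairGenerator.getInstance("DH");
        bobGen.initialize(((DHPublicKey) alice.getPublic()).getParams());
        KeyPair bob = bobGen.generateKeyPair();

        byte[] a = agree(alice.getPrivate(), bob.getPublic());
        byte[] b = agree(bob.getPrivate(), alice.getPublic());
        System.out.println(Arrays.equals(a, b)); // true: both sides derive the same secret
    }

    static byte[] agree(PrivateKey own, PublicKey peer) throws Exception {
        KeyAgreement ka = KeyAgreement.getInstance("DH");
        ka.init(own);
        ka.doPhase(peer, true); // lastPhase = true for two-party DH
        return ka.generateSecret();
    }
}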
https://en.wikipedia.org/wiki/Hipcrime%20%28Usenet%29
HipCrime was both the screen name of a Usenet user and a software application distributed by, and presumably written by, this individual or group. The name derives from a neologism in the John Brunner science fiction novel Stand on Zanzibar. HipCrime's Newsagent HipCrime's Newsagent software is a free and open-source Usenet control client. The program is written in Java and allows the user to auto-cancel any messages on Usenet based on author, subject, organization, message-ID, or path. It also allows the user to replace the body of any message with text of their choosing. The software also monitors any posts the user chooses and reposts them if they are removed. Additionally, it allows regular users to act as Usenet Administrators and create (or remove) entire newsgroups. CA Inc. has classified this as denial of service software, as well as flooder software, a specific type of denial of service attack. HipCrime's ActiveAgent HipCrime is referred to as "a leading Usenet Terrorist" by James Farmer, maintainer of Spamfaq: Part 3: Understanding NANAE. Andrew Leonard, in his book Bots: The Origin of New Species, also credits HipCrime with creating the earliest web-distributed spambot. This bot, known as HipCrime's ActiveAgent, was a Java applet which allowed anybody with a web browser to send mass volumes of unsolicited e-mail messages. The ActiveAgent has since been expanded into an open-source application (known as MarketCom's MktAgent) and is relied upon heavily by the largest e-mail spam stock pump and dump gangs. See also List of spammers References External links An early version of HipCrime's NewsAgent Gibbering clones the future of Usenet? HipCrime: A History in URLs July 1996 - May 1998 and May - June 1998 Usenet Usenet people Spamming
https://en.wikipedia.org/wiki/Radura
The Radura is the international symbol indicating a food product has been irradiated. The Radura is usually green and resembles a plant in a circle. The top half of the circle is dashed. Graphical details and colours vary between countries. Meaning of the word "Radura" The word "Radura" is derived from radurization, itself a portmanteau combining the initial letters of the word "radiation" with the stem of "durus", the Latin word for hard, lasting. History The inventors of the Radura symbol, who knew of this proposed terminology, came from the former Pilot Plant for Food Irradiation, Wageningen, Netherlands, which was the nucleus of the later Gammaster, today known as Isotron. The director at the time, R.M. Ulmann, introduced this symbol to the international community. In his lecture, Ulmann also provided the interpretation of this symbol: food, as an agricultural product, i.e., a plant (dot and two leaves), in a closed package (the circle), irradiated from the top through the package by penetrating ionizing rays (the breaks in the upper part of the circle). The Radura was originally used in the 1960s exclusively by a pilot plant for food irradiation in Wageningen, Netherlands, that owned the copyright. Jan Leemhorst, then president of Gammaster, untiringly propagated the use of this logo internationally. The use of the logo was permitted to everybody adhering to the same rules of quality. The symbol was also widely used by Atomic Energy of South Africa, including the labelling by the term 'radurized' instead of irradiated. By his intervention, the new logo was also included in the Codex Alimentarius Standard on irradiated food as an option to label irradiated food. Today it is found in the Codex Alimentarius Standard on Labelling of Prepacked Food. Usage The symbol Radura was originally used as a symbol of quality for food processed by ionizing radiation. The Dutch pilot plant used the logo as an identification of irradiated products and as a
https://en.wikipedia.org/wiki/Churn%20drill
The churn drill is a large drilling machine that bores large-diameter holes in the ground. In mining, they were used to drill into the soft carbonate rocks of lead- and zinc-hosted regions to extract bulk samples of the ore. Churn drills are also called percussion drills, as they function by lifting and dropping a heavy chisel-like bit which breaks the rock as it falls. Churn drills are most effective in soft- to medium-density rock at relatively shallow depths (10–50 metres). History Churn drills were invented as early as 221 BC in Qin dynasty China, capable of reaching a depth of 1500 m. Churn drills in ancient China were built of wood and labor-intensive, but were able to go through solid rock. The churn drill was transmitted to Europe during the 12th century. A churn drill using steam power, based on "the ancient Chinese method of lifting and dropping a rod tipped with a bit," was first built in 1835 by Isaac Singer in the United States, according to The History of Grinding. In America, they were common in the Tri-State areas during the lead and zinc mining in Missouri, Oklahoma, and Kansas. There is an example of one of these machines at the Northern Life Museum in Fort Smith, Northwest Territories, Canada. It was used in 1929–1930 at the Pine Point lead and zinc mine in the Northwest Territories. References External links https://web.archive.org/web/20050310013804/http://www.maden.hacettepe.edu.tr/dmmrt/dmmrt217.html Tools Chinese inventions Mining equipment
https://en.wikipedia.org/wiki/Telop
A TELOP (TELevision OPtical Slide Projector) was the trademark name of a multifunction, four-channel "project-all" slide projector developed by the Gray Research & Development Company for television usage, introduced in 1949. It was best remembered in the industry as an opaque slide projector for title cards. Before Telop In the early days of television, there were two types of slides for broadcast—a transparent slide or transparency, and an opaque slide, or Balop (a genericized trademark of Bausch & Lomb's Balopticon projectors). Transparency slides were prepared as 2-inch square cards mounted in cardboard or glass, or film, surrounded by a half-inch of masking on all four sides. Opaque, "Balop" slides were cards on stock or larger (always maintaining the 4:3 aspect ratio) that were photographed by the use of a Balopticon. Opaque cards were popular as shooting a card on a stand often caused keystoning problems. A fixed size and axis would ensure no geometric distortion. Telop Models The Gray Company introduced the original Telop in 1949 (with additional models, later to be known as the Telop I). The dual projector unit offered both transparent and opaque projection. The standard sizes were used for both transparent and card slides, but they could also be made on 35mm film, on glass, or on film cards. The third and fourth units on the Telop were attachments with a vertical ticker-tape type roll strip that could be typed on and a horizontal unit similar to a small teleprompter used for title "crawls." By 1952, when the Telop II was introduced, CBS and NBC were using Telop machines in combination with TV cameras to permit instant fading from one object to another by superimposition. The Telop II was a smaller version of the original model for new TV stations on a budget and featured two openings rather than the original four. At the same time, Gray also developed a Telojector, a gun-turret style slide projector for slides which had two projectors, f
https://en.wikipedia.org/wiki/List%20of%20contemporary%20Iranian%20scientists%2C%20scholars%2C%20and%20engineers
The following is a list of notable Iranian scholars, scientists and engineers around the world from the contemporary period. For the pre-modern era, see List of pre-modern Iranian scientists and scholars. For mathematicians, see List of Iranian mathematicians. A Behnaam Aazhang, professor, Rice University Akbar Adibi, electronic engineer, VLSI researcher, professor of engineering, Amirkabir University of Technology Majid Adibzadeh, scholar and political scientist Haleh Afshar, academic and peer, University of York Masoud Alimohammadi, assassinated quantum field theorist and elementary-particle physicist Abbas Amanat, professor of history and international studies at Yale University Shahram Amiri Anousheh Ansari, the world's first female space tourist, co-founder and chairman of Prodea Systems, Inc., co-founder and former CEO of Telecom Technologies, Inc. (TTI) Farhad Ardalan, physicist, IPM Nima Arkani-Hamed, professor, Institute for Advanced Study Nasser Ashgriz, professor of mechanical and industrial engineering, University of Toronto Touraj Atabaki, professor of social history of the Middle East and Central Asia, Department of History, University of Amsterdam Reza Amrollahi, physicist and former president of the Atomic Energy Organization of Iran B Mehdi Bahadori, professor of mechanical engineering Hossein Baharvand, professor of stem cell and developmental biology, and director of Royan Institute for Stem Cell Biology and Technology Shaul Bakhash, historian, George Mason University Asef Bayat, professor of sociology and Middle East Studies at the University of Illinois at Urbana-Champaign Nariman Behravesh, chief economist and executive vice president, Global Insight Mahmoud Behzad, pioneer Iranian biologist Mina Bissell, director of UC Berkeley Life Sciences Division D Touraj Daryaee, Iranologist and historian, University of California, Irvine E Abbas Edalat, professor of computer science and mathematics, Imperial College London
https://en.wikipedia.org/wiki/Barcode%20printer
A barcode printer is a computer peripheral for printing barcode labels or tags that can be attached to, or printed directly on, physical objects. Barcode printers are commonly used to label cartons before shipment, or to label retail items with UPCs or EANs. The most common barcode printers employ one of two different printing technologies. Direct thermal printers use a printhead to generate heat that causes a chemical reaction in specially designed paper that turns the paper black. Thermal transfer printers also use heat, but instead of causing a reaction in the paper, the heat melts a waxy or resin substance on a ribbon that runs over the label or tag material. The heat transfers ink from the ribbon to the paper. Direct thermal printers are generally less expensive, but they produce labels that can become illegible if exposed to heat, direct sunlight, or chemical vapors. Barcode printers are designed for different markets. Industrial barcode printers are used in large warehouses and manufacturing facilities. They have large paper capacities, operate faster and have a longer service life. For retail and office environments, desktop barcode printers are most common. See also Computer printer Label printer References External links Printer Computer printers Automatic identification and data capture Packaging machinery
https://en.wikipedia.org/wiki/Intergenic%20region
An intergenic region is a stretch of DNA sequences located between genes. Intergenic regions may contain functional elements and junk DNA. Properties and functions Intergenic regions may contain a number of functional DNA sequences such as promoters and regulatory elements, enhancers, spacers, and (in eukaryotes) centromeres. They may also contain origins of replication, scaffold attachment regions, and transposons and viruses. Non-functional DNA elements such as pseudogenes and repetitive DNA, both of which are types of junk DNA, can also be found in intergenic regions—although they may also be located within genes in introns. As all scientific knowledge is ultimately tentative—and in principle subject to revision given better evidence—it is possible some well-characterized intergenic regions (but also intra-genic regions like introns) may hypothetically contain as yet unidentified functional elements, such as non-coding RNA genes or regulatory sequences. Such discoveries occur from time to time, but the amount of functional DNA discovered usually constitutes only a tiny fraction of the overall amount of intergenic/intronic DNA. Intergenic regions in different organisms In humans, intergenic regions comprise about 50% of the genome, whereas this number is much less in bacteria (15%) and yeast (30%). As with most other non-coding DNA, the GC-content of intergenic regions varies considerably among species. For example, in Plasmodium falciparum, many intergenic regions have an AT content of 90%. Molecular evolution of intergenic regions Functional elements in intergenic regions will evolve slowly because their sequence is maintained by negative selection. In species with very large genomes, a large percentage of intergenic regions is probably junk DNA and it will evolve at the neutral rate of evolution. Junk DNA sequences are not maintained by purifying selection but gain-of-function mutations with deleterious fitness effects can occur. Phylostratig
https://en.wikipedia.org/wiki/Field%20encapsulation
In computer programming, field encapsulation involves providing methods that can be used to read from or write to the field rather than accessing the field directly. Sometimes these accessor methods are called getX and setX (where X is the field's name), which are also known as mutator methods. Usually the accessor methods have public visibility while the field being encapsulated is given private visibility - this allows a programmer to restrict what actions another user of the code can perform. Compare the following Java class in which the name field has not been encapsulated: public class NormalFieldClass { public String name; public static void main(String[] args) { NormalFieldClass example1 = new NormalFieldClass(); example1.name = "myName"; System.out.println("My name is " + example1.name); } } with the same example using encapsulation: public class EncapsulatedFieldClass { private String name; public String getName() { return name; } public void setName(String newName) { name = newName; } public static void main(String[] args) { EncapsulatedFieldClass example1 = new EncapsulatedFieldClass(); example1.setName("myName"); System.out.println("My name is " + example1.getName()); } } In the first example a user is free to use the public name variable however they see fit - in the second however the writer of the class retains control over how the private name variable is read and written by only permitting access to the field via its getName and setName methods. Advantages The internal storage format of the data is hidden; in the example, an expectation of the use of restricted character sets could allow data compression through recoding (e.g., of eight bit characters to a six bit code). An attempt to encode characters out of the range of the expected data could then be handled by casting an error in the set routine. In general, the ge
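The restricted-character-set point above can be made concrete with a small variation of the encapsulated class (a hypothetical sketch): the setter validates its argument before storing it, something a plain public field cannot enforce.

public class ValidatedFieldClass {
    private String name;

    public String getName() {
        return name;
    }

    public void setName(String newName) {
        // Reject input outside the expected character range instead of
        // silently storing it; with a public field this check is impossible.
        if (!newName.matches("[A-Za-z ]+")) {
            throw new IllegalArgumentException("Unexpected characters in: " + newName);
        }
        name = newName;
    }
}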
https://en.wikipedia.org/wiki/Chopper%20%28electronics%29
In electronics, a chopper circuit is any of numerous types of electronic switching devices and circuits used in power control and signal applications. A chopper is a device that converts fixed DC input to a variable DC output voltage directly. Essentially, a chopper is an electronic switch that is used to interrupt one signal under the control of another. In power electronics applications, since the switching element is either fully on or fully off, its losses are low and the circuit can provide high efficiency. However, the current supplied to the load is discontinuous and may require smoothing or a high switching frequency to avoid undesirable effects. In signal processing circuits, use of a chopper stabilizes a system against drift of electronic components; the original signal can be recovered after amplification or other processing by a synchronous demodulator that essentially un-does the "chopping" process. Applications Chopper circuits are used in multiple applications, including: Switched mode power supplies, including DC to DC converters. Speed controllers for DC motors Driving brushless DC torque motors or stepper motors in actuators Class D electronic amplifiers Switched capacitor filters Variable-frequency drives D.C. voltage boosting Battery-operated electric cars Battery chargers Railway traction Lighting and lamp controls Control strategies For all the chopper configurations operating from a fixed DC input voltage, the average value of the output voltage is controlled by periodic opening and closing of the switches used in the chopper circuit. The average output voltage can be controlled by different techniques, namely: Pulse-width modulation Frequency modulation Variable frequency, variable pulse width CLC control In pulse-width modulation the switches are turned on at a constant chopping frequency. The total time period of on
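For the pulse-width modulation strategy just described, the standard relation between duty cycle and average output voltage of a step-down chopper is (stated here as a textbook illustration, not a formula from the article):

V_o = \frac{T_{on}}{T_{on} + T_{off}}\, V_s = \frac{T_{on}}{T}\, V_s = D\, V_s, \qquad 0 \le D \le 1 .
% The average output is therefore set by varying the duty cycle D while
% the chopping frequency f = 1/T is held constant.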
https://en.wikipedia.org/wiki/Local%20diffeomorphism
In mathematics, more specifically differential topology, a local diffeomorphism is intuitively a map between smooth manifolds that preserves the local differentiable structure. The formal definition of a local diffeomorphism is given below. Formal definition Let X and Y be differentiable manifolds. A function f : X → Y is a local diffeomorphism if for each point x in X there exists an open set U containing x such that f(U) is open in Y and the restriction f|_U : U → f(U) is a diffeomorphism. A local diffeomorphism is a special case of an immersion f : X → Y, where the image f(U) of U under f locally has the differentiable structure of a submanifold of Y; then f(U) and X may have a lower dimension than Y. Characterizations A map f : X → Y is a local diffeomorphism if and only if it is a smooth immersion (smooth local embedding) and an open map. The inverse function theorem implies that a smooth map f is a local diffeomorphism if and only if the derivative df_x : T_xX → T_{f(x)}Y is a linear isomorphism for all points x in X. This implies that X and Y must have the same dimension. A map between two connected manifolds of equal dimension (dim X = dim Y) is a local diffeomorphism if and only if it is a smooth immersion (smooth local embedding), or equivalently, if and only if it is a smooth submersion. This is because every smooth immersion is a locally injective function, while invariance of domain guarantees that any continuous injective function between manifolds of equal dimensions is necessarily an open map. Discussion For instance, even though all manifolds look locally the same (as R^n for some n) in the topological sense, it is natural to ask whether their differentiable structures behave in the same manner locally. For example, one can impose two different differentiable structures on a given topological manifold that make it into a differentiable manifold, but both structures are not locally diffeomorphic (see below). Although local diffeomorphisms preserve differentiable structure locally, one must be able to "patch up" these (local) diffeomorphisms to ensure that the domain is the entire (smooth) manifold. Fo
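A standard example separating the two notions (illustrative, not specific to this article):

% The angle map from the line to the circle,
f : \mathbb{R} \to S^1, \qquad f(t) = (\cos t, \sin t),
% is a local diffeomorphism: every t has a neighborhood mapped
% diffeomorphically onto an open arc. It is not a diffeomorphism,
% since f(t) = f(t + 2\pi) shows it is not injective.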
https://en.wikipedia.org/wiki/Flavor%20scalping
Flavor scalping is a term used in the packaging industry to describe the loss of quality of a packaged item due to either its volatile flavors being absorbed by the packaging or the item absorbing undesirable flavors from its packaging. A classic example is the absorption of various plastic flavors when soft drinks are stored in plastic bottles for an extended period. See also Cork tainting References Flavors Packaging
https://en.wikipedia.org/wiki/194%20%28number%29
194 (one hundred [and] ninety-four) is the natural number following 193 and preceding 195. In mathematics 194 is the smallest Markov number that is neither a Fibonacci number nor a Pell number 194 is the smallest number written as the sum of three squares in five ways 194 is the number of irreducible representations of the Monster group 194!! - 1 is prime See also 194 (disambiguation) References Integers
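The three-squares claim is easy to verify by brute force; the short Java program below counts unordered triples of positive integers whose squares sum to 194 (reading the claim as counting representations with positive squares, which is an assumption about the intended convention).

public class ThreeSquares194 {
    public static void main(String[] args) {
        int count = 0;
        // Enumerate a <= b <= c with a^2 + b^2 + c^2 = 194.
        for (int a = 1; a * a <= 194; a++)
            for (int b = a; a * a + b * b <= 194; b++)
                for (int c = b; a * a + b * b + c * c <= 194; c++)
                    if (a * a + b * b + c * c == 194) {
                        System.out.printf("194 = %d^2 + %d^2 + %d^2%n", a, b, c);
                        count++;
                    }
        System.out.println(count + " ways"); // prints 5
    }
}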
https://en.wikipedia.org/wiki/Theca
In biology, a theca (plural thecae) is a sheath or a covering. Botany In botany, the theca is related to plant's flower anatomy. The theca of an angiosperm consists of a pair of microsporangia that are adjacent to each other and share a common area of dehiscence called the stomium. Any part of a microsporophyll that bears microsporangia is called an anther. Most anthers are formed on the apex of a filament. An anther and its filament together form a typical (or filantherous) stamen, part of the male floral organ. The typical anther is bilocular, i.e. it consists of two thecae. Each theca contains two microsporangia, also known as pollen sacs. The microsporangia produce the microspores, which for seed plants are known as pollen grains. If the pollen sacs are not adjacent, or if they open separately, then no thecae are formed. In Lauraceae, for example, the pollen sacs are spaced apart and open independently. The tissue between the locules and the cells is called the connective and the parenchyma. Both pollen sacs are separated by the stomium. When the anther is dehiscing, it opens at the stomium. The outer cells of the theca form the epidermis. Below the epidermis, the somatic cells form the tapetum. These support the development of microspores into mature pollen grains. However, little is known about the underlying genetic mechanisms, which play a role in male sporo- and gametogenesis. The thecal arrangement of a typical stamen can be as follows: Divergent: both thecae in line, and forming an acute angle with the filament Transverse (or explanate): both thecae exactly in line, at right angles with the filament Oblique: the thecae fixed to each other in an oblique way Parallel: the thecae fixed to each other in a parallel way Zoology In biology, the theca of follicle can also refer to the site of androgen production in females. The theca of the spinal cord is called the thecal sac, and intrathecal injections are made there or in the subarachnoid space o
https://en.wikipedia.org/wiki/Neural%20network
A neural network is a neural circuit of biological neurons, sometimes also called a biological neural network, or a network of artificial neurons or nodes in the case of an artificial neural network. Artificial neural networks are used for solving artificial intelligence (AI) problems; they model connections of biological neurons as weights between nodes. A positive weight reflects an excitatory connection, while negative values mean inhibitory connections. All inputs are modified by a weight and summed. This activity is referred to as a linear combination. Finally, an activation function controls the amplitude of the output. For example, an acceptable range of output is usually between 0 and 1, or it could be −1 and 1. These artificial networks may be used for predictive modeling, adaptive control and applications where they can be trained via a dataset. Self-learning resulting from experience can occur within networks, which can derive conclusions from a complex and seemingly unrelated set of information. Overview A biological neural network is composed of a group of chemically connected or functionally associated neurons. A single neuron may be connected to many other neurons and the total number of neurons and connections in a network may be extensive. Connections, called synapses, are usually formed from axons to dendrites, though dendrodendritic synapses and other connections are possible. Apart from electrical signalling, there are other forms of signalling that arise from neurotransmitter diffusion. Artificial intelligence, cognitive modelling, and neural networks are information processing paradigms inspired by how biological neural systems process data. Artificial intelligence and cognitive modelling try to simulate some properties of biological neural networks. In the artificial intelligence field, artificial neural networks have been applied successfully to speech recognition, image analysis and adaptive control, in order to construct software agent
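A minimal sketch of the artificial neuron just described (illustrative only): inputs are scaled by weights, summed as a linear combination, and passed through a logistic activation that bounds the output to the range (0, 1).

public class Neuron {
    private final double[] weights;
    private final double bias;

    Neuron(double[] weights, double bias) {
        this.weights = weights;
        this.bias = bias;
    }

    double activate(double[] inputs) {
        double sum = bias;
        for (int i = 0; i < weights.length; i++) {
            sum += weights[i] * inputs[i]; // excitatory if weight > 0, inhibitory if < 0
        }
        return 1.0 / (1.0 + Math.exp(-sum)); // logistic sigmoid, output in (0, 1)
    }

    public static void main(String[] args) {
        Neuron n = new Neuron(new double[] {0.8, -0.5}, 0.1);
        System.out.println(n.activate(new double[] {1.0, 1.0})); // ~0.60
    }
}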
https://en.wikipedia.org/wiki/Write%20once%2C%20compile%20anywhere
Write once, compile anywhere (WOCA) is a philosophy taken by a compiler and its associated software libraries or by a software library/software framework which refers to a capability of writing a computer program that can be compiled on all platforms without the need to modify its source code. As opposed to Sun's write once, run anywhere slogan, cross-platform compatibility is implemented only at the source code level, rather than also at the compiled binary code level. Introduction There are many languages that follow the WOCA philosophy, such as C++, Pascal (see Free Pascal), Ada, Cobol, or C, provided that programs don't use functions beyond those provided by the standard library. Languages like Go go even further: as long as no system-specific features are used, a program should just work, and for system-specific elements a scheme of platform-specific files is used. A computer program may also use cross-platform libraries, which provide an abstraction layer hiding the differences between various platforms, for things like sockets and GUI, ensuring the portability of the written source code. This is, for example, supported by Qt (C++) or the Lazarus (Pascal) IDE via its LCL and corresponding widgetsets. Today, powerful desktop computers and phones often run sophisticated applications such as word processors, database managers, and spreadsheets that allow people with no programming experience to sort, extract, and manipulate their data and to create documents (such as PDF files) presenting the organized information, or to print it out. Before 2000, some of these were not available, and prior to 1980, almost none of them were. From the start of computer automation in the early 1960s, anyone who wanted a report from data, or needed to print invoices, payroll checks, purchase orders, and the other paperwork that businesses, schools, and governments generated, typed them up on a physical typewriter, possibly using p
https://en.wikipedia.org/wiki/Qutrit
A qutrit (or quantum trit) is a unit of quantum information that is realized by a 3-level quantum system that may be in a superposition of three mutually orthogonal quantum states. The qutrit is analogous to the classical radix-3 trit, just as the qubit, a quantum system described by a superposition of two orthogonal states, is analogous to the classical radix-2 bit. There is ongoing work to develop quantum computers using qutrits and qudits with multiple states. Representation A qutrit has three orthonormal basis states or vectors, often denoted |0⟩, |1⟩, and |2⟩ in Dirac or bra–ket notation. These are used to describe the qutrit as a superposition state vector in the form of a linear combination of the three orthonormal basis states: |ψ⟩ = α|0⟩ + β|1⟩ + γ|2⟩, where the coefficients α, β, γ are complex probability amplitudes, such that the sum of their squared magnitudes is unity (normalization): |α|² + |β|² + |γ|² = 1. The qubit's orthonormal basis states {|0⟩, |1⟩} span the two-dimensional complex Hilbert space H_2, corresponding to spin-up and spin-down of a spin-1/2 particle. Qutrits require a Hilbert space of higher dimension, namely the three-dimensional H_3 spanned by the qutrit's basis {|0⟩, |1⟩, |2⟩}, which can be realized by a three-level quantum system. An n-qutrit register can represent 3^n different states simultaneously, i.e., a superposition state vector in 3^n-dimensional complex Hilbert space. Qutrits have several peculiar features when used for storing quantum information. For example, they are more robust to decoherence under certain environmental interactions. In reality, manipulating qutrits directly might be tricky, and one way to do that is by using an entanglement with a qubit. Qutrit quantum gates The quantum logic gates operating on single qutrits are 3 × 3 unitary matrices and gates that act on registers of n qutrits are 3^n × 3^n unitary matrices (the elements of the unitary groups U(3) and U(3^n) respectively). The rotation operator gates for SU(3) are exp(−iθλ_a/2), where λ_a is the a-th Gell-Mann matrix, and θ is a real value (with period 4π). The Lie algebra of the
https://en.wikipedia.org/wiki/Sentient%20Networks
Sentient Networks, Inc., was an American networking hardware company that manufactured Asynchronous Transfer Mode (ATM) and Frame Relay concentrators and switches for central offices. Founded in 1995 in Sarasota, Florida, the company soon after moved to San Jose, California. It was acquired by Cisco Systems in 1999. History Sentient Networks was founded in 1995 by Nimish Shah, who was previously an employee of Loral Data Systems in Sarasota, Florida, before the latter subsidiary was shut down. He founded Sentient in Tampa Bay, Florida; the company soon saw venture capital backing from companies such as Sequoia Capital, Accel Partners, AT&T, and Ameritech. By late 1996, the company employed 35 and had leased a 6,000-square-foot building in Sarasota for its headquarters and a 1,000-square-foot facility in San Jose, California, as a regional office. In September 1996, the company moved its entire operations to a 26,000-square-foot facility in San Jose. Sentient made products for Internet service providers, interexchange carriers, and Regional Bell Operating Companies. It developed the industry's highest-density ATM Circuit Emulation Service (CES) gateway. The company's first successful product was the Ultimate 1000 network switch, which incorporated the company's new proprietary Any Service/Any Port architecture, allowing individual ports to be configured as ATM or Frame Relay switches. Released in 1996, it was purportedly the first product to comply fully with the ATM Forum's "Anchorage Accord" specification. Sentient negotiated an OEM agreement with DSC Communications in 1997 for their Ultimate 1000. Cisco Systems, also of San Jose, announced their intent to acquire the company on April 8, 1999. Sentient at that point had 102 employees; its CEO was Greg McAdoo. Sentient's employees joined the Multi-Service Switching business unit (MSSBU) of Cisco, which was the part of Cisco created by their acquisition of StrataCom. The acquisition was finalized June 25, 1999, with Sentient
https://en.wikipedia.org/wiki/Chocolate%20chip
Chocolate chips or chocolate morsels are small chunks of sweetened chocolate, used as an ingredient in a number of desserts (notably chocolate chip cookies and muffins), in trail mix and less commonly in some breakfast foods such as pancakes. They are often manufactured as teardrop-shaped volumes with flat circular bases; another variety of chocolate chips have the shape of rectangular or square blocks. They are available in various sizes, usually less than in diameter. Origin Chocolate chips were created with the invention of chocolate chip cookies in 1937 when Ruth Graves Wakefield of the Toll House Inn in the town of Whitman, Massachusetts added cut-up chunks of a semi-sweet Nestlé chocolate bar to a cookie recipe. (The Nestlé brand Toll House cookies is named for the inn.) The cookies were a huge success, and Wakefield reached an agreement in 1939 with Nestlé to add her recipe to the chocolate bar's packaging in exchange for a lifetime supply of chocolate. Initially, Nestlé included a small chopping tool with the chocolate bars. In 1941, Nestlé and at least one of its competitors started selling the chocolate in "chip" (or "morsel") form. Types Originally, chocolate chips were made of semi-sweet chocolate, but today there are many flavors. These include bittersweet, peanut butter, butterscotch, mint chocolate, white chocolate, dark chocolate, milk chocolate, and white and dark swirled chips. Uses Chocolate chips can be used in cookies, pancakes, waffles, cakes, pudding, muffins, crêpes, pies, hot chocolate, and various pastries. They are also found in many other retail food products such as granola bars, ice cream, and trail mix. Baking and melting Chocolate chips can also be melted and used in sauces and other recipes. The chips melt best at temperatures between . The melting process starts at , when the cocoa butter starts melting in the chips. The cooking temperature must never exceed for milk chocolate and white chocolate, or for dark chocolate, or
https://en.wikipedia.org/wiki/AAL1gator
The AAL1gator is a semiconductor device that implements the Circuit Emulation Service. It was developed between 1994 and 1998 and became a run-away success. It also played a role in the acquisition of four companies. The name was based on the fact that the AAL1gator implements the ATM AAL-1 standard. Development of the AAL1gator The AAL1gator was developed by Network Synthesis, Inc. under contract from Integrated Telecom Technology (IgT). It was the first semiconductor solution to implement the Circuit Emulation Service standard from the ATM Forum. It implemented 8 DS1/E1 lines worth of CES and had 256 channels. It flexibly converted the PDH DS1 signal into Asynchronous Transfer Mode cells. The AAL1gator was principally designed by the Network Synthesis CEO, Brian Holden and a consultant, Ed Lennox. Brian Holden was also involved in the ATM Forum standardization effort for the Circuit Emulation Service. Additional design efforts came from Andy Annudurai, Ravi Sajwan, and Imran Chaudhri (who also came up with the name). Chee Hu did most of the work on getting the "C" version to work at speed and to be manufacturable. Denis Smetana did most of the work on the "D" version and on the later 32 DS1 version. Jim Jacobson of OnStream Networks was the Beta Customer. Patents on the AAL1gator Two U.S. patents were issued on the AAL1gator's calendar-based transmit scheduler, one on the original product and an even better one on the "D" version enhancements designed by Denis Smetana. The scheduler implemented several intricate methods of minimizing the jitter caused by the scheduling of the 256 channels. The AAL1gator also could have gotten another patent on its method of queuing the SRTS samples, but the designers were too busy to get the application in. Functions of the AAL1gator The AAL1gator could flexibly map individual DS0s or groups of DS0s into 256 ATM VCs. It also had a high speed mode which mapped a single DS-3 into ATM. Additionally, it had a high p
https://en.wikipedia.org/wiki/Mathcad
Mathcad is computer software for the verification, validation, documentation and re-use of mathematical calculations in engineering and science, notably mechanical, chemical, electrical, and civil engineering. Released in 1986 on DOS, it introduced live editing (WYSIWYG) of typeset mathematical notation in an interactive notebook, combined with automatic computations. It was originally developed by Mathsoft, and since 2006 has been a product of Parametric Technology Corporation. History Mathcad was conceived and developed by Allen Razdow and Josh Bernoff at Mathsoft, founded by David Blohm and Razdow. It was released in 1986. It was the first system to support WYSIWYG editing and recalculation of mathematical calculations mixed with text. It was also the first to check the consistency of engineering units through the full calculation. Other equation solving systems existed at the time, but did not provide a notebook interface: Software Arts' TK Solver was released in 1982, and Borland's Eureka: The Solver was released in 1987. Mathcad was acquired by Parametric Technology in April 2006. Mathcad was named "Best of '87" and "Best of '88" by PC Magazine's editors. Overview Mathcad's central interface is an interactive notebook in which equations and expressions are created and manipulated in the same graphical format in which they are presented (WYSIWYG). This approach was adopted by systems such as Mathematica, Maple, Macsyma, MATLAB, and Jupyter. Mathcad today includes some of the capabilities of a computer algebra system, but remains oriented towards ease of use and documentation of numerical engineering applications. Mathcad is part of a broader product development system developed by PTC, addressing analytical steps in systems engineering. It integrates with PTC's Creo Elements/Pro, Windchill, and Creo Elements/View. Its live feature-level integration with Creo Elements/Pro enables Mathcad analytical models to be directly used in driving CAD geometry, and its
https://en.wikipedia.org/wiki/Dictyate
The dictyate or dictyotene is a prolonged resting phase in oogenesis. It occurs in the stage of meiotic prophase I in ootidogenesis. It starts late in fetal life and is terminated shortly before ovulation by the LH surge. Thus, although the majority of oocytes are produced in female fetuses before birth, these pre-eggs remain arrested in the dictyate stage until puberty commences and the cells complete ootidogenesis. In both mouse and human, oocyte DNA of older individuals has substantially more double-strand breaks than that of younger individuals. The dictyate appears to be an adaptation for efficiently removing damage in germ-line DNA by homologous recombinational repair. Prophase-arrested oocytes have a high capability for efficient repair of DNA damage. DNA repair capability appears to be a key quality control mechanism in the female germ line and a critical determinant of fertility. Translation halt Many mRNAs have been transcribed but not translated during the dictyate stage. Shortly before ovulation, the oocyte activates these mRNAs. Biochemistry mechanism The halt in translation during dictyate is partly explained by molecules that bind to sites on the mRNA strand, preventing translation initiation factors from binding to those sites. Two such molecules that impede initiation factors are CPEB and maskin, which bind to the CPE (cytoplasmic polyadenylation element). While these two molecules remain together, maskin binds the initiation factor eIF-4E, and thus eIF-4E can no longer interact with the other initiation factors and no translation occurs. On the other hand, dissolution of the CPEB/maskin complex leads to eIF-4E binding to the initiation factor eIF-4G, and thus translation starts, which contributes to the end of dictyate and further maturation of the oocyte. See also Oogenesis Immature ovum Embryo Zygote References Developmental biology
https://en.wikipedia.org/wiki/Electronics%20%28magazine%29
Electronics is a discontinued American trade journal that covered the radio industry and subsequent industries from 1930 to 1995. Its first issue is dated April 1930. The periodical was published with the title Electronics until 1984, when it was changed temporarily to ElectronicsWeek, but was then reverted to the original title Electronics in 1985. The journal carried a different ISSN in each of these periods: the 1930–1984 issues, the 1984–1985 issues under the title ElectronicsWeek, and the 1985–1995 issues. It was published by McGraw-Hill until 1988, when it was sold to the Dutch company VNU. VNU sold its American electronics magazines to Penton Publishing the next year. Generally a bimonthly magazine, its frequency and page count varied with the state of the industry, until its end in 1995. More than its principal rival Electronic News, it balanced its appeal to managerial and technical interests (at the time of its 1992 makeover, it described itself as a magazine for managers). The magazine is best known for publishing the April 19, 1965 article by Intel co-founder Gordon Moore, in which he outlined what came to be known as Moore's Law. Intel's hunt for Moore's original article On April 11, 2005, Intel posted a reward for an original, pristine copy of the issue of Electronics in which Moore's article was first published. The hunt was started in part because Moore had lost his personal copy after loaning it out. Intel asked a favor of Silicon Valley neighbor and auction website eBay, having a notice posted on the website. Intel's spokesman explained, "We're kind of hopeful that it will start a bit of a scavenger hunt for the engineering community of Silicon Valley, and hopefully somebody has it tucked away in a box in the corner of their garage. We think it's an important piece of history, and we'd love to have an original copy." It soon became apparent to librarians that their copies of the article were in danger of being stolen, so many libraries (including Duke Universi
https://en.wikipedia.org/wiki/Square%E2%80%93cube%20law
The square–cube law (or cube–square law) is a mathematical principle, applied in a variety of scientific fields, which describes the relationship between the volume and the surface area as a shape's size increases or decreases. It was first described in 1638 by Galileo Galilei in his Two New Sciences as the "...ratio of two volumes is greater than the ratio of their surfaces". This principle states that, as a shape grows in size, its volume grows faster than its surface area. When applied to the real world, this principle has many implications which are important in fields ranging from mechanical engineering to biomechanics. It helps explain phenomena including why large mammals like elephants have a harder time cooling themselves than small ones like mice, and why building taller and taller skyscrapers is increasingly difficult. Description The square–cube law can be stated as follows: when an object undergoes a proportional increase in size, its new surface area is proportional to the square of the multiplier and its new volume is proportional to the cube of the multiplier. Represented mathematically: $A_2 = A_1 \left(\frac{\ell_2}{\ell_1}\right)^2$, where $A_1$ is the original surface area and $A_2$ is the new surface area, and $V_2 = V_1 \left(\frac{\ell_2}{\ell_1}\right)^3$, where $V_1$ is the original volume, $V_2$ is the new volume, $\ell_1$ is the original length and $\ell_2$ is the new length. For example, a cube with a side length of 1 meter has a surface area of 6 m2 and a volume of 1 m3. If the sides of the cube were multiplied by 2, its surface area would be multiplied by the square of 2 and become 24 m2. Its volume would be multiplied by the cube of 2 and become 8 m3. The original cube (1 m sides) has a surface area to volume ratio of 6:1. The larger (2 m sides) cube has a surface area to volume ratio of (24/8) 3:1. As the dimensions increase, the volume will continue to grow faster than the surface area. Thus the square–cube law. This principle applies to all solids. Applications Engineering When a physical object maintains the same density and is scaled up, its volume and mass are increased by the cube of the multiplier while its surface area increases only by the square of the same multiplier. This would mean that when the larger version of the object is acceler
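A minimal Python sketch of the scaling computation above; the cube dimensions are those of the worked example, and the function names are illustrative rather than part of any standard library.

    def surface_area(side):
        return 6 * side ** 2   # a cube has 6 square faces

    def volume(side):
        return side ** 3

    l1, l2 = 1.0, 2.0                            # original and scaled side lengths, meters
    print(surface_area(l2) / surface_area(l1))   # 4.0, the square of the multiplier
    print(volume(l2) / volume(l1))               # 8.0, the cube of the multiplier
    print(surface_area(l1) / volume(l1))         # 6.0, the 6:1 ratio of the small cube
    print(surface_area(l2) / volume(l2))         # 3.0, the 3:1 ratio of the large cube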
https://en.wikipedia.org/wiki/Mindat.org
Mindat.org is a non-commercial interactive online database covering minerals across the world. Originally created by Jolyon Ralph as a private project in 1993, it was launched as a community-editable website in October 2000. It is operated by the Hudson Institute of Mineralogy. History Mindat was started in 1993 as a personal database project by Jolyon Ralph. He then developed further versions as a Microsoft Windows application before launching a community-editable database website on 10 October 2000. After further development brought it to the Internet stage, Mindat.org became an outreach program of the Hudson Institute of Mineralogy, a 501(c)(3) not-for-profit educational foundation incorporated in the state of New York. To address the increasing open-data needs of individual researchers and organizations, Mindat.org has started to build and maintain an open data API for data query and access, an effort that has received support from the National Science Foundation. Description Mindat claims to be the largest mineral database and mineralogical reference website on the Internet. It is used by professional mineralogists, geologists, and amateur mineral collectors alike, and is referenced in many publications. The database covers a variety of topics: scientific articles, field trip reports, mining history, advice for collectors, book reviews, mineral entries, localities, and photographs. Much of the information is from published literature, but registered editors may add and revise information and references. Editors are vetted for their expertise, in order to ensure accuracy. References have to be provided in the proper format, and editors own the copyright of data that they have contributed. The data is organized into mineral and locality pages, with links that allow for easy navigation among the pages. The pages about minerals include individual minerals and rocks. Naming conventions adhere to the various standards and definitions as published by the In
https://en.wikipedia.org/wiki/R%C3%A9nyi%20entropy
In information theory, the Rényi entropy is a quantity that generalizes various notions of entropy, including Hartley entropy, Shannon entropy, collision entropy, and min-entropy. The Rényi entropy is named after Alfréd Rényi, who looked for the most general way to quantify information while preserving additivity for independent events. In the context of fractal dimension estimation, the Rényi entropy forms the basis of the concept of generalized dimensions. The Rényi entropy is important in ecology and statistics as an index of diversity. The Rényi entropy is also important in quantum information, where it can be used as a measure of entanglement. In the Heisenberg XY spin chain model, the Rényi entropy as a function of $\alpha$ can be calculated explicitly because it is an automorphic function with respect to a particular subgroup of the modular group. In theoretical computer science, the min-entropy is used in the context of randomness extractors. Definition The Rényi entropy of order $\alpha$, where $0 < \alpha < \infty$ and $\alpha \neq 1$, is defined as $H_\alpha(X) = \frac{1}{1-\alpha} \log \left( \sum_{i=1}^{n} p_i^\alpha \right)$. It is further defined at $\alpha = 0, 1, \infty$ as $H_\alpha(X) = \lim_{\gamma \to \alpha} H_\gamma(X)$. Here, $X$ is a discrete random variable with possible outcomes in the set $\mathcal{A} = \{x_1, x_2, \ldots, x_n\}$ and corresponding probabilities $p_i = \Pr(X = x_i)$ for $i = 1, \ldots, n$. The resulting unit of information is determined by the base of the logarithm, e.g. shannon for base 2, or nat for base e. If the probabilities are $p_i = 1/n$ for all $i = 1, \ldots, n$, then all the Rényi entropies of the distribution are equal: $H_\alpha(X) = \log n$. In general, for all discrete random variables $X$, $H_\alpha(X)$ is a non-increasing function in $\alpha$. Applications often exploit the following relation between the Rényi entropy and the p-norm of the vector of probabilities: $H_\alpha(X) = \frac{\alpha}{1-\alpha} \log \left( \| P \|_\alpha \right)$. Here, the discrete probability distribution $P = (p_1, \ldots, p_n)$ is interpreted as a vector in $\mathbb{R}^n$ with $p_i \geq 0$ and $\sum_{i=1}^{n} p_i = 1$. The Rényi entropy for any $\alpha \geq 0$ is Schur concave. Special cases As $\alpha$ approaches zero, the Rényi entropy increasingly weighs all events with nonzero probability more equally, regardless of their probabilities. In the limit for $\alpha \to 0$, the Rényi entropy is just the logarithm of the size of the support of $X$. The lim
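The special cases are easy to check numerically. The following Python sketch (the example distribution is hypothetical) computes $H_\alpha$ for several orders and shows the non-increasing behavior in $\alpha$ noted above.

    import math

    def renyi_entropy(p, alpha, base=2):
        # Handles the limiting orders 0 (Hartley), 1 (Shannon) and
        # infinity (min-entropy) separately; otherwise applies the formula.
        if alpha == 0:
            return math.log(sum(1 for pi in p if pi > 0), base)
        if alpha == 1:
            return -sum(pi * math.log(pi, base) for pi in p if pi > 0)
        if alpha == float("inf"):
            return -math.log(max(p), base)
        s = sum(pi ** alpha for pi in p if pi > 0)
        return math.log(s, base) / (1 - alpha)

    p = [0.5, 0.25, 0.125, 0.125]          # an example distribution
    for a in (0, 0.5, 1, 2, float("inf")):
        print(a, renyi_entropy(p, a))      # the values decrease as alpha grows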
https://en.wikipedia.org/wiki/Small-signal%20model
Small-signal modeling is a common analysis technique in electronics engineering used to approximate the behavior of electronic circuits containing nonlinear devices with linear equations. It is applicable to electronic circuits in which the AC signals (i.e., the time-varying currents and voltages in the circuit) are small relative to the DC bias currents and voltages. A small-signal model is an AC equivalent circuit in which the nonlinear circuit elements are replaced by linear elements whose values are given by the first-order (linear) approximation of their characteristic curve near the bias point. Overview Many of the electrical components used in simple electric circuits, such as resistors, inductors, and capacitors, are linear. Circuits made with these components, called linear circuits, are governed by linear differential equations, and can be solved easily with powerful mathematical frequency domain methods such as the Laplace transform. In contrast, many of the components that make up electronic circuits, such as diodes, transistors, integrated circuits, and vacuum tubes, are nonlinear; that is, the current through them is not proportional to the voltage, and the output of two-port devices like transistors is not proportional to their input. The relationship between current and voltage in them is given by a curved line on a graph, their characteristic curve (I-V curve). In general, these circuits do not have simple mathematical solutions. To calculate the current and voltage in them generally requires either graphical methods or simulation on computers using electronic circuit simulation programs like SPICE. However, in some electronic circuits such as radio receivers, telecommunications, sensors, instrumentation and signal processing circuits, the AC signals are "small" compared to the DC voltages and currents in the circuit. In these, perturbation theory can be used to derive an approximate AC equivalent circuit which is linear, allowing the AC beh
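As a concrete illustration of replacing a nonlinear element by its first-order approximation near the bias point, the Python sketch below linearizes a diode's exponential I-V curve; the Shockley-equation parameters used are typical illustrative values, not taken from this article.

    import math

    # Shockley diode model: I(V) = Is * (exp(V / Vt) - 1). The saturation
    # current Is and thermal voltage Vt are typical illustrative values.
    Is, Vt = 1e-12, 0.026

    def diode_current(v):
        return Is * (math.exp(v / Vt) - 1.0)

    V_bias = 0.65                          # DC bias point, volts
    # First-order approximation: the slope dI/dV at the bias point is the
    # small-signal conductance g; its reciprocal is the small-signal resistance.
    g = Is * math.exp(V_bias / Vt) / Vt

    v_ac = 0.005                           # a "small" 5 mV signal around the bias
    exact = diode_current(V_bias + v_ac) - diode_current(V_bias)
    linear = g * v_ac                      # prediction of the linear small-signal model
    print(1 / g, exact, linear)            # the linear model tracks the exact change closely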
https://en.wikipedia.org/wiki/Homaro%20Cantu
Homaro "Omar" Cantu Jr. (September 23, 1976 – April 14, 2015) was an American chef and inventor known for his use of molecular gastronomy. As a child, Cantu was fascinated with science and engineering. While working in a fast food restaurant, he discovered the similarities between science and cooking and decided to become a chef. In 1999, he was hired by his idol, Chicago chef Charlie Trotter. In 2003, Cantu became the first chef of Moto, which he later purchased. Through Moto, Cantu explored his unusual ideas about cooking including edible menus, carbonated fruit, and food cooked with a laser. Initially seen as a novelty only, Moto eventually earned critical praise and, in 2012, a Michelin star. Cantu's second restaurant, iNG, and his coffee house, Berrista, focused on the use of "miracle berries" to make sour food taste sweet. He was working on opening a brewery called Crooked Fork at the time of his suicide in 2015. In addition to being a chef, Cantu was a media personality, appearing regularly on TV shows, and an inventor. In 2010, he produced and co-hosted a show called Future Food. Through his media appearances, he advocated for an end to world hunger and thought his edible paper creation and the miracle berry could play a significant role in that goal. Cantu volunteered his time and money to a variety of charities and patented several food gadgets. Early life Cantu was born in Tacoma, Washington, on September 23, 1976. His father was a fabrication engineer and Cantu developed a passion for science and engineering at a young age. He disassembled the family lawn mower three times to learn how it worked, and his "Christmas gifts would wind up in a million pieces." A self-described problem child, Cantu grew up in Portland, Oregon. From the age of six to nine, he was homeless. He would later credit the homelessness for his inspiration to make food and become a social entrepreneur. At the age of twelve, Cantu was nearly jailed for starting a large fire near hi
https://en.wikipedia.org/wiki/Holy%20trinity%20%28cooking%29
The "holy trinity" in Cajun cuisine and Louisiana Creole cuisine is the base for several dishes in the regional cuisines of Louisiana and consists of onions, bell peppers and celery. The preparation of Cajun/Creole dishes such as crawfish étouffée, gumbo, and jambalaya all start from this base. Variants use garlic, parsley, or shallots in addition to the three trinity ingredients. The addition of garlic to the holy trinity is sometimes referred to as adding "the pope." The holy trinity is the Cajun and Louisiana Creole variant of mirepoix; traditional mirepoix is two parts onions, one part carrots, and one part celery, whereas the holy trinity is typically one or two parts onions, one part green bell pepper, and one part celery. It is also an evolution of the Spanish sofrito, which contains onion, garlic, bell peppers, and tomatoes. Origin of the name The name is an allusion to the Christian doctrine of the Trinity. The term is first attested in 1981 and was probably popularized by Paul Prudhomme. See also Mirepoix Sofrito Soffritto Epis References Cajun cuisine Food ingredients
https://en.wikipedia.org/wiki/Branko%20Gr%C3%BCnbaum
Branko Grünbaum (; 2 October 1929 – 14 September 2018) was a Croatian-born mathematician of Jewish descent and a professor emeritus at the University of Washington in Seattle. He received his Ph.D. in 1957 from Hebrew University of Jerusalem in Israel. Life Grünbaum was born in Osijek, then part of the Kingdom of Yugoslavia, on 2 October 1929. His father was Jewish and his mother was Catholic, so during World War II the family survived the Holocaust by living at his Catholic grandmother's home. After the war, as a high school student, he met Zdenka Bienenstock, a Jew who had lived through the war hidden in a convent while the rest of her family were killed. Grünbaum became a student at the University of Zagreb, but grew disenchanted with the communist ideology of the Socialist Federal Republic of Yugoslavia, applied for emigration to Israel, and traveled with his family and Zdenka to Haifa in 1949. In Israel, Grünbaum found a job in Tel Aviv, but in 1950 returned to the study of mathematics, at the Hebrew University of Jerusalem. He earned a master's degree in 1954 and in the same year married Zdenka, who continued as a master's student in chemistry. He served a tour of duty as an operations researcher in the Israeli Air Force beginning in 1955, and he and Zdenka had the first of their two sons in 1956. He completed his Ph.D. in 1957; his dissertation concerned convex geometry and was supervised by Aryeh Dvoretzky. After finishing his military service in 1958, Grünbaum and his family came to the US so that Grünbaum could become a postdoctoral researcher at the Institute for Advanced Study. He then became a visiting researcher at the University of Washington in 1960. He agreed to return to Israel as a lecturer at the Hebrew University, but his plans were disrupted by the Israeli authorities determining that he was not a Jew (because his mother was not Jewish) and annulling his marriage; he and Zdenka remarried in Seattle before their return. Grünbaum remained a
https://en.wikipedia.org/wiki/Negamax
Negamax search is a variant form of minimax search that relies on the zero-sum property of a two-player game. This algorithm relies on the fact that max(a, b) = −min(−a, −b) to simplify the implementation of the minimax algorithm. More precisely, the value of a position to player A in such a game is the negation of the value to player B. Thus, the player on move looks for a move that maximizes the negation of the value resulting from the move: this successor position must by definition have been valued by the opponent. The reasoning of the previous sentence works regardless of whether A or B is on move. This means that a single procedure can be used to value both positions. This is a coding simplification over minimax, which requires that A selects the move with the maximum-valued successor while B selects the move with the minimum-valued successor. It should not be confused with negascout, an algorithm to compute the minimax or negamax value quickly by clever use of alpha–beta pruning discovered in the 1980s. Note that alpha–beta pruning is itself a way to compute the minimax or negamax value of a position quickly by avoiding the search of certain uninteresting positions. Most adversarial search engines are coded using some form of negamax search. Negamax base algorithm NegaMax operates on the same game trees as those used with the minimax search algorithm. Each node and root node in the tree are game states (such as game board configuration) of a two player game. Transitions to child nodes represent moves available to a player who is about to play from a given node. The negamax search objective is to find the node score value for the player who is playing at the root node. The pseudocode below shows the negamax base algorithm, with a configurable limit for the maximum search depth:

    function negamax(node, depth, color) is
        if depth = 0 or node is a terminal node then
            return color × the heuristic value of node
        value := −∞
        for each child of node do
            value := max(value, −negamax(child, depth − 1, −color))
        return value
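For readers who want something executable, here is a direct Python transcription of the pseudocode above, run on a small hypothetical game tree; the tree, its leaf scores, and the dictionary representation are made up for illustration.

    import math

    def negamax(node, depth, color, tree, score):
        # tree maps a node to its children; score holds leaf values from
        # the viewpoint of the player represented by color = +1.
        children = tree.get(node, [])
        if depth == 0 or not children:
            return color * score[node]
        value = -math.inf
        for child in children:
            value = max(value, -negamax(child, depth - 1, -color, tree, score))
        return value

    tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
    score = {"a1": 3, "a2": -2, "b1": 5, "b2": -4}
    print(negamax("root", 2, +1, tree, score))
    # -2: the root player prefers the branch whose worst reply is least bad,
    # i.e. max(min(3, -2), min(5, -4)), exactly the minimax value.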
https://en.wikipedia.org/wiki/Naimark%27s%20problem
Naimark's problem is a question in functional analysis asked by Mark Naimark in 1948. It asks whether every C*-algebra that has only one irreducible $*$-representation up to unitary equivalence is isomorphic to the C*-algebra of compact operators on some (not necessarily separable) Hilbert space. The problem has been solved in the affirmative for special cases (specifically for separable and Type-I C*-algebras). Akemann and Weaver (2004) used the $\diamondsuit$-principle to construct a C*-algebra with $\aleph_1$ generators that serves as a counterexample to Naimark's problem. More precisely, they showed that the existence of a counterexample generated by $\aleph_1$ elements is independent of the axioms of Zermelo–Fraenkel set theory and the Axiom of Choice (ZFC). Whether Naimark's problem itself is independent of ZFC remains unknown. See also List of statements undecidable in ZFC Gelfand–Naimark Theorem References Conjectures C*-algebras Independence results Unsolved problems in mathematics
https://en.wikipedia.org/wiki/Bass%20reflex
A bass reflex system (also known as a ported, vented box or reflex port) is a type of loudspeaker enclosure that uses a port (hole) or vent cut into the cabinet and a section of tubing or pipe affixed to the port. This port enables the sound from the rear side of the diaphragm to increase the efficiency of the system at low frequencies as compared to a typical sealed- or closed-box loudspeaker or an infinite baffle mounting. A reflex port is the distinctive feature of this popular enclosure type. The design approach enhances the reproduction of the lowest frequencies generated by the woofer or subwoofer. The port generally consists of one or more tubes or pipes mounted in the front (baffle) or rear face of the enclosure. Depending on the exact relationship between driver parameters, the enclosure volume (and filling if any), and the tube cross-section and length, the efficiency can be substantially improved over the performance of a similarly sized sealed-box enclosure. Explanation Unlike closed-box loudspeakers, which are nearly airtight, a bass reflex system has an opening called a port or vent cut into the cabinet, generally consisting of a pipe or duct (typically circular or rectangular cross section). The air mass in this opening resonates with the "springiness" of the air inside the enclosure in exactly the same fashion as the air in a bottle resonates when a current of air is directed across the opening. Another metaphor often used is to think of the air like a spring or rubber band. The frequency at which the box/port system resonates, known as the Helmholtz resonance, depends upon the effective length and cross sectional area of the duct, the internal volume of the enclosure, and the speed of sound in air. In the early years of ported speakers, speaker designers had to do extensive experimentation to determine the ideal diameter of the port and length of the port tube or pipe; however, more recently, there are numerous tables and computer programs that
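The dependence of the tuning frequency on duct area, duct length, and box volume can be sketched with the standard Helmholtz resonance formula $f_b = \frac{c}{2\pi} \sqrt{\frac{A}{V L_{\mathrm{eff}}}}$. The Python example below uses illustrative dimensions, and the end-correction factor applied to the port length is a common rule of thumb rather than a value from this article.

    import math

    c = 343.0                 # speed of sound in air, m/s
    V = 0.050                 # internal box volume: 50 litres
    r = 0.025                 # port radius, m
    L = 0.15                  # physical port length, m

    A = math.pi * r ** 2      # port cross-sectional area
    L_eff = L + 2 * 0.85 * r  # end correction at both ends (rule of thumb)

    f_b = (c / (2 * math.pi)) * math.sqrt(A / (V * L_eff))
    print(round(f_b, 1), "Hz")    # about 25 Hz for these dimensions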
https://en.wikipedia.org/wiki/Catalan%20vault
The Catalan vault (), also called thin-tile vault, Catalan turn, Catalan arch, boveda ceiling (Spanish bóveda 'vault'), or timbrel vault, is a type of low brickwork arch forming a vaulted ceiling that often supports a floor above. It is constructed by laying a first layer of light bricks lengthwise "in space", without centering or formwork, and has a much gentler curve than most other methods of construction. Of Roman origin, it is a traditional form in regions around the Mediterranean including Catalonia (where it is widely used), and has spread around the world in more recent times through the work of Catalan architects such as Antoni Gaudí and Josep Puig i Cadafalch, and the Valencian architect Rafael Guastavino. A study on the stability of the Catalan vault is kept at the archive of the Institute of Catalan Studies, where it is said to have been entrusted by Josep Puig i Cadafalch. Though it is popularly called the Catalan vault, this construction method is found throughout the Mediterranean and the invention of the term "Catalan vault" occurred in 1904 at an architectural congress in Madrid. The technique was brought to New Spain (colonial Mexico), and is still used in parts of contemporary Mexico. In the United States Valencian architect and builder Rafael Guastavino introduced the technique to the United States in the 1880s, where it is called Guastavino tile. It is used in many major buildings across the United States, including the Boston Public Library, the New York Grand Central Terminal, and many others. See also List of architectural vaults References External links Ramage, Michael. "Construction of a Vault". details the process of constructing a six-foot by six-foot vault. Architecture in Spain Catalan architecture Spanish Colonial architecture in Mexico Arches and vaults Ceilings Brick buildings and structures Building engineering
https://en.wikipedia.org/wiki/Online%20school
An online school (virtual school, e-school, or cyber-school) teaches students entirely or primarily online or through the Internet. It has been defined as "education that uses one or more technologies to deliver instruction to students who are separated from the instructor and to support regular and substantive interaction between the students." Online education exists all around the world and is used for all levels of education (K-12, high school/secondary school, college, or graduate school). This type of learning enables individuals to earn transferable credits, take recognized examinations, and advance to the next level of education over the Internet. Virtual education is most commonly used in high school and college. Students aged 30 or older tend to study in online programs at higher rates. This group represents 41% of the online education population, while 35.5% of students ages 24–29 and 24.5% of students ages 15–23 participate in virtual education. Virtual education is becoming increasingly used worldwide. There are currently more than 4,700 colleges and universities that provide online courses to their students. In 2015, more than 6 million students were taking at least one course online; this number grew by 3.9% from the previous year. 29.7% of all higher education students are taking at least one distance course. The total number of students studying on a campus exclusively dropped by 931,317 people between the years 2012 and 2015. Experts say that because the number of students studying at the college level is growing, there will also be an increase in the number of students enrolled in distance learning. Instructional models vary, ranging from distance learning types which provide study materials for independent self-paced study, to live, interactive classes where students communicate with a teacher in a class group lesson. Class sizes range widely from a small group of 6 pupils or students to hundreds in a virtual school. The courses that are i
https://en.wikipedia.org/wiki/Balanced%20set
In linear algebra and related areas of mathematics a balanced set, circled set or disk in a vector space (over a field $\mathbb{K}$ with an absolute value function $|\cdot|$) is a set $S$ such that $aS \subseteq S$ for all scalars $a$ satisfying $|a| \leq 1$. The balanced hull or balanced envelope of a set $S$ is the smallest balanced set containing $S$. The balanced core of a set $S$ is the largest balanced set contained in $S$. Balanced sets are ubiquitous in functional analysis because every neighborhood of the origin in every topological vector space (TVS) contains a balanced neighborhood of the origin and every convex neighborhood of the origin contains a balanced convex neighborhood of the origin (even if the TVS is not locally convex). This neighborhood can also be chosen to be an open set or, alternatively, a closed set. Definition Let $X$ be a vector space over the field $\mathbb{K}$ of real or complex numbers. Notation If $S$ is a set, $a$ is a scalar, and $B \subseteq \mathbb{K}$ then let $aS = \{as : s \in S\}$ and $BS = \{bs : b \in B, s \in S\}$ and for any $0 \leq r \leq \infty$, let $B_r = \{a \in \mathbb{K} : |a| < r\}$ and $B_{\leq r} = \{a \in \mathbb{K} : |a| \leq r\}$ denote, respectively, the open ball and the closed ball of radius $r$ in the scalar field $\mathbb{K}$ centered at $0$. Every balanced subset of the field $\mathbb{K}$ is of the form $B_{\leq r}$ or $B_r$ for some $0 \leq r \leq \infty$. Balanced set A subset $S$ of $X$ is called a balanced set or balanced if it satisfies any of the following equivalent conditions: Definition: $as \in S$ for all $s \in S$ and all scalars $a$ satisfying $|a| \leq 1$. $aS \subseteq S$ for all scalars $a$ satisfying $|a| \leq 1$. $B_{\leq 1} S = S$, where $B_{\leq 1} = \{a \in \mathbb{K} : |a| \leq 1\}$. For every $s \in S$, $S \cap \mathbb{K} s = B_{\leq 1} (S \cap \mathbb{K} s)$; here $\mathbb{K} s$ is a $0$ (if $s = 0$) or $1$ (if $s \neq 0$) dimensional vector subspace of $X$. If $R := S \cap \mathbb{K} s$ then the above equality becomes $R = B_{\leq 1} R$, which is exactly the previous condition for a set to be balanced. Thus, $S$ is balanced if and only if for every $s \in S$, $S \cap \mathbb{K} s$ is a balanced set (according to any of the previous defining conditions). For every 1-dimensional vector subspace $Y$ of $\operatorname{span} S$, $S \cap Y$ is a balanced set (according to any defining condition other than this one). For every $s \in S$, there exists some $0 \leq r \leq \infty$ such that $S \cap \mathbb{K} s = B_r s$ or $S \cap \mathbb{K} s = B_{\leq r} s$. If $S$ is a convex set then this list may be extended to include: $aS \subseteq S$ for all scalars $a$ satisfying $|a| = 1$. If $\mathbb{K} = \mathbb{R}$ then this list may be extended to include: $S$ is symmetric (meaning $-S = S$) and $[0, 1] S \subseteq S$. Balanced hull The balanced hull of a subset $S$ of $X$, denoted by $\operatorname{bal} S$, is
https://en.wikipedia.org/wiki/Absolutely%20convex%20set
In mathematics, a subset C of a real or complex vector space is said to be absolutely convex or disked if it is convex and balanced (some people use the term "circled" instead of "balanced"), in which case it is called a disk. The disked hull or the absolute convex hull of a set is the intersection of all disks containing that set. Definition A subset $S$ of a real or complex vector space $X$ is called a disk and is said to be disked, absolutely convex, and convex balanced if any of the following equivalent conditions is satisfied: $S$ is a convex and balanced set. For any scalars $a$ and $b$, if $|a| + |b| \leq 1$ then $aS + bS \subseteq S$. For all scalars $a$, $b$, and $c$, if $|a| + |b| \leq |c|$ then $aS + bS \subseteq cS$. For any scalars $a_1, \ldots, a_n$ and $c$, if $|a_1| + \cdots + |a_n| \leq |c|$ then $a_1 S + \cdots + a_n S \subseteq cS$. For any scalars $a_1, \ldots, a_n$, if $|a_1| + \cdots + |a_n| \leq 1$ then $a_1 S + \cdots + a_n S \subseteq S$. The smallest convex (respectively, balanced) subset of $X$ containing a given set is called the convex hull (respectively, the balanced hull) of that set and is denoted by $\operatorname{co} S$ (respectively, $\operatorname{bal} S$). Similarly, the disked hull, the absolute convex hull, and the convex balanced hull of a set $S$ is defined to be the smallest disk (with respect to subset inclusion) containing $S$. The disked hull of $S$ will be denoted by $\operatorname{disk} S$ or $\operatorname{cobal} S$ and it is equal to each of the following sets: $\operatorname{co} (\operatorname{bal} S)$, which is the convex hull of the balanced hull of $S$; thus, $\operatorname{cobal} S = \operatorname{co} (\operatorname{bal} S)$. In general, $\operatorname{cobal} S \neq \operatorname{bal} (\operatorname{co} S)$ is possible, even in finite dimensional vector spaces. The intersection of all disks containing $S$. Sufficient conditions The intersection of arbitrarily many absolutely convex sets is again absolutely convex; however, unions of absolutely convex sets need not be absolutely convex anymore. If $D$ is a disk in $X$ then $D$ is absorbing in $X$ if and only if $\operatorname{span} D = X$. Properties If $D$ is an absorbing disk in a vector space $X$ then there exists an absorbing disk $E$ in $X$ such that $E + E \subseteq D$. If $D$ is a disk and $r$ and $s$ are scalars then $sD = |s| D$ and $(rD) \cap (sD) = (\min \{|r|, |s|\}) D$. The absolutely convex hull of a bounded set in a locally convex topological vector space is again bounded. If $D$ is a bounded disk in a TVS $X$ and if $x_\bullet = (x_i)_{i=1}^\infty$ is a sequence in $D$, then the partial sums $s_\bullet = (s_n)_{n=1}^\infty$ are Cauchy, where for all $n$, $s_n := \sum_{i=1}^n 2^{-i} x_i$. In particular, if in addition $D$ is a sequentially complete subset of $X$, then this series converges in $X$ to some point of $D$. The convex balanced hull
https://en.wikipedia.org/wiki/Music%20Construction%20Set
Will Harvey's Music Construction Set (MCS) is a music composition notation program designed by Will Harvey for the Apple II and published by Electronic Arts in 1983. Harvey wrote the original Apple II version in assembly language when he was 15 and in high school. MCS was conceived as a tool to add music to his previously published game, an abstract shooter called Lancaster for the Apple II. Music Construction Set was ported to the Atari 8-bit family, Commodore 64, IBM PC (as a booter), and the Atari ST. Two years later, in 1986, Will Harvey released a port for the 16-bit Apple IIGS, utilizing its advanced sound. Also that year, a redesigned version for the Amiga and Macintosh was released as Deluxe Music Construction Set. Overview With MCS, a user can create musical composition via a graphical user interface, a novel concept at the time of its release. Users can drag and drop notes right onto the staff, play back their creations through the computer's speakers, and print them out. The program comes with a few popular songs as samples. Most versions of this program require the users to use a joystick to create their songs, note by note. The original Apple II version supports the Mockingboard expansion card for higher fidelity sound output. In addition, use of the Mockingboard allows the musical staff to scroll along with the music as notes are played. Without it, the Apple II can not update the display while playback is in progress. Ports Electronic Arts ported MCS from the original Apple II version to the Atari 8-bit family, IBM PC, and the Commodore 64. The Atari 8-bit and Commodore 64 versions use the multi-channel audio hardware of those systems. The IBM PC version allows output audio via the IBM PC Model 5150's cassette port, so 4-voice music can be sent to a stereo system. It also takes advantage of the 3-voice sound chip built into the IBM PCjr and Tandy 1000. The Apple IIGS version was done by the original programmer, Will Harvey, in 1986. This port ta
https://en.wikipedia.org/wiki/Disproportionation
In chemistry, disproportionation, sometimes called dismutation, is a redox reaction in which one compound of intermediate oxidation state converts to two compounds, one of higher and one of lower oxidation states. The reverse of disproportionation, such as when a compound in an intermediate oxidation state is formed from precursors of lower and higher oxidation states, is called comproportionation, also known as synproportionation. More generally, the term can be applied to any desymmetrizing reaction where two molecules of one type react to give one each of two different types: 2A -> A' + A'' This expanded definition is not limited to redox reactions, but also includes some molecular autoionization reactions, such as the self-ionization of water. History The first disproportionation reaction to be studied in detail was: 2 Sn^2+ -> Sn^4+ + Sn This was examined using tartrates by Johan Gadolin in 1788. In the Swedish version of his paper he called it . Examples Mercury(I) chloride disproportionates upon UV-irradiation: Hg2Cl2 -> HgCl2 + Hg Phosphorous acid disproportionates upon heating to give phosphoric acid and phosphine: 4 H3PO3 -> 3 H3PO4 + PH3 Desymmetrizing reactions are sometimes referred to as disproportionation, as illustrated by the thermal degradation of bicarbonate: 2 HCO3- -> CO3^2- + H2CO3 The oxidation numbers remain constant in this acid-base reaction. Another variant on disproportionation is radical disproportionation, in which two radicals form an alkene and an alkane: 2 CH3-CH2• -> H2C=CH2 + H3C-CH3 Disproportionation of sulfur intermediates by microorganisms is widely observed in sediments. 4 S^0 + 4 H2O -> 3 H2S + SO4^2- + 2 H+ 3 S^0 + 2 FeOOH -> SO4^2- + 2 FeS + 2 H+ 4 SO3^2- + 2 H+ -> H2S + 3 SO4^2- Chlorine gas reacts with dilute sodium hydroxide to form sodium chloride, sodium chlorate and water. The ionic equation for this reaction is as follows: 3 Cl2 + 6 OH- -> 5 Cl- + ClO3- + 3 H2O The chlori
https://en.wikipedia.org/wiki/Lenstra%E2%80%93Lenstra%E2%80%93Lov%C3%A1sz%20lattice%20basis%20reduction%20algorithm
The Lenstra–Lenstra–Lovász (LLL) lattice basis reduction algorithm is a polynomial time lattice reduction algorithm invented by Arjen Lenstra, Hendrik Lenstra and László Lovász in 1982. Given a basis $B = \{\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_d\}$ with $n$-dimensional integer coordinates, for a lattice $L$ (a discrete subgroup of $\mathbb{R}^n$) with $d \leq n$, the LLL algorithm calculates an LLL-reduced (short, nearly orthogonal) lattice basis in time $O(d^5 n \log^3 B)$, where $B$ is the largest length of $\mathbf{b}_i$ under the Euclidean norm, that is, $B = \max(\|\mathbf{b}_1\|_2, \|\mathbf{b}_2\|_2, \ldots, \|\mathbf{b}_d\|_2)$. The original applications were to give polynomial-time algorithms for factorizing polynomials with rational coefficients, for finding simultaneous rational approximations to real numbers, and for solving the integer linear programming problem in fixed dimensions. LLL reduction The precise definition of LLL-reduced is as follows: Given a basis $B = \{\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_n\}$, define its Gram–Schmidt process orthogonal basis $B^* = \{\mathbf{b}_1^*, \mathbf{b}_2^*, \ldots, \mathbf{b}_n^*\}$ and the Gram–Schmidt coefficients $\mu_{i,j} = \frac{\langle \mathbf{b}_i, \mathbf{b}_j^* \rangle}{\langle \mathbf{b}_j^*, \mathbf{b}_j^* \rangle}$ for any $1 \leq j < i \leq n$. Then the basis $B$ is LLL-reduced if there exists a parameter $\delta$ in $(0.25, 1]$ such that the following holds: (size-reduced) For $1 \leq j < i \leq n$: $|\mu_{i,j}| \leq 0.5$. By definition, this property guarantees the length reduction of the ordered basis. (Lovász condition) For k = 2,3,..,n: $\delta \|\mathbf{b}_{k-1}^*\|^2 \leq \|\mathbf{b}_k^*\|^2 + \mu_{k,k-1}^2 \|\mathbf{b}_{k-1}^*\|^2$. Here, estimating the value of the $\delta$ parameter, we can conclude how well the basis is reduced. Greater values of $\delta$ lead to stronger reductions of the basis. Initially, A. Lenstra, H. Lenstra and L. Lovász demonstrated the LLL-reduction algorithm for $\delta = 3/4$. Note that although LLL-reduction is well-defined for $\delta = 1$, the polynomial-time complexity is guaranteed only for $\delta$ in $(0.25, 1)$. The LLL algorithm computes LLL-reduced bases. There is no known efficient algorithm to compute a basis in which the basis vectors are as short as possible for lattices of dimensions greater than 4. However, an LLL-reduced basis is nearly as short as possible, in the sense that there are absolute bounds $c_i > 1$ such that the first basis vector is no more than $c_1$ times as long as a shortest vector in the lattice, the second basis vector is likewise within $c_2$ of the second successive minimum, and so o
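The following Python sketch implements the algorithm described above in a deliberately simple way: it recomputes the Gram–Schmidt data after every change instead of updating it incrementally, and it uses exact rational arithmetic to avoid floating-point issues. It is an unoptimized illustration under the assumption of a linearly independent integer input basis, not a production implementation (libraries such as fplll exist for that).

    from fractions import Fraction

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def gram_schmidt(basis):
        # Returns the orthogonal vectors b* and the coefficients mu
        # (assumes the input vectors are linearly independent).
        n = len(basis)
        ortho = [list(b) for b in basis]
        mu = [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            for j in range(i):
                mu[i][j] = dot(basis[i], ortho[j]) / dot(ortho[j], ortho[j])
                ortho[i] = [x - mu[i][j] * y for x, y in zip(ortho[i], ortho[j])]
        return ortho, mu

    def lll(basis, delta=Fraction(3, 4)):
        basis = [[Fraction(x) for x in b] for b in basis]
        n = len(basis)
        ortho, mu = gram_schmidt(basis)
        k = 1
        while k < n:
            for j in range(k - 1, -1, -1):       # size reduction step
                q = round(mu[k][j])
                if q != 0:
                    basis[k] = [x - q * y for x, y in zip(basis[k], basis[j])]
                    ortho, mu = gram_schmidt(basis)
            # Lovász condition: ||b*_k||^2 >= (delta - mu_{k,k-1}^2) ||b*_{k-1}||^2
            if dot(ortho[k], ortho[k]) >= (delta - mu[k][k - 1] ** 2) * dot(ortho[k - 1], ortho[k - 1]):
                k += 1
            else:
                basis[k - 1], basis[k] = basis[k], basis[k - 1]
                ortho, mu = gram_schmidt(basis)
                k = max(k - 1, 1)
        return [[int(x) for x in b] for b in basis]

    print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
    # [[0, 1, 0], [1, 0, 1], [-1, 0, 2]]: short, nearly orthogonal vectors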
https://en.wikipedia.org/wiki/Cross-entropy
In information theory, the cross-entropy between two probability distributions $p$ and $q$ over the same underlying set of events measures the average number of bits needed to identify an event drawn from the set if a coding scheme used for the set is optimized for an estimated probability distribution $q$, rather than the true distribution $p$. Definition The cross-entropy of the distribution $q$ relative to a distribution $p$ over a given set is defined as follows: $H(p, q) = -\operatorname{E}_p[\log q]$, where $\operatorname{E}_p[\cdot]$ is the expected value operator with respect to the distribution $p$. The definition may be formulated using the Kullback–Leibler divergence $D_{\mathrm{KL}}(p \parallel q)$, divergence of $p$ from $q$ (also known as the relative entropy of $p$ with respect to $q$): $H(p, q) = H(p) + D_{\mathrm{KL}}(p \parallel q)$, where $H(p)$ is the entropy of $p$. For discrete probability distributions $p$ and $q$ with the same support $\mathcal{X}$ this means $H(p, q) = -\sum_{x \in \mathcal{X}} p(x) \log q(x)$. The situation for continuous distributions is analogous. We have to assume that $p$ and $q$ are absolutely continuous with respect to some reference measure $r$ (usually $r$ is a Lebesgue measure on a Borel σ-algebra). Let $P$ and $Q$ be probability density functions of $p$ and $q$ with respect to $r$. Then $-\int_{\mathcal{X}} P(x) \log Q(x) \, \mathrm{d}r(x) = \operatorname{E}_p[-\log Q]$ and therefore $H(p, q) = -\int_{\mathcal{X}} P(x) \log Q(x) \, \mathrm{d}r(x)$. NB: The notation $H(p, q)$ is also used for a different concept, the joint entropy of $p$ and $q$. Motivation In information theory, the Kraft–McMillan theorem establishes that any directly decodable coding scheme for coding a message to identify one value $x_i$ out of a set of possibilities $\{x_1, \ldots, x_n\}$ can be seen as representing an implicit probability distribution $q(x_i) = \left(\tfrac{1}{2}\right)^{\ell_i}$ over $\{x_1, \ldots, x_n\}$, where $\ell_i$ is the length of the code for $x_i$ in bits. Therefore, cross-entropy can be interpreted as the expected message-length per datum when a wrong distribution $q$ is assumed while the data actually follows a distribution $p$. That is why the expectation is taken over the true probability distribution $p$ and not $q$. Indeed the expected message-length under the true distribution $p$ is $\operatorname{E}_p[\ell] = -\sum_{x_i} p(x_i) \log_2 q(x_i) = H(p, q)$. Estimation There are many situations where cross-entropy needs to be measured but the distribution of $p$ is unknown. An example is language modeling, where a model is created based o
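A small Python sketch of the discrete case; the two distributions are made-up examples chosen so the numbers come out round, and the last line checks the decomposition $H(p, q) = H(p) + D_{\mathrm{KL}}(p \parallel q)$ numerically.

    import math

    def entropy(p, base=2):
        return -sum(px * math.log(px, base) for px in p.values() if px > 0)

    def cross_entropy(p, q, base=2):
        # H(p, q) = -sum_x p(x) log q(x); q must be nonzero wherever p is.
        return -sum(px * math.log(q[x], base) for x, px in p.items() if px > 0)

    p = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}   # "true" distribution
    q = {"a": 0.25, "b": 0.25, "c": 0.25, "d": 0.25}    # assumed (wrong) distribution

    print(entropy(p))                         # 1.75 bits: optimal average code length
    print(cross_entropy(p, q))                # 2.0 bits: average length using q's code
    print(cross_entropy(p, q) - entropy(p))   # 0.25 bits = KL divergence D(p||q)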
https://en.wikipedia.org/wiki/Tomahawk%20%28geometry%29
The tomahawk is a tool in geometry for angle trisection, the problem of splitting an angle into three equal parts. The boundaries of its shape include a semicircle and two line segments, arranged in a way that resembles a tomahawk, a Native American axe. The same tool has also been called the shoemaker's knife, but that name is more commonly used in geometry to refer to a different shape, the arbelos (a curvilinear triangle bounded by three mutually tangent semicircles). Description The basic shape of a tomahawk consists of a semicircle (the "blade" of the tomahawk), with a line segment the length of the radius extending along the same line as the diameter of the semicircle (the tip of which is the "spike" of the tomahawk), and with another line segment of arbitrary length (the "handle" of the tomahawk) perpendicular to the diameter. In order to make it into a physical tool, its handle and spike may be thickened, as long as the line segment along the handle continues to be part of the boundary of the shape. Unlike a related trisection using a carpenter's square, the other side of the thickened handle does not need to be made parallel to this line segment. In some sources a full circle rather than a semicircle is used, or the tomahawk is also thickened along the diameter of its semicircle, but these modifications make no difference to the action of the tomahawk as a trisector. Trisection To use the tomahawk to trisect an angle, it is placed with its handle line touching the apex of the angle, with the blade inside the angle, tangent to one of the two rays forming the angle, and with the spike touching the other ray of the angle. One of the two trisecting lines then lies on the handle segment, and the other passes through the center point of the semicircle. If the angle to be trisected is too sharp relative to the length of the tomahawk's handle, it may not be possible to fit the tomahawk into the angle in this way, but this difficulty may be worked around by re
https://en.wikipedia.org/wiki/Recursive%20filter
In signal processing, a recursive filter is a type of filter which reuses one or more of its outputs as an input. This feedback typically results in an unending impulse response (commonly referred to as infinite impulse response (IIR)), characterised by either exponentially growing, decaying, or sinusoidal signal output components. However, a recursive filter does not always have an infinite impulse response. Some implementations of the moving average filter are recursive filters, but with a finite impulse response. Non-recursive filter example: y[n] = 0.5x[n − 1] + 0.5x[n]. Recursive filter example: y[n] = 0.5y[n − 1] + 0.5x[n]. Examples of recursive filters Kalman filter Signal processing External links IIR Filter Design on Google Play Store
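Both example filters are easy to compare in Python. The sketch below applies each to a unit impulse, assuming the system starts at rest (all earlier inputs and outputs zero); the non-recursive response dies out after two samples, while the recursive one decays geometrically and never becomes exactly zero.

    def recursive_filter(x):
        # y[n] = 0.5 * y[n-1] + 0.5 * x[n], starting from rest (y[-1] = 0)
        y, prev = [], 0.0
        for xn in x:
            prev = 0.5 * prev + 0.5 * xn
            y.append(prev)
        return y

    def nonrecursive_filter(x):
        # y[n] = 0.5 * x[n-1] + 0.5 * x[n], with x[-1] = 0
        y, prev = [], 0.0
        for xn in x:
            y.append(0.5 * prev + 0.5 * xn)
            prev = xn
        return y

    impulse = [1.0] + [0.0] * 6
    print(nonrecursive_filter(impulse))   # [0.5, 0.5, 0.0, ...]: finite response
    print(recursive_filter(impulse))      # [0.5, 0.25, 0.125, ...]: infinite response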
https://en.wikipedia.org/wiki/Cascade%20Communications
Cascade Communications Corporation was a manufacturer of communications equipment based in Westford, Massachusetts. History Cascade was founded by Gururaj Deshpande in 1990, and was led by CEO Dan Smith, VP of Sales Mike Champa and CFO Paul Blondin. Cascade made compact Frame Relay and Asynchronous Transfer Mode (ATM) communication switches that were sold to telecommunication service providers worldwide. Frame Relay service was the primary data service used by companies in the mid-1990s to create secure internal communication networks between separate sites, and Cascade's equipment carried an estimated 70% of the world's Internet traffic during this time. Their most important direct competitor was StrataCom, which was acquired by Cisco Systems in 1996 for US $4B. In 1997, Ascend Communications acquired Cascade Communications for US $3.7 billion, to move into the ATM and Frame Relay markets. Ascend was later acquired by Lucent Technologies in 1999 in one of the largest mergers in communications equipment history (US $24 billion). The Cascade portion of Ascend's business was more interesting to Lucent than the modem termination business that comprised the rest of Ascend. Legacy Both Desh Deshpande and CEO Dan Smith profited handsomely from the acquisition, as did hundreds of Cascade employees. In addition, Cascade was notable for invigorating the telecommunications startup culture in Massachusetts in the mid-1990s. Cascade alumni were founders or key contributors to many other financially successful Boston-area telecom companies in the late 1990s. Deshpande and Smith went on to found Sycamore Networks, and VP of Sales Mike Champa went on to found Omnia Communications and later was the CEO of Winphoria Networks. References 1997 mergers and acquisitions American companies established in 1990 American companies disestablished in 1997 Companies based in Massachusetts Computer companies established in 1990 Computer companies disestablished in 1997 Defunct computer compa
https://en.wikipedia.org/wiki/Second%20moment%20of%20area
The second moment of area, or second area moment, or quadratic moment of area and also known as the area moment of inertia, is a geometrical property of an area which reflects how its points are distributed with regard to an arbitrary axis. The second moment of area is typically denoted with either an $I$ (for an axis that lies in the plane of the area) or with a $J$ (for an axis perpendicular to the plane). In both cases, it is calculated with a multiple integral over the object in question. Its dimension is L (length) to the fourth power. Its unit of dimension, when working with the International System of Units, is meters to the fourth power, m4, or inches to the fourth power, in4, when working in the Imperial System of Units or the US customary system. In structural engineering, the second moment of area of a beam is an important property used in the calculation of the beam's deflection and the calculation of stress caused by a moment applied to the beam. In order to maximize the second moment of area, a large fraction of the cross-sectional area of an I-beam is located at the maximum possible distance from the centroid of the I-beam's cross-section. The planar second moment of area provides insight into a beam's resistance to bending due to an applied moment, force, or distributed load perpendicular to its neutral axis, as a function of its shape. The polar second moment of area provides insight into a beam's resistance to torsional deflection, due to an applied moment parallel to its cross-section, as a function of its shape. Different disciplines use the term moment of inertia (MOI) to refer to different moments. It may refer to either of the planar second moments of area (often $I_x = \iint_A y^2 \, dA$ or $I_y = \iint_A x^2 \, dA$, with respect to some reference plane), or the polar second moment of area ($I_z = \iint_A r^2 \, dA$, where r is the distance to some reference axis). In each case the integral is over all the infinitesimal elements of area, dA, in some two-dimensional cross-section. In physics, moment of inertia is strict
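As a worked instance of these integrals, a solid rectangle of width $b$ and depth $h$ has the standard closed form $I_x = \frac{b h^3}{12}$ about its centroidal axis. The Python sketch below uses illustrative dimensions.

    def rect_second_moment(b, h):
        # I_x = b * h**3 / 12 for a solid rectangle about its centroidal x-axis
        return b * h ** 3 / 12.0

    b, h = 0.1, 0.2                        # cross-section 100 mm x 200 mm
    print(rect_second_moment(b, h))        # ~6.67e-05 m^4
    print(rect_second_moment(b, 2 * h) / rect_second_moment(b, h))   # 8.0
    # Doubling the depth multiplies I_x by 8, which is why placing material
    # far from the centroid (as in an I-beam flange) resists bending so well.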
https://en.wikipedia.org/wiki/Alpha%20%28finance%29
Alpha is a measure of the active return on an investment, the performance of that investment compared with a suitable market index. An alpha of 1% means the investment's return on investment over a selected period of time was 1% better than the market during that same period; a negative alpha means the investment underperformed the market. Alpha, along with beta, is one of two key coefficients in the capital asset pricing model used in modern portfolio theory and is closely related to other important quantities such as standard deviation, R-squared and the Sharpe ratio. In modern financial markets, where index funds are widely available for purchase, alpha is commonly used to judge the performance of mutual funds and similar investments. As these funds include various fees normally expressed in percent terms, the fund has to maintain an alpha greater than its fees in order to provide positive gains compared with an index fund. Historically, the vast majority of traditional funds have had negative alphas, which has led to a flight of capital to index funds and non-traditional hedge funds. It is also possible to analyze a portfolio of investments and calculate a theoretical performance, most commonly using the capital asset pricing model (CAPM). Returns on that portfolio can be compared with the theoretical returns, in which case the measure is known as Jensen's alpha. This is useful for non-traditional or highly focused funds, where a single stock index might not be representative of the investment's holdings. Definition in capital asset pricing model The alpha coefficient ($\alpha_i$) is a parameter in the single-index model (SIM). It is the intercept of the security characteristic line (SCL), that is, the coefficient of the constant in a market model regression: $\mathrm{SCL}: R_{i,t} - R_f = \alpha_i + \beta_i (R_{M,t} - R_f) + \varepsilon_{i,t}$, where the following inputs are: $R_{i,t}$: the realized return (on the portfolio), $R_{M,t}$: the market return, $R_f$: the risk-free rate of return, and $\beta_i$: the beta of the portfolio. It can be shown that in an efficient market, th
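A minimal Python sketch of estimating Jensen's alpha from realized returns: beta is fitted by ordinary least squares on excess returns, and alpha is the regression intercept. The return series are invented for illustration.

    def jensens_alpha(r_p, r_m, r_f):
        ex_p = [x - r_f for x in r_p]            # excess portfolio returns
        ex_m = [x - r_f for x in r_m]            # excess market returns
        mp = sum(ex_p) / len(ex_p)
        mm = sum(ex_m) / len(ex_m)
        beta = sum((a - mp) * (b - mm) for a, b in zip(ex_p, ex_m)) / \
               sum((b - mm) ** 2 for b in ex_m)  # least-squares slope
        return mp - beta * mm, beta              # intercept (alpha) and beta

    r_p = [0.020, -0.010, 0.030, 0.015]          # portfolio returns per period
    r_m = [0.015, -0.020, 0.025, 0.010]          # market returns per period
    alpha, beta = jensens_alpha(r_p, r_m, r_f=0.002)
    print(alpha, beta)   # a positive alpha: return beyond what beta explains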
https://en.wikipedia.org/wiki/Legendre%20sieve
In mathematics, the Legendre sieve, named after Adrien-Marie Legendre, is the simplest method in modern sieve theory. It applies the concept of the Sieve of Eratosthenes to find upper or lower bounds on the number of primes within a given set of integers. Because it is a simple extension of Eratosthenes' idea, it is sometimes called the Legendre–Eratosthenes sieve. Legendre's identity The central idea of the method is expressed by the following identity, sometimes called the Legendre identity: $S(A, P) = \sum_{d \mid P} \mu(d) |A_d|$, where $A$ is a set of integers, $P$ is a product of distinct primes, $\mu$ is the Möbius function, and $A_d$ is the set of integers in $A$ divisible by $d$, and $S(A, P)$ is defined to be: $S(A, P) = |\{ n \in A : \gcd(n, P) = 1 \}|$, i.e. $S(A, P)$ is the count of numbers in $A$ with no factors common with $P$. Note that in the most typical case, $A$ is all integers less than or equal to some real number $X$, $P$ is the product of all primes less than or equal to some integer $z < X$, and then the Legendre identity becomes: $S(A, P) = \lfloor X \rfloor - \sum_{p \leq z} \left\lfloor \frac{X}{p} \right\rfloor + \sum_{p < q \leq z} \left\lfloor \frac{X}{pq} \right\rfloor - \cdots$ (where $\lfloor X \rfloor$ denotes the floor function). In this example the fact that the Legendre identity is derived from the Sieve of Eratosthenes is clear: the first term is the number of integers below $X$, the second term removes the multiples of all primes, the third term adds back the multiples of two primes (which were miscounted by being "crossed out twice") but also adds back the multiples of three primes once too many, and so on until all $2^{\pi(z)}$ (where $\pi(z)$ denotes the number of primes below $z$) combinations of primes have been covered. Once $S(A, P)$ has been calculated for this special case, it can be used to bound $\pi(X)$ using the expression $\pi(X) - \pi(z) + 1 \leq S(A, P)$, which follows immediately from the definition of $S(A, P)$. Limitations The Legendre sieve has a problem with fractional parts of terms accumulating into a large error, which means the sieve only gives very weak bounds in most cases. For this reason it is almost never used in practice, having been superseded by other techniques such as the Brun sieve and Selberg sieve. However, since these more powerful siev
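The identity translates directly into code. The Python sketch below evaluates $S(A, P)$ for $A = \{1, \ldots, X\}$ by inclusion–exclusion over squarefree divisors of $P$ (the values of $X$ and the prime list are illustrative), and checks the result by brute force.

    from math import prod
    from itertools import combinations

    def legendre_count(X, primes):
        # S(A, P) = sum over d | P of mu(d) * floor(X / d), where A = {1..X}
        # and P is the product of the given distinct primes.
        total = 0
        for k in range(len(primes) + 1):
            sign = (-1) ** k                  # Moebius value of a product of k distinct primes
            for combo in combinations(primes, k):
                total += sign * (X // prod(combo, start=1))
        return total

    X, small_primes = 30, [2, 3, 5]           # sieve {1..30} by the primes up to z = 5
    print(legendre_count(X, small_primes))    # 8: counts {1, 7, 11, 13, 17, 19, 23, 29}
    print(sum(1 for n in range(1, X + 1)      # brute-force check
              if all(n % p for p in small_primes)))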
https://en.wikipedia.org/wiki/BUNCH
The BUNCH was the nickname for the group of mainframe computer competitors of IBM in the 1970s. The name is derived from the names of the five companies: Burroughs, UNIVAC, NCR, Control Data Corporation (CDC), and Honeywell. These companies were grouped together because the market share of IBM was much higher than that of all of its competitors put together. During the 1960s, IBM and these five computer manufacturers, along with RCA and General Electric, had been known as "IBM and the Seven Dwarfs". The description of IBM's competitors changed after GE's 1970 sale of its computer business to Honeywell and RCA's 1971 sale of its computer business to Sperry (who owned UNIVAC), leaving only five "dwarfs". The companies' initials thus lent themselves to a new acronym, BUNCH. International Data Corporation estimated in 1984 that BUNCH would receive less than $2 billion of an estimated $11.4 billion in mainframe computer sales that year, with IBM receiving most of the remainder. IBM so dominated the mainframe market that observers expected the BUNCH to merge or exit the industry. BUNCH followed IBM into the microcomputer market with their own PC compatibles, but unlike that company did not quickly adjust to retail sales of smaller computers. Digital Equipment Corporation (DEC), at one point the second largest in the industry, was joined to BUNCH as DeBUNCH. Fate of BUNCH Burroughs & UNIVAC In September 1986, after Burroughs purchased Sperry (the parent company of UNIVAC), the name of the company was changed to Unisys. NCR In 1982, NCR became involved in open systems architecture, starting with the UNIX-powered TOWER 16/32, and placed more emphasis on computers smaller than mainframes. NCR was acquired by AT&T Corporation in 1991. A restructuring of AT&T in 1996 led to its re-establishment on 1 January 1997 as a separate company. In 1998, NCR sold its computer hardware manufacturing assets to Solectron and ceased to produce general-purpose computer systems. Control Data Corpo
https://en.wikipedia.org/wiki/Puzznic
is a tile-matching video game developed and released by Taito for arcades in 1989. It was ported to the Nintendo Entertainment System, Game Boy, PC Engine, X68000, Amiga, Atari ST, Amstrad CPC, Commodore 64, MS-DOS, and ZX Spectrum between 1990 and 1991. Home computer ports were handled by Ocean Software; the 2003 PlayStation port was handled by Altron. The arcade and FM Towns versions had adult content, showing a naked woman at the end of the level; this was removed in the international arcade release (but not the US one) and other home ports. A completed Apple IIGS version was cancelled after Taito America shut down. Puzznic bears strong graphical and some gameplay similarities to Taito's own Flipull/Plotting. Reception In Japan, Game Machine listed Puzznic on their December 1, 1989 issue as being the fourth-most-successful table arcade unit of the month. The game was ranked the 34th best game of all time by Amiga Power. Legacy Many clones share the same basic gameplay of Puzznic but have added extra features over the years: Puzztrix on the web and on PC, Addled and Germinal on the iPhone, Puzzled on mobile phones. A clone for the PC, Brix, was released by Epic MegaGames in 1992. For Android devices, a clone called PuzzMagic! appeared in 2015. An iPhone clone also available with the title Gem Panic. Blockbusterz Hard Puzzle Game for iOS appeared in 2019. References External links History of Puzznic Website puzznic.com 1989 video games Amiga games Amstrad CPC games Altron games Arcade video games Atari ST games Commodore 64 games Eroge FM Towns games Game Boy games Information Global Service games Mobile games MSX games NEC PC-9801 games Nintendo Entertainment System games PlayStation (console) games Puzzle video games X68000 games TurboGrafx-16 games Ocean Software games Taito arcade games Taito L System games ZX Spectrum games Video games developed in Japan
https://en.wikipedia.org/wiki/Schizocoely
Schizocoely (adjective forms: schizocoelous or schizocoelic) is a process by which some animal embryos develop. The schizocoely mechanism occurs when secondary body cavities (coeloms) are formed by splitting a solid mass of mesodermal embryonic tissue. All schizocoelomates are protostomians and they show holoblastic, spiral, determinate cleavage. Etymology The term schizocoely derives from the Ancient Greek words (), meaning 'to split', and (), meaning 'cavity'. This refers to the fact that fluid-filled body cavities are formed by splitting of mesodermal cells. Taxonomic distribution Animals called protostomes develop through schizocoely, which is why they are also known as schizocoelomates. Schizocoelous development often occurs in protostomes, as in the phyla Mollusca, Annelida, and Arthropoda. Deuterostomes usually exhibit enterocoely; however, some deuterostomes like enteropneusts can exhibit schizocoely as well. Embryonic development The term refers to the order of organization of cells in the gastrula leading to development of the coelom. In mollusks, annelids, and arthropods, the mesoderm (the middle germ layer) forms as a solid mass of migrated cells from the single layer of the gastrula. The new mesoderm then splits, creating the pocket-like cavity of the coelom. See also Deuterostome Development of the digestive system Developmental biology Embryology Embryonic development Ontogeny Protostome References External links Enterocoelous and schizocoelous conditions - UTM.edu Developmental biology Embryology
https://en.wikipedia.org/wiki/Invertase
β-Fructofuranosidase is an enzyme that catalyzes the hydrolysis (breakdown) of the table sugar sucrose into fructose and glucose. Alternative names for β-fructofuranosidase include invertase, saccharase, glucosucrase, β-fructosidase, invertin, sucrase, fructosylinvertase, alkaline invertase, and acid invertase. The resulting mixture of fructose and glucose is called inverted sugar syrup. Related to invertases are sucrases. Invertases and sucrases hydrolyze sucrose to give the same mixture of glucose and fructose. Invertase is a glycoprotein that hydrolyses (cleaves) the non-reducing terminal β-fructofuranoside residues. Invertases cleave the O-C(fructose) bond, whereas sucrases cleave the O-C(glucose) bond. Invertase cleaves the α-1,2-glycosidic bond of sucrose. For industrial use, invertase is usually derived from yeast. It is also synthesized by bees, which use it to make honey from nectar. The optimal temperature, at which the reaction rate is greatest, is 60 °C, and the optimum pH is 4.5. Typically, sugar is inverted with sulfuric acid. Invertase is produced by various organisms such as yeasts, fungi, bacteria, higher plants, and animals, for example: Saccharomyces cerevisiae, Saccharomyces carlsbergensis, S. pombe, Aspergillus spp., Penicillium chrysogenum, Azotobacter spp., Lactobacillus spp., Pseudomonas spp., etc. Applications and examples Invertase is used to produce inverted sugar syrup. Because invertase is expensive, it may be preferable to make fructose from glucose using glucose isomerase instead. Chocolate-covered candies, other cordials, and fondant candies include invertase, which liquefies the sugar. Inhibition Urea acts as a pure non-competitive inhibitor of invertase, presumably by breaking the intramolecular hydrogen bonds contributing to the tertiary structure of the enzyme. Structure and function Reaction pathway Invertase works to catalyze the cleavage of sucrose into its two monosaccharides, g
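Since urea is described above as a pure non-competitive inhibitor, the corresponding rate law is the standard Michaelis-Menten expression with the maximum velocity scaled down by the inhibitor term: v = Vmax[S] / ((Km + [S])(1 + [I]/Ki)). The short C sketch below evaluates that rate law; note that the Vmax, Km, and Ki values are hypothetical placeholders for illustration, not measured constants for invertase.

#include <stdio.h>

/* Sketch of Michaelis-Menten kinetics under pure non-competitive
   inhibition (apparent Vmax drops, Km is unchanged):
       v = Vmax * [S] / ( (Km + [S]) * (1 + [I]/Ki) )
   All constants below are made-up placeholders. */

int main(void)
{
    const double vmax = 100.0;  /* arbitrary units; hypothetical */
    const double km   = 25.0;   /* mM; hypothetical */
    const double ki   = 40.0;   /* mM; hypothetical inhibitor constant */
    const double s    = 50.0;   /* sucrose concentration, mM */

    for (double i = 0.0; i <= 80.0; i += 20.0) {
        double v = vmax * s / ((km + s) * (1.0 + i / ki));
        printf("[I] = %5.1f mM  ->  v = %6.2f\n", i, v);
    }
    return 0;
}

Running it shows the rate falling as inhibitor is added while the substrate dependence keeps the same half-saturation point, which is the signature of non-competitive inhibition.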
https://en.wikipedia.org/wiki/Passive%20optical%20network
A passive optical network (PON) is a fiber-optic telecommunications technology for delivering broadband network access to end-customers. Its architecture implements a point-to-multipoint topology in which a single optical fiber serves multiple endpoints by using unpowered (passive) fiber optic splitters to divide the fiber bandwidth among the endpoints. Passive optical networks are often referred to as the last mile between an Internet service provider (ISP) and its customers; many fiber ISPs prefer this technology. Components and characteristics A passive optical network consists of an optical line terminal (OLT) at the service provider's central office (hub), passive (non-power-consuming) optical splitters, and a number of optical network units (ONUs) or optical network terminals (ONTs), which are near end users. A PON reduces the amount of fiber and central office equipment required compared with point-to-point architectures. A passive optical network is a form of fiber-optic access network. In most cases, downstream signals are broadcast to all premises sharing the same feeder fiber; encryption can prevent eavesdropping. Upstream signals are combined using a multiple access protocol, usually time-division multiple access (TDMA). History Passive optical networks were first proposed by British Telecommunications in 1987. Two major standards groups, the Institute of Electrical and Electronics Engineers (IEEE) and the Telecommunication Standardization Sector of the International Telecommunication Union (ITU-T), develop standards for the technology, along with a number of other industry organizations. The Society of Cable Telecommunications Engineers (SCTE) also specified radio frequency over glass for carrying signals over a passive optical network. FSAN and ITU Starting in 1995, work on fiber to the home architectures was done by the Full Service Access Network (FSAN) working group, formed by major telecommunications service providers and system vendors. The International Telecommun
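A rough feel for the point-to-multipoint trade-off can be had with two numbers: an ideal 1:N passive splitter divides the optical power N ways, a 10*log10(N) dB loss, and the shared downstream capacity divides among the N ONUs. The C sketch below tabulates both for common split ratios; the 2.5 Gbit/s figure is only an illustrative GPON-like downstream rate, and excess splitter loss, connectors, and fiber attenuation are ignored.

#include <stdio.h>
#include <math.h>

/* Back-of-envelope PON sizing sketch. An ideal 1:N splitter costs
   10*log10(N) dB; real splitters add excess loss, ignored here.
   The 2.5 Gbit/s downstream figure is an illustrative assumption. */

int main(void)
{
    const double downstream_gbps = 2.5;   /* illustrative shared capacity */
    const int splits[] = { 8, 16, 32, 64 };

    for (int i = 0; i < 4; i++) {
        int n = splits[i];
        double loss_db = 10.0 * log10((double)n);
        double share   = downstream_gbps / n;  /* average per-ONU share */
        printf("1:%-2d split: %5.2f dB ideal loss, %.3f Gbit/s avg per ONU\n",
               n, loss_db, share);
    }
    return 0;
}

For example, a 1:32 split costs about 15 dB of the optical power budget before any fiber loss, which is one reason split ratios are bounded in practice.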
https://en.wikipedia.org/wiki/Far%20pointer
In a segmented architecture computer, a far pointer is a pointer which includes a segment selector, making it possible to point to addresses outside of the default segment. Comparison and arithmetic on far pointers are problematic: there can be several different segment-offset address pairs pointing to one physical address. In 16-bit x86 For example, in an Intel 8086, as well as in later processors running 16-bit code, a far pointer has two parts: a 16-bit segment value and a 16-bit offset value. A linear address is obtained by shifting the segment value four bits to the left and then adding the offset value. Hence the effective address is 20 bits wide (actually 21 bits, which led to the address wraparound and the Gate A20). There can be up to 4096 different segment-offset address pairs pointing to one physical address. To compare two far pointers, they must first be converted (normalized) to their 20-bit linear representation. On C compilers targeting the 8086 processor family, far pointers were declared using a non-standard "far" qualifier. For example, char far *p; defined a far pointer to a char. The difficulty of normalizing far pointers could be avoided with the non-standard "huge" qualifier. Example of a far pointer:

#include <stdio.h>

int main()
{
    char far *p = (char far *)0x55550005; /* segment 0x5555, offset 0x0005 */
    char far *q = (char far *)0x53332225; /* segment 0x5333, offset 0x2225 */

    *p = 80;          /* write 80 through p */
    (*p)++;           /* increment it to 81 */
    printf("%d", *q); /* read the same byte back through q */
    return 0;
}

The above program prints 81, because both pointers refer to the same physical location: physical address = (segment value) * 0x10 + (offset value). The location pointed to by p is 0x5555 * 0x10 + 0x0005 = 0x55555, and the location pointed to by q is 0x5333 * 0x10 + 0x2225 = 0x55555. So p and q both point to the same location, 0x55555. References
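The normalization step mentioned above is easy to model with ordinary integers; an actual far pointer requires a 16-bit x86 compiler, so the sketch below only mimics the arithmetic. It converts both segment:offset pairs from the example to their 20-bit linear form and to a normalized form (offset reduced to the range 0x0-0xF), which is what makes equal physical addresses compare equal.

#include <stdio.h>
#include <stdint.h>

/* Models real-mode far-pointer arithmetic with plain integers. */

static uint32_t linear(uint16_t seg, uint16_t off)
{
    return ((uint32_t)seg << 4) + off;  /* 20-bit physical address */
}

int main(void)
{
    uint32_t lp = linear(0x5555, 0x0005);  /* p from the example above */
    uint32_t lq = linear(0x5333, 0x2225);  /* q from the example above */

    /* Normalized form: all but the low 4 bits move into the segment. */
    printf("p: linear %05X, normalized %04X:%04X\n",
           (unsigned)lp, (unsigned)(lp >> 4), (unsigned)(lp & 0xF));
    printf("q: linear %05X, normalized %04X:%04X\n",
           (unsigned)lq, (unsigned)(lq >> 4), (unsigned)(lq & 0xF));
    printf("same physical address: %s\n", lp == lq ? "yes" : "no");
    return 0;
}

Both pointers normalize to 5555:0005, so a byte-wise comparison of the normalized pairs gives the correct equality result that a raw comparison of the original segment:offset pairs would miss.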
https://en.wikipedia.org/wiki/Root%20hair
Root hairs, or absorbent hairs, are outgrowths of epidermal cells, specialized cells at the tip of a plant root. They are lateral extensions of a single cell and are only rarely branched. They are found in the region of maturation of the root. Root hair cells improve plant water absorption by increasing the root's surface-area-to-volume ratio, which allows the root hair cell to take in more water. The large vacuole inside root hair cells makes this intake much more efficient. Root hairs are also important for nutrient uptake, as they are the main interface between plants and mycorrhizal fungi. Function The function of all root hairs is to collect the water and mineral nutrients in the soil to be sent throughout the plant. In roots, most water absorption happens through the root hairs. The length of root hairs allows them to penetrate between soil particles and prevents harmful bacterial organisms from entering the plant through the xylem vessels. Increasing the surface area of these hairs makes plants more efficient at absorbing nutrients and interacting with microbes. As root hair cells do not carry out photosynthesis, they do not contain chloroplasts. Importance Root hairs form an important surface, as they are needed to absorb most of the water and nutrients needed by the plant. They are also directly involved in the formation of root nodules in legume plants. The root hairs curl around the bacteria, which allows for the formation of an infection thread into the dividing cortical cells to form the nodule. Because of their large surface area, the active uptake of water and minerals through root hairs is highly efficient. Root hair cells also secrete acids (e.g., malic and citric acid), which solubilize minerals by changing their oxidation state, making the ions easier to absorb. Formation Root hair cells vary between 15 and 17 micrometers in diameter, and between 80 and 1,500 micrometers in length. Root hairs are found only in the zone of maturation, also called the zone of differentiation.
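The surface-area-to-volume argument can be made concrete with the dimensions just quoted: for a long cylinder the ratio reduces to roughly 2/r once the end caps are neglected, so thinner extensions always win. The C sketch below is only a back-of-envelope illustration using the diameters given above; the thicker root axis it compares against is a hypothetical figure chosen for contrast.

#include <stdio.h>

/* For a long cylinder, surface area / volume = 2*pi*r*L / (pi*r^2*L)
   = 2/r once end caps are ignored. Radii below come from the 15-17
   micrometer diameters quoted in the text. */

int main(void)
{
    const double hair_radius_um[] = { 7.5, 8.5 };
    for (int i = 0; i < 2; i++) {
        double r = hair_radius_um[i];
        printf("root hair, radius %.1f um: SA/V = %.3f per um\n", r, 2.0 / r);
    }

    /* Hypothetical comparison: a root axis 0.5 mm across (r = 250 um). */
    printf("root axis, radius 250 um: SA/V = %.3f per um\n", 2.0 / 250.0);
    return 0;
}

The hair gives roughly thirty times more absorptive surface per unit of cell volume than the thicker axis, which is the geometric point behind the prose above.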
https://en.wikipedia.org/wiki/Suzuki%20group
In the mathematical discipline known as group theory, the phrase Suzuki group refers to either: the Suzuki sporadic group, Suz or Sz, a sporadic simple group of order 2^13 · 3^7 · 5^2 · 7 · 11 · 13 = 448,345,497,600, discovered by Suzuki in 1969; or one of an infinite family of Suzuki groups of Lie type discovered by Suzuki.
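The quoted order is easy to sanity-check by multiplying out the prime factorization; the product fits comfortably in a 64-bit integer. A minimal check in C:

#include <stdio.h>

/* Verifies 2^13 * 3^7 * 5^2 * 7 * 11 * 13 = 448,345,497,600. */

int main(void)
{
    unsigned long long order = 1ULL << 13;   /* 2^13 */
    for (int i = 0; i < 7; i++) order *= 3;  /* 3^7 */
    order *= 5ULL * 5 * 7 * 11 * 13;         /* 5^2 * 7 * 11 * 13 */
    printf("%llu\n", order);                 /* prints 448345497600 */
    return 0;
}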
https://en.wikipedia.org/wiki/Might%20and%20Magic%20II%3A%20Gates%20to%20Another%20World
Might and Magic II: Gates to Another World (also known as Might and Magic Book Two: Gates to Another World) is a role-playing video game developed and published by New World Computing in 1988. It is the sequel to Might and Magic Book One: The Secret of the Inner Sanctum. Gameplay After the events of Might and Magic Book One: The Secret of the Inner Sanctum, the adventurers who helped Corak defeat Sheltem on VARN take the "Gates to Another World" located in VARN to the land of CRON (Central Research Observational Nacelle). The land of CRON is facing many problems brought on by the encroachment of Sheltem, and the adventurers must travel through CRON, the four elemental planes, and even through time to help Corak stop Sheltem from flinging CRON into its sun. While in many ways Might and Magic II is an updated version of the original, the improved graphics help greatly with navigation, and the interface added several functions that facilitated gameplay, such as a "delay" selector which allowed for faster or slower response times, and a spinning cursor when input was required - all features lacking in Might and Magic Book One. As with Might and Magic Book One, the player used up to six player-generated characters at a time, and a total of twenty-six characters could be created, who thereafter stayed at the various inns across CRON. To maintain game continuity, it was possible to "import" the characters developed in the first game. Additionally, Might and Magic II became the first game in the series to utilize "hirelings", predefined characters who could extend the party to eight active characters. Hirelings were controlled like regular characters but required payment each day; pay increased with level. Other new features include two new character classes, an increased number of spells, the introduction of class "upgrade" quests, and more than twice the number of mini-quests. Also added were "secondary skills" such as mountaineering (necessary for travelling mountai
https://en.wikipedia.org/wiki/Will%20Harvey
Will Harvey (born 1967) is an American software developer and Silicon Valley entrepreneur. He wrote Music Construction Set (1984) for the Apple II, the first commercial sheet music processor for home computers. Music Construction Set was ported to other systems by its publisher, Electronic Arts. He wrote two games for the Apple IIGS: Zany Golf (1988) and The Immortal (1990). Harvey founded two consumer virtual world Internet companies: IMVU, an instant messaging company, and There, Inc., an MMOG company. Career Education After high school, Harvey studied computer science at Stanford University, where he earned his Bachelor's, Master's, and Ph.D. During this period, he started two game development companies and published several additional software products through Electronic Arts. Early games Harvey went to the Nueva School for middle school and attended Crystal Springs Uplands School for high school. The first game Harvey developed was an abstract shooter for the Apple II called Lancaster (1983). Harvey contacted the president of Sirius, but the game was eventually released by the minor publisher Silicon Valley Systems in 1983 and was not successful. The need for music in this game led to his development of 1984's Music Construction Set, published by Electronic Arts. It was a tremendous success. Following the success of Music Construction Set, Harvey ported Atari Games's Marble Madness to the Apple II and the Commodore 64 (1986) and developed two original games, Zany Golf (1988) and The Immortal (1990). All three projects were for Electronic Arts. The Immortal and Zany Golf were written for the Apple IIGS and ported to other systems by EA. Other companies In the mid-1990s, Harvey founded Sandcastle, an Internet technology company that addressed the network latency problems underlying virtual worlds and massively multiplayer games. Sandcastle was acquired by Adobe Systems. Harvey was one of the chief technical architects at San Francisco game studio Rocket
https://en.wikipedia.org/wiki/Alternating%20sign%20matrix
In mathematics, an alternating sign matrix is a square matrix of 0s, 1s, and −1s such that the sum of each row and column is 1 and the nonzero entries in each row and column alternate in sign. These matrices generalize permutation matrices and arise naturally when using Dodgson condensation to compute a determinant. They are also closely related to the six-vertex model with domain wall boundary conditions from statistical mechanics. They were first defined by William Mills, David Robbins, and Howard Rumsey in the former context. Examples A permutation matrix is an alternating sign matrix, and an alternating sign matrix is a permutation matrix if and only if no entry equals −1. An example of an alternating sign matrix that is not a permutation matrix is $\begin{pmatrix} 0 & 1 & 0 \\ 1 & -1 & 1 \\ 0 & 1 & 0 \end{pmatrix}$. Alternating sign matrix theorem The alternating sign matrix theorem states that the number of n × n alternating sign matrices is $\prod_{k=0}^{n-1}\frac{(3k+1)!}{(n+k)!} = \frac{1!\,4!\,7!\cdots(3n-2)!}{n!\,(n+1)!\cdots(2n-1)!}$. The first few terms in this sequence for n = 0, 1, 2, 3, … are 1, 1, 2, 7, 42, 429, 7436, 218348, … . This theorem was first proved by Doron Zeilberger in 1992. In 1995, Greg Kuperberg gave a short proof based on the Yang–Baxter equation for the six-vertex model with domain-wall boundary conditions, which uses a determinant calculation due to Anatoli Izergin. In 2005, a third proof was given by Ilse Fischer using what is called the operator method. Razumov–Stroganov problem In 2001, A. Razumov and Y. Stroganov conjectured a connection between the O(1) loop model, the fully packed loop model (FPL), and ASMs. This conjecture was proved in 2010 by Cantini and Sportiello. References Further reading Bressoud, David M., Proofs and Confirmations, MAA Spectrum, Mathematical Association of America, Washington, D.C., 1999. Bressoud, David M. and Propp, James, How the alternating sign matrix conjecture was solved, Notices of the American Mathematical Society, 46 (1999), 637–646. Mills, William H., Robbins, David P., and Rumsey, Howard Jr., Proof of the Macdonald conjecture, Inventiones Mathematicae, 66 (
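The product formula is easy to evaluate exactly for small n. Below is a small C sketch that accumulates the product as a reduced fraction, cross-cancelling with gcd so the 64-bit intermediates do not overflow; it reproduces the sequence quoted above through A(7) = 218348.

#include <stdio.h>

/* Counts n-by-n alternating sign matrices with the product formula
   A(n) = prod_{k=0}^{n-1} (3k+1)!/(n+k)!, kept as a reduced fraction. */

static unsigned long long gcd(unsigned long long a, unsigned long long b)
{
    while (b) { unsigned long long t = a % b; a = b; b = t; }
    return a;
}

static unsigned long long fact(int m)
{
    unsigned long long f = 1;
    for (int i = 2; i <= m; i++) f *= i;  /* exact only up to 20! */
    return f;
}

int main(void)
{
    for (int n = 0; n <= 7; n++) {
        unsigned long long num = 1, den = 1;
        for (int k = 0; k < n; k++) {
            unsigned long long a = fact(3 * k + 1), b = fact(n + k), g;
            g = gcd(a, den); a /= g; den /= g;  /* cross-cancel first */
            g = gcd(num, b); num /= g; b /= g;  /* to avoid overflow  */
            num *= a; den *= b;
            g = gcd(num, den); num /= g; den /= g;
        }
        printf("A(%d) = %llu\n", n, num / den); /* den reduces to 1 */
    }
    return 0;
}

The output 1, 1, 2, 7, 42, 429, 7436, 218348 matches the sequence in the text; larger n would need factorials beyond 20! and hence a big-integer type.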
https://en.wikipedia.org/wiki/Mutationism
Mutationism is one of several alternatives to evolution by natural selection that have existed both before and after the publication of Charles Darwin's 1859 book On the Origin of Species. In the theory, mutation was the source of novelty, creating new forms and new species, potentially instantaneously, in sudden jumps. This was envisaged as driving evolution, which was thought to be limited by the supply of mutations. Before Darwin, biologists commonly believed in saltationism, the possibility of large evolutionary jumps, including immediate speciation. For example, in 1822 Étienne Geoffroy Saint-Hilaire argued that species could be formed by sudden transformations, or what would later be called macromutation. Darwin opposed saltation, insisting on gradualism in evolution, in keeping with geology's uniformitarianism. In 1864, Albert von Kölliker revived Geoffroy's theory. In 1901 the geneticist Hugo de Vries gave the name "mutation" to seemingly new forms that suddenly arose in his experiments on the evening primrose Oenothera lamarckiana. In the first decade of the 20th century, mutationism, or as de Vries named it mutationstheorie, became a rival to Darwinism, supported for a while by geneticists including William Bateson, Thomas Hunt Morgan, and Reginald Punnett. Understanding of mutationism is clouded by the mid-20th century portrayal of the early mutationists by supporters of the modern synthesis as opponents of Darwinian evolution and rivals of the biometrics school, who argued that selection operated on continuous variation. In this portrayal, mutationism was defeated by a synthesis of genetics and natural selection that supposedly started later, around 1918, with work by the mathematician Ronald Fisher. However, the alignment of Mendelian genetics and natural selection began as early as 1902 with a paper by Udny Yule, and built up with theoretical and experimental work in Europe and America. Despite the controversy, the early mutationists had by 1918 already accepted nat