https://en.wikipedia.org/wiki/Vacant%20niche
The issue of what exactly defines a vacant niche, also known as an empty niche, and whether such niches exist in ecosystems is controversial. The subject is intimately tied into a much broader debate on whether ecosystems can reach equilibrium, where they could theoretically become maximally saturated with species. Given that saturation is a measure of the number of species per resource axis per ecosystem, the question becomes: is it useful to define unused resource clusters as niche 'vacancies'? History of the concept Whether vacant niches are permissible has been alternately affirmed and denied as the definition of a niche has changed over time. Within the pre-Hutchinsonian niche frameworks of Grinnell (1917) and Elton (1927), vacant niches were allowable. In the framework of Grinnell, the species niche was largely equivalent to its habitat, such that a niche vacancy could be looked upon as a habitat vacancy. The Eltonian framework considered the niche to be equivalent to a species' position in a trophic web, or food chain, and in this respect there is always going to be a vacant niche at the top predator level. Whether this position gets filled, however, depends upon the ecological efficiency of the species filling it. The Hutchinsonian niche framework, on the other hand, directly precludes the possibility of there being vacant niches. Hutchinson defined the niche as an n-dimensional hyper-volume whose dimensions correspond to resource gradients over which species are distributed in a unimodal fashion. In this we see that the operational definition of his niche rests on the fact that a species is needed in order to rationally define a niche in the first place. This did not stop Hutchinson from making statements inconsistent with his own definition, such as: “The question raised by cases like this is whether the three Nilghiri Corixinae fill all the available niches...or whether there are really empty niches... The rapid spread of introduced species often gives evidence of empty niches,
https://en.wikipedia.org/wiki/Set%20theory%20of%20the%20real%20line
Set theory of the real line is an area of mathematics concerned with the application of set theory to aspects of the real numbers. For example, one knows that all countable sets of reals are null, i.e. have Lebesgue measure 0; one might therefore ask the least possible size of a set which is not Lebesgue null. This invariant is called the uniformity of the ideal of null sets, denoted non(𝒩). There are many such invariants associated with this and other ideals, e.g. the ideal of meagre sets, plus more which do not have a characterisation in terms of ideals. If the continuum hypothesis (CH) holds, then all such invariants are equal to ℵ₁, the least uncountable cardinal. For example, we know non(𝒩) is uncountable, but, being the size of some set of reals, it can be at most 𝔠 (the cardinality of the continuum), so under CH it equals ℵ₁. On the other hand, if one assumes Martin's Axiom (MA) all common invariants are "big", that is equal to 𝔠, the cardinality of the continuum. Martin's Axiom is consistent with 𝔠 > ℵ₁. In fact one should view Martin's Axiom as a forcing axiom that negates the need to do specific forcings of a certain class (those satisfying the ccc), since the consistency of MA with a large continuum is proved by doing all such forcings (up to a certain size shown to be sufficient). Each invariant can be made large by some ccc forcing, thus each is big given MA. If one restricts to specific forcings, some invariants will become big while others remain small. Analysing these effects is the major work of the area, seeking to determine which inequalities between invariants are provable and which are inconsistent with ZFC. The inequalities among the ideals of measure (null sets) and category (meagre sets) are captured in Cichoń's diagram. Seventeen models (forcing constructions) were produced during the 1980s, starting with work of Arnold Miller, to demonstrate that no other inequalities are provable. These are analysed in detail in the book by Tomek Bartoszynski and Haim Judah, two of the eminent workers in the field. One curious
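As a worked illustration of the reasoning above, take the uniformity of the null ideal as a representative invariant; the two bounds are provable in ZFC, and CH or MA then pins the value down (a sketch, written in LaTeX):

% Every countable set of reals is null, so a non-null set is uncountable,
% and any set of reals has size at most the continuum:
\[ \aleph_1 \le \operatorname{non}(\mathcal{N}) \le \mathfrak{c}. \]
% Under CH, \mathfrak{c} = \aleph_1, so both bounds coincide:
\[ \mathrm{CH} \implies \operatorname{non}(\mathcal{N}) = \aleph_1. \]
% Under MA, every set of fewer than \mathfrak{c} reals is null, so the lower bound rises:
\[ \mathrm{MA} \implies \operatorname{non}(\mathcal{N}) = \mathfrak{c}. \]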
https://en.wikipedia.org/wiki/Undervalued%20stock
An undervalued stock is defined as a stock that is selling at a price significantly below what is assumed to be its intrinsic value. For example, if a stock is selling for $50, but it is worth $100 based on predictable future cash flows, then it is an undervalued stock. An undervalued stock therefore trades at a market price below the investment's true intrinsic value. Numerous popular books discuss undervalued stocks. Examples are The Intelligent Investor by Benjamin Graham, also known as "The Dean of Wall Street," and The Warren Buffett Way by Robert Hagstrom. The Intelligent Investor puts forth Graham's principles that are based on mathematical calculations such as the price/earnings ratio. He was less concerned with the qualitative aspects of a business, such as the nature of the business, its growth potential and its management. For example, Amazon, Facebook, Netflix and Tesla in 2016, although they had a promising future, would not have appealed to Graham, since their price-earnings ratios were too high. Graham's ideas had a significant influence on the young Warren Buffett, who later became a famous US billionaire. Determining factors Morningstar uses five factors to determine when something is a value stock, namely: price/prospective earnings (a predictive version of the price/earnings ratio, sometimes called forward P/E), price/book, price/sales, price/cash flow, and dividend yield. Warren Buffett, also known as "The Oracle of Omaha," stated that the value of a business is the sum of the cash flows over the life of the business discounted at an appropriate interest rate. This is in reference to the ideas of John Burr Williams. Therefore, one would not be able to predict whether a stock is undervalued without predicting the future profits of a company and future interest rates. Buffett stated that he is interested in predictable businesses and he uses the interest rate on the 10-year Treasury bond in his calculations. Therefore, an investor has to be fairly certain that a c
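A minimal sketch of the discounted-cash-flow idea described above, with hypothetical per-share cash flows and an assumed 4% discount rate standing in for a long bond yield (illustrative numbers only, not a real valuation):

# Hypothetical example: intrinsic value as the present value of future cash flows.
def intrinsic_value(cash_flows, discount_rate):
    """Sum of yearly cash flows (year 1, 2, ...) discounted back to today."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

projected = [8, 9, 10, 11, 12]            # assumed future cash flows per share
value = intrinsic_value(projected, 0.04)  # assumed discount rate
print(round(value, 2))                    # about 44.17 with these made-up numbers
# If the market price is well below this estimate, a value investor would call
# the stock undervalued -- subject to how reliable the projections are.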
https://en.wikipedia.org/wiki/Ecological%20goods%20and%20services
Ecological goods and services (EG&S) are the economic benefits (goods and services) arising from the ecological functions of ecosystems. Such benefits accrue to all living organisms, including animals and plants, rather than to humans alone. However, there is a growing recognition of the importance to society that ecological goods and services provide for health, social, cultural, and economic needs. Introduction Examples of ecological goods include clean air and abundant fresh water. Examples of ecological services include purification of air and water, maintenance of biodiversity, decomposition of wastes, soil and vegetation generation and renewal, pollination of crops and natural vegetation, groundwater recharge through wetlands, seed dispersal, greenhouse gas mitigation, and aesthetically pleasing landscapes. The products and processes of ecological goods and services are complex and occur over long periods of time. They are a sub-category of public goods. The concern over ecological goods and services arises because we are losing them at an unsustainable rate, and therefore land use managers must devise a host of tools to encourage the provision of more ecological goods and services. Rural and suburban settings are especially important, as lands that are developed and converted from their natural state lose their ecological functions. Therefore, ecological goods and services provided by privately held lands become increasingly important. Markets A market may be created wherein ecological goods and services are demanded by society and supplied by public and private landowners. Some believe that public lands alone are not adequate to supply this market, and that privately held lands are needed to close this gap. What has emerged is the notion that rural landowners who provide ecological goods and services to society through good stewardship practices on their land should be duly compensated. The main tool to accomplish this to date has been to pay farmers
https://en.wikipedia.org/wiki/Homomorphic%20encryption
Homomorphic encryption is a form of encryption that allows computations to be performed on encrypted data without first having to decrypt it. The resulting computations are left in an encrypted form which, when decrypted, result in an output that is identical to that produced had the operations been performed on the unencrypted data. Homomorphic encryption can be used for privacy-preserving outsourced storage and computation. This allows data to be encrypted and out-sourced to commercial cloud environments for processing, all while encrypted. Homomorphic encryption eliminates the need for processing data in the clear, thereby preventing attacks that would enable a hacker to access that data while it is being processed, using privilege escalation. For sensitive data, such as health care information, homomorphic encryption can be used to enable new services by removing privacy barriers inhibiting data sharing or increasing security to existing services. For example, predictive analytics in health care can be hard to apply via a third party service provider due to medical data privacy concerns, but if the predictive analytics service provider can operate on encrypted data instead, these privacy concerns are diminished. Moreover, even if the service provider's system is compromised, the data would remain secure. Description Homomorphic encryption is a form of encryption with an additional evaluation capability for computing over encrypted data without access to the secret key. The result of such a computation remains encrypted. Homomorphic encryption can be viewed as an extension of public-key cryptography. Homomorphic refers to homomorphism in algebra: the encryption and decryption functions can be thought of as homomorphisms between plaintext and ciphertext spaces. Homomorphic encryption includes multiple types of encryption schemes that can perform different classes of computations over encrypted data. The computations are represented as either Boolean or arith
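A toy sketch of the multiplicative homomorphism behind the idea: unpadded "textbook" RSA (one of the classical partially homomorphic examples) lets anyone multiply ciphertexts, and decryption yields the product of the plaintexts. Tiny parameters, no padding, purely for intuition, not a usable scheme:

# Textbook RSA with toy primes; NOT secure, illustration only (Python 3.8+ for pow(e, -1, phi)).
p, q = 61, 53
n = p * q                     # modulus 3233
e = 17                        # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)           # private exponent (modular inverse)

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

m1, m2 = 7, 11
c1, c2 = enc(m1), enc(m2)
# Multiplying ciphertexts corresponds to multiplying the underlying plaintexts:
assert dec((c1 * c2) % n) == (m1 * m2) % n
print(dec((c1 * c2) % n))     # 77, computed without ever decrypting c1 or c2 separately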
https://en.wikipedia.org/wiki/Component%20%28group%20theory%29
In mathematics, in the field of group theory, a component of a finite group is a quasisimple subnormal subgroup. Any two distinct components commute. The product of all the components is the layer of the group. For finite abelian (or nilpotent) groups, p-component is used in a different sense to mean the Sylow p-subgroup, so the abelian group is the product of its p-components for primes p. These are not components in the sense above, as abelian groups are not quasisimple. A quasisimple subgroup of a finite group is called a standard component if its centralizer has even order, it is normal in the centralizer of every involution centralizing it, and it commutes with none of its conjugates. This concept is used in the classification of finite simple groups, for instance, by showing that under mild restrictions on the standard component one of the following always holds: a standard component is normal (so a component as above), the whole group has a nontrivial solvable normal subgroup, the subgroup generated by the conjugates of the standard component is on a short list, or the standard component is a previously unknown quasisimple group . References Group theory Subgroup properties
https://en.wikipedia.org/wiki/Perfect%20core
In mathematics, in the field of group theory, the perfect core (or perfect radical) of a group is its largest perfect subgroup. Its existence is guaranteed by the fact that the subgroup generated by a family of perfect subgroups is again a perfect subgroup. The perfect core is also the point where the transfinite derived series stabilizes for any group. A group whose perfect core is trivial is termed a hypoabelian group. Every solvable group is hypoabelian, and so is every free group. More generally, every residually solvable group is hypoabelian. The quotient of a group G by its perfect core is hypoabelian, and is called the hypoabelianization of G. References Functional subgroups Group theory Solvable groups
https://en.wikipedia.org/wiki/Segregate%20%28taxonomy%29
In taxonomy, a segregate, or a segregate taxon is created when a taxon is split off from another taxon. This other taxon will be better known, usually bigger, and will continue to exist, even after the segregate taxon has been split off. A segregate will be either new or ephemeral: there is a tendency for taxonomists to disagree on segregates, and later workers often reunite a segregate with the 'mother' taxon. If a segregate is generally accepted as a 'good' taxon it ceases to be a segregate. Thus, this is a way of indicating change in the taxonomic status. It should not be confused with, for example, the subdivision of a genus into subgenera. For example, the genus Alsobia is a segregate from the genus Episcia; The genera Filipendula and Aruncus are segregates from the genus Spiraea. External links A more detailed explanation, with multiple examples on mushrooms. Botanical nomenclature Plant taxonomy Taxonomy (biology)
https://en.wikipedia.org/wiki/Noga%20Alon
Noga Alon (; born 1956) is an Israeli mathematician and a professor of mathematics at Princeton University noted for his contributions to combinatorics and theoretical computer science, having authored hundreds of papers. Education and career Alon was born in 1956 in Haifa, where he graduated from the Hebrew Reali School in 1974. He graduated summa cum laude from the Technion – Israel Institute of Technology in 1979, earned a master's degree in mathematics in 1980 from Tel Aviv University, and received his Ph.D. in Mathematics at the Hebrew University of Jerusalem in 1983 with the dissertation Extremal Problems in Combinatorics supervised by Micha Perles. After postdoctoral research at the Massachusetts Institute of Technology he returned to Tel Aviv University as a senior lecturer in 1985, obtained a permanent position as an associate professor there in 1986, and was promoted to full professor in 1988. He was head of the School of Mathematical Science from 1999 to 2001, and was given the Florence and Ted Baumritter Combinatorics and Computer Science Chair, before retiring as professor emeritus and moving to Princeton University in 2018. He was editor-in-chief of the journal Random Structures and Algorithms beginning in 2008. Research Alon has published more than five hundred research papers, mostly in combinatorics and in theoretical computer science, and one book, on the probabilistic method. He has also published under the pseudonym "A. Nilli", based on the name of his daughter Nilli Alon. His research contributions include the combinatorial Nullstellensatz, an algebraic tool with many applications in combinatorics; color-coding, a technique for fixed-parameter tractability of pattern-matching algorithms in graphs; and the Alon–Boppana bound in spectral graph theory. Selected works Book The Probabilistic Method, with Joel Spencer, Wiley, 1992. 2nd ed., 2000; 3rd ed., 2008; 4th ed., 2016. Research articles Previously in the ACM Symposium on Theory o
https://en.wikipedia.org/wiki/Monitor%20filter
A monitor filter is an accessory to a computer display that filters out the light reflected from the smooth glass surface of a CRT or flat panel display. Many also include a ground to dissipate static buildup. A secondary use for monitor filters is privacy: they decrease the viewing angle of a monitor, preventing it from being viewed from the side; in this case, they are also called privacy screens. The standard type of anti-glare filter consists of a coating that reduces the reflection from a glass or plastic surface. These are manufactured from polycarbonate or acrylic plastic. An older variety of anti-glare filter used a mesh filter that had the appearance of a nylon screen. Although effective, a mesh filter also caused degradation of the image quality. Marketing names of privacy filters include HP's "SureView", Lenovo's "PrivacyGuard", and Dell's "SafeScreen". Support for built-in privacy screens has been available since Linux kernel 5.17, which exposes them through the Direct Rendering Manager and is used by GNOME 42. References Display technology Ergonomics
https://en.wikipedia.org/wiki/Pronunciation%20assessment
Automatic pronunciation assessment is the use of speech recognition to verify the correctness of pronounced speech, as distinguished from manual assessment by an instructor or proctor. Also called speech verification, pronunciation evaluation, and pronunciation scoring, the main application of this technology is computer-aided pronunciation teaching (CAPT) when combined with computer-aided instruction for computer-assisted language learning (CALL), speech remediation, or accent reduction. Pronunciation assessment does not determine unknown speech (as in dictation or automatic transcription) but instead, knowing the expected word(s) in advance, it attempts to verify the correctness of the learner's pronunciation and ideally their intelligibility to listeners, sometimes along with often inconsequential prosody such as intonation, pitch, tempo, rhythm, and stress. Pronunciation assessment is also used in reading tutoring, for example in products such as Microsoft Teams and from Amira Learning. Automatic pronunciation assessment can also be used to help diagnose and treat speech disorders such as apraxia. The earliest work on pronunciation assessment avoided measuring genuine listener intelligibility, a shortcoming corrected in 2011 at the Toyohashi University of Technology, and included in the Versant high-stakes English fluency assessment from Pearson and mobile apps from 17zuoye Education & Technology, but still missing in 2023 products from Google Search, Microsoft, Educational Testing Service, Speechace, and ELSA. Assessing authentic listener intelligibility is essential for avoiding inaccuracies from accent bias, especially in high-stakes assessments; from words with multiple correct pronunciations; and from phoneme coding errors in machine-readable pronunciation dictionaries. In the Common European Framework of Reference for Languages (CEFR) assessment criteria for "overall phonological control", intelligibility outweighs formally correct pronunciation at all le
https://en.wikipedia.org/wiki/Multiseat%20configuration
A multiseat, multi-station or multiterminal system is a single computer which supports multiple independent local users at the same time. A "seat" consists of all hardware devices assigned to a specific workplace at which one user sits and interacts with the computer. It consists of at least one graphics device (graphics card or just an output (e.g. HDMI/VGA/DisplayPort port) and the attached monitor/video projector) for the output and a keyboard and a mouse for the input. It can also include video cameras, sound cards and more. Motivation Since the 1960s computers have been shared between users. Especially in the early days of computing when computers were extremely expensive, the usual paradigm was a central mainframe computer connected to numerous terminals. With the advent of personal computing this paradigm has been largely replaced by personal computers (or one computer per user). Multiseat setups are a return to this multiuser paradigm but based around a PC which supports a number of zero-clients, usually consisting of a terminal per user (screen, keyboard, mouse). In some situations a multiseat setup is more cost-effective because it is not necessary to buy separate motherboards, microprocessors, RAM, hard disks and other components for each user. For example, buying one high-speed CPU usually costs less than buying several slower CPUs. History In the 1970s, it was very commonplace to connect multiple computer terminals to a single mainframe computer, even graphical terminals. Early terminals were connected with RS-232 type serial connections, either directly, or through modems. With the advent of Internet Protocol based networking, it became possible for multiple users to log into a host using telnet or – for a graphic environment – an X Window System "server". These systems would retain a physically secure "root console" for system administration and direct access to the host machine. Support for multiple consoles in a PC running the X interface w
https://en.wikipedia.org/wiki/Periodic%20point
In mathematics, in the study of iterated functions and dynamical systems, a periodic point of a function is a point which the system returns to after a certain number of function iterations or a certain amount of time. Iterated functions Given a mapping f from a set X into itself, a point x in X is called a periodic point if there exists an n > 0 so that f^n(x) = x, where f^n is the nth iterate of f. The smallest positive integer n satisfying the above is called the prime period or least period of the point x. If every point in X is a periodic point with the same period n, then f is called periodic with period n (this is not to be confused with the notion of a periodic function). If there exist distinct n and m such that f^n(x) = f^m(x), then x is called a preperiodic point. All periodic points are preperiodic. If f is a diffeomorphism of a differentiable manifold, so that the derivative (f^n)' is defined, then one says that a periodic point x of period n is hyperbolic if |(f^n)'(x)| ≠ 1, that it is attractive if |(f^n)'(x)| < 1, and it is repelling if |(f^n)'(x)| > 1. If the dimension of the stable manifold of a periodic point or fixed point is zero, the point is called a source; if the dimension of its unstable manifold is zero, it is called a sink; and if both the stable and unstable manifold have nonzero dimension, it is called a saddle or saddle point. Examples A period-one point is called a fixed point. The logistic map x_{t+1} = r·x_t·(1 − x_t), with 0 ≤ x_t ≤ 1, exhibits periodicity for various values of the parameter r. For r between 0 and 1, 0 is the sole periodic point, with period 1 (giving the sequence 0, 0, 0, ..., which attracts all orbits). For r between 1 and 3, the value 0 is still periodic but is not attracting, while the value (r − 1)/r is an attracting periodic point of period 1. With r greater than 3 but less than 1 + √6, there are a pair of period-2 points which together form an attracting sequence, as well as the non-attracting period-1 points 0 and (r − 1)/r. As the value of the parameter r rises toward 4, there arise groups of periodic points with any positive integer for the period; for some values of r, one of these repeating sequences is
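A quick numerical check of the period-2 behaviour described above (a sketch; r = 3.2 and the starting point are arbitrary choices in the range 3 < r < 1 + √6):

# Iterate the logistic map x -> r*x*(1-x) and watch the orbit settle onto
# the attracting period-2 cycle for r = 3.2.
r = 3.2
x = 0.1
for _ in range(1000):              # discard the transient
    x = r * x * (1 - x)
orbit = []
for _ in range(4):                 # record a few subsequent iterates
    x = r * x * (1 - x)
    orbit.append(round(x, 6))
print(orbit)                       # alternates between ~0.513045 and ~0.799455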
https://en.wikipedia.org/wiki/Conjugacy%20problem
In abstract algebra, the conjugacy problem for a group G with a given presentation is the decision problem of determining, given two words x and y in G, whether or not they represent conjugate elements of G. That is, the problem is to determine whether there exists an element z of G such that zxz⁻¹ = y. The conjugacy problem is also known as the transformation problem. The conjugacy problem was identified by Max Dehn in 1911 as one of the fundamental decision problems in group theory; the other two being the word problem and the isomorphism problem. The conjugacy problem contains the word problem as a special case: if x and y are words, deciding if they represent the same element of G is equivalent to deciding if xy⁻¹ is the identity, which is the same as deciding if xy⁻¹ is conjugate to the identity. In 1912 Dehn gave an algorithm that solves both the word and conjugacy problem for the fundamental groups of closed orientable two-dimensional manifolds of genus greater than or equal to 2 (the genus 0 and genus 1 cases being trivial). It is known that the conjugacy problem is undecidable for many classes of groups. Classes of group presentations for which it is known to be soluble include: free groups (no defining relators); one-relator groups with torsion; braid groups; knot groups; finitely presented conjugacy separable groups; finitely generated abelian groups (relators include all commutators); Gromov-hyperbolic groups; biautomatic groups; CAT(0) groups; and fundamental groups of geometrizable 3-manifolds. References Group theory
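For intuition only, the defining condition can be checked by brute force in a small finite group, where the problem is trivially decidable by enumeration (the group S4 and the two transpositions below are illustrative choices; this says nothing about the infinite finitely presented groups the problem actually concerns):

from itertools import permutations

# Permutations of range(4) as tuples; compose(a, b) means "apply b, then a".
def compose(a, b):
    return tuple(a[b[i]] for i in range(len(b)))

def inverse(a):
    inv = [0] * len(a)
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

def conjugating_element(x, y):
    """Return some z with z x z^-1 == y in S4, or None if x and y are not conjugate."""
    for z in permutations(range(4)):
        if compose(compose(z, x), inverse(z)) == y:
            return z
    return None

x = (1, 0, 2, 3)                    # the transposition (0 1)
y = (0, 1, 3, 2)                    # the transposition (2 3)
print(conjugating_element(x, y))    # prints a witness z, since all transpositions are conjugate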
https://en.wikipedia.org/wiki/U.S.%20National%20Vegetation%20Classification
The U.S. National Vegetation Classification (NVC or USNVC) is a scheme for classifying the natural and cultural vegetation communities of the United States. The purpose of this standardized vegetation classification system is to facilitate communication between land managers, scientists, and the public when managing, researching, and protecting plant communities. The non-profit group NatureServe maintains the NVC for the U.S. government. See also British National Vegetation Classification Vegetation classification External links The U.S. National Vegetation Classification website "National Vegetation Classification Standard, Version 2" FGDC-STD-005-2008, Vegetation Subcommittee, Federal Geographic Data Committee, February 2008 U.S. Geological Survey page about the Vegetation Characterization Program Federal Geographic Data Committee page about the NVC Environment of the United States Flora of the United States NatureServe Biological classification
https://en.wikipedia.org/wiki/Digital%20magnetofluidics
Digital magnetofluidics is a method for moving, combining, splitting, and controlling drops of water or biological fluids using magnetic fields. This is accomplished by adding superparamagnetic particles to a drop placed on a superhydrophobic surface. Normally this type of surface would exhibit a lotus effect and the drop of water would roll or slide off. But by using magnetic fields, the drop is stabilized and its movements and structure can be controlled. References A. Egatz-Gomez, S. Melle, A.A. García, S. Lindsay, M.A. Rubio, P. Domínguez, T. Picraux, J. Taraci, T. Clement, and M. Hayes, “Superhydrophobic Nanowire Surfaces for Drop Movement Using Magnetic Fields,” in Proc. NSTI Nanotechnology Conference and Trade Show, 2006, pp. 501–504. Fluid mechanics
https://en.wikipedia.org/wiki/Switched%20capacitor
A switched capacitor (SC) is an electronic circuit that implements a function by moving charges into and out of capacitors when electronic switches are opened and closed. Usually, non-overlapping clock signals are used to control the switches, so that not all switches are closed simultaneously. Filters implemented with these elements are termed switched-capacitor filters, which depend only on the ratios between capacitances and the switching frequency, and not on precise resistors. This makes them much more suitable for use within integrated circuits, where accurately specified resistors and capacitors are not economical to construct, but accurate clocks and accurate relative ratios of capacitances are economical. SC circuits are typically implemented using metal–oxide–semiconductor (MOS) technology, with MOS capacitors and MOS field-effect transistor (MOSFET) switches, and they are commonly fabricated using the complementary MOS (CMOS) process. Common applications of MOS SC circuits include mixed-signal integrated circuits, digital-to-analog converter (DAC) chips, analog-to-digital converter (ADC) chips, pulse-code modulation (PCM) codec-filters, and PCM digital telephony. Parallel resistor simulation using a switched-capacitor The simplest switched-capacitor (SC) circuit is made of one capacitor C and two switches S1 and S2 which alternately connect the capacitor to either in or out at a switching frequency of f. Recall that Ohm's law can express the relationship between voltage, current, and resistance as: R = V / I. The following equivalent resistance calculation will show how during each switching cycle, this switched-capacitor circuit transfers an amount of charge from in to out such that it behaves according to a similar linear current–voltage relationship with R_equivalent = 1 / (C·f). Equivalent resistance calculation By definition, the charge q on any capacitor C with a voltage V between its plates is: q = C·V. Therefore, when S1 is closed while S2 is open, the charge stored in the capacitor will b
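A small numerical sketch of the equivalent-resistance relation above (the component values are arbitrary examples):

# Switched-capacitor "resistor": each cycle moves q = C*(V_in - V_out) of charge,
# so the average current is I = C*f*(V_in - V_out), i.e. R_eq = 1/(C*f).
C = 1e-12          # 1 pF capacitor (example value)
f = 100e3          # 100 kHz switching frequency (example value)
R_eq = 1 / (C * f)
print(f"Equivalent resistance: {R_eq:.0f} ohms")    # 10 Mohm from a tiny capacitor

# Sanity check via the charge-transfer argument for an example voltage difference:
V_in, V_out = 1.0, 0.2
I_avg = C * f * (V_in - V_out)                      # average current
print(round((V_in - V_out) / I_avg))                # 10000000, matching R_eq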
https://en.wikipedia.org/wiki/Relative%20velocity
The relative velocity (also denoted v_{B|A} or v_{BA}) is the velocity of an object or observer B in the rest frame of another object or observer A. Classical mechanics In one dimension (non-relativistic) We begin with relative motion in the classical (or non-relativistic, or Newtonian) approximation, in which all speeds are much less than the speed of light. This limit is associated with the Galilean transformation. The figure shows a man on top of a train, at the back edge. At 1:00 pm he begins to walk forward at a walking speed of 10 km/h (kilometers per hour). The train is moving at 40 km/h. The figure depicts the man and train at two different times: first, when the journey began, and also one hour later at 2:00 pm. The figure suggests that the man is 50 km from the starting point after having traveled (by walking and by train) for one hour. This, by definition, is 50 km/h, which suggests that the prescription for calculating relative velocity in this fashion is to add the two velocities. The diagram displays clocks and rulers to remind the reader that while the logic behind this calculation seems flawless, it makes false assumptions about how clocks and rulers behave. (See The train-and-platform thought experiment.) To recognize that this classical model of relative motion violates special relativity, we generalize the example into an equation: v_{M|E} = v_{M|T} + v_{T|E}, where: v_{M|E} is the velocity of the Man relative to Earth, v_{M|T} is the velocity of the Man relative to the Train, v_{T|E} is the velocity of the Train relative to Earth. Fully legitimate expressions for "the velocity of A relative to B" include "the velocity of A with respect to B" and "the velocity of A in the coordinate system where B is always at rest". The violation of special relativity occurs because this equation for relative velocity falsely predicts that different observers will measure different speeds when observing the motion of light. In two dimensions (non-relativistic) The figure shows two objects A and B moving at constant
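A short sketch of the Galilean composition rule just stated, carried out component-wise in two dimensions (the velocity values are arbitrary examples):

# Galilean velocity addition: v_M|E = v_M|T + v_T|E, done component-wise.
def add_velocities(v_a_in_b, v_b_in_c):
    """Velocity of A in frame C, from A-in-B and B-in-C (non-relativistic)."""
    return tuple(x + y for x, y in zip(v_a_in_b, v_b_in_c))

v_man_in_train = (10.0, 0.0)      # km/h, walking toward the front of the train
v_train_in_earth = (40.0, 0.0)    # km/h, train relative to the ground
print(add_velocities(v_man_in_train, v_train_in_earth))   # (50.0, 0.0)

# The relative velocity of B in the rest frame of A is a vector difference:
v_a = (30.0, 0.0)
v_b = (20.0, 15.0)
print(tuple(b - a for a, b in zip(v_a, v_b)))              # (-10.0, 15.0)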
https://en.wikipedia.org/wiki/Voltameter
A voltameter or coulometer is a scientific instrument used for measuring electric charge (quantity of electricity) through electrolytic action. The SI unit of electric charge is the coulomb. The voltameter should not be confused with a voltmeter, which measures electric potential. The SI unit for electric potential is the volt. Etymology Michael Faraday used an apparatus that he termed a "volta-electrometer"; subsequently John Frederic Daniell called this a "voltameter". Types The voltameter is an electrolytic cell and the measurement is made by weighing the element deposited or released at the cathode in a specified time. Silver voltameter This is the most accurate type. It consists of two silver plates in a solution of silver nitrate. When current is flowing, silver dissolves at the anode and is deposited at the cathode. The cathode is initially weighed, current is passed for a measured time, and the cathode is weighed again. Copper coulometer This is similar to the silver voltameter but the anode and cathode are copper and the solution is copper sulfate, acidified with sulfuric acid. It is cheaper than the silver voltameter, but slightly less accurate. Mercury voltameter In this device, mercury is used to determine the amount of charge transferred during the following reaction: Hg²⁺ + 2e⁻ ⇌ Hg⁰. These oxidation/reduction processes have 100% efficiency over a wide range of current densities. Measurement of the quantity of electricity (in coulombs) is based on changes in the mass of the mercury electrode: the mass of the electrode increases during cathodic deposition of mercury ions and decreases during anodic dissolution of the metal. Sulfuric acid voltameter The anode and cathode are platinum and the solution is dilute sulfuric acid. Hydrogen is released at the cathode and collected in a graduated tube so that its volume can be measured. The volume is adjusted to standard temperature and pressure and the mass of hydrogen is calcu
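The underlying calculation is Faraday's law of electrolysis; a sketch for the silver voltameter, with standard constants and example current and time values:

# Mass deposited m = Q*M/(z*F); conversely, weighing the deposit gives Q = m*z*F/M.
F = 96485.0        # Faraday constant, C/mol
M_Ag = 107.87      # molar mass of silver, g/mol
z = 1              # Ag+ + e- -> Ag : one electron per atom

I = 0.5            # example current, amperes
t = 3600           # example time, seconds (one hour)
Q = I * t          # charge passed, coulombs

mass = Q * M_Ag / (z * F)
print(f"Charge: {Q:.0f} C, silver deposited: {mass:.3f} g")   # about 2.012 g
# In practice the instrument runs the other way: the deposited silver is weighed
# and the charge is inferred as Q = mass * z * F / M_Ag.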
https://en.wikipedia.org/wiki/Ouchterlony%20double%20immunodiffusion
Ouchterlony double immunodiffusion (also known as passive double immunodiffusion) is an immunological technique used in the detection, identification and quantification of antibodies and antigens, such as immunoglobulins and extractable nuclear antigens. The technique is named after Örjan Ouchterlony, the Swedish physician who developed the test in 1948 to evaluate the production diphtheria toxins from isolated bacteria. Procedure A gel plate is cut to form a series of holes ("wells") in an agar or agarose gel. A sample extract of interest (for example human cells harvested from tonsil tissue) is placed in one well, sera or purified antibodies are placed in another well and the plate left for 48 hours to develop. During this time the antigens in the sample extract and the antibodies each diffuse out of their respective wells. Where the two diffusion fronts meet, if any of the antibodies recognize any of the antigens, they will bind to the antigens and form an immune complex. The immune complex precipitates in the gel to give a thin white line (precipitin line), which is a visual signature of antigen recognition. The method can be conducted in parallel with multiple wells filled with different antigen mixtures and multiple wells with different antibodies or mixtures of antibodies, and antigen-antibody reactivity can be seen by observing between which wells the precipitate is observed. When more than one well is used there are many possible outcomes based on the reactivity of the antigen and antibody selected. The zone of equivalence lines may give a full identity (i.e. a continuous line), partial identity (i.e. a continuous line with a spur at one end), or a non-identity (i.e. the two lines cross completely). The sensitivity of the assay can be increased by using a stain such as Coomassie brilliant blue, this is done by repeated staining and destaining of the assay until the precipitin lines are at maximum visibility. Theory Precipitation occurs with most antige
https://en.wikipedia.org/wiki/Data%20Protection%20API
Data Protection Application Programming Interface (DPAPI) is a simple cryptographic application programming interface available as a built-in component in Windows 2000 and later versions of Microsoft Windows operating systems. In theory, the Data Protection API can enable symmetric encryption of any kind of data; in practice, its primary use in the Windows operating system is to perform symmetric encryption of asymmetric private keys, using a user or system secret as a significant contribution of entropy. A detailed analysis of DPAPI inner-workings was published in 2011 by Bursztein et al. For nearly all cryptosystems, one of the most difficult challenges is "key management" in part, how to securely store the decryption key. If the key is stored in plain text, then any user that can access the key can access the encrypted data. If the key is to be encrypted, another key is needed, and so on. DPAPI allows developers to encrypt keys using a symmetric key derived from the user's logon secrets, or in the case of system encryption, using the system's domain authentication secrets. The DPAPI keys used for encrypting the user's RSA keys are stored under %APPDATA%\Microsoft\Protect\{SID} directory, where {SID} is the Security Identifier of that user. The DPAPI key is stored in the same file as the master key that protects the users private keys. It usually is 64 bytes of random data. Security properties DPAPI doesn't store any persistent data for itself; instead, it simply receives plaintext and returns ciphertext (or conversely). DPAPI security relies upon the Windows operating system's ability to protect the master key and RSA private keys from compromise, which in most attack scenarios is most highly reliant on the security of the end user's credentials. A main encryption/decryption key is derived from user's password by PBKDF2 function. Particular data binary large objects can be encrypted in a way that salt is added and/or an external user-prompted password (aka "
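The key-derivation step mentioned above can be sketched with Python's standard library. This only illustrates PBKDF2 itself; the secret, salt, iteration count, and hash below are example choices and say nothing about DPAPI's actual parameters or blob format:

import hashlib, os

# Illustration: derive a symmetric key from a password with PBKDF2, the same
# family of function DPAPI applies to the user's logon secret.
password = b"user-logon-secret"           # hypothetical secret
salt = os.urandom(16)                     # example salt, 16 random bytes
key = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000, dklen=32)
print(key.hex())                          # 32-byte key, usable with a symmetric cipher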
https://en.wikipedia.org/wiki/Laboratoire%20d%27informatique%20pour%20la%20m%C3%A9canique%20et%20les%20sciences%20de%20l%27ing%C3%A9nieur
The Computer Science Laboratory for Mechanics and Engineering Sciences (LIMSI) was a CNRS pluri-disciplinary science laboratory in Orsay, France. LIMSI academics and scholars come primarily from the Engineering and Information Sciences fields, but also from Cognitive Science and Linguistics. LIMSI is associated with the Paris-Sud University. LIMSI also collaborates with other universities and engineering schools within the University Paris-Saclay. History LIMSI was created in 1972 under the leadership of Lucien Malavard, with an initial focus on numerical fluid mechanics, acoustics, and signal processing. Its research themes have progressively been expanded to Speech and Image Processing, then to a growing number of themes related to human-computer communication and interaction on the one hand; to thermics and energetics on the other hand. In January 2021, LIMSI was merged with the Laboratory for Research in Computer Science (LRI) into the new Interdisciplinary Laboratory for Numerical Sciences. Research themes LIMSI research is organized in four main themes, spanning the activities of nine research groups: Fluid Mechanics remains one of LIMSI's main research areas, with an expertise in the development of advanced numerical methodologies associated to experiments in academic configurations: the AERO and ETCM groups both contribute activities related to large-scale numerical simulations, to uncertainty quantification, to the characterization of fluid dynamics (instabilities, turbulence), to the control of flows, and to multiphysic couplings; The Energetics and study of Mass and Heat Transfer theme carries out fundamental research aimed at a better understanding of transfer phenomena in convection and radiation, in multiphasic and oscillating conditions, and at extremely low temperatures. The ETCM and TSF groups also study the analysis of large thermic systems with application to housing and solar energy. The Natural Language Processing applied to spoken, writ
https://en.wikipedia.org/wiki/Tarski%27s%20axiomatization%20of%20the%20reals
In 1936, Alfred Tarski set out an axiomatization of the real numbers and their arithmetic, consisting of only the 8 axioms shown below and a mere four primitive notions: the set of reals denoted R, a binary total order over R, denoted by the infix operator <, a binary operation of addition over R, denoted by the infix operator +, and the constant 1. The literature occasionally mentions this axiomatization but never goes into detail, notwithstanding its economy and elegant metamathematical properties. This axiomatization appears little known, possibly because of its second-order nature. Tarski's axiomatization can be seen as a version of the more usual definition of real numbers as the unique Dedekind-complete ordered field; it is however made much more concise by using unorthodox variants of standard algebraic axioms and other subtle tricks (see e.g. axioms 4 and 5, which combine the usual four axioms of abelian groups). The term "Tarski's axiomatization of real numbers" also refers to the theory of real closed fields, which Tarski showed completely axiomatizes the first-order theory of the structure 〈R, +, ·, <〉. The axioms Axioms of order (primitives: R, <): Axiom 1 If x < y, then not y < x. That is, "<" is an asymmetric relation. This implies that "<" is not a reflexive relationship, i.e. for all x, x < x is false. Axiom 2 If x < z, there exists a y such that x < y and y < z. In other words, "<" is dense in R. Axiom 3 "<" is Dedekind-complete. More formally, for all X, Y ⊆ R, if for all x ∈ X and y ∈ Y, x < y, then there exists a z such that for all x ∈ X and y ∈ Y, if z ≠ x and z ≠ y, then x < z and z < y. To clarify the above statement somewhat, let X ⊆ R and Y ⊆ R. We now define two common English verbs in a particular way that suits our purpose: X precedes Y if and only if for every x ∈ X and every y ∈ Y, x < y. The real number z separates X and Y if and only if for every x ∈ X with x ≠ z and every y ∈ Y with y ≠ z, x < z and z < y. Axiom 3 can the
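For reference, the three order axioms described above can be written out formally; this is just a restatement of the prose in standard notation:

% Axiom 1: "<" is asymmetric.
\[ \forall x, y \in \mathbf{R}:\; x < y \implies \lnot (y < x). \]
% Axiom 2: "<" is dense in R.
\[ \forall x, z \in \mathbf{R}:\; x < z \implies \exists y \,(x < y \wedge y < z). \]
% Axiom 3: "<" is Dedekind-complete (second-order: X and Y range over subsets of R).
\[ \forall X, Y \subseteq \mathbf{R}:\;
   \bigl(\forall x \in X \,\forall y \in Y:\; x < y\bigr) \implies
   \exists z \,\forall x \in X \,\forall y \in Y:\;
   \bigl(z \neq x \wedge z \neq y \implies x < z \wedge z < y\bigr). \]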
https://en.wikipedia.org/wiki/Membrane%20biology
Membrane biology is the study of the biological and physiochemical characteristics of membranes, with applications in the study of cellular physiology. Membrane bioelectrical impulses are described by the Hodgkin cycle. Biophysics Membrane biophysics is the study of biological membrane structure and function using physical, computational, mathematical, and biophysical methods. A combination of these methods can be used to create phase diagrams of different types of membranes, which yields information on thermodynamic behavior of a membrane and its components. As opposed to membrane biology, membrane biophysics focuses on quantitative information and modeling of various membrane phenomena, such as lipid raft formation, rates of lipid and cholesterol flip-flop, protein-lipid coupling, and the effect of bending and elasticity functions of membranes on inter-cell connections. See also References Biophysics
https://en.wikipedia.org/wiki/FreeOTFE
FreeOTFE is a discontinued open source computer program for on-the-fly disk encryption (OTFE). On Microsoft Windows, and Windows Mobile (using FreeOTFE4PDA), it can create a virtual drive within a file or partition, to which anything written is automatically encrypted before being stored on a computer's hard or USB drive. It is similar in function to other disk encryption programs including TrueCrypt and Microsoft's BitLocker. The author, Sarah Dean, went absent as of 2011. The FreeOTFE website is unreachable as of June 2013 and the domain name is now registered by a domain squatter. The original program can be downloaded from a mirror at Sourceforge. In June 2014, a fork of the project now named LibreCrypt appeared on GitHub. Overview FreeOTFE was initially released by Sarah Dean in 2004, and was the first open source code disk encryption system that provided a modular architecture allowing 3rd parties to implement additional algorithms if needed. Older FreeOTFE licensing required that any modification to the program be placed in the public domain. This does not conform technically to section 3 of the Open Source definition. Newer program licensing omits this condition. The FreeOTFE license has not been approved by the Open Source Initiative and is not certified to be labeled with the open-source certification mark. This software is compatible with Linux encrypted volumes (e.g. LUKS, cryptoloop, dm-crypt), allowing data encrypted under Linux to be read (and written) freely. It was the first open source transparent disk encryption system to support Windows Vista and PDAs. Optional two-factor authentication using smart cards and/or hardware security modules (HSMs, also termed security tokens) was introduced in v4.0, using the PKCS#11 (Cryptoki) standard developed by RSA Laboratories. FreeOTFE also allows any number of "hidden volumes" to be created, giving plausible deniability and deniable encryption, and also has the option of encrypting full partitions or d
https://en.wikipedia.org/wiki/HTTP%20compression
HTTP compression is a capability that can be built into web servers and web clients to improve transfer speed and bandwidth utilization. HTTP data is compressed before it is sent from the server: compliant browsers announce which methods they support to the server before downloading the correct format; browsers that do not support a compliant compression method download the data uncompressed. The most common compression schemes include gzip and Brotli; a full list of available schemes is maintained by the IANA. There are two different ways compression can be done in HTTP. At a lower level, a Transfer-Encoding header field may indicate the payload of an HTTP message is compressed. At a higher level, a Content-Encoding header field may indicate that a resource being transferred, cached, or otherwise referenced is compressed. Compression using Content-Encoding is more widely supported than Transfer-Encoding, and some browsers do not advertise support for Transfer-Encoding compression to avoid triggering bugs in servers. Compression scheme negotiation The negotiation is done in two steps, described in RFC 2616 and RFC 9110: 1. The web client advertises which compression schemes it supports by including a list of tokens in the HTTP request. For Content-Encoding, the list is in a field called Accept-Encoding; for Transfer-Encoding, the field is called TE.
GET /encrypted-area HTTP/1.1
Host: www.example.com
Accept-Encoding: gzip, deflate
2. If the server supports one or more compression schemes, the outgoing data may be compressed by one or more methods supported by both parties. If this is the case, the server will add a Content-Encoding or Transfer-Encoding field in the HTTP response with the used schemes, separated by commas.
HTTP/1.1 200 OK
Date: Sun, 26 Jun 2016 22:38:34 GMT
Server: Apache/1.3.3.7 (Unix) (Red-Hat/Linux)
Last-Modified: Wed, 08 Jan 2003 23:11:55 GMT
Accept-Ranges: bytes
Content-Length: 438
Connection: close
Content-Type: text/html; charset=UT
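A small client-side sketch of the negotiation using only the standard library (the URL is an example; a real server may of course choose not to compress, and urllib does not decompress automatically):

import gzip
import urllib.request

# Advertise gzip support, then decompress only if the server says it used gzip.
req = urllib.request.Request("http://www.example.com/",
                             headers={"Accept-Encoding": "gzip"})
with urllib.request.urlopen(req) as resp:
    body = resp.read()
    if resp.headers.get("Content-Encoding") == "gzip":
        body = gzip.decompress(body)
print(body[:60])    # first bytes of the (decompressed) document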
https://en.wikipedia.org/wiki/List%20of%20common%20coordinate%20transformations
This is a list of some of the most commonly used coordinate transformations. 2-dimensional Let (x, y) be the standard Cartesian coordinates, and (r, θ) the standard polar coordinates. To Cartesian coordinates From polar coordinates: x = r cos θ, y = r sin θ. From log-polar coordinates: x = e^ρ cos θ, y = e^ρ sin θ. By using complex numbers (x + iy), the transformation can be written as x + iy = e^(ρ + iθ). That is, it is given by the complex exponential function. From bipolar coordinates From 2-center bipolar coordinates From Cesàro equation To polar coordinates From Cartesian coordinates: r = √(x² + y²), θ = arctan(y/x). Note: solving for θ this way returns the resultant angle in the first quadrant (0 < θ < π/2). To find θ one must refer to the original Cartesian coordinates, determine the quadrant in which the point (x, y) lies (for example, (3,−3) [Cartesian] lies in QIV), then use the appropriate case to solve for θ. The value for θ must be solved for in this manner because the arctangent alone only returns values with −π/2 < θ < π/2, and tan θ is periodic (with period π). This means that the inverse function will only give values in the domain of the function, but restricted to a single period. Hence, the range of the inverse function is only half a full circle. Note that one can also use θ = atan2(y, x). From 2-center bipolar coordinates Where 2c is the distance between the poles. To log-polar coordinates from Cartesian coordinates: ρ = ln √(x² + y²), θ = arctan(y/x). Arc-length and curvature In Cartesian coordinates In polar coordinates 3-dimensional Let (x, y, z) be the standard Cartesian coordinates, and (ρ, θ, φ) the spherical coordinates, with θ the angle measured away from the +Z axis (see conventions in spherical coordinates). As φ has a range of 360° the same considerations as in polar (2 dimensional) coordinates apply whenever an arctangent of it is taken. θ has a range of 180°, running from 0° to 180°, and does not pose any problem when calculated from an arccosine, but beware for an arctangent. If, in the alternative definition, θ is chosen to run from −90° to +90°, in opposite direction of the earlier definition, it can be found uniquely from an arcsine, but beware of an arccota
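A short sketch of the two-dimensional conversions, and of why atan2 resolves the quadrant issue discussed above:

import math

def polar_to_cartesian(r, theta):
    return (r * math.cos(theta), r * math.sin(theta))

def cartesian_to_polar(x, y):
    r = math.hypot(x, y)
    theta = math.atan2(y, x)     # quadrant-aware, unlike atan(y/x)
    return (r, theta)

r, theta = cartesian_to_polar(3.0, -3.0)     # the QIV example from the text
print(r, math.degrees(theta))                # 4.2426..., -45.0
# For (-3, 3) in QII, atan2 gives 135 degrees, whereas atan(y/x) alone
# would again return -45 degrees and misplace the point.
print(math.degrees(math.atan2(3.0, -3.0)))   # 135.0
print(polar_to_cartesian(r, theta))          # recovers (3.0, -3.0) up to rounding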
https://en.wikipedia.org/wiki/Galvanic%20isolation
Galvanic isolation is a principle of isolating functional sections of electrical systems to prevent current flow; no direct conduction path is permitted. Energy or information can still be exchanged between the sections by other means, such as capacitive, inductive, radiative, optical, acoustic, or mechanical coupling. Galvanic isolation is used where two or more electric circuits must communicate, but their grounds may be at different potentials. It is an effective method of breaking ground loops by preventing unwanted current from flowing between two units sharing a ground conductor. Galvanic isolation is also used for safety, preventing accidental electric shocks. Methods Transformer Transformers are probably the most common means of galvanic isolation. They are almost universally used in power supplies because they are a mature technology that can carry significant power. They are also used to isolate data signals in Ethernet over twisted pair. Transformers couple by magnetic flux. Except for the autotransformer, the primary and secondary windings of a transformer are not electrically connected to each other. The voltage difference that may safely be applied between windings without risk of breakdown (the isolation voltage) is specified in kilovolts by an industry standard. The same applies to magnetic amplifiers and transductors. While transformers are usually used to step up or step down the voltages, isolation transformers with a 1:1 ratio are used mostly in safety applications while keeping the voltage the same. If two electronic systems have a common ground, they are not galvanically isolated. The common ground might not normally and intentionally have connection to functional poles, but might become connected. For this reason isolation transformers do not supply a GND/earth pole. Opto-isolator Opto-isolators transmit information by modulating light. The sender (light source) and receiver (photosensitive device) are not electrically connected. Typical
https://en.wikipedia.org/wiki/Unknotting%20problem
In mathematics, the unknotting problem is the problem of algorithmically recognizing the unknot, given some representation of a knot, e.g., a knot diagram. There are several types of unknotting algorithms. A major unresolved challenge is to determine if the problem admits a polynomial time algorithm; that is, whether the problem lies in the complexity class P. Computational complexity First steps toward determining the computational complexity were undertaken in proving that the problem is in larger complexity classes, which contain the class P. By using normal surfaces to describe the Seifert surfaces of a given knot, showed that the unknotting problem is in the complexity class NP. claimed the weaker result that unknotting is in AM ∩ co-AM; however, later they retracted this claim. In 2011, Greg Kuperberg proved that (assuming the generalized Riemann hypothesis) the unknotting problem is in co-NP, and in 2016, Marc Lackenby provided an unconditional proof of co-NP membership. The unknotting problem has the same computational complexity as testing whether an embedding of an undirected graph in Euclidean space is linkless. Unknotting algorithms Several algorithms solving the unknotting problem are based on Haken's theory of normal surfaces: Haken's algorithm uses the theory of normal surfaces to find a disk whose boundary is the knot. Haken originally used this algorithm to show that unknotting is decidable, but did not analyze its complexity in more detail. Hass, Lagarias, and Pippenger showed that the set of all normal surfaces may be represented by the integer points in a polyhedral cone and that a surface witnessing the unknottedness of a curve (if it exists) can always be found on one of the extreme rays of this cone. Therefore, vertex enumeration methods can be used to list all of the extreme rays and test whether any of them corresponds to a bounding disk of the knot. Hass, Lagarias, and Pippenger used this method to show that the unknottedness is i
https://en.wikipedia.org/wiki/Eucalyptus%20oil
Eucalyptus oil is the generic name for distilled oil from the leaf of Eucalyptus, a genus of the plant family Myrtaceae native to Australia and cultivated worldwide. Eucalyptus oil has a history of wide application, as a pharmaceutical, antiseptic, repellent, flavouring, fragrance and industrial uses. The leaves of selected Eucalyptus species are steam distilled to extract eucalyptus oil. Types and production Eucalyptus oils in the trade are categorized into three broad types according to their composition and main end-use: medicinal, perfumery and industrial. The most prevalent is the standard cineole-based "oil of eucalyptus", a colourless mobile liquid (yellow with age) with a penetrating, camphoraceous, woody-sweet scent. China produces about 75% of the world trade, but most of this is derived from the cineole fractions of camphor laurel rather than being true eucalyptus oil. Significant producers of true eucalyptus include South Africa, Portugal, Spain, Brazil, Australia, Chile, and Eswatini. Global production is dominated by Eucalyptus globulus. However, Eucalyptus kochii and Eucalyptus polybractea have the highest cineole content, ranging from 80 to 95%. The British Pharmacopoeia states that the oil must have a minimum cineole content of 70% if it is pharmaceutical grade. Rectification is used to bring lower grade oils up to the high cineole standard required. In 1991, global annual production was estimated at 3,000 tonnes for the medicinal eucalyptus oil with another 1,500 tonnes for the main perfumery oil (produced from Eucalyptus citriodora). The eucalyptus genus also produces non-cineole oils, including piperitone, phellandrene, citral, methyl cinnamate and geranyl acetate. Uses Herbal medicine The European Medicines Agency Committee on Herbal Medicinal Products concluded that traditional medicines based on eucalyptus oil can be used for treating cough associated with the common cold, and to relieve symptoms of localized muscle pain. Repellent and b
https://en.wikipedia.org/wiki/Constrained%20Shortest%20Path%20First
Constrained Shortest Path First (CSPF) is an extension of shortest path algorithms. The path computed using CSPF is a shortest path fulfilling a set of constraints. It simply means that it runs a shortest path algorithm after pruning those links that violate a given set of constraints. A constraint could be minimum bandwidth required per link (also known as a bandwidth-guaranteed constraint), end-to-end delay, maximum number of links traversed, or include/exclude nodes. CSPF is widely used in MPLS Traffic Engineering. The routing using CSPF is known as Constraint Based Routing (CBR). The path computed using CSPF could be exactly the same as that computed from OSPF and IS-IS, or it could be completely different depending on the set of constraints to be met. Example with bandwidth constraint Consider the network to the right, where a route has to be computed from router-A to router-C satisfying a bandwidth constraint of x units, and the link cost for each link is based on hop-count (i.e., 1). If x = 50 units then CSPF will give path A → B → C. If x = 55 units then CSPF will give path A → D → E → C. If x = 90 units then CSPF will give path A → D → E → F → C. In all of these cases OSPF and IS-IS will result in path A → B → C. However, if the link costs in this topology are different, CSPF may accordingly determine a different path. For example, suppose that as before, hop count is used as link cost for all links but A → B and B → C, for which the cost is 4. In this case: If x = 50 units then CSPF will give path A → D → E → C. If x = 55 units then CSPF will give path A → D → E → C. If x = 90 units then CSPF will give path A → D → E → F → C. References MPLS networking Network protocols Internet protocols Routing protocols
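A minimal sketch of the CSPF idea: prune links that violate the bandwidth constraint, then run an ordinary shortest-path search on what remains. The topology and bandwidth figures below are hypothetical stand-ins chosen to reproduce the three cases described for the figure, not the actual values from it:

import heapq

# links: (node, node, cost, available_bandwidth) -- hypothetical example topology
links = [("A", "B", 1, 50), ("B", "C", 1, 50),
         ("A", "D", 1, 100), ("D", "E", 1, 100),
         ("E", "C", 1, 60), ("E", "F", 1, 100), ("F", "C", 1, 100)]

def cspf(links, src, dst, min_bw):
    # Pruning step: keep only links meeting the bandwidth constraint.
    graph = {}
    for u, v, cost, bw in links:
        if bw >= min_bw:
            graph.setdefault(u, []).append((v, cost))
            graph.setdefault(v, []).append((u, cost))
    # Ordinary Dijkstra on the pruned graph.
    heap, seen = [(0, src, [src])], set()
    while heap:
        dist, node, path = heapq.heappop(heap)
        if node == dst:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (dist + cost, nxt, path + [nxt]))
    return None

print(cspf(links, "A", "C", 50))   # (2, ['A', 'B', 'C'])
print(cspf(links, "A", "C", 55))   # (3, ['A', 'D', 'E', 'C'])
print(cspf(links, "A", "C", 90))   # (4, ['A', 'D', 'E', 'F', 'C'])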
https://en.wikipedia.org/wiki/John%20C.%20Trautwine
John Cresson Trautwine (March 30, 1810, Philadelphia, Pennsylvania – September 14, 1883, Philadelphia) was an American civil engineer, architect, and engineering writer. A consultant on numerous canal projects in North and South America, he was later remembered for reporting in 1852 that a canal through Panama would be impossible. Career Trautwine began studying civil engineering in the office of William Strickland, an architect and early railroad civil engineer, and helped erect the second building of the United States Mint in Philadelphia. In 1831, he became a civil engineer with the Columbia Railway. In 1835, under Strickland's direction, he drew one of the earliest maps of Maryland: a proposed route for the Wilmington and Susquehanna Railroad from Wilmington, Delaware, to North East, Maryland. In 1836, he became an engineer with the Philadelphia and Trenton Railroad. From 1836 to 1842, he was an engineer with the Hiawassee Railway, which connected Georgia and Tennessee. In 1835, Trautwine designed Pennsylvania Hall, the first building erected for Gettysburg College. A "temple-style edifice with four columns in the portico", it was, as of 1958, the only building he was known to have designed. In 1838, Trautwine once again worked under Strickland, as assistant engineer for the W&S, which had merged with three other railroads to create the first rail link from Philadelphia to Baltimore. (This main line survives today as part of Amtrak's Northeast Corridor.) His service is noted on the 1839 Newkirk Viaduct Monument in Philadelphia. In 1844, Trautwine was elected as a member to the American Philosophical Society. He later executed surveys for the Panama Railway in 1850, for the Lackawanna and Lanesborough Railway in Susquehanna County, Pa., in 1856, and for a railway route across Honduras in 1857. With George Totten, he built the Canal del Dique between the Bay of Cartagena and the Magdalena River in Colombia. He also planned a system of docks for the city of
https://en.wikipedia.org/wiki/Normal%20cone
In algebraic geometry, the normal cone of a subscheme of a scheme is a scheme analogous to the normal bundle or tubular neighborhood in differential geometry. Definition The normal cone or of an embedding , defined by some sheaf of ideals I is defined as the relative Spec When the embedding i is regular the normal cone is the normal bundle, the vector bundle on X corresponding to the dual of the sheaf . If X is a point, then the normal cone and the normal bundle to it are also called the tangent cone and the tangent space (Zariski tangent space) to the point. When Y = Spec R is affine, the definition means that the normal cone to X = Spec R/I is the Spec of the associated graded ring of R with respect to I. If Y is the product X × X and the embedding i is the diagonal embedding, then the normal bundle to X in Y is the tangent bundle to X. The normal cone (or rather its projective cousin) appears as a result of blow-up. Precisely, let be the blow-up of Y along X. Then, by definition, the exceptional divisor is the pre-image ; which is the projective cone of . Thus, The global sections of the normal bundle classify embedded infinitesimal deformations of Y in X; there is a natural bijection between the set of closed subschemes of , flat over the ring D of dual numbers and having X as the special fiber, and H0(X, NX Y). Properties Compositions of regular embeddings If are regular embeddings, then is a regular embedding and there is a natural exact sequence of vector bundles on X: If are regular embeddings of codimensions and if is a regular embedding of codimension then In particular, if is a smooth morphism, then the normal bundle to the diagonal embedding (r-fold) is the direct sum of copies of the relative tangent bundle . If is a closed immersion and if is a flat morphism such that , then If is a smooth morphism and is a regular embedding, then there is a natural exact sequence of vector bundles on X: (which is a special case of an exa
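The inline formulas in the definition above did not survive extraction. In the usual notation (as in Fulton's Intersection Theory), for a closed embedding i: X ↪ Y cut out by the ideal sheaf I, the objects being described are

```latex
C_{X/Y} \;=\; \operatorname{Spec}_X\!\Big(\bigoplus_{n \ge 0} I^{n}/I^{n+1}\Big),
\qquad
N_{X/Y} \;=\; \operatorname{Spec}_X\!\big(\operatorname{Sym}\,(I/I^{2})\big),
```

so that when i is a regular embedding the normal cone coincides with the normal bundle, the vector bundle on X corresponding to the dual of the conormal sheaf I/I².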
https://en.wikipedia.org/wiki/Leading%20Edge%20Products
Leading Edge Products, Inc., was a computer manufacturer in the 1980s and the 1990s. It was based in Canton, Massachusetts. History Leading Edge was founded in 1980 by Thomas Shane and Michael Shane. At the outset, they were a PC peripherals company selling aftermarket products such as Elephant Memory Systems brand floppy disk media ("Elephant. Never forgets") and printer ribbons, and acting as the sole North American distributor/reseller of printers from the Japanese manufacturer C. Itoh, the most memorable being the popular low-end dot-matrix printer, "The Gorilla Banana". In 1984 the company sold the computer aftermarket product line and sales division to Dennison Computer Supplies, a division of Dennison Manufacturing. In 1984, they began to use Daewoo parts, and in 1989, they were acquired by Daewoo, as part of their recovery from Chapter 11 bankruptcy. (Shane declared that the costs of a legal dispute with Mitsubishi led to its bankruptcy). In January 1990, Daewoo hired Al Agbay, a veteran executive from Panasonic to lead the company out of Chapter 11 Bankruptcy. In the three years that followed, Agbay and his executive team repaid dealers approximately $16 million and increased annual revenues to over $250 million before a contract dispute severed Agbay and Daewoo's relationship. In October, 1995, Daewoo sold the company to Manuhold Investment AG, a Swiss electronics company. Leading Edge had sold 185,000 of its PC clones in the United States in 1994, but in 1995 sales fell from 90,000 in the first half to almost none in the second half. By 1997 the company was defunct. Products Hardware The first known computer to be produced by Leading Edge is the Model M, released in 1982. By 1986 it sold for $1695 (US) with a monitor and two floppy drives. It used an Intel 8088-2 processor, running at a maximum of 7.16 MHz on an 8 bit bus, compared to 6 MHz for the IBM PC-AT on a 16 bit bus. The 'M' stands for Mitsubishi, their parts provider. They began produci
https://en.wikipedia.org/wiki/Coarse%20structure
In the mathematical fields of geometry and topology, a coarse structure on a set X is a collection of subsets of the cartesian product X × X with certain properties which allow the large-scale structure of metric spaces and topological spaces to be defined. The concern of traditional geometry and topology is with the small-scale structure of the space: properties such as the continuity of a function depend on whether the inverse images of small open sets, or neighborhoods, are themselves open. Large-scale properties of a space—such as boundedness, or the degrees of freedom of the space—do not depend on such features. Coarse geometry and coarse topology provide tools for measuring the large-scale properties of a space, and just as a metric or a topology contains information on the small-scale structure of a space, a coarse structure contains information on its large-scale properties. Properly, a coarse structure is not the large-scale analog of a topological structure, but of a uniform structure. Definition A on a set is a collection of subsets of (therefore falling under the more general categorization of binary relations on ) called , and so that possesses the identity relation, is closed under taking subsets, inverses, and finite unions, and is closed under composition of relations. Explicitly: Identity/diagonal: The diagonal is a member of —the identity relation. Closed under taking subsets: If and then Closed under taking inverses: If then the inverse (or transpose) is a member of —the inverse relation. Closed under taking unions: If then their union is a member of Closed under composition: If then their product is a member of —the composition of relations. A set endowed with a coarse structure is a . For a subset of the set is defined as We define the of by to be the set also denoted The symbol denotes the set These are forms of projections. A subset of is said to be a if is a controlled set. Intuition Th
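The set symbols in the definition above were lost in extraction; written out, the axioms that the prose lists for a collection 𝓔 of controlled sets on X are usually stated as follows.

```latex
\begin{aligned}
&\text{(identity)}    && \Delta_X = \{(x,x) : x \in X\} \in \mathcal{E},\\
&\text{(subsets)}     && E \in \mathcal{E},\ F \subseteq E \;\Rightarrow\; F \in \mathcal{E},\\
&\text{(inverses)}    && E \in \mathcal{E} \;\Rightarrow\; E^{-1} = \{(y,x) : (x,y) \in E\} \in \mathcal{E},\\
&\text{(unions)}      && E, F \in \mathcal{E} \;\Rightarrow\; E \cup F \in \mathcal{E},\\
&\text{(composition)} && E, F \in \mathcal{E} \;\Rightarrow\; E \circ F = \{(x,z) : \exists\, y\ (x,y) \in E,\ (y,z) \in F\} \in \mathcal{E}.
\end{aligned}
```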
https://en.wikipedia.org/wiki/Leading%20Edge%20Model%20D
The Leading Edge Model D is an IBM clone first released by Leading Edge Hardware in July 1985. It was initially priced at $1,495 and configured with dual 5.25" floppy drives, 256 KB of RAM, and a monochrome monitor. It was manufactured by South Korean conglomerate Daewoo and distributed by Canton, Massachusetts-based Leading Edge. Engineer Stephen Kahng spent about four months designing the Model D at a cost of $200,000. Kahng later became CEO of Macintosh clone maker Power Computing. In August 1986, Leading Edge cut the price of the base model by $200, to $1,295, and increased the base memory of the machine to 512 KB. The Model D was an immediate success, selling 100,000 units in its first year of production. It sold well for several years, until a dispute with dealers forced Leading Edge into bankruptcy in 1989. Hardware The Model D initially featured an Intel 8088 microprocessor at 4.77 MHz, although later models had a switch in the back to run at 4.77 MHz (normal) or 7.16 MHz (high). Earlier models have no turbo switch and run only at 4.77 MHz, while a few of the later ones (seemingly very rare) are 7.16 MHz only. Four models are known: DC-2010, DC-2011, DC-2010E, and DC-2011E. The "E" seems to correlate with the capability of running at 7.16 MHz. The addition of the Intel 8087 floating point unit (FPU) coprocessor is supported in all Leading Edge Model D revisions with an onboard 40-pin DIP socket. Unlike the IBM PC and IBM PC/XT, the Model D integrates video, the disk controller, a battery backed clock (real-time clock or RTC), serial, and parallel ports directly onto the motherboard rather than putting them on plug-in cards. This allows the Model D to be half the size of the IBM PC, with four free ISA expansion slots compared to the PC's one slot after installing necessary cards. The motherboard came in eight different revisions: Revision 1, 5, 7, 8, CC1, CC2, WC1, and WC2. Revisions 1 through 7 are usually found in models DC-2010 and DC-2011, with revi
https://en.wikipedia.org/wiki/Transcytosis
Transcytosis (also known as cytopempsis) is a type of transcellular transport in which various macromolecules are transported across the interior of a cell. Macromolecules are captured in vesicles on one side of the cell, drawn across the cell, and ejected on the other side. Examples of macromolecules transported include IgA, transferrin, and insulin. While transcytosis is most commonly observed in epithelial cells, the process is also present elsewhere. Blood capillaries are a well-known site for transcytosis, though it occurs in other cells, including neurons, osteoclasts and M cells of the intestine. Regulation The regulation of transcytosis varies greatly due to the many different tissues in which this process is observed. Various tissue-specific mechanisms of transcytosis have been identified. Brefeldin A, a commonly used inhibitor of ER-to-Golgi apparatus transport, has been shown to inhibit transcytosis in dog kidney cells, which provided the first clues as to the nature of transcytosis regulation. Transcytosis in dog kidney cells has also been shown to be regulated at the apical membrane by Rab17, as well as Rab11a and Rab25. Further work on dog kidney cells has shown that a signaling cascade involving the phosphorylation of EGFR by Yes, leading to the activation of Rab11FIP5 by MAPK1, upregulates transcytosis. Transcytosis has been shown to be inhibited by the combination of progesterone and estradiol followed by activation mediated by prolactin in the rabbit mammary gland during pregnancy. In the thyroid, follicular cell transcytosis is regulated positively by TSH. The phosphorylation of caveolin 1 induced by hydrogen peroxide has been shown to be critical to the activation of transcytosis in pulmonary vascular tissue. It can therefore be concluded that the regulation of transcytosis is a complex process that varies between tissues. Role in pathogenesis Due to the function of transcytosis as a process that transports macromolecules across cells, it can be a
https://en.wikipedia.org/wiki/Supersampling
Supersampling or supersampling anti-aliasing (SSAA) is a spatial anti-aliasing method, i.e. a method used to remove aliasing (jagged and pixelated edges, colloquially known as "jaggies") from images rendered in computer games or other computer programs that generate imagery. Aliasing occurs because unlike real-world objects, which have continuous smooth curves and lines, a computer screen shows the viewer a large number of small squares. These pixels all have the same size, and each one has a single color. A line can only be shown as a collection of pixels, and therefore appears jagged unless it is perfectly horizontal or vertical. The aim of supersampling is to reduce this effect. Color samples are taken at several instances inside the pixel (not just at the center as normal), and an average color value is calculated. This is achieved by rendering the image at a much higher resolution than the one being displayed, then shrinking it to the desired size, using the extra pixels for calculation. The result is a downsampled image with smoother transitions from one line of pixels to another along the edges of objects. The number of samples determines the quality of the output. Motivation Aliasing is manifested in the case of 2D images as moiré pattern and pixelated edges, colloquially known as "jaggies". Common signal processing and image processing knowledge suggests that to achieve perfect elimination of aliasing, proper spatial sampling at the Nyquist rate (or higher) after applying a 2D Anti-aliasing filter is required. As this approach would require a forward and inverse fourier transformation, computationally less demanding approximations like supersampling were developed to avoid domain switches by staying in the spatial domain ("image domain"). Method Computational cost and adaptive supersampling Supersampling is computationally expensive because it requires much greater video card memory and memory bandwidth, since the amount of buffer used is several time
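The "render larger, then average down" procedure can be sketched as follows (a toy grid-supersampling example in Python; `render_pixel` is a stand-in for whatever renderer produces a colour for a sample position and is an assumption of this sketch, not part of any particular API).

```python
def supersample(render_pixel, width, height, factor=4):
    """Grid supersampling sketch: render at `factor` times the target resolution,
    then average each factor-by-factor block down to one output pixel.

    `render_pixel(x, y)` returns an (r, g, b) tuple for a sample position given
    in high-resolution pixel coordinates.
    """
    hi_w, hi_h = width * factor, height * factor
    # Render the oversized image (in practice this is the expensive step).
    hi_res = [[render_pixel(x, y) for x in range(hi_w)] for y in range(hi_h)]

    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Average the factor*factor colour samples covering this output pixel.
            r = g = b = 0
            for sy in range(factor):
                for sx in range(factor):
                    pr, pg, pb = hi_res[y * factor + sy][x * factor + sx]
                    r, g, b = r + pr, g + pg, b + pb
            n = factor * factor
            row.append((r // n, g // n, b // n))
        image.append(row)
    return image
```

The quadratic growth of `hi_res` with `factor` is exactly the memory and bandwidth cost discussed below, which is what motivates adaptive supersampling.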
https://en.wikipedia.org/wiki/NSA%20Suite%20A%20Cryptography
NSA Suite A Cryptography is NSA cryptography which "contains classified algorithms that will not be released." "Suite A will be used for the protection of some categories of especially sensitive information (a small percentage of the overall national security-related information assurance market)." Incomplete list of Suite A algorithms: ACCORDION BATON CDL 1 CDL 2 FFC FIREFLY JOSEKI KEESEE MAYFLY MEDLEY MERCATOR SAVILLE SHILLELAGH WALBURN WEASEL See also Commercial National Security Algorithm Suite NSA Suite B Cryptography References General NSA Suite B Cryptography / Cryptographic Interoperability Cryptography standards National Security Agency cryptography Standards of the United States
https://en.wikipedia.org/wiki/Otis%20King
Otis Carter Formby King (1876–1944) was an electrical engineer in London who invented and produced a cylindrical slide rule with helical scales, primarily for business uses initially. The product was named Otis King's Patent Calculator, and was manufactured and sold by Carbic Ltd. in London from about 1922 to about 1972. With a log-scale decade length of 66 inches, the Otis King calculator should be about a full digit more accurate than a 6-inch pocket slide rule. But due to inaccuracies in tic-mark placement, some portions of its scales will read off by more than they should. For example, a reading of 4.630 might represent an answer of 4.632, or almost one part in 2000 error, when it should be accurate to one part in 6000 (66"/6000 = 0.011" estimated interpolation accuracy). The Geniac brand cylindrical slide rule sold by Oliver Garfield Company in New York was initially a relabelled Otis King; Garfield later made his own, probably unauthorized version of the Otis King (around 1959). The UK patents covering the mechanical device(s) would have expired in about 1941–1942 (i.e. 20 years after filing of the patent) but copyright in the drawings would typically only expire 70 years after the author's death. Patents UK patent GB 207,762 (1922) UK patent GB 183,723 (1921) UK patent GB 207,856 (1922) US patent US 1,645,009 (1923) Canadian patent CA 241986 Canadian patent CA 241076 French patent FR569985 French patent FR576616 German patent DE 418814 See also Bygrave slide rule Fuller's cylindrical slide rule External links Dick Lyon's Otis King pages Cylindrical rules at Museum of HP Calculators References 1876 births 1944 deaths English inventors Mechanical calculators Logarithms English inventions Analog computers
https://en.wikipedia.org/wiki/Smart%20mine
Smart mine refers to a number of next-generation land mine designs being developed by military forces around the world. Many are designed to self-destruct or self-deactivate at the end of a conflict or a preset period of time. Others are so-called "self-healing" minefields which can detect when a gap in the field has been created and will direct its mines to reposition themselves to eliminate that gap, making it much more difficult and dangerous to create a safe path through the minefield. The development of smart mines began as a response to the International Campaign to Ban Landmines as a way of reducing non-combatant and civilian injury. Critics claim that new technology is unreliable, and that the perception of a "safe mine" will lead to increased deployment of land mines in future conflicts. Current guidelines allow for a 10% failure rate, leaving a significant number of mines to pose a threat. Additionally, in the case of self-destructing mines, civilians still are at risk of injury when the mine self-destructs and are denied access to land which has been mined. As mines are inherently non-discriminate weapons, even smart mines may injure civilians during a time of war. Many critics believe this to be unacceptable under international law. The human security paradigm is outspoken on the issue of the reduction in the use of land mines due to the extremely individual nature of their impact - a facet that is ignored by traditional security concerns which focus on military and state level security issues. References See also Land mine Smart devices Land mines
https://en.wikipedia.org/wiki/Voltinism
Voltinism is a term used in biology to indicate the number of broods or generations of an organism in a year. The term is most often applied to insects, and is particularly in use in sericulture, where silkworm varieties vary in their voltinism. Univoltine (monovoltine) – (adjective) referring to organisms having one brood or generation per year Bivoltine (divoltine) – (adjective) referring to organisms having two broods or generations per year Trivoltine – (adjective) referring to organisms having three broods or generations per year Multivoltine (polyvoltine) – (adjective) referring to organisms having more than two broods or generations per year Semivoltine – There are two meanings: (biology) Less than univoltine; having a brood or generation less often than once per year or (adjective) referring to organisms whose generation time is more than one year. Examples The speckled wood butterfly is univoltine in the northern part of its range, e.g. northern Scandinavia. Adults emerge in late spring, mate, and die shortly after laying eggs; their offspring will grow until pupation, enter diapause in anticipation of the winter, and emerge as adults the following year – thus resulting in a single generation of butterflies per year. In southern Scandinavia, the same species is bivoltine – here, the offspring of spring-emerging adults will develop directly into adults during the summer, mate, and die. Their offspring in turn constitute a second generation, which is the generation that will enter winter diapause and emerge as adults (and mate) in the spring of the following year. This results in a pattern of one short-lived generation (c. 2–3 months) that breeds during the summer, and one long-lived generation (c. 9–10 months) that diapauses through the winter and breeds in the spring. The Rocky Mountain parnassian and the High brown fritillary are more examples of univoltine butterfly species. The bee species Macrotera portalis is bivoltine, and is estimated to ha
https://en.wikipedia.org/wiki/Tandy%20Video%20Information%20System
The Tandy Memorex Video Information System (VIS) is an interactive, multimedia CD-ROM player produced by the Tandy Corporation starting in 1992. It is similar in function to the Philips CD-i and Commodore CDTV systems (particularly the CDTV, since both the VIS and CDTV were adaptations of existing computer platforms and operating systems to the set-top-box design). The VIS systems were sold only at Radio Shack, under the Memorex brand, both of which Tandy owned at the time. Modular Windows Modular Windows is a special version of Microsoft Windows 3.1, designed to run on the Tandy Video Information System. Microsoft intended Modular Windows to be an embedded operating system for various devices, especially those designed to be connected to televisions. However, the VIS is the only known product that actually used this Windows version. It has been claimed that Microsoft created a new, incompatible version of Modular Windows ("1.1") shortly after the VIS shipped. No products are known to have actually used Modular Windows 1.1. Reception The VIS was not a successful product; by some reports Radio Shack only sold 11,000 units during the lifetime of the product. Radio Shack store employees jokingly referred to the VIS as "Virtually Impossible to Sell". Tandy discontinued the product in early 1994 and all remaining units were sold to a liquidator. Spinoffs While Modular Windows was discontinued, other modular, embedded versions of Windows were later released. These include Windows CE and Windows XP Embedded. VIS applications could be written using tools and techniques similar to those used to write software for IBM PC compatible personal computers running Microsoft Windows. This concept was carried forward in the Microsoft Xbox. Specifications Details of the system include: CPU: Intel 286 Video System: Cirrus Logic Sound System: Yamaha Chipset: NCR Corporation CDROM ×2 IDE by Mitsumi OS: Microsoft Modular Windows Additional details: Intel 80286 processo
https://en.wikipedia.org/wiki/Mason%27s%20gain%20formula
Mason's gain formula (MGF) is a method for finding the transfer function of a linear signal-flow graph (SFG). The formula was derived by Samuel Jefferson Mason, whom it is also named after. MGF is an alternate method to finding the transfer function algebraically by labeling each signal, writing down the equation for how that signal depends on other signals, and then solving the multiple equations for the output signal in terms of the input signal. MGF provides a step by step method to obtain the transfer function from a SFG. Often, MGF can be determined by inspection of the SFG. The method can easily handle SFGs with many variables and loops including loops with inner loops. MGF comes up often in the context of control systems, microwave circuits and digital filters because these are often represented by SFGs. Formula The gain formula is as follows: where: Δ = the determinant of the graph. yin = input-node variable yout = output-node variable G = complete gain between yin and yout N = total number of forward paths between yin and yout Gk = path gain of the kth forward path between yin and yout Li = loop gain of each closed loop in the system LiLj = product of the loop gains of any two non-touching loops (no common nodes) LiLjLk = product of the loop gains of any three pairwise nontouching loops Δk = the cofactor value of Δ for the kth forward path, with the loops touching the kth forward path removed. Definitions Path: a continuous set of branches traversed in the direction that they indicate. Forward path: A path from an input node to an output node in which no node is touched more than once. Loop: A path that originates and ends on the same node in which no node is touched more than once. Path gain: the product of the gains of all the branches in the path. Loop gain: the product of the gains of all the branches in the loop. Procedure to find the solution Make a list of all forward paths, and their gains, and label these Gk. Make a list of all
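The displayed equation did not survive extraction; with the symbols defined in the list above, Mason's rule is conventionally written as

```latex
G \;=\; \frac{y_{\text{out}}}{y_{\text{in}}}
  \;=\; \frac{\displaystyle\sum_{k=1}^{N} G_k\, \Delta_k}{\Delta},
\qquad
\Delta \;=\; 1 \;-\; \sum_i L_i \;+\; \sum_{i,j} L_i L_j \;-\; \sum_{i,j,k} L_i L_j L_k \;+\; \cdots,
```

where the second, third, and later sums run only over pairs, triples, and so on of mutually non-touching loops, as described above.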
https://en.wikipedia.org/wiki/Retract%20%28group%20theory%29
In mathematics, in the field of group theory, a subgroup of a group is termed a retract if there is an endomorphism of the group that maps surjectively to the subgroup and is the identity on the subgroup. In symbols, is a retract of if and only if there is an endomorphism such that for all and for all . The endomorphism is an idempotent element in the transformation monoid of endomorphisms, so it is called an idempotent endomorphism or a retraction. The following is known about retracts: A subgroup is a retract if and only if it has a normal complement. The normal complement, specifically, is the kernel of the retraction. Every direct factor is a retract. Conversely, any retract which is a normal subgroup is a direct factor. Every retract has the congruence extension property. Every regular factor, and in particular, every free factor, is a retract. See also Retraction (category theory) Retraction (topology) References Group theory Subgroup properties
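Written symbolically (the symbols were stripped in extraction), the definition in the first paragraph reads: a subgroup H of G is a retract exactly when

```latex
\exists\, \sigma \in \operatorname{End}(G) \ \text{such that}\quad
\sigma(g) \in H \ \ \forall g \in G
\qquad\text{and}\qquad
\sigma(h) = h \ \ \forall h \in H.
```

The kernel of such an idempotent endomorphism σ is the normal complement mentioned above, so G decomposes as the semidirect product ker σ ⋊ H.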
https://en.wikipedia.org/wiki/Norm%20%28group%29
In mathematics, in the field of group theory, the norm of a group is the intersection of the normalizers of all its subgroups. This is also termed the Baer norm, after Reinhold Baer. The following facts are true for the Baer norm: It is a characteristic subgroup. It contains the center of the group. It is contained inside the second term of the upper central series. It is a Dedekind group, so is either abelian or has a direct factor isomorphic to the quaternion group. If it contains an element of infinite order, then it is equal to the center of the group. References Group theory Functional subgroups
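In symbols, the definition in the first sentence is

```latex
\operatorname{Norm}(G) \;=\; \bigcap_{H \le G} N_G(H),
```

the intersection being taken over all subgroups H of G, with N_G(H) the normalizer of H in G.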
https://en.wikipedia.org/wiki/Innovations%20in%20Systems%20and%20Software%20Engineering
Innovations in Systems and Software Engineering: A NASA Journal is a peer-reviewed scientific journal of computer science covering systems and software engineering, including formal methods. It is published by Springer Science+Business Media on behalf of NASA. The editors-in-chief are Michael Hinchey (University of Limerick) and Shawn Bohner (Rose-Hulman Institute of Technology). Abstracting and indexing The journal is abstracted and indexed in: References External links Academic journals established in 2005 Computer science journals Systems engineering Software engineering publications Springer Science+Business Media academic journals Formal methods publications Quarterly journals NASA mass media Hybrid open access journals
https://en.wikipedia.org/wiki/Photoprotection
Photoprotection is the biochemical process that helps organisms cope with molecular damage caused by sunlight. Plants and other oxygenic phototrophs have developed a suite of photoprotective mechanisms to prevent photoinhibition and oxidative stress caused by excess or fluctuating light conditions. Humans and other animals have also developed photoprotective mechanisms to avoid UV photodamage to the skin, prevent DNA damage, and minimize the downstream effects of oxidative stress. In photosynthetic organisms In organisms that perform oxygenic photosynthesis, excess light may lead to photoinhibition, or photoinactivation of the reaction centers, a process that does not necessarily involve chemical damage. When photosynthetic antenna pigments such as chlorophyll are excited by light absorption, unproductive reactions may occur by charge transfer to molecules with unpaired electrons. Because oxygenic phototrophs generate O2 as a byproduct from the photocatalyzed splitting of water (H2O), photosynthetic organisms have a particular risk of forming reactive oxygen species. Therefore, a diverse suite of mechanisms has developed in photosynthetic organisms to mitigate these potential threats, which become exacerbated under high irradiance, fluctuating light conditions, in adverse environmental conditions such as cold or drought, and while experiencing nutrient deficiencies which cause an imbalance between energetic sinks and sources. In eukaryotic phototrophs, these mechanisms include non-photochemical quenching mechanisms such as the xanthophyll cycle, biochemical pathways which serve as "relief valves", structural rearrangements of the complexes in the photosynthetic apparatus, and use of antioxidant molecules. Higher plants sometimes employ strategies such as reorientation of leaf axes to minimize incident light striking the surface. Mechanisms may also act on a longer time-scale, such as up-regulation of stress response proteins or down-regulation of pigment
https://en.wikipedia.org/wiki/Transformer%20effect
The transformer effect, or mutual induction, is one of the two processes by which an electromotive force (e.m.f.) can be induced, the other being the relative motion of a conductor within a magnetic field. In a transformer, a changing electric current in the primary coil creates a changing magnetic field, and this changing field induces an e.m.f. (and hence a current, if the circuit is closed) in the secondary coil. In both cases the induced e.m.f. arises from a rate of change of magnetic flux, dΦ/dt, as described by Faraday's law of electromagnetic induction and refined by Lenz's law. Electrodynamics
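In symbols, Faraday's law applied to the secondary coil gives

```latex
\varepsilon_2 \;=\; -\,N_2\,\frac{d\Phi}{dt} \;=\; -\,M\,\frac{dI_1}{dt},
```

where N₂ is the number of turns on the secondary coil, Φ is the magnetic flux linking one turn, I₁ is the primary current, and M is the mutual inductance of the pair of coils; the minus sign expresses Lenz's law.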
https://en.wikipedia.org/wiki/Offline%20reader
An offline reader (sometimes called an offline browser or offline navigator) is computer software that downloads e-mail, newsgroup posts or web pages, making them available when the computer is offline: not connected to a server. Offline readers are useful for portable computers and dial-up access. Variations Website-mirroring software Website mirroring software is software that allows for the download of a copy of an entire website to the local hard disk for offline browsing. In effect, the downloaded copy serves as a mirror of the original site. Web crawler software such as Wget can be used to generate a site mirror. Offline mail and news readers Offline mail readers are computer programs that allow users to read electronic mail or other messages (for example, those on bulletin board systems) with a minimum of connection time to the server storing the messages. BBS servers accomplished this by packaging up multiple messages into a compressed file, e.g., a QWK packet, for the user to download using, e.g., Xmodem, Ymodem, Zmodem, and then disconnect. The user reads and replies to the messages locally and packages up and uploads any replies or new messages back to the server upon the next connection. Internet mail servers using POP3 or IMAP4 send the messages uncompressed as part of the protocol, and outbound messages using SMTP are also uncompressed. Offline news readers using NNTP are similar, but the messages are organized into news groups. Most e-mail protocols, like the common POP3 and IMAP4 used for internet mail, need be on-line only during message transfer; the same applies to the NNTP protocol used by Usenet (Network news). Most end-user mailers, such as Outlook Express and AOL, can be used offline even if they are mainly intended to be used online, but some mailers such as Juno are mainly intended to be used offline. Off-line mail readers are generally considered to be those systems that did not originally offer such functionality, notably on bulle
https://en.wikipedia.org/wiki/World%20Games%20%28video%20game%29
World Games is a sports video game developed by Epyx for the Commodore 64 in 1986. Versions for the Apple IIGS, Amstrad CPC, ZX Spectrum, Master System and other contemporary systems were also released. The NES version was released by Milton Bradley, and ported by Software Creations on behalf of producer Rare. The game is a continuation of the Epyx sports line that includes Summer Games and Winter Games. World Games was made available in Europe for the Wii virtual console on April 25, 2008. Events The events available vary slightly depending on the platform, and may include: Weightlifting (Soviet Union) Slalom skiing (France) Log rolling (Canada) Cliff diving (Mexico) Caber toss (Scotland) Bull riding (United States) Barrel jumping (Germany) Sumo Wrestling (Japan) The game allows the player to compete in all of the events sequentially, choose a few events, choose just one event, or practice an event. Reception Writing for Info, Benn Dunnington gave the Commodore 64 version of World Games three-plus stars out of five and described it as "my least favorite of the series". Stating that slalom skiing was the best event, he concluded that "Epyx does such a nice, consistent job of execution, tho, that it's hard to take off too many points even for such boring material". Computer Gaming Worlds Rick Teverbaugh criticized the slalom skiing and log rolling events' difficulty, but concluded that "World Games is still a must for the avid sports games". Charles Ardai called the game "an adequate sequel" to Epyx's previous Games, and praised the graphics. He criticized the mechanics "as bizarre little joystick patterns which have little to do with the events" but still recommended the game because of the log rolling event. Jame Trunzo praised the game's use of advanced graphics and sound, including humorous effects. Also noted was the variety in the included games, preventing the game from getting too repetitive. The game was reviewed in 1988 in Dragon #132 by Har
https://en.wikipedia.org/wiki/Ethernet%20over%20SDH
Ethernet Over SDH (EoS or EoSDH) or Ethernet over SONET refers to a set of protocols which allow Ethernet traffic to be carried over synchronous digital hierarchy networks in an efficient and flexible way. The same functions are available using SONET. Ethernet frames which are to be sent on the SDH link are sent through an "encapsulation" block (typically Generic Framing Procedure or GFP) to create a synchronous stream of data from the asynchronous Ethernet packets. The synchronous stream of encapsulated data is then passed through a mapping block which typically uses virtual concatenation (VCAT) to route the stream of bits over one or more SDH paths. As this is byte interleaved, it provides a better level of security compared to other mechanisms for Ethernet transport. After traversing the SDH paths, the traffic is processed in the reverse fashion: virtual concatenation path processing to recreate the original synchronous byte stream, followed by decapsulation to convert the synchronous data stream back into an asynchronous stream of Ethernet frames. The SDH paths may be VC-4, VC-3, VC-12 or VC-11 paths. Up to 64 VC-11 or VC-12 paths can be concatenated together to form a single larger virtually concatenated group. Up to 256 VC-3 or VC-4 paths can be concatenated together to form a single larger virtually concatenated group. The paths within a group are referred to as "members". A virtually concatenated group is typically referred to by the notation VC-n-Xv, where VC-n is VC-4, VC-3, VC-12 or VC-11 and X is the number of members in the group. A 10-Mbit/s Ethernet link is often transported over a VC-12-5v group, which allows the full bandwidth to be carried for all packet sizes. A 100-Mbit/s Ethernet link is often transported over a VC-3-2v group, which allows the full bandwidth to be carried when smaller packets are used (< 250 bytes), while Ethernet flow control restricts the rate of traffic for larger packets; this, however, gives only about 97 Mbit/s, not the full 100 Mbit/s. A 1000-Mbit/s (or 1 GigE) Ether
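As a rough illustration of why those group sizes are chosen, the sketch below multiplies approximate SDH container payload rates by the number of members. The rates are the commonly quoted container capacities and are assumptions of this sketch, not figures taken from the text above.

```python
# Approximate SDH container payload rates in Mbit/s (assumed, commonly quoted values).
PAYLOAD_MBPS = {
    "VC-11": 1.600,
    "VC-12": 2.176,
    "VC-3": 48.384,
    "VC-4": 149.76,
}

def vcat_capacity(container: str, members: int) -> float:
    """Raw payload of a virtually concatenated group such as VC-12-5v."""
    return PAYLOAD_MBPS[container] * members

print(vcat_capacity("VC-12", 5))  # ~10.9 Mbit/s, enough for 10 Mbit/s Ethernet
print(vcat_capacity("VC-3", 2))   # ~96.8 Mbit/s, the "about 97 Mbit/s" figure
print(vcat_capacity("VC-4", 7))   # ~1048 Mbit/s, a common choice for Gigabit Ethernet
```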
https://en.wikipedia.org/wiki/AP%20Biology
Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices". This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions. Topic outline The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area: The course is based on and tests six skills, called scientific practices which include: In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions. Exam Students are allowed to use a four-function, scientific, or graphing calculator. The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score. Score distribution Commonly used textbooks Biology, AP Edition by Sylvia Mader (2012, hardcover ) Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, ) Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson ) See also Glossary of biology A.P Bio (TV Show) References External links AP Biology at CollegeBoard.com AP Biology Teacher Community at CollegeBoard.org Advanced Placement Biology education Standardized tests zh:大学
https://en.wikipedia.org/wiki/LMHOSTS
The LMHOSTS (LAN Manager Hosts) file is used to enable Domain Name Resolution under Windows when other methods, such as WINS, fail. It is used in conjunction with workgroups and domains. If you are looking for a simple, general mechanism for the local specification of IP addresses for specific hostnames (server names), use the HOSTS file, not the LMHOSTS file. The file, if it exists, is read as the LMHOSTS setting file. A sample file () is provided. It contains documentation for manually configuring the file. File locations Windows 95, 98, Millennium Edition The file is located in , and a sample file () is installed here. Note that is an environment variable pointing to the Windows installation directory, usually . Windows NT 4.0, Windows 2000, Windows XP, Vista, 7, 8, 10, Windows Server 2003, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, Windows Server 2016+ The file is located in , and a sample file () is installed here. Note that is an environment variable pointing to the Windows installation directory, usually . See also HOSTS file NetBIOS External links Domain Browsing with TCP/IP and LMHOSTS Files LMHOSTS File Information and Predefined Keywords Microsoft knowledgebase article Using LMHOSTS Files on Windows NT Windows communication and services Configuration files
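For illustration, a small LMHOSTS file might look like the following (the addresses, names and domain are hypothetical; #PRE preloads an entry into the NetBIOS name cache and #DOM associates it with a domain, as documented in the sample file).

```
102.54.94.97    accounting    #PRE  #DOM:networking    # primary domain controller
192.168.1.25    fileserver    #PRE                     # preloaded file server entry
192.168.1.30    printsrv                               # resolved only when needed
```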
https://en.wikipedia.org/wiki/AP%20Statistics
Advanced Placement (AP) Statistics (also known as AP Stats) is a college-level high school statistics course offered in the United States through the College Board's Advanced Placement program. This course is equivalent to a one semester, non-calculus-based introductory college statistics course and is normally offered to sophomores, juniors and seniors in high school. One of the College Board's more recent additions, the AP Statistics exam was first administered in May 1996 to supplement the AP program's math offerings, which had previously consisted of only AP Calculus AB and BC. In the United States, enrollment in AP Statistics classes has increased at a higher rate than in any other AP class. Students may receive college credit or upper-level college course placement upon passing the three-hour exam ordinarily administered in May. The exam consists of a multiple-choice section and a free-response section that are both 90 minutes long. Each section is weighted equally in determining the students' composite scores. History The Advanced Placement program has offered students the opportunity to pursue college-level courses while in high school. Along with the Educational Testing Service, the College Board administered the first AP Statistics exam in May 1997. The course was first taught to students in the 1996-1997 academic year. Prior to that, the only mathematics courses offered in the AP program included AP Calculus AB and BC. Students who didn't have a strong background in college-level math, however, found the AP Calculus program inaccessible and sometimes declined to take a math course in their senior year. Since the number of students required to take statistics in college is almost as large as the number of students required to take calculus, the College Board decided to add an introductory statistics course to the AP program. Since the prerequisites for such a program don't require mathematical concepts beyond those typically taught in a second-year al
https://en.wikipedia.org/wiki/Kuwabara%20kuwabara
is a phrase used in the Japanese language to ward off lightning. It is analogous to the English phrase "knock on wood" to prevent bad luck or "rain rain go away". The word kuwabara literally means "mulberry field". According to one explanation, there is a Chinese legend that mulberry trees are not struck by lightning. In contrast, journalist Moku Jōya asserts that the "origin of kuwabara is not definitely known, but it has nothing to do with mulberry plants, though it means 'mulberry fields'." In popular culture The phrase was used in Metal Gear Solid 3: Snake Eater by antagonist Colonel Volgin, a character with the ability to control electricity. It has also been used in various Japanese animation, including Inuyasha, Urusei Yatsura, Sekirei, Aku no Hana, and Yu Yu Hakusho. In a well-known monologue, the Yu Yu Hakusho character Kazuma Kuwabara remarks "A mulberry is a tree and Kuwabara is a man". The phrase was also used in an episode of Mushi-Shi entitled "Lightning's End" and in Street Fighter X Tekken by Yoshimitsu. In the video game Genshin Impact, one of the playable characters, the Raiden Shogun, the archon of a Japanese-inspired realm called Inazuma, makes reference to this phrase in one of her voiced dialogs. In folklore In the 9th century, there was a Japanese aristocrat called Sugawara no Michizane. Sugawara Michizane, who died bearing a heavy grudge after being trapped and exiled to Kyushu, threw his fierce anger in the form of his thunderbolts as a god of lightning. In 930, Seiryoden of the court was struck by a large thunderbolt. The master of onmyo said that this misfortune was the work of the vengeful spirit of Michizane. Those who trapped Michizane trembled with fear and tried to placate the curse by dedicating a prayer to his vengeful ghost, thus leading to the construction of Kitano Shrine. The land that Michizane owned was known as Kuwabara, so people thought it would be a good idea to claim the land he/she was standing on was a part of
https://en.wikipedia.org/wiki/Anemic%20domain%20model
The anemic domain model is described as a programming anti-pattern where the domain objects contain little or no business logic like validations, calculations, rules, and so forth. The business logic is thus baked into the architecture of the program itself, making refactoring and maintenance more difficult and time-consuming. Overview This anti-pattern was first described by Martin Fowler, who considers the practice an anti-pattern. He says: In an anemic domain design, business logic is typically implemented in separate classes which transform the state of the domain objects. Fowler calls such external classes transaction scripts. This pattern is a common approach in Java applications, possibly encouraged by technologies such as early versions of EJB's Entity Beans, as well as in .NET applications following the Three-Layered Services Application architecture where such objects fall into the category of "Business Entities" (although Business Entities can also contain behavior). Fowler describes the transaction script pattern thus: In his book "Patterns of Enterprise Application Architecture", Fowler noted that the transaction script pattern may be proper for many simple business applications, and obviates a complex OO-database mapping layer. An anemic domain model might occur in systems that are influenced from Service-Oriented Architectures, where behaviour does not or tends to not travel, such as messaging/pipeline architectures, or SOAP/REST APIs. Architectures like COM+ and Remoting allow behaviour, but increasingly the web has favoured disconnected and stateless architectures. Criticism There is some criticism as to whether this software design pattern should be considered an anti-pattern, since many see also benefits in it, for example: Clear separation between logic and data. Works well for simple applications. Results in stateless logic, which facilitates scaling out. Avoids the need for a complex OO-Database mapping layer. More compatibility wi
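A minimal illustration of the contrast (Python, with a hypothetical Account example that is not taken from Fowler's text): in the anemic version the domain object is pure data and the business rules sit in a separate service, in the style of a transaction script; in the rich version the same rules are behaviour of the object itself.

```python
from dataclasses import dataclass

# Anemic style: the domain object is a bag of data ...
@dataclass
class Account:
    balance: float = 0.0

# ... and all business rules live in a separate service class
# (what Fowler calls a transaction script).
class AccountService:
    def withdraw(self, account: Account, amount: float) -> None:
        if amount <= 0:
            raise ValueError("amount must be positive")
        if amount > account.balance:
            raise ValueError("insufficient funds")
        account.balance -= amount

# Rich domain model: the same rules are methods of the object itself.
@dataclass
class RichAccount:
    balance: float = 0.0

    def withdraw(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("amount must be positive")
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
```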
https://en.wikipedia.org/wiki/Flavored%20liquor
Flavored liquors (also called infused liquors) are liquors that have added flavoring and, in some cases, a small amount of added sugar. They are distinct from liqueurs in that liqueurs have a high sugar content and may also contain glycerine. Flavored liquors may have a base of vodka or white rum, both of which have little taste of their own, or they may have a tequila or brandy base. Typically, a fruit extract is added to the base spirit. Flavored rice wine, rum, tequila, vodka and whiskey Flavored rums and vodkas frequently have an alcohol content that is 5–10% ABV less than the corresponding unflavored spirit. Flavored rice wines—flavors include star anise-coffee, banana-cinnamon, coconut-pineapple, galangal-tamarind, ginger-red chili, green tea-orange, lemon-lemongrass and mango-green chili. Flavored rums in the West Indies originally consisted only of spiced rums such as Captain Morgan whereas in the Indian Ocean (Madagascar, Reunion Island and Mauritius) only of vanilla and fruits. Available flavors include cinnamon, lemon, lime, orange, vanilla, and raspberry, and extend to such exotic flavors as mango, coconut, pineapple, banana, passion fruit, and watermelon. Flavored tequilas—flavors include lime, orange, mango, coconut, watermelon, strawberry, pomegranate, chili pepper, cinnamon, jalapeño, cocoa and coffee. Flavored vodkas—flavors include lemon, lime, lemon-lime, orange, tangerine, grapefruit, raspberry, strawberry, blueberry, teaberry, vanilla, black currant, chili pepper, cherry, apple, green apple, coffee, chocolate, cranberry, peach, pear, passion fruit, pomegranate, plum, mango, white grape, banana, pineapple, coconut, mint, melon, rose, herbs, bacon, honey, cinnamon, kiwifruit, whipped cream, tea, root beer, caramel, marshmallow, and many more. Other flavored liquors Absinthe (wormwood, anise, fennel, and other herbs) Akvavit (caraway seeds, anise, dill, fennel, coriander, and grains of paradise) Anise liquors: A family of liquors native to
https://en.wikipedia.org/wiki/VLYNQ
VLYNQ is a proprietary interface developed by Texas Instruments and used for broadband products, such as WLAN and modems, VOIP processors and audio and digital media processor chips. The chip implements a full-duplex serial communications interface that enables the extension of an internal bus segment to one or more external physical devices. The external devices are mapped into local, physical address space and appear as if they are on the internal bus. Multiple VLYNQ devices are daisy-chained, communication is peer-to-peer, host/peripheral. Data transferred over the VLYNQ interface is 8B/10B encoded and packetized. VLYNQ is the name of a proprietary interface developed by Texas Instruments. It is used for TI's broadband products, such as modems and WLAN, voice broadband processors, digital media processors, and OMAP media processor chips. The ACX111 WLAN cards used in AR7 devices look like mini-PCI, but actually they are dual mode cards, that talk both, mini-PCI and VLYNQ. Details The VLYNQ bus signals include 1 clock signal [CLK], and 1 to 8 Transmit lines [TX0 and TX1 ...], and 1 to 8 Receive lines [RX0 and RX1.....]. All VLYNQ signals are dedicated and driven by only one device. The transmit pins of one device connect to the receive lines of the next device. The VLYNQ bus will operate at a maximum clock speed of 125 MHz. However the actual clock speed is dependent on the physical device with the VLYNQ. So a device may have a clock speed other than 125 MHz. For example, a device may have an internal 100 MHz [maximum] clock rate, or external 80 MHz [maximum] clock rate. When clocked at 125 MHz, a single T/R pair then delivers an effective data throughput of about 73 Mbit/s (for single, 32-bit word transfers), while a dual T/R pair implementation delivers 146 Mbit/s, and a maximum eight-channel version delivers 584 Mbit/s. In-band flow-control lets the interface independently throttle the transmit and receive data streams. If data packets contain four or 16 wo
https://en.wikipedia.org/wiki/Vortex%20ring%20gun
The vortex ring gun is an experimental non-lethal weapon for crowd control that uses high-energy vortex rings of gas to knock down people or spray them with marking ink or other chemicals. The concept was explored by the US Army starting in 1998, and by some commercial firms. Knockdown of distant individuals currently seems unlikely even if the rings are launched at theoretical maximum speed. As for the delivery of chemicals, leakage during flight is still a problem. Weapons based on similar principles but different designs and purposes have been described before, typically using acetylene-air or hydrogen–oxygen explosions to create and propel the vortices. Operation In a typical concept, a blank cartridge is fired into a gun barrel that has a diverging nozzle screwed onto the muzzle. In the nozzle, the short pulse of high pressure gas briefly accelerates to a supersonic exit velocity, whereupon a portion of the exhaust transforms from axial flow into a subsonic, high spin vortex ring with the potential energy to fly hundreds of feet. The nozzle of a vortex ring gun is designed to both contain the short pulse of accelerating gas until the maximum pressure is lowered to atmospheric and to straighten the exhaust into an axial flow. The objective is to form the vortex ring with the highest possible velocity and spin by colliding a short pulse of a supersonic jet stream against the relatively stagnant air behind the spherically expanding shock wave. Without the nozzle, the high pressure jet stream is reduced to atmospheric by standing shock waves at the muzzle, and the resulting vortex ring is not only formed by a lower velocity jet stream but also degraded by turbulence. History Invention The concept was developed by American aircraft builder Thomas Shelton during World War II as a means to deliver chemical weapons. Shelton later developed a large version to deliver a mechanical shock. These were never employed for military purposes, and after the war Shelton
https://en.wikipedia.org/wiki/Color%20quantization
In computer graphics, color quantization or color image quantization is quantization applied to color spaces; it is a process that reduces the number of distinct colors used in an image, usually with the intention that the new image should be as visually similar as possible to the original image. Computer algorithms to perform color quantization on bitmaps have been studied since the 1970s. Color quantization is critical for displaying images with many colors on devices that can only display a limited number of colors, usually due to memory limitations, and enables efficient compression of certain types of images. The name "color quantization" is primarily used in computer graphics research literature; in applications, terms such as optimized palette generation, optimal palette generation, or decreasing color depth are used. Some of these are misleading, as the palettes generated by standard algorithms are not necessarily the best possible. Algorithms Most standard techniques treat color quantization as a problem of clustering points in three-dimensional space, where the points represent colors found in the original image and the three axes represent the three color channels. Almost any three-dimensional clustering algorithm can be applied to color quantization, and vice versa. After the clusters are located, typically the points in each cluster are averaged to obtain the representative color that all colors in that cluster are mapped to. The three color channels are usually red, green, and blue, but another popular choice is the Lab color space, in which Euclidean distance is more consistent with perceptual difference. The most popular algorithm by far for color quantization, invented by Paul Heckbert in 1979, is the median cut algorithm. Many variations on this scheme are in use. Before this time, most color quantization was done using the population algorithm or population method, which essentially constructs a histogram of equal-sized ranges and assigns col
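A compact sketch of the median cut idea in Python follows (illustrative only: it repeatedly splits the box with the widest channel range at the median of that channel, then averages each final box into a palette entry; production implementations add many refinements).

```python
def median_cut(pixels, n_colors):
    """Tiny median-cut sketch. `pixels` is a list of (r, g, b) tuples."""
    if not pixels:
        return []
    boxes = [list(pixels)]
    while len(boxes) < n_colors:
        # Width of the widest colour channel in a box.
        def spread(box):
            return max(max(p[c] for p in box) - min(p[c] for p in box) for c in range(3))
        box = max(boxes, key=spread)
        if spread(box) == 0:
            break  # nothing left to split
        channel = max(range(3), key=lambda c: max(p[c] for p in box) - min(p[c] for p in box))
        box.sort(key=lambda p: p[channel])
        mid = len(box) // 2
        boxes.remove(box)
        boxes += [box[:mid], box[mid:]]
    # Representative colour of each cluster = average of its members.
    return [tuple(sum(p[c] for p in box) // len(box) for c in range(3)) for box in boxes]

def quantize(pixels, palette):
    """Map every pixel to the nearest palette colour (Euclidean distance in RGB)."""
    def nearest(p):
        return min(palette, key=lambda q: sum((pc - qc) ** 2 for pc, qc in zip(p, q)))
    return [nearest(p) for p in pixels]
```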
https://en.wikipedia.org/wiki/NOR%20logic
A NOR gate or a NOT OR gate is a logic gate which gives a positive output only when both inputs are negative. Like NAND gates, NOR gates are so-called "universal gates" that can be combined to form any other kind of logic gate. For example, the first embedded system, the Apollo Guidance Computer, was built exclusively from NOR gates, about 5,600 in total for the later versions. Today, integrated circuits are not constructed exclusively from a single type of gate. Instead, EDA tools are used to convert the description of a logical circuit to a netlist of complex gates (standard cells) or transistors (full custom approach). NOR A NOR gate is logically an inverted OR gate. It has the following truth table: Making other gates by using NOR gates A NOR gate is a universal gate, meaning that any other gate can be represented as a combination of NOR gates. NOT This is made by joining the inputs of a NOR gate. As a NOR gate is equivalent to an OR gate leading to NOT gate, joining the inputs makes the output of the "OR" part of the NOR gate the same as the input, eliminating it from consideration and leaving only the NOT part. OR An OR gate is made by inverting the output of a NOR gate. Note that we already know that a NOT gate is equivalent to a NOR gate with its inputs joined. AND An AND gate gives a 1 output when both inputs are 1. Therefore, an AND gate is made by inverting the inputs of a NOR gate. Again, note that a NOR gate is equivalent to a NOT with its inputs joined. NAND A NAND gate is made by inverting the output of an AND gate. The word NAND means that it is not AND. As the name suggests, it will give 0 when both the inputs are 1. XNOR An XNOR gate is made by connecting four NOR gates as shown below. This construction entails a propagation delay three times that of a single NOR gate. Alternatively, an XNOR gate is made by considering the conjunctive normal form , noting from de Morgan's Law that a NOR gate is an inverted-input AND gate. This constr
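The constructions listed above can be checked mechanically. The sketch below builds each gate in Python from a single NOR primitive; the XNOR here uses the sum-of-products identity a·b + a̅·b̅ rather than the four-gate wiring described above, purely for brevity.

```python
def NOR(a: int, b: int) -> int:
    """The only primitive: output is 1 only when both inputs are 0."""
    return int(not (a or b))

# Every other gate built purely from NOR, following the constructions above.
def NOT(a):     return NOR(a, a)            # inputs joined
def OR(a, b):   return NOT(NOR(a, b))       # invert the NOR output
def AND(a, b):  return NOR(NOT(a), NOT(b))  # invert both inputs
def NAND(a, b): return NOT(AND(a, b))
def XNOR(a, b): return OR(AND(a, b), AND(NOT(a), NOT(b)))

# Quick truth-table check of all five derived gates.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, NOR(a, b), OR(a, b), AND(a, b), NAND(a, b), XNOR(a, b))
```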
https://en.wikipedia.org/wiki/Gemalto
Gemalto was an international digital security company providing software applications, secure personal devices such as smart cards and tokens, e-wallets and managed services. It was formed in June 2006 by the merger of two companies, Axalto and Gemplus International. Gemalto N.V.'s revenue in 2018 was €2.969 billion. The company was purchased by Thales Group in April 2019 and is now operating as Thales DIS (Digital Identity and Security). Gemalto was until its acquisition the world's largest manufacturer of SIM cards. Thales DIS is headquartered in Amsterdam, The Netherlands, and has subsidiaries and group companies in several countries. It has approximately 15,000 employees in 110 offices along with 24 production sites, 47 personalization centers, and 35 R&D centers in 47 countries. History In June 2006, smart card providers Gemplus and Axalto merged to become Gemalto (a portmanteau of the original company names.) Axalto was a Schlumberger IPO spin-off in 2004. Between the merger and 2015, Gemalto completed a series of acquisitions: the Leigh Mardon's personalization center (Taiwan), Multos International, NamITech in South Africa, NXP mobile services business, the mobile software solution provider O3SIS, Trusted Logic (the secure software platform provider), Serverside (personalization of bank cards with digital images generated by end users), XIRING's banking activity, Netsize (a mobile communications service and commerce enabler), Valimo Wireless, a provider in mobile authentication, the internet banking security specialist Todos AB in Sweden, Cinterion the German specialist of machine-to-machine (M2M), SensorLogic (an M2M service delivery platform provider), Plastkart in Turkey, Ericsson’s mobile payment platform IPX, the information security company SafeNet and Buzzinbees, the automatic SIM activation expert. Axalto and Schlumberger Schlumberger began its chip card activities in February 1979 when it licensed and marketed certain chip card technologies d
https://en.wikipedia.org/wiki/Multimedia%20computer
A multimedia computer is a computer that is optimized for multimedia performance. Early home computers lacked the power and storage necessary for true multimedia. The games for these systems, along with the demo scene, were able to achieve high sophistication and technical polish using only simple, blocky graphics and digitally generated sound. The Amiga 1000 from Commodore International has been called the first multimedia computer. Its groundbreaking animation, graphics and sound technologies enabled multimedia content to flourish. Famous demos such as the Boing Ball and Juggler showed off the Amiga's abilities. Later the Atari ST series and the Apple Macintosh II extended the concept; the Atari integrated a MIDI port and was the first computer under US$1000 to have 1 megabyte of RAM, a realistic minimum for multimedia content, while the Macintosh was the first computer able to display true photorealistic graphics as well as to integrate a CD-ROM drive, whose high capacity was essential for delivering multimedia content in the pre-Internet era. While the Commodore machines had the hardware to present multimedia of the kinds listed above, they lacked a way to create it easily. A computer system consists of hardware and software, so the earliest known system for creating and deploying multimedia content, preserved in the archives of the Smithsonian Institution, is called VirtualVideo. It consisted of a standard PC with an added digital imaging board, an added digital audio capture board (that was sold as a phone answering device), and the DOS authoring software VirtualVideo Producer. The system stored content on a local hard drive, but could use networked computer storage as well. The name for the software was used because at the time, the mid-1980s, the term multimedia was used to describe slide shows with sound. This software was later sold as Tempra and, in 1993, was included with Tay Vaughan's first edition of Multimedia: Making It Work. Multimedia capabiliti
https://en.wikipedia.org/wiki/Box%20blur
A box blur (also known as a box linear filter) is a spatial domain linear filter in which each pixel in the resulting image has a value equal to the average value of its neighboring pixels in the input image. It is a form of low-pass ("blurring") filter. A 3 by 3 box blur ("radius 1") can be written as the matrix $\frac{1}{9}\begin{bmatrix}1&1&1\\1&1&1\\1&1&1\end{bmatrix}$. Due to its property of using equal weights, it can be implemented using a much simpler accumulation algorithm, which is significantly faster than using a sliding-window algorithm. Box blurs are frequently used to approximate a Gaussian blur. By the central limit theorem, repeated application of a box blur will approximate a Gaussian blur. In the frequency domain, a box blur has zeros and negative components. That is, a sine wave with a period equal to the size of the box will be blurred away entirely, and wavelengths shorter than the size of the box may be phase-reversed, as seen when two bokeh circles touch to form a bright spot where there would be a dark spot between two bright spots in the original image. Extensions Gwosdek et al. have extended the box blur to take a fractional radius: the edges of the 1-D filter are expanded with a fraction. This makes a slightly better Gaussian approximation possible due to the elimination of integer-rounding error. Mario Klingemann has a "stack blur" that tries to better emulate a Gaussian's look in one pass by stacking weights: the triangular impulse response it forms decomposes to two rounds of box blur. Stacked Integral Image by Bhatia et al. takes the weighted average of a few box blurs to fit the Gaussian response curve. Implementation The following pseudocode implements a 3x3 box blur. Box blur (image) { set newImage to image; For x /*row*/, y/*column*/ on newImage do: { // Kernel would not fit! If x < 1 or y < 1 or x + 1 == width or y + 1 == height then: Continue; // Set P to the average of 9 pixels: X X X X P X X X X // C
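The pseudocode above is cut off; the following is a minimal, runnable sketch of the same 3x3 case in Python (not the faster accumulation algorithm mentioned earlier). The image is assumed to be a 2-D list of grayscale values, and border pixels, where the kernel would not fit, are left unchanged:

def box_blur_3x3(image):
    """Return a copy of image where each interior pixel is the mean of its 3x3 neighbourhood."""
    height, width = len(image), len(image[0])
    new_image = [row[:] for row in image]      # start from a copy; border pixels stay as-is
    for x in range(1, height - 1):             # skip rows where the kernel would not fit
        for y in range(1, width - 1):          # skip columns where the kernel would not fit
            total = sum(image[x + dx][y + dy] for dx in (-1, 0, 1) for dy in (-1, 0, 1))
            new_image[x][y] = total / 9.0      # equal weights of 1/9
    return new_image

blurred = box_blur_3x3([[0, 0, 0, 0], [0, 9, 9, 0], [0, 9, 9, 0], [0, 0, 0, 0]])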
https://en.wikipedia.org/wiki/Modified%20Morlet%20wavelet
Modified Mexican hat, Modified Morlet and Dark soliton or Darklet wavelets are derived from hyperbolic (sech) (bright soliton) and hyperbolic tangent (tanh) (dark soliton) pulses. These functions are derived intuitively from the solutions of the nonlinear Schrödinger equation in the anomalous and normal dispersion regimes in a similar fashion to the way that the Morlet and the Mexican hat are derived. The modified Morlet is defined as: Wavelets
https://en.wikipedia.org/wiki/Japan%20Advanced%20Institute%20of%20Science%20and%20Technology
The Japan Advanced Institute of Science and Technology (JAIST) is a postgraduate university in Nomi, Ishikawa, established in 1990. JAIST was established in the centre of Ishikawa Science Park (ISP). JAIST has programs of advanced research and development in science and technology. This university has several satellite campuses: the Shinagawa Campus in Shinagawa, Tokyo (relocated from its earlier Tamachi Campus in Minato, Tokyo), which offers open courses in Information Technology and Management of Technology (MOT), as well as satellite lectures in Kanazawa City and Toyama City. Under the 21st Century Center of Excellence Program, the JSPS granted two programs to JAIST, one in 2003 and the other in 2004. History 1989 A committee was organized at Tokyo Institute of Technology for the foundation of a research-intensive university in Ishikawa Prefecture (Hokuriku region). 1990 JAIST was founded as Japan's first postgraduate university without an undergraduate faculty. Graduate School of Information Science was organized. The Institute Library was constructed. 1991 Graduate School of Materials Science was organized. The Center for Information Science was established. 1992 The Center for New Materials was established. 1993 The Center for Research and Investigation of Advanced Science and Technology was established. 1994 The Health Care Center was established. 1996 The Institute Library opened. Graduate School of Knowledge Science was organized. 1998 The Center for Knowledge Science was established. 2001 The Research Center for Remote Learning was established. The Internet Research Center was established. 2002 The Center for Nano Materials and Technology was established, as the result of reorganization of the Center for New Materials. The Venture Business Laboratory was established. 2003 The IP (Intellectual Property) Operation Center was established. The Center for Strategic Development of Science and Technology was established. 2004 JAIST was incorporated as a National University Corporation. The Research
https://en.wikipedia.org/wiki/Forensic%20biology
Forensic biology is the use of biological principles and techniques in the context of law enforcement investigations. Forensic biology mainly focuses on DNA sequencing of biological matter found at crime scenes. This assists investigators in identifying potential suspects or unidentified bodies. Forensic biology has many sub-branches, such as forensic anthropology, forensic entomology, forensic odontology, forensic pathology, and forensic toxicology. Disciplines History The first known accounts of forensic procedures still used today date as far back as the 7th century, when fingerprints were used as a means of identification. By the 7th century, forensic procedures were used, among other things, to establish the guilt of criminals. Nowadays, the practice of autopsies and forensic investigations has seen a significant surge in both public interest and technological advancement. One of the early pioneers in employing these methods, which would later evolve into the field of forensics, was Alphonse Bertillon, who is also known as the "father of criminal identification". In 1879, he introduced a scientific approach to personal identification by developing the science of anthropometry. This method involved a series of body measurements for distinguishing one human individual from another. Karl Landsteiner later made further significant discoveries in forensics. In 1901, he found that blood could be categorized into different groups: A, B, AB, and O, and thus blood typing was introduced to the world of crime-solving. This development led to further studies and eventually added a whole new dimension of criminology to the fields of medicine and forensics. Dr Leone Lattes, a professor at the Institute of Forensic Medicine in Turin, Italy, also made significant contributions to forensics. In 1915, he discovered a method to determine the blood group of dried bloodstains, which marked a significant advancement from prior techn
https://en.wikipedia.org/wiki/Critical%20control%20point
A Critical Control Point (CCP) is the point where the failure of a Standard Operating Procedure (SOP) could cause harm to customers and to the business, or even loss of the business itself. It is a point, step or procedure at which controls can be applied and a food safety hazard can be prevented, eliminated or reduced to acceptable (critical) levels. The most common CCP is cooking, where food safety managers designate critical limits. CCP identification is also an important step in risk and reliability analysis for water treatment processes. Food in cooking In the United States, the Food and Drug Administration (FDA) establishes minimum internal temperatures for cooked foods. These values can be superseded by state or local health code requirements, but they cannot be below the FDA limits. Temperatures should be measured with a probe thermometer in the thickest part of meats, or the center of other dishes, avoiding bones and container sides. Minimum internal temperatures are set as follows: 165 °F (74 °C) for 15 seconds Poultry (such as whole or ground chicken, turkey, or duck) Stuffed meats, fish, poultry, and pasta Any previously cooked foods that are reheated from a temperature below 135 °F (57 °C), provided they have been refrigerated or held warm for less than 2 hours Any potentially hazardous foods cooked in a microwave, such as poultry, meat, fish, or eggs 155 °F (68 °C) for 15 seconds Ground meats (such as beef or pork) Injected meats (such as flavor-injected roasts or brined hams) Ground or minced fish Eggs that will be held for a length of time before being eaten 145 °F (63 °C) for 15 seconds Steaks and chops such as beef, pork, veal, and lamb Fish Eggs cooked for immediate service 145 °F (63 °C) for 4 minutes Roasts (can be cooked to lower temperatures for increased lengths of time) 135 °F (57 °C) for 15 seconds Cooked fruits or vegetables that will be held for a length of time before being eaten Any commercially processed, ready-to-eat foods that will be
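A small sketch of how these limits might be encoded as a lookup table in Python, using the FDA values quoted above (the category names are made up for illustration; a real HACCP system would cover every category and log each reading):

# Minimum internal temperature (°F) and required hold time (seconds) per food category,
# following the FDA values listed above.
MIN_INTERNAL_TEMP_F = {
    "poultry": (165, 15),
    "stuffed_meats": (165, 15),
    "ground_meats": (155, 15),
    "injected_meats": (155, 15),
    "steaks_chops": (145, 15),
    "fish": (145, 15),
    "roasts": (145, 4 * 60),
    "cooked_vegetables": (135, 15),
}

def passes_ccp(category, measured_temp_f, hold_seconds):
    """Return True if a probe reading meets the minimum temperature and hold time for its category."""
    limit_f, required_hold = MIN_INTERNAL_TEMP_F[category]
    return measured_temp_f >= limit_f and hold_seconds >= required_hold

print(passes_ccp("poultry", 167, 15))   # True
print(passes_ccp("roasts", 146, 60))    # False: roasts need a 4-minute hold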
https://en.wikipedia.org/wiki/Hammett%20equation
In organic chemistry, the Hammett equation describes a linear free-energy relationship relating reaction rates and equilibrium constants for many reactions involving benzoic acid derivatives with meta- and para-substituents to each other with just two parameters: a substituent constant and a reaction constant. This equation was developed and published by Louis Plack Hammett in 1937 as a follow-up to qualitative observations in his 1935 publication. The basic idea is that for any two reactions with two aromatic reactants only differing in the type of substituent, the change in free energy of activation is proportional to the change in Gibbs free energy. This notion does not follow from elemental thermochemistry or chemical kinetics and was introduced by Hammett intuitively. The basic equation is $\log \frac{K}{K_0} = \sigma\rho$, where $K_0$ is the reference constant, $\sigma$ the substituent constant, and $\rho$ the reaction constant. It relates the equilibrium constant $K$ for a given equilibrium reaction with substituent R and the reference constant $K_0$ when R is a hydrogen atom to the substituent constant $\sigma$, which depends only on the specific substituent R, and the reaction constant $\rho$, which depends only on the type of reaction but not on the substituent used. The equation also holds for reaction rates k of a series of reactions with substituted benzene derivatives: $\log \frac{k}{k_0} = \sigma\rho$. In this equation $k_0$ is the reference reaction rate of the unsubstituted reactant, and $k$ that of a substituted reactant. A plot of $\log \frac{K}{K_0}$ for a given equilibrium versus $\log \frac{k}{k_0}$ for a given reaction rate with many differently substituted reactants will give a straight line. Substituent constants The starting point for the collection of the substituent constants is a chemical equilibrium for which the substituent constant is arbitrarily set to 0 and the reaction constant is set to 1: the deprotonation of benzoic acid or benzene carboxylic acid (R and R' both H) in water at 25 °C. Having obtained a value for K0, a series of equilibrium constants (K) are now determined based on t
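As a rough numerical illustration of how the relationship is applied, the sketch below predicts a substituted rate constant from the unsubstituted one; the sigma and rho values here are placeholders for illustration, not measured constants:

import math

def substituted_rate(k0, sigma, rho):
    """Hammett equation solved for k: log10(k / k0) = sigma * rho."""
    return k0 * 10 ** (sigma * rho)

k0 = 2.0e-4     # rate constant of the unsubstituted (R = H) reactant, hypothetical
sigma = 0.23    # assumed substituent constant for some para substituent
rho = 2.5       # assumed reaction constant for this reaction series
print(substituted_rate(k0, sigma, rho))   # roughly 7.5e-4, i.e. this substituent accelerates the reaction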
https://en.wikipedia.org/wiki/Announcement%20%28computing%29
An announcement (ANN) is a Usenet, mailing list or e-mail message sent to notify subscribers that a software project has made a new release version. Newsgroup announcement recipients often have a name like "comp.somegroup.announce". Mailing list announcement recipients often have a name like "toolname-announce". In an announcement, the subject line commonly contains the abbreviated prefix ANN: or [ANN]. The contents of an announcement usually contain a title line which contains the tool name, version, release name, and date. Additional contents often fall into the following message sections: About: a short paragraph summary of the tool's purpose Changes: a list of the highest impact changes since the last release (should be brief since the changelog comprises the definitive list) Resources: links to project pages of interest, such as homepage, where to download, bug tracking system, etc. Some additional, optional fields might include "Highlights", "Author(s)", "License", "Requirements", and "Release History". Announcement messages are usually sent in plain text form. Example Example announcement message subject line: ANN: fooutils 0.9.42 beta released Example announcement message contents: ============================================== Fooutils 0.9.42 Beta Released -- 2006 Feb 16 ============================================== ANNOUNCING Fooutils v0.2.12beta, the first beta release. About Fooutils -------------- Fooutils are a set of utilities that... Changes ------- Improved the searching facility by including... Fixed bugs: #123, #456, ... Resources --------- Homepage: http://fooutils.org Documentation: http://fooutils.org/docs Download: http://fooutils.org/download Bug Tracker: http://fooutils.org/newticket Mailing Lists: http://lists.fooutils.org/ See also List of e-mail subject abbreviations Further reading Free software culture and documents Technical communication Usenet
https://en.wikipedia.org/wiki/Shadow%20biosphere
A shadow biosphere is a hypothetical microbial biosphere of Earth that would use radically different biochemical and molecular processes from that of currently known life. Although life on Earth is relatively well studied, if a shadow biosphere exists it may still remain unnoticed, because the exploration of the microbial world targets primarily the biochemistry of the macro-organisms. The hypothesis It has been proposed that the early Earth hosted multiple origins of life, some of which produced chemical variations on life as we know it. Some argue that these alternative life forms could have become extinct, either by being out-competed by other forms of life, or they might have become one with the present day life via mechanisms like lateral gene transfer. Others, however, argue that this other form of life might still exist to this day. Steven A. Benner, Alonso Ricardo, and Matthew A. Carrigan, biochemists at the University of Florida, argued that if organisms based on RNA once existed, they might still be alive today, unnoticed because they do not contain ribosomes, which are usually used to detect living microorganisms. They suggest searching for them in environments that are low in sulfur, environments that are spatially constrained (for example, minerals with pores smaller than one micrometre), or environments that cycle between extreme hot and cold. Other proposed candidates for a shadow biosphere include organisms using different suites of amino acids in their proteins or different molecular units (e.g., bases or sugars) in their nucleic acids, having a chirality opposite of ours, using some of the nonstandard amino acids, or using arsenic instead of phosphorus, having a different genetic code, or even another kind of chemical for its genetic material that are not nucleic acids (DNA nor RNA) chains or biopolymers. Carol Cleland, a philosopher of science at the University of Colorado (Boulder), argues that desert varnish, whose status as biological or non
https://en.wikipedia.org/wiki/Emery%27s%20rule
In 1909, the entomologist Carlo Emery noted that social parasites among insects (e.g., kleptoparasites) tend to be parasites of species or genera to which they are closely related. Over time, this pattern has been recognized in many additional cases, and generalized to what is now known as Emery's rule. The pattern is best known for various taxa of Hymenoptera. For example, the social wasp Dolichovespula adulterina parasitizes other members of its genus such as Dolichovespula norwegica and Dolichovespula arenaria. Emery's rule is also applicable to members of other kingdoms such as fungi, red algae, and mistletoe. The significance and general relevance of this pattern are still a matter of some debate, as a great many exceptions exist, though a common explanation for the phenomenon when it occurs is that the parasites may have started as facultative parasites within the host species itself (such forms of intraspecific parasitism are well-known, even in some species of bees), but later became reproductively isolated and split off from the ancestral species, a form of sympatric speciation. When a parasitic species is a sister taxon to its host in a phylogenetic sense, the relationship is considered to be in "strict" adherence to Emery's rule. When the parasite is a close relative of the host but not its sister species, the relationship is in "loose" adherence to the rule. References Parasitology 1909 introductions Biological rules 1909 in biology
https://en.wikipedia.org/wiki/Tempered%20glass
Tempered or toughened glass is a type of safety glass processed by controlled thermal or chemical treatments to increase its strength compared with normal glass. Tempering puts the outer surfaces into compression and the interior into tension. Such stresses cause the glass, when broken, to shatter into small granular chunks instead of splintering into jagged shards as ordinary annealed glass does. The granular chunks are less likely to cause injury. Tempered glass is used for its safety and strength in a variety of applications, including passenger vehicle windows (apart from windshield), shower doors, aquariums, architectural glass doors and tables, refrigerator trays, mobile phone screen protectors, bulletproof glass components, diving masks, and plates and cookware. Properties Tempered glass is about four times stronger than annealed glass. The more rapid contraction of the outer layer during manufacturing induces compressive stresses in the surface of the glass balanced by tensile stresses in the body of the glass. Fully tempered 6-mm thick glass must have either a minimum surface compression of 69 MPa (10 000 psi) or an edge compression of not less than 67 MPa (9 700 psi). For it to be considered safety glass, the surface compressive stress should exceed . As a result of the increased surface stress, when broken the glass breaks into small rounded chunks as opposed to sharp jagged shards. Compressive surface stresses give tempered glass increased strength. Annealed glass has almost no internal stress and usually forms microscopic cracks on its surface. Tension applied to the glass can drive crack propagation which, once begun, concentrates tension at the tip of the crack driving crack propagation at the speed of sound through the glass. Consequently, annealed glass is fragile and breaks into irregular and sharp pieces. The compressive stresses on the surface of tempered glass contain flaws, preventing their propagation or expansion. Any cutting or grindi
https://en.wikipedia.org/wiki/Take-grant%20protection%20model
The take-grant protection model is a formal model used in the field of computer security to establish or disprove the safety of a given computer system that follows specific rules. It shows that even though the question of safety is in general undecidable, for specific systems it is decidable in linear time. The model represents a system as a directed graph, where vertices are either subjects or objects. The edges between them are labeled, and the label indicates the rights that the source of the edge has over the destination. Two rights occur in every instance of the model: take and grant. They play a special role in the graph rewriting rules describing admissible changes of the graph. There are a total of four such rules: take rule allows a subject to take rights of another object (add an edge originating at the subject) grant rule allows a subject to grant its own rights to another object (add an edge terminating at the subject) create rule allows a subject to create new objects (add a vertex and an edge from the subject to the new vertex) remove rule allows a subject to remove rights it has over another object (remove an edge originating at the subject) Preconditions for take: subject s has the right take over o; object o has the right r on p. Preconditions for grant: subject s has the right grant over o; s has the right r on p. Using the rules of the take-grant protection model, one can reproduce the states into which a system can change, with respect to the distribution of rights. Therefore, one can show whether rights can leak with respect to a given safety model. References External links Diagram and sample problem Analysis Technical Report PCS-TR90-151 (NASA) Computer security models
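A minimal sketch of the take and grant rewriting rules on a labeled directed graph, using a Python dict of edge-label sets (the vertex names and the "read"/"write" rights are illustrative, not part of the model's formal notation):

# rights[(source, dest)] is the set of rights the source vertex holds over dest.
rights = {
    ("s", "o"): {"take"},    # subject s may take rights from o
    ("o", "p"): {"read"},    # o holds "read" over p
    ("q", "o"): {"grant"},   # q may grant rights to o
    ("q", "p"): {"write"},   # q holds "write" over p
}

def take(rights, s, o, p, r):
    """take rule: if s has 'take' over o and o has r over p, then s gains r over p."""
    if "take" in rights.get((s, o), set()) and r in rights.get((o, p), set()):
        rights.setdefault((s, p), set()).add(r)

def grant(rights, s, o, p, r):
    """grant rule: if s has 'grant' over o and s has r over p, then o gains r over p."""
    if "grant" in rights.get((s, o), set()) and r in rights.get((s, p), set()):
        rights.setdefault((o, p), set()).add(r)

take(rights, "s", "o", "p", "read")    # adds an edge: s now holds "read" over p
grant(rights, "q", "o", "p", "write")  # extends an edge: o now also holds "write" over p
print(rights[("s", "p")], rights[("o", "p")])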
https://en.wikipedia.org/wiki/Genocchi%20number
In mathematics, the Genocchi numbers Gn, named after Angelo Genocchi, are a sequence of integers that satisfy the relation $-\frac{2t}{1+e^{-t}} = \sum_{n=0}^{\infty} G_n \frac{t^n}{n!}$. The first few Genocchi numbers are 0, −1, −1, 0, 1, 0, −3, 0, 17. Properties The generating function definition of the Genocchi numbers implies that they are rational numbers. In fact, G_{2n+1} = 0 for n ≥ 1 and (−1)^n G_{2n} is an odd positive integer. Genocchi numbers Gn are related to Bernoulli numbers Bn by the formula $G_n = 2\,(1 - 2^n)\,B_n$. Combinatorial interpretations The exponential generating function for the signed even Genocchi numbers (−1)^n G_{2n} is $t \tan\frac{t}{2} = \sum_{n=1}^{\infty} (-1)^n G_{2n} \frac{t^{2n}}{(2n)!}$. They enumerate the following objects: Permutations in S2n−1 with descents after the even numbers and ascents after the odd numbers. Permutations π in S2n−2 with 1 ≤ π(2i−1) ≤ 2n−2i and 2n−2i ≤ π(2i) ≤ 2n−2. Pairs (a1,…,an−1) and (b1,…,bn−1) such that ai and bi are between 1 and i and every k between 1 and n−1 occurs at least once among the ai's and bi's. Reverse alternating permutations a1 < a2 > a3 < a4 >…>a2n−1 of [2n−1] whose inversion table has only even entries. See also Euler number References Richard P. Stanley (1999). Enumerative Combinatorics, Volume 2, Exercise 5.8. Cambridge University Press. Gérard Viennot, Interprétations combinatoires des nombres d'Euler et de Genocchi, Seminaire de Théorie des Nombres de Bordeaux, Volume 11 (1981-1982) Serkan Araci, Mehmet Acikgoz, Erdoğan Şen, Some New Identities of Genocchi Numbers and Polynomials Eponymous numbers in mathematics Integer sequences Factorial and binomial topics
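A short sketch computing Genocchi numbers from the Bernoulli-number relation above, using exact rational arithmetic in Python (the Bernoulli numbers come from the standard recurrence; the sign convention chosen for B_1 only affects the n = 1 term):

from fractions import Fraction
from math import comb

def bernoulli(m):
    """Bernoulli numbers B_0..B_m from the recurrence sum_{k=0}^{n-1} C(n+1, k) B_k = -(n+1) B_n."""
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for n in range(1, m + 1):
        B[n] = -Fraction(1, n + 1) * sum(comb(n + 1, k) * B[k] for k in range(n))
    return B

def genocchi(m):
    """Genocchi numbers G_1..G_m via G_n = 2 * (1 - 2**n) * B_n (always an integer)."""
    B = bernoulli(m)
    return [int(2 * (1 - 2**n) * B[n]) for n in range(1, m + 1)]

print(genocchi(8))   # [1, -1, 0, 1, 0, -3, 0, 17] -- matching the list above apart from the sign of G_1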
https://en.wikipedia.org/wiki/List%20of%20fuel%20cell%20vehicles
A fuel cell vehicle is a vehicle that uses a fuel cell to power an electric drive system. There are also hybrid vehicles meaning that they are fitted with a fuel cell and a battery or a fuel cell and an ultracapacitor. For HICEV see List of hydrogen internal combustion engine vehicles. For a discussion of the advantages and disadvantages of fuel cell vehicles, see fuel cell vehicle. Cars Production Cars commercially available for sale or leasing. Demonstration fleets Cars for testing and pre-production. 1996 - Toyota FCHV-1 1997 - Toyota FCHV-2 1999 - Lotus Engineering Black Cab 2000 - Ford Focus FCV 2001 - Hyundai Santa Fe FCEV 2001 - GM HydroGen3 / Opel HydroGen3 2001 - Toyota FCHV-3 2001 - Toyota FCHV-4 2001 - Toyota FCHV-5 2001 - Nissan X-Trail FCHV 2004 - Audi A2H2 2004 - Mercedes-Benz A-Class F-Cell, powered by Ballard Power Systems 2005 - Fiat Panda Hydrogen 2005 - Mazda Premacy Hydrogen RE Hybrid 2007 - Chevrolet Equinox Fuel Cell / GM HydroGen4 also known as Opel HydroGen4 and Vauxhall HydroGen4 2008 - Kia Borrego FCEV 2008 - PSA H2Origin 2008 - Renault Scenic ZEV H2 2008 - Toyota FCHV-adv 2010 - Mercedes-Benz B-Class F-Cell 2016 - Roewe 950 Fuel Cell (plug-in hybrid fuel cell) 2020 - Maxus EUNIQ 7 Minivan Concept Aetek/FYK 2008 - FYK Aetek AS (unknown hybrid) Alfa Romeo: 2010 - Alfa Romeo MiTo FCEV Audi: 2009 - Audi Q5 FCEV 2014 - Audi A7 h-tron quattro powered by Ballard Power Systems BMW: 2010 - BMW 1 Series Fuel-cell hybrid electric 2012 - BMW i8 fuel-cell prototype 2015 - BMW 5 Series GT (F07) eDrive Hydrogen Fuel Cell 2019 - BMW i Hydrogen NEXT Chang'an: 2010 - Z-SHINE FCV Chrysler: 1999 - Jeep Commander 2001 - Chrysler Natrium 2003 - Jeep Treo Daimler: 1994 - Mercedes-Benz NECAR 1 based on Mercedes-Benz MB100 1996 - Mercedes-Benz NECAR 2 based on Mercedes-Benz V-Class 1997 - Mercedes-Benz NECAR 3, 4 (1999) and 5 (2000) based on Mercedes-Benz A-Class 2002 - Mercedes-Benz NECAR F-Cell based on the Mer
https://en.wikipedia.org/wiki/Game%20Oriented%20Assembly%20Lisp
Game Oriented Assembly Lisp (GOAL, also known as Game Object Assembly Lisp) is a programming language, a dialect of the language Lisp, made for video games developed by Andy Gavin and the Jak and Daxter team at the company Naughty Dog. It was written using Allegro Common Lisp and used in the development of the entire Jak and Daxter series of games. Design GOAL's syntax resembles the Lisp dialect Scheme, though with many idiosyncratic object-oriented programming features such as classes, inheritance, and virtual functions. GOAL encourages an imperative programming style: programs tend to consist of a sequence of events to be executed rather than the functional programming style of functions to be evaluated recursively. This is a diversion from Scheme, which allows such side effects but does not encourage imperative style. GOAL does not run in an interpreter, but instead is compiled directly into PlayStation 2 machine code to execute. It offers limited facilities for garbage collection, relying extensively on runtime support. It offers dynamic memory allocation primitives designed to make it well-suited to running in constant memory on a video game console. GOAL has extensive support for inlined assembly language code using a special rlet form, allowing programs to freely mix assembly and higher-level constructs within one function. The GOAL compiler is implemented in Allegro Common Lisp. It supports a long term compiling listener session which gives the compiler knowledge about the state of the compiled and thus running program, including the symbol table. This, in addition to dynamic linking, allows a function to be edited, recompiled, uploaded, and inserted into a running game without having to restart. The process is similar to the edit and continue feature offered by some C++ compilers, but allows programs to replace arbitrary amounts of code (even up to entire object files), and does not interrupt the running game with the debugger. This feature was used t
https://en.wikipedia.org/wiki/NIST%20%28metric%29
NIST is a method for evaluating the quality of text which has been translated using machine translation. Its name comes from the US National Institute of Standards and Technology. It is based on the BLEU metric, but with some alterations. Where BLEU simply calculates n-gram precision adding equal weight to each one, NIST also calculates how informative a particular n-gram is. That is to say when a correct n-gram is found, the rarer that n-gram is, the more weight it will be given. For example, if the bigram "on the" is correctly matched, it will receive lower weight than the correct matching of bigram "interesting calculations", as this is less likely to occur. NIST also differs from BLEU in its calculation of the brevity penalty insofar as small variations in translation length do not impact the overall score as much. See also BLEU F-Measure METEOR Noun-phrase chunking ROUGE (metric) Word error rate (WER) References NIST 2005 Machine Translation Evaluation Official Results Evaluation of machine translation
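A rough sketch in Python of the information weighting described above, using the commonly cited formulation Info(w1..wn) = log2(count of the (n−1)-gram prefix / count of the n-gram) over reference counts; the toy reference text is made up, and this is not a complete NIST scorer:

import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

reference = "the cat sat on the mat while the dog ran on the grass".split()
unigrams = Counter(ngrams(reference, 1))
bigrams = Counter(ngrams(reference, 2))

def info(bigram):
    """Information weight of a matched bigram: rarer continuations carry more weight."""
    return math.log2(unigrams[bigram[:1]] / bigrams[bigram])

print(info(("on", "the")))   # "the" always follows "on" here -> weight 0.0
print(info(("the", "dog")))  # rare continuation of "the" -> weight 2.0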
https://en.wikipedia.org/wiki/Privacy%20Act%20%28Canada%29
The Privacy Act () is the federal information-privacy legislation of Canada that came into effect on July 1, 1983. Administered by the Privacy Commissioner of Canada, the Act sets out rules for how institutions of the Government of Canada collect, use, disclose, retain, and dispose of personal information of individuals. The Act does not apply to political parties, political representatives (i.e., members of Parliament and senators), courts, and private sector organizations. All provinces and territories have their own laws governing their public sectors. Overview Some salient provisions of the legislation are as follows: A government institution may not collect personal information unless it relates directly to an operating program or activity of the institution (section 4). With some exceptions, when a government institution collects an individual's personal information from the individual, it must inform the individual of the purpose for which the information is being collected (section 5(2)). With some exceptions, personal information under the control of a government institution may be used only for the purpose for which the information was obtained or for a use consistent with that purpose, unless the individual consents (section 7). With some exceptions, personal information under the control of a government institution may not be disclosed, unless the individual consents (section 8). Every Canadian citizen or permanent resident has the right to be given access to personal information about the individual under the control of a government institution that is reasonably retrievable by the government institution, and request correction if the information is inaccurate (section 12). A government institution can refuse requests for access to personal information in four cases: The request interferes with the responsibilities of the government, such as national defence and law enforcement investigations (sections 19-25) The request contains the persona
https://en.wikipedia.org/wiki/Moment%20matrix
In mathematics, a moment matrix is a special symmetric square matrix whose rows and columns are indexed by monomials. The entries of the matrix depend on the product of the indexing monomials only (cf. Hankel matrices.) Moment matrices play an important role in polynomial fitting, polynomial optimization (since positive semidefinite moment matrices correspond to polynomials which are sums of squares) and econometrics. Application in regression A multiple linear regression model can be written as $y = \beta_0 + \beta_1 x_1 + \dots + \beta_k x_k + u$, where $y$ is the explained variable, $x_1, \dots, x_k$ are the explanatory variables, $u$ is the error, and $\beta_0, \dots, \beta_k$ are unknown coefficients to be estimated. Given $n$ observations, we have a system of $n$ linear equations that can be expressed in matrix notation as $\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{u}$, where $\mathbf{y}$ and $\mathbf{u}$ are each a vector of dimension $n \times 1$, $\mathbf{X}$ is the design matrix of order $n \times (k+1)$, and $\boldsymbol{\beta}$ is a vector of dimension $(k+1) \times 1$. Under the Gauss–Markov assumptions, the best linear unbiased estimator of $\boldsymbol{\beta}$ is the linear least squares estimator $\mathbf{b} = (\mathbf{X}^\mathsf{T}\mathbf{X})^{-1}\mathbf{X}^\mathsf{T}\mathbf{y}$, involving the two moment matrices $\mathbf{X}^\mathsf{T}\mathbf{X}$ and $\mathbf{X}^\mathsf{T}\mathbf{y}$, where $\mathbf{X}^\mathsf{T}\mathbf{X}$ is a square normal matrix of dimension $(k+1) \times (k+1)$, and $\mathbf{X}^\mathsf{T}\mathbf{y}$ is a vector of dimension $(k+1) \times 1$. See also Design matrix Gramian matrix Projection matrix References External links Matrices Least squares
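A brief numerical sketch of the estimator built from the two moment matrices, using NumPy (the data are synthetic and for illustration only):

import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])   # design matrix with an intercept column
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + 0.1 * rng.normal(size=n)

XtX = X.T @ X                   # moment matrix X'X, shape (k+1, k+1)
Xty = X.T @ y                   # moment vector X'y, shape (k+1,)
b = np.linalg.solve(XtX, Xty)   # least squares estimate b = (X'X)^(-1) X'y
print(b)                        # close to beta_true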
https://en.wikipedia.org/wiki/Forensic%20electrical%20engineering
Forensic electrical engineering is a branch of forensic engineering, and is concerned with investigating electrical failures and accidents in a legal context. Many forensic electrical engineering investigations apply to fires suspected to be caused by electrical failures. Forensic electrical engineers are most commonly retained by insurance companies or attorneys representing insurance companies, or by manufacturers or contractors defending themselves against subrogation by insurance companies. Other areas of investigation include accident investigation involving electrocution, and intellectual property disputes such as patent actions. Additionally, since electrical fires are most often cited as the cause for "suspect" fires an electrical engineer is often employed to evaluate the electrical equipment and systems to determine whether the cause of the fire was electrical in nature. Goals The ultimate goal of these investigations is often to determine the legal liability for a fire or other accident for purposes of insurance subrogation or an injury lawsuit. Some examples include: Defective appliances: If a property fire was caused by an appliance which had a manufacturing or design defect (for example, a coffee maker overheating and igniting), making it unreasonably hazardous, the insurance company might attempt to collect the cost of the fire damage ("subrogate") from the manufacturer; if the fire caused personal injury or death, the injured party might also attempt an injury lawsuit against the manufacturer, in addition to the carrier of health or life insurance attempting subrogation. Improper workmanship: If, for example, an electrician made an improper installation in a house, leading to an electrical fault and fire, he or she could likewise be the target of subrogation or an injury lawsuit (for this reason, electricians are required to carry liability insurance). Electrical injury: If an electrical fault or unreasonably hazardous electrical syst
https://en.wikipedia.org/wiki/Genomic%20library
A genomic library is a collection of overlapping DNA fragments that together make up the total genomic DNA of a single organism. The DNA is stored in a population of identical vectors, each containing a different insert of DNA. In order to construct a genomic library, the organism's DNA is extracted from cells and then digested with a restriction enzyme to cut the DNA into fragments of a specific size. The fragments are then inserted into the vector using DNA ligase. Next, the vector DNA can be taken up by a host organism - commonly a population of Escherichia coli or yeast - with each cell containing only one vector molecule. Using a host cell to carry the vector allows for easy amplification and retrieval of specific clones from the library for analysis. There are several kinds of vectors available with various insert capacities. Generally, libraries made from organisms with larger genomes require vectors featuring larger inserts, thereby fewer vector molecules are needed to make the library. Researchers can choose a vector also considering the ideal insert size to find the desired number of clones necessary for full genome coverage. Genomic libraries are commonly used for sequencing applications. They have played an important role in the whole genome sequencing of several organisms, including the human genome and several model organisms. History The first DNA-based genome ever fully sequenced was achieved by two-time Nobel Prize winner, Frederick Sanger, in 1977. Sanger and his team of scientists created a library of the bacteriophage, phi X 174, for use in DNA sequencing. The importance of this success contributed to the ever-increasing demand for sequencing genomes to research gene therapy. Teams are now able to catalog polymorphisms in genomes and investigate those candidate genes contributing to maladies such as Parkinson's disease, Alzheimer's disease, multiple sclerosis, rheumatoid arthritis, and Type 1 diabetes. These are due to the advance of genome-
https://en.wikipedia.org/wiki/Ideal%20point
In hyperbolic geometry, an ideal point, omega point or point at infinity is a well-defined point outside the hyperbolic plane or space. Given a line l and a point P not on l, right- and left-limiting parallels to l through P converge to l at ideal points. Unlike the projective case, ideal points form a boundary, not a submanifold. So, these lines do not intersect at an ideal point and such points, although well-defined, do not belong to the hyperbolic space itself. The ideal points together form the Cayley absolute or boundary of a hyperbolic geometry. For instance, the unit circle forms the Cayley absolute of the Poincaré disk model and the Klein disk model. While the real line forms the Cayley absolute of the Poincaré half-plane model . Pasch's axiom and the exterior angle theorem still hold for an omega triangle, defined by two points in hyperbolic space and an omega point. Properties The hyperbolic distance between an ideal point and any other point or ideal point is infinite. The centres of horocycles and horoballs are ideal points; two horocycles are concentric when they have the same centre. Polygons with ideal vertices Ideal triangles if all vertices of a triangle are ideal points the triangle is an ideal triangle. Some properties of ideal triangles include: All ideal triangles are congruent. The interior angles of an ideal triangle are all zero. Any ideal triangle has an infinite perimeter. Any ideal triangle has area where K is the (always negative) curvature of the plane. Ideal quadrilaterals if all vertices of a quadrilateral are ideal points, the quadrilateral is an ideal quadrilateral. While all ideal triangles are congruent, not all quadrilaterals are; the diagonals can make different angles with each other resulting in noncongruent quadrilaterals. Having said this: The interior angles of an ideal quadrilateral are all zero. Any ideal quadrilateral has an infinite perimeter. Any ideal (convex non intersecting) quadrilateral h
https://en.wikipedia.org/wiki/Data%20domain
In data management and database analysis, a data domain is the collection of values that a data element may contain. The rule for determining the domain boundary may be as simple as a data type with an enumerated list of values. For example, a database table that has information about people, with one record per person, might have a "marital status" column. This column might be declared as a string data type, and allowed to have one of two known code values: "M" for married, "S" for single, and NULL for records where marital status is unknown or not applicable. The data domain for the marital status column is: "M", "S". In a normalized data model, the reference domain is typically specified in a reference table. Following the previous example, a Marital Status reference table would have exactly two records, one per allowed value—excluding NULL. Reference tables are formally related to other tables in a database by the use of foreign keys. Less simple domain boundary rules, if database-enforced, may be implemented through a check constraint or, in more complex cases, in a database trigger. For example, a column requiring positive numeric values may have a check constraint declaring that the values must be greater than zero. This definition combines the concepts of domain as an area over which control is exercised and the mathematical idea of a set of values of an independent variable for which a function is defined, as in Domain of a function. See also Data modeling Reference data Master data management Database normalization References Data modeling
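A small sketch of the two enforcement styles described above, using Python's built-in sqlite3 module; the table and column layout is made up for illustration:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # enforce the reference-table domain

# Reference table holding the allowed domain values for marital status.
conn.execute("CREATE TABLE marital_status (code TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO marital_status VALUES (?)", [("M",), ("S",)])

# marital_status is restricted by a foreign key; salary by a simple check constraint.
conn.execute("""
    CREATE TABLE person (
        name TEXT,
        marital_status TEXT REFERENCES marital_status(code),
        salary NUMERIC CHECK (salary > 0)
    )""")

conn.execute("INSERT INTO person VALUES ('Alice', 'M', 50000)")    # accepted
try:
    conn.execute("INSERT INTO person VALUES ('Bob', 'X', 40000)")  # rejected: 'X' is outside the domain
except sqlite3.IntegrityError as err:
    print("rejected:", err)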
https://en.wikipedia.org/wiki/Radicidation
Radicidation is a specific case of food irradiation where the dose of ionizing radiation applied to the food is sufficient to reduce the number of viable specific non-spore-forming pathogenic bacteria to such a level that none are detectable when the treated food is examined by any recognized method. The required dose is in the range of 2 – 8 kGy. The term may also be applied to the destruction of parasites such as tapeworm and trichina in meat, in which case the required dose is in the range of 0.1 – 1 kGy. When the process is used specifically for destroying enteropathogenic and enterotoxinogenic organisms belonging to the genus Salmonella, it is referred to as Salmonella radicidation. The term Radicidation is derived from radiation and 'caedere' (Latin for fell, cut, kill). See also Radappertization Radurization References Radiation Food preservation
https://en.wikipedia.org/wiki/Radappertization
Radappertization is a form of food irradiation which applies a dose of ionizing radiation sufficient to reduce the number and activity of viable microorganisms to such an extent that very few, if any, are detectable in the treated food by any recognized method (viruses being excepted). No microbial spoilage or toxicity should become detectable in a food so treated, regardless of the conditions under which it is stored, provided the packaging remains undamaged. The required dose is usually in the range of 25-45 kiloGrays. The shelf life of radappertized foods correctly packaged will mainly depend on the service life of the packaging material and its barrier properties. Radappertization is derived from the combination of radiation and Appert, the name of the French scientist and engineer who invented sterilized food for the troops of Napoleon. See also Radicidation Radurization References Food preservation Radiation Radiobiology Nuclear technology
https://en.wikipedia.org/wiki/Automated%20storage%20and%20retrieval%20system
An automated storage and retrieval system (ASRS or AS/RS) consists of a variety of computer-controlled systems for automatically placing and retrieving loads from defined storage locations. Automated storage and retrieval systems (AS/RS) are typically used in applications where: There is a very high volume of loads being moved into and out of storage Storage density is important because of space constraints No value is added in this process (no processing, only storage and transport) Accuracy is critical because of potential expensive damages to the load An AS/RS can be used with standard loads as well as nonstandard loads, meaning that each standard load can fit in a uniformly-sized volume; for example, the film canisters in the image of the Defense Visual Information Center are each stored as part of the contents of the uniformly sized metal boxes, which are shown in the image. Standard loads simplify the handling of a request of an item. In addition, audits of the accuracy of the inventory of contents can be restricted to the contents of an individual metal box, rather than undergoing a top-to-bottom search of the entire facility, for a single item. They can also be used in self storage places. Overview AS/RS systems are designed for automated storage and retrieval of parts and items in manufacturing, distribution, retail, wholesale and institutions. They first originated in the 1960s, initially focusing on heavy pallet loads but with the evolution of the technology the handled loads have become smaller. The systems operate under computerized control, maintaining an inventory of stored items. Retrieval of items is accomplished by specifying the item type and quantity to be retrieved. The computer determines where in the storage area the item can be retrieved from and schedules the retrieval. It directs the proper automated storage and retrieval machine (SRM) to the location where the item is stored and directs the machine to deposit the item at a loca
https://en.wikipedia.org/wiki/Cantor%20cube
In mathematics, a Cantor cube is a topological group of the form {0, 1}A for some index set A. Its algebraic and topological structures are the group direct product and product topology over the cyclic group of order 2 (which is itself given the discrete topology). If A is a countably infinite set, the corresponding Cantor cube is a Cantor space. Cantor cubes are special among compact groups because every compact group is a continuous image of one, although usually not a homomorphic image. (The literature can be unclear, so for safety, assume all spaces are Hausdorff.) Topologically, any Cantor cube is: homogeneous; compact; zero-dimensional; AE(0), an absolute extensor for compact zero-dimensional spaces. (Every map from a closed subset of such a space into a Cantor cube extends to the whole space.) By a theorem of Schepin, these four properties characterize Cantor cubes; any space satisfying the properties is homeomorphic to a Cantor cube. In fact, every AE(0) space is the continuous image of a Cantor cube, and with some effort one can prove that every compact group is AE(0). It follows that every zero-dimensional compact group is homeomorphic to a Cantor cube, and every compact group is a continuous image of a Cantor cube. References Topological groups Georg Cantor
https://en.wikipedia.org/wiki/%27t%20Hooft%20symbol
The 't Hooft symbol is a collection of numbers which allows one to express the generators of the SU(2) Lie algebra in terms of the generators of Lorentz algebra. The symbol is a blend between the Kronecker delta and the Levi-Civita symbol. It was introduced by Gerard 't Hooft. It is used in the construction of the BPST instanton. Definition $\eta^a_{\mu\nu}$ is the 't Hooft symbol, defined by $\eta^a_{\mu\nu} = \epsilon^{a\mu\nu} + \delta^{a\mu}\delta^{\nu 4} - \delta^{a\nu}\delta^{\mu 4}$ and $\bar{\eta}^a_{\mu\nu} = \epsilon^{a\mu\nu} - \delta^{a\mu}\delta^{\nu 4} + \delta^{a\nu}\delta^{\mu 4}$ ($a = 1, 2, 3$; $\mu, \nu = 1, \dots, 4$), where $\delta$ denotes the Kronecker delta and $\epsilon^{a\mu\nu}$ is the Levi-Civita symbol, taken to vanish when $\mu$ or $\nu$ equals 4; the latter, $\bar{\eta}^a_{\mu\nu}$, are the anti-self-dual 't Hooft symbols. Matrix Form In matrix form, the 't Hooft symbols are and their anti-self-duals are the following: Properties They satisfy the self-duality and the anti-self-duality properties: Some other properties are The same holds for except for and Obviously due to different duality properties. Many properties of these are tabulated in the appendix of 't Hooft's paper and also in the article by Belitsky et al. See also Instanton 't Hooft anomaly 't Hooft–Polyakov monopole 't Hooft loop References Gauge theories Mathematical symbols
https://en.wikipedia.org/wiki/Orthogonal%20coordinates
In mathematics, orthogonal coordinates are defined as a set of coordinates $\mathbf{q} = (q^1, q^2, \dots, q^d)$ in which the coordinate hypersurfaces all meet at right angles (note that superscripts are indices, not exponents). A coordinate surface for a particular coordinate $q^k$ is the curve, surface, or hypersurface on which $q^k$ is a constant. For example, the three-dimensional Cartesian coordinates $(x, y, z)$ is an orthogonal coordinate system, since its coordinate surfaces $x$ = constant, $y$ = constant, and $z$ = constant are planes that meet at right angles to one another, i.e., are perpendicular. Orthogonal coordinates are a special but extremely common case of curvilinear coordinates. Motivation While vector operations and physical laws are normally easiest to derive in Cartesian coordinates, non-Cartesian orthogonal coordinates are often used instead for the solution of various problems, especially boundary value problems, such as those arising in field theories of quantum mechanics, fluid flow, electrodynamics, plasma physics and the diffusion of chemical species or heat. The chief advantage of non-Cartesian coordinates is that they can be chosen to match the symmetry of the problem. For example, the pressure wave due to an explosion far from the ground (or other barriers) depends on 3D space in Cartesian coordinates; however, the pressure predominantly moves away from the center, so that in spherical coordinates the problem becomes very nearly one-dimensional (since the pressure wave dominantly depends only on time and the distance from the center). Another example is (slow) fluid in a straight circular pipe: in Cartesian coordinates, one has to solve a (difficult) two-dimensional boundary value problem involving a partial differential equation, but in cylindrical coordinates the problem becomes one-dimensional with an ordinary differential equation instead of a partial differential equation. The reason to prefer orthogonal coordinates instead of general curvilinear coordinates is simplicity: many complications arise
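A short numerical sketch of the defining property: the tangent vectors of the coordinate map along each coordinate should be mutually perpendicular, and their lengths are the scale factors. Here this is checked for spherical coordinates with finite differences; it illustrates the definition rather than reproducing any particular formula from the article:

import math

def spherical_to_cartesian(r, theta, phi):
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

def tangent(f, q, i, h=1e-6):
    """Finite-difference tangent vector of f with respect to the i-th coordinate at the point q."""
    qp, qm = list(q), list(q)
    qp[i] += h
    qm[i] -= h
    return [(a - b) / (2 * h) for a, b in zip(f(*qp), f(*qm))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

q = (2.0, 0.7, 1.2)   # some point (r, theta, phi)
T = [tangent(spherical_to_cartesian, q, i) for i in range(3)]
print([round(dot(T[i], T[j]), 6) for i in range(3) for j in range(i + 1, 3)])  # ~0: mutually orthogonal
print([round(math.sqrt(dot(t, t)), 6) for t in T])  # scale factors, approximately (1, r, r*sin(theta))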
https://en.wikipedia.org/wiki/Hopf%20conjecture
In mathematics, Hopf conjecture may refer to one of several conjectural statements from differential geometry and topology attributed to Heinz Hopf. Positively or negatively curved Riemannian manifolds The Hopf conjecture is an open problem in global Riemannian geometry. It goes back to questions of Heinz Hopf from 1931. A modern formulation is: A compact, even-dimensional Riemannian manifold with positive sectional curvature has positive Euler characteristic. A compact, (2d)-dimensional Riemannian manifold with negative sectional curvature has Euler characteristic of sign $(-1)^d$. For surfaces, these statements follow from the Gauss–Bonnet theorem. For four-dimensional manifolds, this follows from the finiteness of the fundamental group, Poincaré duality and the Euler–Poincaré formula, equating for 4-manifolds the Euler characteristic with $2 - 2b_1 + b_2$, and Synge's theorem, assuring that the orientation cover is simply connected, so that the Betti numbers $b_1$ and $b_3$ vanish. For 4-manifolds, the statement also follows from the Chern–Gauss–Bonnet theorem, as noticed by John Milnor in 1955 (written down by Shiing-Shen Chern in 1955). For manifolds of dimension 6 or higher the conjecture is open. An example of Robert Geroch had shown that the Chern–Gauss–Bonnet integrand can become negative for $d \ge 3$. The positive curvature case is known to hold, however, for hypersurfaces in $\mathbb{R}^{2d+1}$ (Hopf) or codimension-two surfaces embedded in $\mathbb{R}^{2d+2}$. For sufficiently pinched positive curvature manifolds, the Hopf conjecture (in the positive curvature case) follows from the sphere theorem, a theorem which had also been conjectured first by Hopf. One of the lines of attack is to look for manifolds with more symmetry. It is notable, for example, that all known manifolds of positive sectional curvature allow for an isometric circle action. The corresponding vector field is called a Killing vector field. The conjecture (for the positive curvature case) has also been proved for manifolds of dimension or admitting an isometr
https://en.wikipedia.org/wiki/Care-of%20address
A care-of address (usually referred to as CoA) is a temporary IP address for a mobile device used in Internet routing. This allows a home agent to forward messages to the mobile device. A separate address is required because the IP address of the device that is used as host identification is topologically incorrect—it does not match the network of attachment. The care-of address splits the dual nature of an IP address, that is, its use is to identify the host and the location within the global IP network. Address assignment The care-of address can be acquired by the mobile node in two different ways: Foreign agent care-of address (FACoA): The mobile node receives the same CoA as the foreign agent. All mobile nodes in the foreign network are given the same CoA. Co-located care-of address: Each mobile node in the foreign network is assigned its own CoA, usually by a DHCP server. This might happen in a network where the foreign agent has not been deployed yet. Given the imminent IPv4 address exhaustion, the first solution is more frequently chosen, because it does not waste a public IP address for every mobile node when changing network location, as the collocated CoA does. The care-of address has to be a valid IP address within the foreign network, so that it allows the mobile node to receive and make connections with any host in the outside. To send outgoing packets, the mobile node may as well use its home address but, since it is not a connected IP address for the current network attachment, some routers in the way might prevent the packets from reaching the destination. That is why, in IPv4, the outgoing information is transported to the home agent (which is in the home network) by means of an IP tunnel. From the Home Network, the packets of the Mobile Node can be sent using its original Home Address, without any routing problem. The Correspondent Node will send its information again to the Home Network. Thus, it has to be sent on through the tunnel to the
https://en.wikipedia.org/wiki/Oppenheim%20conjecture
In Diophantine approximation, the Oppenheim conjecture concerns representations of numbers by real quadratic forms in several variables. It was formulated in 1929 by Alexander Oppenheim and later the conjectured property was further strengthened by Harold Davenport and Oppenheim. Initial research on this problem took the number n of variables to be large, and applied a version of the Hardy-Littlewood circle method. The definitive work of Margulis, settling the conjecture in the affirmative, used methods arising from ergodic theory and the study of discrete subgroups of semisimple Lie groups. Overview Meyer's theorem states that an indefinite integral quadratic form Q in n variables, n ≥ 5, nontrivially represents zero, i.e. there exists a non-zero vector x with integer components such that Q(x) = 0. The Oppenheim conjecture can be viewed as an analogue of this statement for forms Q that are not multiples of a rational form. It states that in this case, the set of values of Q on integer vectors is a dense subset of the real line. History Several versions of the conjecture were formulated by Oppenheim and Harold Davenport. Let Q be a real nondegenerate indefinite quadratic form in n variables. Suppose that n ≥ 3 and Q is not a multiple of a form with rational coefficients. Then for any ε > 0 there exists a non-zero vector x with integer components such that |Q(x)| < ε. For n ≥ 5 this was conjectured by Oppenheim in 1929; the stronger version is due to Davenport in 1946. Let Q and n have the same meaning as before. Then for any ε > 0 there exists a non-zero vector x with integer components such that 0 < |Q(x, x)| < ε. This was conjectured by Oppenheim in 1953 and proved by Birch, Davenport, and Ridout for n at least 21, and by Davenport and Heilbronn for diagonal forms in five variables. Other partial results are due to Oppenheim (for forms in four variables, but under the strong restriction that the form represents zero over Z), Watson, Iwaniec, Baker–Schli
https://en.wikipedia.org/wiki/Toroidal%20coordinates
Toroidal coordinates are a three-dimensional orthogonal coordinate system that results from rotating the two-dimensional bipolar coordinate system about the axis that separates its two foci. Thus, the two foci and in bipolar coordinates become a ring of radius in the plane of the toroidal coordinate system; the -axis is the axis of rotation. The focal ring is also known as the reference circle. Definition The most common definition of toroidal coordinates is together with ). The coordinate of a point equals the angle and the coordinate equals the natural logarithm of the ratio of the distances and to opposite sides of the focal ring The coordinate ranges are , and Coordinate surfaces Surfaces of constant correspond to spheres of different radii that all pass through the focal ring but are not concentric. The surfaces of constant are non-intersecting tori of different radii that surround the focal ring. The centers of the constant- spheres lie along the -axis, whereas the constant- tori are centered in the plane. Inverse transformation The coordinates may be calculated from the Cartesian coordinates (x, y, z) as follows. The azimuthal angle is given by the formula The cylindrical radius of the point P is given by and its distances to the foci in the plane defined by is given by The coordinate equals the natural logarithm of the focal distances whereas equals the angle between the rays to the foci, which may be determined from the law of cosines Or explicitly, including the sign, where . The transformations between cylindrical and toroidal coordinates can be expressed in complex notation as Scale factors The scale factors for the toroidal coordinates and are equal whereas the azimuthal scale factor equals Thus, the infinitesimal volume element equals Differential Operators The Laplacian is given by For a vector field the Vector Laplacian is given by Other differential operators such as and can be expressed in the
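The displayed formulas for this article did not survive extraction; as a stand-in, the sketch below uses the usual definition x = a sinh τ cos φ / (cosh τ − cos σ), y = a sinh τ sin φ / (cosh τ − cos σ), z = a sin σ / (cosh τ − cos σ), where a is the radius of the focal ring:

import math

def toroidal_to_cartesian(sigma, tau, phi, a=1.0):
    """Convert toroidal coordinates (sigma, tau, phi) to Cartesian (x, y, z)."""
    d = math.cosh(tau) - math.cos(sigma)
    x = a * math.sinh(tau) * math.cos(phi) / d
    y = a * math.sinh(tau) * math.sin(phi) / d
    z = a * math.sin(sigma) / d
    return x, y, z

# Sweeping sigma at fixed tau traces a circle lying on the torus of constant tau.
for sigma in (0.5, 1.5, 2.5):
    print(toroidal_to_cartesian(sigma, tau=1.0, phi=0.3))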
https://en.wikipedia.org/wiki/Bispherical%20coordinates
Bispherical coordinates are a three-dimensional orthogonal coordinate system that results from rotating the two-dimensional bipolar coordinate system about the axis that connects the two foci. Thus, the two foci and in bipolar coordinates remain points (on the -axis, the axis of rotation) in the bispherical coordinate system. Definition The most common definition of bispherical coordinates is where the coordinate of a point equals the angle and the coordinate equals the natural logarithm of the ratio of the distances and to the foci The coordinates ranges are -∞ < < ∞, 0 ≤ ≤ and 0 ≤ ≤ 2. Coordinate surfaces Surfaces of constant correspond to intersecting tori of different radii that all pass through the foci but are not concentric. The surfaces of constant are non-intersecting spheres of different radii that surround the foci. The centers of the constant- spheres lie along the -axis, whereas the constant- tori are centered in the plane. Inverse formulae The formulae for the inverse transformation are: where and Scale factors The scale factors for the bispherical coordinates and are equal whereas the azimuthal scale factor equals Thus, the infinitesimal volume element equals and the Laplacian is given by Other differential operators such as and can be expressed in the coordinates by substituting the scale factors into the general formulae found in orthogonal coordinates. Applications The classic applications of bispherical coordinates are in solving partial differential equations, e.g., Laplace's equation, for which bispherical coordinates allow a separation of variables. However, the Helmholtz equation is not separable in bispherical coordinates. A typical example would be the electric field surrounding two conducting spheres of different radii. References Bibliography External links MathWorld description of bispherical coordinates Three-dimensional coordinate systems Orthogonal coordinate systems
https://en.wikipedia.org/wiki/Bipolar%20cylindrical%20coordinates
Bipolar cylindrical coordinates are a three-dimensional orthogonal coordinate system that results from projecting the two-dimensional bipolar coordinate system in the perpendicular -direction. The two lines of foci and of the projected Apollonian circles are generally taken to be defined by and , respectively, (and by ) in the Cartesian coordinate system. The term "bipolar" is often used to describe other curves having two singular points (foci), such as ellipses, hyperbolas, and Cassini ovals. However, the term bipolar coordinates is never used to describe coordinates associated with those curves, e.g., elliptic coordinates. Basic definition The most common definition of bipolar cylindrical coordinates is where the coordinate of a point equals the angle and the coordinate equals the natural logarithm of the ratio of the distances and to the focal lines (Recall that the focal lines and are located at and , respectively.) Surfaces of constant correspond to cylinders of different radii that all pass through the focal lines and are not concentric. The surfaces of constant are non-intersecting cylinders of different radii that surround the focal lines but again are not concentric. The focal lines and all these cylinders are parallel to the -axis (the direction of projection). In the plane, the centers of the constant- and constant- cylinders lie on the and axes, respectively. Scale factors The scale factors for the bipolar coordinates and are equal whereas the remaining scale factor . Thus, the infinitesimal volume element equals and the Laplacian is given by Other differential operators such as and can be expressed in the coordinates by substituting the scale factors into the general formulae found in orthogonal coordinates. Applications The classic applications of bipolar coordinates are in solving partial differential equations, e.g., Laplace's equation or the Helmholtz equation, for which bipolar coordinates allo
https://en.wikipedia.org/wiki/Elliptic%20cylindrical%20coordinates
Elliptic cylindrical coordinates are a three-dimensional orthogonal coordinate system that results from projecting the two-dimensional elliptic coordinate system in the perpendicular $z$-direction. Hence, the coordinate surfaces are prisms of confocal ellipses and hyperbolae. The two foci $F_1$ and $F_2$ are generally taken to be fixed at $x = -a$ and $x = +a$, respectively, on the $x$-axis of the Cartesian coordinate system.

Basic definition

The most common definition of elliptic cylindrical coordinates $(\mu, \nu, z)$ is

$x = a\cosh\mu\cos\nu, \qquad y = a\sinh\mu\sin\nu, \qquad z = z,$

where $\mu$ is a nonnegative real number and $\nu \in [0, 2\pi)$.

These definitions correspond to ellipses and hyperbolae. The trigonometric identity

$\frac{x^2}{a^2\cosh^2\mu} + \frac{y^2}{a^2\sinh^2\mu} = \cos^2\nu + \sin^2\nu = 1$

shows that curves of constant $\mu$ form ellipses, whereas the hyperbolic trigonometric identity

$\frac{x^2}{a^2\cos^2\nu} - \frac{y^2}{a^2\sin^2\nu} = \cosh^2\mu - \sinh^2\mu = 1$

shows that curves of constant $\nu$ form hyperbolae.

Scale factors

The scale factors for the elliptic cylindrical coordinates $\mu$ and $\nu$ are equal,

$h_\mu = h_\nu = a\sqrt{\sinh^2\mu + \sin^2\nu},$

whereas the remaining scale factor $h_z = 1$. Consequently, an infinitesimal volume element equals

$dV = a^2\left(\sinh^2\mu + \sin^2\nu\right)d\mu\,d\nu\,dz,$

and the Laplacian equals

$\nabla^2\Phi = \frac{1}{a^2\left(\sinh^2\mu + \sin^2\nu\right)}\left(\frac{\partial^2\Phi}{\partial\mu^2} + \frac{\partial^2\Phi}{\partial\nu^2}\right) + \frac{\partial^2\Phi}{\partial z^2}.$

Other differential operators such as $\nabla\cdot\mathbf{F}$ and $\nabla\times\mathbf{F}$ can be expressed in the coordinates $(\mu, \nu, z)$ by substituting the scale factors into the general formulae found in orthogonal coordinates.

Alternative definition

An alternative and geometrically intuitive set of elliptic coordinates $(\sigma, \tau, z)$ is sometimes used, where $\sigma = \cosh\mu$ and $\tau = \cos\nu$, so that

$x = a\sigma\tau, \qquad y = a\sqrt{\left(\sigma^2 - 1\right)\left(1 - \tau^2\right)}, \qquad z = z.$

Hence, the curves of constant $\sigma$ are ellipses, whereas the curves of constant $\tau$ are hyperbolae. The coordinate $\tau$ must belong to the interval $[-1, 1]$, whereas the coordinate $\sigma$ must be greater than or equal to one.

The coordinates $(\sigma, \tau)$ have a simple relation to the distances to the foci $F_1$ and $F_2$. For any point in the $(x, y)$ plane, the sum of its distances to the foci equals $2a\sigma$, whereas their difference equals $2a\tau$. Thus, the distance to $F_1$ is $a(\sigma + \tau)$, whereas the distance to $F_2$ is $a(\sigma - \tau)$. (Recall that $F_1$ and $F_2$ are located at $x = -a$ and $x = +a$, respectively.) A drawback of these coordinates is that they do not have a 1-to-1 transformation to the Cartesian coordinates: the positive square root in the formula for $y$ discards the sign of $y$, so the points $(x, y)$ and $(x, -y)$ receive the same $(\sigma, \tau)$.

Alternative scale factors

The scale factors for the alternative elliptic coordinates $(\sigma, \tau, z)$ are

$h_\sigma = a\sqrt{\frac{\sigma^2 - \tau^2}{\sigma^2 - 1}}, \qquad h_\tau = a\sqrt{\frac{\sigma^2 - \tau^2}{1 - \tau^2}},$

and, as before, $h_z = 1$.
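The following Python sketch (an illustrative addition with assumed function names and example values) implements both definitions and numerically checks the focal-distance property of the alternative coordinates.

```python
import math

def elliptic_cyl_to_cartesian(mu, nu, z, a=1.0):
    """Standard definition: x = a cosh(mu) cos(nu), y = a sinh(mu) sin(nu)."""
    return a * math.cosh(mu) * math.cos(nu), a * math.sinh(mu) * math.sin(nu), z

def alt_elliptic_to_cartesian(sigma, tau, z, a=1.0):
    """Alternative definition with sigma = cosh(mu) >= 1 and tau = cos(nu) in [-1, 1].
    The positive square root means the sign of y is lost (the map is not 1-to-1)."""
    return a * sigma * tau, a * math.sqrt((sigma**2 - 1.0) * (1.0 - tau**2)), z

# Check the focal-distance property: d1 + d2 = 2 a sigma and d1 - d2 = 2 a tau
a, sigma, tau = 1.0, 1.5, 0.4
x, y, _ = alt_elliptic_to_cartesian(sigma, tau, 0.0, a)
d1 = math.hypot(x + a, y)   # distance to focus F1 at (-a, 0)
d2 = math.hypot(x - a, y)   # distance to focus F2 at (+a, 0)
print(d1 + d2, 2 * a * sigma)   # both ~3.0
print(d1 - d2, 2 * a * tau)     # both ~0.8
```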
https://en.wikipedia.org/wiki/Parabolic%20cylindrical%20coordinates
In mathematics, parabolic cylindrical coordinates are a three-dimensional orthogonal coordinate system that results from projecting the two-dimensional parabolic coordinate system in the perpendicular $z$-direction. Hence, the coordinate surfaces are confocal parabolic cylinders. Parabolic cylindrical coordinates have found many applications, e.g., the potential theory of edges.

Basic definition

The parabolic cylindrical coordinates $(\sigma, \tau, z)$ are defined in terms of the Cartesian coordinates $(x, y, z)$ by:

$x = \sigma\tau, \qquad y = \tfrac{1}{2}\left(\tau^2 - \sigma^2\right), \qquad z = z.$

The surfaces of constant $\sigma$ form confocal parabolic cylinders

$2y = \frac{x^2}{\sigma^2} - \sigma^2$

that open towards $+y$, whereas the surfaces of constant $\tau$ form confocal parabolic cylinders

$2y = -\frac{x^2}{\tau^2} + \tau^2$

that open in the opposite direction, i.e., towards $-y$. The foci of all these parabolic cylinders are located along the line defined by $x = y = 0$ (the $z$-axis). The radius $r = \sqrt{x^2 + y^2}$ has a simple formula as well,

$r = \tfrac{1}{2}\left(\sigma^2 + \tau^2\right),$

that proves useful in solving the Hamilton–Jacobi equation in parabolic coordinates for the inverse-square central force problem of mechanics; for further details, see the Laplace–Runge–Lenz vector article.

Scale factors

The scale factors for the parabolic cylindrical coordinates $\sigma$ and $\tau$ are:

$h_\sigma = h_\tau = \sqrt{\sigma^2 + \tau^2}, \qquad h_z = 1.$

Differential elements

The infinitesimal element of volume is

$dV = \left(\sigma^2 + \tau^2\right)d\sigma\,d\tau\,dz.$

The differential displacement is given by:

$d\boldsymbol{\ell} = \sqrt{\sigma^2 + \tau^2}\,d\sigma\,\hat{\boldsymbol{\sigma}} + \sqrt{\sigma^2 + \tau^2}\,d\tau\,\hat{\boldsymbol{\tau}} + dz\,\hat{\mathbf{z}}.$

The differential normal area is given by:

$d\mathbf{S} = \sqrt{\sigma^2 + \tau^2}\,d\tau\,dz\,\hat{\boldsymbol{\sigma}} + \sqrt{\sigma^2 + \tau^2}\,d\sigma\,dz\,\hat{\boldsymbol{\tau}} + \left(\sigma^2 + \tau^2\right)d\sigma\,d\tau\,\hat{\mathbf{z}}.$

Del

Let $f$ be a scalar field. The gradient is given by

$\nabla f = \frac{1}{\sqrt{\sigma^2 + \tau^2}}\frac{\partial f}{\partial\sigma}\,\hat{\boldsymbol{\sigma}} + \frac{1}{\sqrt{\sigma^2 + \tau^2}}\frac{\partial f}{\partial\tau}\,\hat{\boldsymbol{\tau}} + \frac{\partial f}{\partial z}\,\hat{\mathbf{z}}.$

The Laplacian is given by

$\nabla^2 f = \frac{1}{\sigma^2 + \tau^2}\left(\frac{\partial^2 f}{\partial\sigma^2} + \frac{\partial^2 f}{\partial\tau^2}\right) + \frac{\partial^2 f}{\partial z^2}.$

Let $\mathbf{A}$ be a vector field of the form:

$\mathbf{A} = A_\sigma\,\hat{\boldsymbol{\sigma}} + A_\tau\,\hat{\boldsymbol{\tau}} + A_z\,\hat{\mathbf{z}}.$

The divergence is given by

$\nabla\cdot\mathbf{A} = \frac{1}{\sigma^2 + \tau^2}\left[\frac{\partial}{\partial\sigma}\left(\sqrt{\sigma^2 + \tau^2}\,A_\sigma\right) + \frac{\partial}{\partial\tau}\left(\sqrt{\sigma^2 + \tau^2}\,A_\tau\right)\right] + \frac{\partial A_z}{\partial z}.$

The curl is given by

$\nabla\times\mathbf{A} = \left(\frac{1}{\sqrt{\sigma^2 + \tau^2}}\frac{\partial A_z}{\partial\tau} - \frac{\partial A_\tau}{\partial z}\right)\hat{\boldsymbol{\sigma}} + \left(\frac{\partial A_\sigma}{\partial z} - \frac{1}{\sqrt{\sigma^2 + \tau^2}}\frac{\partial A_z}{\partial\sigma}\right)\hat{\boldsymbol{\tau}} + \frac{1}{\sigma^2 + \tau^2}\left[\frac{\partial}{\partial\sigma}\left(\sqrt{\sigma^2 + \tau^2}\,A_\tau\right) - \frac{\partial}{\partial\tau}\left(\sqrt{\sigma^2 + \tau^2}\,A_\sigma\right)\right]\hat{\mathbf{z}}.$

Other differential operators can be expressed in the coordinates $(\sigma, \tau, z)$ by substituting the scale factors into the general formulae found in orthogonal coordinates.

Relationship to other coordinate systems

Relationship to cylindrical coordinates $(\rho, \varphi, z)$:

$\rho = \tfrac{1}{2}\left(\sigma^2 + \tau^2\right), \qquad \tan\varphi = \frac{\tau^2 - \sigma^2}{2\sigma\tau}, \qquad z = z.$

Parabolic unit vectors expressed in terms of Cartesian unit vectors:

$\hat{\boldsymbol{\sigma}} = \frac{\tau\,\hat{\mathbf{x}} - \sigma\,\hat{\mathbf{y}}}{\sqrt{\sigma^2 + \tau^2}}, \qquad \hat{\boldsymbol{\tau}} = \frac{\sigma\,\hat{\mathbf{x}} + \tau\,\hat{\mathbf{y}}}{\sqrt{\sigma^2 + \tau^2}}, \qquad \hat{\mathbf{z}} = \hat{\mathbf{z}}.$

Parabolic cylinder harmonics

Since all of the surfaces of constant $\sigma$, $\tau$ and $z$ are conicoids, Laplace's equation is separable in parabolic cylindrical coordinates. Using the technique of separation of variables, a separated solution to Laplace's equation may be written as a product of a function of $\sigma$, a function of $\tau$ and a function of $z$.
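A small Python example (an illustrative addition; the function name and the sample values are assumptions) applies the forward map and verifies two of the closed forms quoted above, the cylindrical radius and the constant-$\sigma$ surface equation.

```python
import math

def parabolic_cyl_to_cartesian(sigma, tau, z):
    """x = sigma*tau, y = (tau**2 - sigma**2)/2, z = z."""
    return sigma * tau, 0.5 * (tau**2 - sigma**2), z

sigma, tau = 0.8, 1.3
x, y, _ = parabolic_cyl_to_cartesian(sigma, tau, 0.0)

# The cylindrical radius has the simple closed form (sigma^2 + tau^2)/2
print(math.hypot(x, y), 0.5 * (sigma**2 + tau**2))   # both ~1.165

# A constant-sigma surface satisfies 2y = x^2/sigma^2 - sigma^2
print(2 * y, x**2 / sigma**2 - sigma**2)             # both ~1.05
```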
https://en.wikipedia.org/wiki/Prolate%20spheroidal%20coordinates
Prolate spheroidal coordinates are a three-dimensional orthogonal coordinate system that results from rotating the two-dimensional elliptic coordinate system about the focal axis of the ellipse, i.e., the symmetry axis on which the foci are located. Rotation about the other axis produces oblate spheroidal coordinates. Prolate spheroidal coordinates can also be considered as a limiting case of ellipsoidal coordinates in which the two smallest principal axes are equal in length.

Prolate spheroidal coordinates can be used to solve various partial differential equations in which the boundary conditions match its symmetry and shape, such as solving for a field produced by two centers, which are taken as the foci on the $z$-axis. One example is solving for the wavefunction of an electron moving in the electromagnetic field of two positively charged nuclei, as in the hydrogen molecular ion, H2+. Another example is solving for the electric field generated by two small electrode tips. Other limiting cases include the degenerate surfaces generated by the line segment between the foci ($\mu = 0$) and by the remainder of the $z$-axis outside that segment ($\nu = 0$ or $\nu = \pi$). The electronic structure of general diatomic molecules with many electrons can also be solved to excellent precision in the prolate spheroidal coordinate system.

Definition

The most common definition of prolate spheroidal coordinates $(\mu, \nu, \phi)$ is

$x = a\sinh\mu\sin\nu\cos\phi, \qquad y = a\sinh\mu\sin\nu\sin\phi, \qquad z = a\cosh\mu\cos\nu,$

where $\mu$ is a nonnegative real number and $\nu \in [0, \pi]$. The azimuthal angle $\phi$ belongs to the interval $[0, 2\pi)$.

The trigonometric identity

$\frac{z^2}{a^2\cosh^2\mu} + \frac{x^2 + y^2}{a^2\sinh^2\mu} = \cos^2\nu + \sin^2\nu = 1$

shows that surfaces of constant $\mu$ form prolate spheroids, since they are ellipses rotated about the axis joining their foci. Similarly, the hyperbolic trigonometric identity

$\frac{z^2}{a^2\cos^2\nu} - \frac{x^2 + y^2}{a^2\sin^2\nu} = \cosh^2\mu - \sinh^2\mu = 1$

shows that surfaces of constant $\nu$ form hyperboloids of revolution.

The distances from the foci located at $z = \pm a$ are

$d_\pm = \sqrt{x^2 + y^2 + \left(z \mp a\right)^2} = a\left(\cosh\mu \mp \cos\nu\right).$

Scale factors

The scale factors for the elliptic coordinates $\mu$ and $\nu$ are equal,

$h_\mu = h_\nu = a\sqrt{\sinh^2\mu + \sin^2\nu},$

whereas the azimuthal scale factor is

$h_\phi = a\sinh\mu\sin\nu,$

resulting in a metric of

$ds^2 = a^2\left(\sinh^2\mu + \sin^2\nu\right)\left(d\mu^2 + d\nu^2\right) + a^2\sinh^2\mu\sin^2\nu\,d\phi^2.$

Consequently, an infinitesimal volume element equals

$dV = a^3\sinh\mu\sin\nu\left(\sinh^2\mu + \sin^2\nu\right)d\mu\,d\nu\,d\phi,$

and the Laplacian can be written

$\nabla^2\Phi = \frac{1}{a^2\left(\sinh^2\mu + \sin^2\nu\right)}\left[\frac{1}{\sinh\mu}\frac{\partial}{\partial\mu}\left(\sinh\mu\,\frac{\partial\Phi}{\partial\mu}\right) + \frac{1}{\sin\nu}\frac{\partial}{\partial\nu}\left(\sin\nu\,\frac{\partial\Phi}{\partial\nu}\right)\right] + \frac{1}{a^2\sinh^2\mu\sin^2\nu}\frac{\partial^2\Phi}{\partial\phi^2}.$

Other differential operators such as $\nabla\cdot\mathbf{F}$ and $\nabla\times\mathbf{F}$ can be expressed in the coordinates $(\mu, \nu, \phi)$ by substituting the scale factors into the general formulae found in orthogonal coordinates.
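To close, a brief Python sketch (an illustrative addition; function name, default focal parameter and sample values are assumptions) evaluates the forward map and checks the closed-form focal distances $a(\cosh\mu \mp \cos\nu)$ numerically.

```python
import math

def prolate_spheroidal_to_cartesian(mu, nu, phi, a=1.0):
    """Foci at z = -a and z = +a; mu >= 0, nu in [0, pi], phi in [0, 2*pi)."""
    s = a * math.sinh(mu) * math.sin(nu)          # common cylindrical-radius factor
    return s * math.cos(phi), s * math.sin(phi), a * math.cosh(mu) * math.cos(nu)

a, mu, nu, phi = 1.0, 0.9, 1.1, 0.4
x, y, z = prolate_spheroidal_to_cartesian(mu, nu, phi, a)

# Distances to the two foci have the closed forms a*(cosh(mu) - cos(nu)) and a*(cosh(mu) + cos(nu))
d_plus  = math.sqrt(x * x + y * y + (z - a)**2)   # focus at z = +a
d_minus = math.sqrt(x * x + y * y + (z + a)**2)   # focus at z = -a
print(d_plus,  a * (math.cosh(mu) - math.cos(nu)))   # both ~0.979
print(d_minus, a * (math.cosh(mu) + math.cos(nu)))   # both ~1.887
```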