https://en.wikipedia.org/wiki/SUnit
SUnit is a unit testing framework for the programming language Smalltalk. It is the original source of the xUnit design, and was originally written by Kent Beck, one of the creators of Extreme Programming. SUnit allows writing tests and checking results in Smalltalk. History SUnit was originally described by Beck in "Simple Smalltalk Testing: With Patterns" (1989), then published as chapter 30, "Simple Smalltalk Testing", in the book Kent Beck's Guide to Better Smalltalk by Kent Beck, Donald G. Firesmith (Editor) (Cambridge University Press, December 1998, 408pp). External links SUnit @ Camp Smalltalk SUnit @ Ward Cunningham's Wiki
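The xUnit design that SUnit originated (test-case classes, a per-test fixture set up before each method, assertion helpers, a runner that collects results) survives largely unchanged in its descendants. A minimal sketch in Python's unittest, a direct xUnit descendant; the StackTest class and its test methods are illustrative, not from the source:

```python
import unittest

class StackTest(unittest.TestCase):
    """An xUnit-style test case: fixture in setUp, checks via assert methods."""

    def setUp(self):
        # The fixture is rebuilt before each test method, as in SUnit.
        self.stack = []

    def test_push_then_pop_returns_item(self):
        self.stack.append(42)
        self.assertEqual(self.stack.pop(), 42)

    def test_new_stack_is_empty(self):
        self.assertEqual(len(self.stack), 0)

# The runner discovers the test methods and aggregates pass/fail results.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(StackTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each test method gets a fresh fixture, so tests stay independent of one another — the core discipline the SUnit paper argued for.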
https://en.wikipedia.org/wiki/Wireless%20network%20interface%20controller
A wireless network interface controller (WNIC) is a network interface controller which connects to a wireless network, such as Wi-Fi or Bluetooth, rather than a wired network, such as Token Ring or Ethernet. A WNIC, just like other NICs, operates at layers 1 and 2 of the OSI model and uses an antenna to communicate via radio waves. A wireless network interface controller may be implemented as an expansion card and connected using the PCI or PCIe bus, or connected via USB, PC Card, ExpressCard, Mini PCIe or M.2. The low cost and ubiquity of the Wi-Fi standard means that many newer mobile computers have a wireless network interface built into the motherboard. The term is usually applied to IEEE 802.11 adapters; it may also apply to a NIC using protocols other than 802.11, such as one implementing Bluetooth connections. Modes of operation An 802.11 WNIC can operate in two modes known as infrastructure mode and ad hoc mode: Infrastructure mode In an infrastructure mode network the WNIC needs a wireless access point: all data is transferred using the access point as the central hub. All wireless nodes in an infrastructure mode network connect to an access point. All nodes connecting to the access point must have the same service set identifier (SSID) as the access point, and if a kind of wireless security is enabled on the access point (such as WEP or WPA), they must share the same keys or other authentication parameters. Ad hoc mode In an ad hoc mode network the WNIC does not require an access point, but rather can interface with all other wireless nodes directly. All the nodes in an ad hoc network must have the same channel and SSID. Specifications The IEEE 802.11 standard sets out low-level specifications for how all 802.11 wireless networks operate. Earlier 802.11 interface controllers are usually only compatible with earlier variants of the standard, while newer cards support both current and old standards. Specifications commonly used in marketi
https://en.wikipedia.org/wiki/De%20Bruijn%20sequence
In combinatorial mathematics, a de Bruijn sequence of order n on a size-k alphabet A is a cyclic sequence in which every possible length-n string on A occurs exactly once as a substring (i.e., as a contiguous subsequence). Such a sequence is denoted by B(k, n) and has length k^n, which is also the number of distinct strings of length n on A. Each of these distinct strings, when taken as a substring of B(k, n), must start at a different position, because substrings starting at the same position are not distinct. Therefore, B(k, n) must have at least k^n symbols. And since B(k, n) has exactly k^n symbols, de Bruijn sequences are optimally short with respect to the property of containing every string of length n at least once. The number of distinct de Bruijn sequences B(k, n) is (k!)^(k^(n−1)) / k^n. The sequences are named after the Dutch mathematician Nicolaas Govert de Bruijn, who wrote about them in 1946. As he later wrote, the existence of de Bruijn sequences for each order together with the above properties were first proved, for the case of alphabets with two elements, by Camille Flye Sainte-Marie (1894). The generalization to larger alphabets is due to Tatyana van Aardenne-Ehrenfest and de Bruijn (1951). Automata for recognizing these sequences are denoted as de Bruijn automata. In most applications, A = {0,1}. History The earliest known example of a de Bruijn sequence comes from Sanskrit prosody where, since the work of Pingala, each possible three-syllable pattern of long and short syllables is given a name, such as 'y' for short–long–long and 'm' for long–long–long. To remember these names, the mnemonic yamātārājabhānasalagām is used, in which each three-syllable pattern occurs starting at its name: 'yamātā' has a short–long–long pattern, 'mātārā' has a long–long–long pattern, and so on, until 'salagām' which has a short–short–long pattern. This mnemonic, equivalent to a de Bruijn sequence on binary 3-tuples, is of unknown antiquity, but is at least as old as Charles Philip Brown's 1869 book on Sanskrit prosody that mentions it and considers it "an ancient line, written by Pāṇini". In 1894, A. de
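A B(k, n) sequence can be generated with the well-known construction that concatenates Lyndon words whose length divides n; a compact sketch:

```python
def de_bruijn(k: int, n: int) -> str:
    """Generate a de Bruijn sequence B(k, n) over the alphabet {0, ..., k-1}
    using the standard recursive Lyndon-word concatenation construction."""
    a = [0] * (k * n)
    sequence = []

    def db(t, p):
        if t > n:
            # Append the Lyndon word only when its length p divides n.
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return "".join(map(str, sequence))
```

Read cyclically, the k^n output symbols contain every length-n string over the alphabet exactly once; for example, de_bruijn(2, 3) has length 2^3 = 8 and, with wraparound, contains all eight binary triples.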
https://en.wikipedia.org/wiki/Vortex%20ring%20state
The vortex ring state (VRS) is a dangerous aerodynamic condition that may arise in helicopter flight, when a vortex ring system engulfs the rotor, causing severe loss of lift. Often the term settling with power is used as a synonym, e.g., in Australia, the UK, and the USA, but not in Canada, which uses the latter term for a different phenomenon. A vortex ring state sets in when the airflow around a helicopter's main rotor assumes a rotationally symmetrical form over the tips of the blades, supported by a laminar flow over the blade tips, and a countering upflow of air outside and away from the rotor. In this condition, the rotor falls into a new topological state of the surrounding flow field, induced by its own downwash, and suddenly loses lift. Since vortex rings are surprisingly stable fluid dynamical phenomena (a form of topological soliton), the best way to recover from them is to laterally steer clear of them, in order to re-establish lift, and to break them up using maximum engine power, in order to establish turbulence. This is also why the condition is often mistaken for "settling with insufficient power": high-powered manoeuvres can both induce a vortex ring state in free air, and then at low altitude, during landing conditions, possibly break it. If sufficient power is not available to maintain the airfoil of the rotor at a stalled condition, while generating sufficient lift, the aircraft will not be able to stay afloat before the vortex ring state dissipates, and will crash. This condition also occurs with tiltrotors, and it was responsible for an accident involving a V-22 Osprey in 2000. Vortex ring state caused the loss of a heavily modified MH-60 helicopter during Operation Neptune Spear, the 2011 raid in which Osama bin Laden was killed. Description Because the blades are rotating about a central axis, the speed of each airfoil is lowest at the point where it connects to the hub-and-grip assembly. This fundamental physical reality means that th
https://en.wikipedia.org/wiki/Larder
A larder is a cool area for storing food prior to use. Originally, it was where raw meat was larded—covered in fat—to be preserved. By the 18th century, the term had expanded: at that point, a dry larder was where bread, pastry, milk, butter, or cooked meats were stored. Larders were commonplace in houses before the widespread use of the refrigerator. Stone larders were designed to keep cold in the hottest weather. They had slate or marble shelves two or three inches thick. These shelves were wedged into thick stone walls. Fish or vegetables were laid directly onto the shelves and covered with muslin or handfuls of wet rushes were sprinkled under and around. Essential qualities Cool, dry, and well-ventilated. Usually on the shady side of the house. No fireplaces or hot flues in any of the adjoining walls. Might have a door to an outside yard. Had windows with wire gauze in them instead of glass. Description In the northern hemisphere, most houses would be arranged to have their larders and kitchens on the north or west side of the house where they received the least amount of sun. In Australia and New Zealand, larders were placed on the south or east sides of the house for the same reason. Many larders have small, unglazed windows with window openings covered in fine mesh. This allows free circulation of air without allowing flies to enter. Many larders also have tiled or painted walls to simplify cleaning. Older larders, and especially those in larger houses, have hooks in the ceiling to hang joints of meat. A pantry may contain a thrawl, a term used in Derbyshire and Yorkshire to denote a stone slab or shelf used to keep food cool in the days before refrigeration was domestically available. In a late medieval hall, a thrawl would have been equivalent to a larder. In a large or moderately large 19th-century house, all these rooms would have been placed in the lowest, and/or as convenient, coolest possible location in the building to use the mass of the groun
https://en.wikipedia.org/wiki/Estimation%20theory
Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements. In estimation theory, two approaches are generally considered: The probabilistic approach (described in this article) assumes that the measured data is random with a probability distribution dependent on the parameters of interest. The set-membership approach assumes that the measured data vector belongs to a set which depends on the parameter vector. Examples For example, it is desired to estimate the proportion of a population of voters who will vote for a particular candidate. That proportion is the parameter sought; the estimate is based on a small random sample of voters. Alternatively, it is desired to estimate the probability of a voter voting for a particular candidate, based on some demographic features, such as age. Or, for example, in radar the aim is to find the range of objects (airplanes, boats, etc.) by analyzing the two-way transit timing of received echoes of transmitted pulses. Since the reflected pulses are unavoidably embedded in electrical noise, their measured values are randomly distributed, so that the transit time must be estimated. As another example, in electrical communication theory, the measurements which contain information regarding the parameters of interest are often associated with a noisy signal. Basics For a given model, several statistical "ingredients" are needed so the estimator can be implemented. The first is a statistical sample – a set of data points taken from a random vector (RV) of size N. Put into a vector, x = [x[0], x[1], ..., x[N − 1]]^T. Secondly, there are M parameters whose values are to be estimated. Third, the continuous probability density function (pdf) or its
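A minimal sketch of the probabilistic approach: a classic toy model takes measurements x[n] = A + w[n], where A is the unknown parameter and w[n] is zero-mean noise, and estimates A by the sample mean. The parameter value and noise level below are illustrative assumptions, not from the source:

```python
import random

def estimate_mean(samples):
    """Sample-mean estimator: approximates an unknown constant A from
    noisy measurements x[n] = A + w[n] with zero-mean noise w[n]."""
    return sum(samples) / len(samples)

random.seed(0)
A_true = 3.0  # the unknown parameter (hypothetical value for the demo)
x = [A_true + random.gauss(0.0, 0.5) for _ in range(10_000)]  # the data vector
A_hat = estimate_mean(x)
```

Because the noise has zero mean, the estimator is unbiased, and its error shrinks like 1/sqrt(N) as the sample size N grows.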
https://en.wikipedia.org/wiki/Approach%20space
In topology, a branch of mathematics, approach spaces are a generalization of metric spaces, based on point-to-set distances, instead of point-to-point distances. They were introduced by Robert Lowen in 1989, in a series of papers on approach theory between 1988 and 1995. Definition Given a metric space (X, d), or more generally, an extended pseudoquasimetric (which will be abbreviated ∞pq-metric here), one can define an induced map d: X × P(X) → [0,∞] by d(x, A) = inf{d(x, a) : a ∈ A}. With this example in mind, a distance on X is defined to be a map X × P(X) → [0,∞] satisfying for all x in X and A, B ⊆ X, d(x, {x}) = 0, d(x, Ø) = ∞, d(x, A∪B) = min(d(x, A), d(x, B)), For all 0 ≤ ε ≤ ∞, d(x, A) ≤ d(x, A(ε)) + ε, where we define A(ε) = {x : d(x, A) ≤ ε}. (The "empty infimum is positive infinity" convention is like the nullary intersection is everything convention.) An approach space is defined to be a pair (X, d) where d is a distance function on X. Every approach space has a topology, given by treating A → A(0) as a Kuratowski closure operator. The appropriate maps between approach spaces are the contractions. A map f: (X, d) → (Y, e) is a contraction if e(f(x), f[A]) ≤ d(x, A) for all x ∈ X and A ⊆ X. Examples Every ∞pq-metric space (X, d) can be distanced to (X, d), as described at the beginning of the definition. Given a set X, the discrete distance is given by d(x, A) = 0 if x ∈ A and d(x, A) = ∞ if x ∉ A. The induced topology is the discrete topology. Given a set X, the indiscrete distance is given by d(x, A) = 0 if A is non-empty, and d(x, A) = ∞ if A is empty. The induced topology is the indiscrete topology. Given a topological space X, a topological distance is given by d(x, A) = 0 if x ∈ A, and d(x, A) = ∞ otherwise. The induced topology is the original topology. In fact, the only two-valued distances are the topological distances. Let P = [0, ∞] be the extended non-negative reals. Let d+(x, A) = max(x − sup A, 0) for x ∈ P and A ⊆ P. Given any
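The induced distance d(x, A) = inf{d(x, a) : a ∈ A} from the definition can be sketched concretely for the usual metric on the reals; the function name is hypothetical, and finite sets stand in for general subsets so the infimum is a minimum:

```python
def point_set_distance(x, A):
    """Induced point-to-set distance d(x, A) = inf{|x - a| : a in A} on the
    reals; the empty infimum is taken to be positive infinity, as in the
    convention described above."""
    if not A:
        return float("inf")
    return min(abs(x - a) for a in A)
```

On such inputs the distance axioms from the definition can be checked directly: d(x, {x}) = 0, d(x, ∅) = ∞, and d(x, A ∪ B) = min(d(x, A), d(x, B)).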
https://en.wikipedia.org/wiki/NOV%20%28computers%29
NOV, or News Overview, is a widely deployed indexing method for Usenet articles, also found in some Internet email implementations. Written in 1992 by Geoff Collyer, NOV replaced a variety of incompatible indexing schemes used in different client programs, each typically requiring custom modifications to each news server before they could be used. In modern NNTP implementations, NOV is exposed as the OVER (formerly XOVER) and related commands. Operation In its original implementation, the header lines of each incoming message are examined, and a single line of text is appended to the overview files, with one overview file present for each newsgroup. Tab (ASCII code 9) characters and line breaks within the headers are converted to spaces (ASCII code 32), and the header fields within each overview line are then delimited by tab characters. The first seven fields in a NOV line are fixed and unlabeled: Subject: header contents From: header contents Date: header contents Message-ID: header contents References: header contents Size of the article in octets Lines: header contents The header lines are those defined in either RFC 2822 or RFC 1036. If data for any of these fields is missing, a tab alone is put in its place. The value of the size field is approximate, as servers may count line endings as one or two characters. Additionally, the lines value may be calculated by the server, supplied by the message sender, or omitted altogether. An arbitrary number of additional fields may be added to any NOV line. The eighth and later fields must be labeled in the form "Header-Name: contents", again delimited by tabs. The order and presence of additional fields are allowed to vary from line to line, and from server to server. Some servers provide a schema of what is recorded in new overview lines in the form of an NNTP command, but this cannot be relied upon to be accurate for older entries. In practice, most servers supply only one optional field, the contents of the Xref: header, to allow
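The tab-delimited layout described above can be sketched as a small parser; the field names and the sample line are illustrative, following the seven fixed fields listed in the text:

```python
# The seven fixed, unlabeled fields of a NOV line, in order.
NOV_FIXED = ["Subject", "From", "Date", "Message-ID", "References", "Bytes", "Lines"]

def parse_nov_line(line: str):
    """Split one overview line on tabs: seven fixed unlabeled fields
    (empty string where the header was missing), then any number of
    labeled 'Header-Name: contents' extras."""
    fields = line.rstrip("\r\n").split("\t")
    record = dict(zip(NOV_FIXED, fields[:7]))
    extras = {}
    for extra in fields[7:]:
        name, _, value = extra.partition(": ")
        extras[name] = value
    return record, extras
```

A missing fixed header shows up as an empty string, matching the "a tab alone is put in its place" rule, and a trailing Xref: field lands in the labeled extras.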
https://en.wikipedia.org/wiki/Elementary%20arithmetic
Elementary arithmetic is a branch of mathematics involving basic numerical operations, namely addition, subtraction, multiplication, and division. Due to its low level of abstraction, broad range of application, and position as the foundation of all mathematics, elementary arithmetic is generally the first critical branch of mathematics to be taught in schools. Digits Symbols called digits are used to represent the value of numbers in a numeral system. The most commonly used digits are the Arabic numerals (0 to 9). The Hindu-Arabic numeral system is the most commonly used numeral system, being a positional notation system used to represent numbers using these digits. Successor function and size In elementary arithmetic, the successor of a natural number (including zero) is the result of adding one to that number, whereas the predecessor of a natural number (excluding zero) is the result obtained by subtracting one from that number. For example, the successor of zero is one and the predecessor of eleven is ten (0 + 1 = 1 and 11 − 1 = 10). Every natural number has a successor, and all natural numbers (except zero) have a predecessor. If one number is greater than (>) another number, then the latter is less than (<) the first one. For example, three is less than eight (3 < 8), and eight is greater than three (8 > 3). Counting Counting involves assigning a natural number to each object in a set, starting with one for the first object and increasing by one for each subsequent object. The number of objects in the set is the count, which is equal to the highest natural number assigned to an object in the set. This count is also known as the cardinality of the set. Counting can also be the process of tallying using tally marks, drawing a mark for each object in a set. In more advanced mathematics, the process of counting can be thought of as constructing a one-to-one correspondence (or bijection) between the elements of a set and the set {1, 2, ..., n}, where n is a natural number; the size of the set is then n.
https://en.wikipedia.org/wiki/Sigma-ideal
In mathematics, particularly measure theory, a σ-ideal, or sigma ideal, of a σ-algebra Σ (σ, read "sigma") is a subset of Σ with certain desirable closure properties. It is a special type of ideal. Its most frequent application is in probability theory. Let (X, Σ) be a measurable space (meaning Σ is a σ-algebra of subsets of X). A subset N of Σ is a σ-ideal if the following properties are satisfied: (i) ∅ ∈ N; (ii) when A ∈ N and B ∈ Σ, then B ⊆ A implies B ∈ N; (iii) if A_1, A_2, ... is a sequence of sets in N, then their union is in N. Briefly, a sigma-ideal must contain the empty set and contain subsets and countable unions of its elements. The concept of σ-ideal is dual to that of a countably complete (σ-) filter. If a measure μ is given on (X, Σ), the set of μ-negligible sets (S ∈ Σ such that μ(S) = 0) is a σ-ideal. The notion can be generalized to preorders (P, ≤, 0) with a bottom element 0 as follows: I is a σ-ideal of P just when (i') 0 ∈ I, (ii') x ≤ y and y ∈ I implies x ∈ I, and (iii') given a sequence x_1, x_2, ... of elements of I, there exists some y ∈ I such that x_n ≤ y for each n. Thus I contains the bottom element, is downward closed, and satisfies a countable analogue of the property of being upwards directed. A σ-ideal of a set X is a σ-ideal of the power set of X. That is, when no σ-algebra is specified, then one simply takes the full power set of the underlying set. For example, the meager subsets of a topological space are those in the σ-ideal generated by the collection of closed subsets with empty interior. See also References Bauer, Heinz (2001): Measure and Integration Theory. Walter de Gruyter GmbH & Co. KG, 10785 Berlin, Germany.
https://en.wikipedia.org/wiki/Triparental%20mating
Triparental mating is a form of bacterial conjugation where a conjugative plasmid present in one bacterial strain assists the transfer of a mobilizable plasmid present in a second bacterial strain into a third bacterial strain. Plasmids are introduced into bacteria for such purposes as transformation, cloning, or transposon mutagenesis. Triparental matings can help overcome some of the barriers to efficient plasmid mobilization. For instance, if the conjugative plasmid and the mobilizable plasmid are members of the same incompatibility group, they do not need to stably coexist in the second bacterial strain for the mobilizable plasmid to be transferred. History Triparental mating was identified in yeasts in 1960 and then in Escherichia coli in 1962. Process Requirements A helper strain, carrying a conjugative plasmid (such as the F-plasmid) that codes for genes required for conjugation and DNA transfer. A donor strain, carrying a mobilizable plasmid that can utilize the transfer functions of the conjugative plasmid. A recipient strain, into which the mobilizable plasmid is to be introduced. Five to seven days are required to determine whether the plasmid was successfully introduced into the new bacterial strain and to confirm that there is no carryover of the helper or donor strain. In contrast, electroporation does not require a helper or donor strain. This helps avoid possible contamination with other strains. The introduction of the plasmid can be verified in the recipient strain in two days, making electroporation a faster and more efficient method of transformation. Electroporation, however, does not work with all bacteria and is mostly limited to well-characterized model organisms. See also Bacterial conjugation Plasmid Transposon (applications) Bacteriophage Three-parent baby External links Protocol for P. aeruginosa
https://en.wikipedia.org/wiki/PBKDF2
In cryptography, PBKDF1 and PBKDF2 (Password-Based Key Derivation Function 1 and 2) are key derivation functions with a sliding computational cost, used to reduce vulnerability to brute-force attacks. PBKDF2 is part of RSA Laboratories' Public-Key Cryptography Standards (PKCS) series, specifically PKCS #5 v2.0, also published as the Internet Engineering Task Force's RFC 2898. It supersedes PBKDF1, which could only produce derived keys up to 160 bits long. RFC 8018 (PKCS #5 v2.1), published in 2017, recommends PBKDF2 for password hashing. Purpose and operation PBKDF2 applies a pseudorandom function, such as a hash-based message authentication code (HMAC), to the input password or passphrase along with a salt value and repeats the process many times to produce a derived key, which can then be used as a cryptographic key in subsequent operations. The added computational work makes password cracking much more difficult, and is known as key stretching. When the standard was written in the year 2000 the recommended minimum number of iterations was 1,000, but the parameter is intended to be increased over time as CPU speeds increase. A Kerberos standard in 2005 recommended 4,096 iterations; Apple reportedly used 2,000 for iOS 3 and 10,000 for iOS 4; while LastPass in 2011 used 5,000 iterations for JavaScript clients and 100,000 iterations for server-side hashing. In 2023, OWASP recommended using 600,000 iterations for PBKDF2-HMAC-SHA256 and 210,000 for PBKDF2-HMAC-SHA512. Having a salt added to the password reduces the ability to use precomputed hashes (rainbow tables) for attacks, and means that multiple passwords have to be tested individually, not all at once. The public key cryptography standard recommends a salt length of at least 64 bits. The US National Institute of Standards and Technology recommends a salt length of 128 bits. Key derivation process The PBKDF2 key derivation function has five input parameters: DK = PBKDF2(PRF, Password, Salt, c, dkLen), where PRF is a pseudorandom function of two parameters
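Python's standard library exposes this function directly as hashlib.pbkdf2_hmac; a sketch using the 2023 OWASP iteration count and NIST salt length mentioned above (the password is, of course, illustrative):

```python
import hashlib
import os

password = b"correct horse battery staple"  # illustrative passphrase
salt = os.urandom(16)  # 128-bit random salt, per the NIST recommendation

# Derive a 32-byte key with PBKDF2-HMAC-SHA256 and 600,000 iterations.
dk = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)

# Verification later repeats the derivation with the stored salt and
# iteration count, then compares the result to the stored key.
check = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)
```

The salt and iteration count must be stored alongside the derived key, since both are needed to reproduce the derivation at verification time.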
https://en.wikipedia.org/wiki/Half-integer
In mathematics, a half-integer is a number of the form n + 1/2, where n is a whole number. For example, 1/2, 3/2, −5/2, and 9/2 are all half-integers. The name "half-integer" is perhaps misleading, as the set may be misunderstood to include numbers such as 1 (being half the integer 2). A name such as "integer-plus-half" may be more accurate, but even though not literally true, "half integer" is the conventional term. Half-integers occur frequently enough in mathematics and in quantum mechanics that a distinct term is convenient. Note that halving an integer does not always produce a half-integer; this is only true for odd integers. For this reason, half-integers are also sometimes called half-odd-integers. Half-integers are a subset of the dyadic rationals (numbers produced by dividing an integer by a power of two). Notation and algebraic structure The set of all half-integers is often denoted ℤ + 1/2. The integers and half-integers together form a group under the addition operation, which may be denoted (1/2)ℤ. However, these numbers do not form a ring because the product of two half-integers is not a half-integer; e.g. 1/2 × 1/2 = 1/4. The smallest ring containing them is ℤ[1/2], the ring of dyadic rationals. Properties The sum of n half-integers is a half-integer if and only if n is odd. This includes n = 0, since the empty sum 0 is not a half-integer. The negative of a half-integer is a half-integer. The cardinality of the set of half-integers is equal to that of the integers. This is due to the existence of a bijection from the integers to the half-integers: n ↦ n + 1/2, where n is an integer. Uses Sphere packing The densest lattice packing of unit spheres in four dimensions (called the D4 lattice) places a sphere at every point whose coordinates are either all integers or all half-integers. This packing is closely related to the Hurwitz integers: quaternions whose real coefficients are either all integers or all half-integers. Physics In physics, the Pauli exclusion principle results from the definition of fermions as particles which have spins
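The parity rule for sums can be checked with exact rational arithmetic: a rational q is a half-integer exactly when its reduced denominator is 2. A small sketch (the helper name is illustrative):

```python
from fractions import Fraction

def is_half_integer(q: Fraction) -> bool:
    """True when q = n + 1/2 for some integer n, i.e. q in lowest terms
    has denominator 2."""
    return q.denominator == 2

halves = [Fraction(2 * n + 1, 2) for n in range(5)]  # 1/2, 3/2, 5/2, 7/2, 9/2

# A sum of an odd number of half-integers is again a half-integer...
odd_sum = sum(halves[:3])   # 1/2 + 3/2 + 5/2 = 9/2
# ...while a sum of an even number of them is an integer.
even_sum = sum(halves[:2])  # 1/2 + 3/2 = 2
```

The same check exposes the failure of ring closure: the product of two half-integers, such as 1/2 × 1/2, has denominator 4 and is not a half-integer.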
https://en.wikipedia.org/wiki/Veronese%20surface
In mathematics, the Veronese surface is an algebraic surface in five-dimensional projective space, and is realized by the Veronese embedding, the embedding of the projective plane given by the complete linear system of conics. It is named after Giuseppe Veronese (1854–1917). Its generalization to higher dimension is known as the Veronese variety. The surface admits an embedding in the four-dimensional projective space defined by the projection from a general point in the five-dimensional space. Its general projection to three-dimensional projective space is called a Steiner surface. Definition The Veronese surface is the image of the mapping ν: P² → P⁵ given by ν([x : y : z]) = [x² : xy : xz : y² : yz : z²], where [x : y : z] denotes homogeneous coordinates on the projective plane. The map ν is known as the Veronese embedding. Motivation The Veronese surface arises naturally in the study of conics. A conic is a degree 2 plane curve, thus defined by an equation: Ax² + Bxy + Cy² + Dxz + Eyz + Fz² = 0. The pairing between coefficients (A, B, C, D, E, F) and variables (x, y, z) is linear in the coefficients and quadratic in the variables; the Veronese map makes it linear in the coefficients and linear in the monomials. Thus for a fixed point [x : y : z], the condition that a conic contains the point is a linear equation in the coefficients, which formalizes the statement that "passing through a point imposes a linear condition on conics". Veronese map The Veronese map or Veronese variety generalizes this idea to mappings of general degree d in n+1 variables. That is, the Veronese map of degree d is the map ν_d: Pⁿ → Pᵐ with m given by the multiset coefficient, or more familiarly the binomial coefficient: m = C(n + d, d) − 1. The map sends [x₀ : … : xₙ] to all possible monomials of total degree d (of which there are C(n + d, d)); we have m + 1 = C(n + d, d) since there are n + 1 variables to choose from; and we subtract 1 since the projective space Pᵐ has m + 1 coordinates. The second equality shows that for fixed source dimension n, the target dimension is a polynomial in d of degree n and leading coefficient 1/n!. For low degree, ν₀ is the trivial constant map to P⁰ and ν₁ is the identity map on Pⁿ, so d is generally taken to be
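The coordinate count of the Veronese map can be made concrete by enumerating the degree-d monomials in the n+1 variables; a small sketch (the function name is illustrative):

```python
from itertools import combinations_with_replacement
from math import comb

def veronese_monomials(n: int, d: int):
    """All monomials of total degree d in the n+1 variables x_0, ..., x_n,
    each returned as an exponent tuple; these are the coordinate functions
    of the degree-d Veronese map on P^n."""
    monomials = []
    for combo in combinations_with_replacement(range(n + 1), d):
        exponents = [0] * (n + 1)
        for var in combo:
            exponents[var] += 1
        monomials.append(tuple(exponents))
    return monomials
```

For n = 2, d = 2 this lists the six monomials x², xy, xz, y², yz, z² from the definition, so the target space has 6 − 1 = 5 projective dimensions — the Veronese surface in P⁵.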
https://en.wikipedia.org/wiki/High-end%20audio
High-end audio is a class of consumer home audio equipment marketed to audiophiles on the basis of high price or quality, and esoteric or novel sound reproduction technologies. The term can refer simply to the price, to the build quality of the components, or to the subjective or objective quality of sound reproduction. Definition The distinction between the terms high end and high fidelity is not well defined. According to one industry commentator, high-end could be defined as "gear below whose price and performance one could not go without compromising the music and the sound." Harry Pearson, founder of The Absolute Sound magazine, is widely acknowledged to have coined the term high-end audio. Costs High-end audio equipment can be extremely expensive. It is sometimes referred to as cost-no-object equipment. Audiophile equipment can encompass the full range from budget to high-end in terms of price. Fidelity assessment The fidelity of sound reproduction may be assessed aurally or using audio system measurements. The human sense of hearing is subjective and difficult to define. Psychoacoustics is a division of acoustics that studies this field. Measurements can be deceiving; high or low figures of certain technical characteristics do not necessarily offer a good representation of how the equipment sounds to each person. For example, some valve amplifiers produce greater amounts of total harmonic distortion, but this type of distortion (2nd harmonic) is not as disturbing to the ear as the higher-order distortions produced by poorly designed transistor equipment. The validity of certain products is often questioned. These include accessories such as speaker wires utilizing exotic materials (such as oxygen-free copper) and construction geometries, cable stands for lifting them off the floor (as a way to control mechanically induced vibrations), connectors, sprays and other tweaks. See also Audio noise measurement Broadcast quality Professional audio S
https://en.wikipedia.org/wiki/Johnny%20Stenb%C3%A4ck
Johnny Stenbäck is a Finnish software engineer mostly known for his work on the Mozilla browser. He was one of the first developers outside Netscape to get involved with the Mozilla source released by Netscape in March 1998. Stenbäck started working on the source code soon after the release, then working for the Finnish software company Citec (Citec created DocZilla, a Mozilla-based SGML browser). In 2000 he was hired by Netscape and moved to California. In 2003 Stenbäck joined the Mozilla Foundation. Stenbäck is currently working at Google. Publications Co-author of "Extending Mozilla or How to Do the Impossible", which was originally prepared as a tutorial for Xtech'99. Editor of Document Object Model (DOM) Level 2 HTML Specification, a W3C Recommendation.
https://en.wikipedia.org/wiki/Thermal%20expansion
Thermal expansion is the tendency of matter to change its shape, area, volume, and density in response to a change in temperature, usually not including phase transitions. Temperature is a monotonic function of the average molecular kinetic energy of a substance. When a substance is heated, molecules begin to vibrate and move more, usually creating more distance between themselves. Substances which contract with increasing temperature are unusual, and only occur within limited temperature ranges (see examples below). The relative expansion (also called strain) divided by the change in temperature is called the material's coefficient of linear thermal expansion and generally varies with temperature. As energy in particles increases, they start moving faster and faster, weakening the intermolecular forces between them and therefore expanding the substance. Overview Predicting expansion If an equation of state is available, it can be used to predict the values of the thermal expansion at all the required temperatures and pressures, along with many other state functions. Contraction effects (negative thermal expansion) A number of materials contract on heating within certain temperature ranges; this is usually called negative thermal expansion, rather than "thermal contraction". For example, the coefficient of thermal expansion of water drops to zero as it is cooled to 3.983 °C and then becomes negative below this temperature; this means that water has a maximum density at this temperature, and this leads to bodies of water maintaining this temperature at their lower depths during extended periods of sub-zero weather. Other materials are also known to exhibit negative thermal expansion. Fairly pure silicon has a negative coefficient of thermal expansion for temperatures between about 18 and 120 kelvin. ALLVAR Alloy 30, a titanium alloy, exhibits anisotropic negative thermal expansion across a wide range of temperatures. Factors affecting thermal expansion Unlike g
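The coefficient of linear thermal expansion described above gives the first-order estimate ΔL = α·L₀·ΔT; a small sketch, with an illustrative material value (roughly that of aluminium) as an assumption:

```python
def linear_expansion(length_m: float, alpha_per_k: float, delta_t_k: float) -> float:
    """First-order estimate dL = alpha * L0 * dT of linear thermal expansion.
    Valid only while alpha is approximately constant over the temperature
    range, since the coefficient generally varies with temperature."""
    return alpha_per_k * length_m * delta_t_k

# Illustrative example: a 10 m beam with alpha ~ 23e-6 per kelvin
# (a typical handbook value for aluminium) heated by 50 K.
dL = linear_expansion(10.0, 23e-6, 50.0)  # about 11.5 mm
```

A negative coefficient in the same formula yields a negative ΔL, which is exactly the contraction-on-heating behaviour of the materials discussed above.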
https://en.wikipedia.org/wiki/Entry%20point
In computer programming, an entry point is the place in a program where the execution of a program begins, and where the program has access to command line arguments. To start a program's execution, the loader or operating system passes control to its entry point. (During booting, the operating system itself is the program). This marks the transition from load time (and dynamic link time, if present) to run time. For some operating systems and programming languages, the entry point is in a runtime library, a set of support functions for the language. The library code initializes the program and then passes control to the program proper. In other cases, the program may initialize the runtime library itself. In simple systems, execution begins at the first statement, which is common in interpreted languages, simple executable formats, and boot loaders. In other cases, the entry point is at some other known memory address which can be an absolute address or relative address (offset). Alternatively, execution of a program can begin at a named point, either with a conventional name defined by the programming language or operating system or at a caller-specified name. In many C-family languages, this is a function called main; as a result, the entry point is often known as the main function. In JVM languages such as Java the entry point is a static method called main; in CLI languages such as C# the entry point is a static method named Main. Usage Entry points apply both to source code and to executable files. However, in day-to-day software development, programmers specify the entry points only in source code, which makes them much better known. Entry points in executable files depend on the application binary interface (ABI) of the actual operating system, and are generated by the compiler or linker (if not fixed by the ABI). Other linked object files may also have entry points, which are used later by the linker when generating entry points of an executable fil
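The named-entry-point convention can be sketched in Python, where the interpreter begins at the top of the module and the `if __name__ == "__main__"` idiom plays the role of the conventional main function, with command line arguments available via sys.argv:

```python
import sys

def main(argv):
    """Conventional entry-point function: receives the command line
    arguments (argv[0] is the program name) and returns an exit status."""
    name = argv[1] if len(argv) > 1 else "world"
    print(f"hello, {name}")
    return 0

# Runs only when the module is executed directly, not when it is imported;
# this line is where control effectively enters the program.
if __name__ == "__main__":
    main(sys.argv)
```

Unlike C, where the runtime calls a function literally named main, Python simply executes the module top to bottom; the idiom above recreates the same single, well-known entry point by convention.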
https://en.wikipedia.org/wiki/Mathematical%20chemistry
Mathematical chemistry is the area of research engaged in novel applications of mathematics to chemistry; it concerns itself principally with the mathematical modeling of chemical phenomena. Mathematical chemistry has also sometimes been called computer chemistry, but should not be confused with computational chemistry. Major areas of research in mathematical chemistry include chemical graph theory, which deals with topology such as the mathematical study of isomerism and the development of topological descriptors or indices which find application in quantitative structure-property relationships; and chemical aspects of group theory, which finds applications in stereochemistry and quantum chemistry. Another important area is molecular knot theory and circuit topology that describe the topology of folded linear molecules such as proteins and Nucleic Acids. The history of the approach may be traced back to the 19th century. Georg Helm published a treatise titled "The Principles of Mathematical Chemistry: The Energetics of Chemical Phenomena" in 1894. Some of the more contemporary periodical publications specializing in the field are MATCH Communications in Mathematical and in Computer Chemistry, first published in 1975, and the Journal of Mathematical Chemistry, first published in 1987. In 1986 a series of annual conferences MATH/CHEM/COMP taking place in Dubrovnik was initiated by the late Ante Graovac. The basic models for mathematical chemistry are molecular graph and topological index. In 2005 the International Academy of Mathematical Chemistry (IAMC) was founded in Dubrovnik (Croatia) by Milan Randić. The Academy has 82 members (2009) from all over the world, including six scientists awarded with a Nobel Prize. See also Bibliography Molecular Descriptors for Chemoinformatics, by R. Todeschini and V. Consonni, Wiley-VCH, Weinheim, 2009. Mathematical Chemistry Series, by D. Bonchev, D. H. Rouvray (Eds.), Gordon and Breach Science Publisher, Amsterdam, 2000
https://en.wikipedia.org/wiki/Phenotypic%20switching
Phenotypic switching is switching between multiple cellular morphologies. David R. Soll described two such systems: the first high frequency switching system between several morphological stages and a second high frequency switching system between opaque and white cells. The latter is an epigenetic switching system. Phenotypic switching in Candida albicans is often used to refer to the epigenetic white-to-opaque switching system. C. albicans needs this switch for sexual mating. Besides the two switching systems mentioned above, many other switching systems are known in C. albicans. A second example occurs in melanoma, where malignantly transformed pigment cells switch back-and-forth between phenotypes of proliferation and invasion in response to changing microenvironments, driving metastatic progression. See also Polyphenism References External links Cell biology
https://en.wikipedia.org/wiki/PXES
PXES also known as PXES Universal Linux Thin Client, was created in early 2001 by Diego Torres Milano. PXES is a Linux distribution designed to be run on thin clients using PXE; however, it is also possible to boot PXES from a CD-ROM or hard disk if the NIC or BIOS does not support PXE. In 2006, The PXES project merged with 2X Software, who are merging PXES with the 2XOS. Distribution of PXES will remain free. References External links PXES at Sourceforge.net 2X Software 2XOS PXES museum Remote desktop Network booting
https://en.wikipedia.org/wiki/National%20Defence%20Radio%20Establishment
The National Defence Radio Establishment (FRA) is a Swedish government agency organised under the Ministry of Defence. The two main tasks of FRA are signals intelligence (SIGINT), and support to government authorities and state-owned companies regarding computer security. The FRA is not allowed to initiate any surveillance on its own, and operates purely on assignment from the Government, the Government Offices, the Armed Forces, the Swedish National Police Board and Swedish Security Service (SÄPO). Decisions and oversight regarding information interception are provided by the Defence Intelligence Court and the Defence Intelligence Commission; additional oversight regarding protection of privacy is provided by the Swedish Data Protection Authority. History Signals intelligence has existed in Sweden since 1905, when the Swedish General Staff and Naval Staff, respectively, had departments for signals intelligence and cryptanalysis. These departments succeeded, for instance, in decoding the Russian Baltic Sea Fleet cipher. After World War I, this ability mostly ceased as politicians did not see its value and did not grant funding. The Swedish Navy still continued on a smaller scale and developed the competence further. One of the first major successes was in 1933, when the cipher of the Russian OGPU (predecessor of the KGB) was solved. In 1937, the Swedish Defence Staff was established and the Crypto Department, with its Crypto Detail IV, was responsible for cryptanalysis. In 1940, when Germany occupied Denmark and Norway, the German Wehrmacht requested to use the Swedish telephone network for its communication. This was accepted and Crypto Detail IV immediately started to intercept it. The traffic was almost always encrypted by the German state-of-the-art cipher machine Geheimfernschreiber. This device was believed to produce indecipherable messages, with its 893,622,318,929,520,960 different crypto key settings. After two weeks of single-handed work, the Swedish professor o
https://en.wikipedia.org/wiki/Tetractys
The tetractys (), or tetrad, or the tetractys of the decad is a triangular figure consisting of ten points arranged in four rows: one, two, three, and four points in each row, which is the geometrical representation of the fourth triangular number. As a mystical symbol, it was very important to the secret worship of Pythagoreanism. There were four seasons, and the number was also associated with planetary motions and music. Pythagorean symbol The first four numbers symbolize the musica universalis and the Cosmos as: Monad – Unity Dyad – Power – Limit/Unlimited (peras/apeiron) Triad – Harmony Tetrad – Kosmos The four rows add up to ten, which was unity of a higher order (The Dekad). The Tetractys symbolizes the four classical elements—air, fire, water, and earth. The Tetractys represented the organization of space: the first row represented zero dimensions (a point) the second row represented one dimension (a line of two points) the third row represented two dimensions (a plane defined by a triangle of three points) the fourth row represented three dimensions (a tetrahedron defined by four points) A prayer of the Pythagoreans shows the importance of the Tetractys (sometimes called the "Mystic Tetrad"), as the prayer was addressed to it. As a portion of the secret religion, initiates were required to swear a secret oath by the Tetractys. They then served as novices, which required them to observe silence for a period of five years. The Pythagorean oath also mentioned the Tetractys: By that pure, holy, four lettered name on high, nature's eternal fountain and supply, the parent of all souls that living be, by him, with faith find oath, I swear to thee. It is said that the Pythagorean musical system was based on the Tetractys as the rows can be read as the ratios of 4:3 (perfect fourth), 3:2 (perfect fifth), 2:1 (octave), forming the basic intervals of the Pythagorean scales. That is, Pythagorean scales are generated from combining pure fourths (in a
https://en.wikipedia.org/wiki/Isopropyl%20%CE%B2-D-1-thiogalactopyranoside
{{DISPLAYTITLE:Isopropyl β-D-1-thiogalactopyranoside}} Isopropyl β-D-1-thiogalactopyranoside (IPTG) is a molecular biology reagent. This compound is a molecular mimic of allolactose, a lactose metabolite that triggers transcription of the lac operon, and it is therefore used to induce protein expression where the gene is under the control of the lac operator. Mechanism of action Like allolactose, IPTG binds to the lac repressor and releases the tetrameric repressor from the lac operator in an allosteric manner, thereby allowing the transcription of genes in the lac operon, such as the gene coding for beta-galactosidase, a hydrolase enzyme that catalyzes the hydrolysis of β-galactosides into monosaccharides. Unlike allolactose, however, the sulfur (S) atom in IPTG creates a chemical bond which is non-hydrolyzable by the cell, preventing the cell from metabolizing or degrading the inducer. Therefore, its concentration remains constant during an experiment. IPTG uptake by E. coli can be independent of the action of lactose permease, since other transport pathways are also involved. At low concentration, IPTG enters cells through lactose permease, but at high concentrations (typically used for protein induction), IPTG can enter the cells independently of lactose permease. Use in laboratory When stored as a powder at 4 °C or below, IPTG is stable for 5 years. It is significantly less stable in solution; Sigma recommends storage for no more than a month at room temperature. IPTG is an effective inducer of protein expression in the concentration range of 100 μmol/L to 3.0 mmol/L. Typically, a sterile, filtered 1 mol/L solution of IPTG is added 1:1000 to an exponentially growing bacterial culture, to give a final concentration of 1 mmol/L. The concentration used depends on the strength of induction required, as well as the genotype of cells or plasmid used. If lacIq, a mutant that over-produces the lac repressor, is present, then a higher concentration of IPTG may be necessary.
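The 1:1000 dilution arithmetic described above can be sketched as a small calculation. The helper name below is hypothetical, and the formula assumes the added stock volume is negligible relative to the culture volume:

```python
def induction_volume_ul(stock_molar, final_molar, culture_ml):
    """Volume of IPTG stock (in microlitres) to add to a culture to reach
    the desired final concentration, assuming the added volume is
    negligible compared with the culture volume."""
    return culture_ml * 1000.0 * final_molar / stock_molar

# 1 mol/L stock, 1 mmol/L target in 50 mL of culture -> 50 uL (a 1:1000 dilution)
print(induction_volume_ul(1.0, 1e-3, 50.0))
```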
https://en.wikipedia.org/wiki/Burroughs%20Medium%20Systems
The Burroughs B2500 through Burroughs B4900 was a series of mainframe computers developed and manufactured by Burroughs Corporation in Pasadena, California, United States, from 1966 to 1991. They were aimed at the business world with an instruction set optimized for the COBOL programming language. They were also known as Burroughs Medium Systems, by contrast with the Burroughs Large Systems and Burroughs Small Systems. History and architecture First generation The B2500 and B3500 computers were announced in 1966. They operated directly on COBOL-68's primary decimal data types: strings of up to 100 digits, with one EBCDIC or ASCII digit character or two 4-bit binary-coded decimal BCD digits per byte. Portable COBOL programs did not use binary integers at all, so the B2500 did not either, not even for memory addresses. Memory was addressed down to the 4-bit digit in big-endian style, using 5-digit decimal addresses. Floating point numbers also used base 10 rather than some binary base, and had up to 100 mantissa digits. A typical COBOL statement 'ADD A, B GIVING C' may use operands of different lengths, different digit representations, and different sign representations. This statement compiled into a single 12-byte instruction with 3 memory operands. Complex formatting for printing was accomplished by executing a single EDIT instruction with detailed format descriptors. Other high level instructions implemented "translate this buffer through this (e.g. EBCDIC to ASCII) conversion table into that buffer" and "sort this table using these sort requirements into that table". In extreme cases, single instructions could run for several hundredths of a second. MCP could terminate over-long instructions but could not interrupt and resume partially completed instructions. (Resumption is a prerequisite for doing page style virtual memory when operands cross page boundaries.) The machine matched COBOL so closely that the COBOL compiler was simple and fast, and CO
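The two-BCD-digits-per-byte representation described above can be sketched in Python. The helper functions are hypothetical illustrations of the packing only; the real machine addressed individual 4-bit digits directly in hardware:

```python
def pack_bcd(digits):
    """Pack decimal digits two per byte, big-endian, high nibble first,
    as in the B2500/B3500 packed-decimal data format (illustrative)."""
    digits = list(digits)
    if len(digits) % 2:
        digits = [0] + digits  # pad with a leading zero digit
    return bytes((hi << 4) | lo for hi, lo in zip(digits[::2], digits[1::2]))

def unpack_bcd(data):
    """Recover the individual 4-bit decimal digits from packed bytes."""
    out = []
    for b in data:
        out += [b >> 4, b & 0xF]
    return out

packed = pack_bcd([1, 2, 3, 4, 5])
print(packed.hex())  # 012345
```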
https://en.wikipedia.org/wiki/General-purpose%20markup%20language
A general-purpose markup language is a markup language that is used for more than one purpose or situation. Other, more specialized domain-specific markup languages are often based upon these languages. For example, HTML 4.01 and earlier are domain-specific markup languages (for webpages), and are based on the syntax of SGML, which is a general-purpose markup language. List Notable general-purpose markup languages include: ASN.1 (Abstract Syntax Notation One) EBML LML – general-purpose markup language for expressing markdown, variables, and expressions for machine-readable and executable legal documentation GML – the predecessor of SGML SGML – a predecessor of XML XML – a stripped-down form of SGML YAML GLML – General-purpose Legal Markup Language See also Comparison of document markup languages General-purpose language General-purpose modeling language General-purpose programming language S-expression Data serialization formats
https://en.wikipedia.org/wiki/List%20of%20web%20service%20protocols
The following is a list of web service protocols. BEEP - Blocks Extensible Exchange Protocol CTS - Canonical Text Services Protocol E-Business XML Hessian Internet Open Trading Protocol JSON-RPC JSON-WSP SOAP - outgrowth of XML-RPC, originally an acronym for Simple Object Access Protocol Universal Description, Discovery, and Integration (UDDI) Web Processing Service (WPS) WSCL - Web Services Conversation Language WSFL - Web Services Flow Language (superseded by BPEL) XINS Standard Calling Convention - HTTP parameters in (GET/POST/HEAD), POX out XLANG - XLANG-Specification (superseded by BPEL) XML-RPC - XML Remote Procedure Call See also List of web service frameworks List of web service specifications Service-oriented architecture Web service Application layer protocols web service
https://en.wikipedia.org/wiki/Condensation%20algorithm
The condensation algorithm (Conditional Density Propagation) is a computer vision algorithm. The principal application is to detect and track the contour of objects moving in a cluttered environment. Object tracking is one of the more basic and difficult aspects of computer vision and is generally a prerequisite to object recognition. Being able to identify which pixels in an image make up the contour of an object is a non-trivial problem. Condensation is a probabilistic algorithm that attempts to solve this problem. The algorithm itself is described in detail by Isard and Blake in a publication in the International Journal of Computer Vision in 1998. One of the most interesting facets of the algorithm is that it does not compute on every pixel of the image. Rather, pixels to process are chosen at random, and only a subset of the pixels end up being processed. Multiple hypotheses about what is moving are supported naturally by the probabilistic nature of the approach. The evaluation functions come largely from previous work in the area and include many standard statistical approaches. The original part of this work is the application of particle filter estimation techniques. The algorithm’s creation was inspired by the inability of Kalman filtering to perform object tracking well in the presence of significant background clutter. The presence of clutter tends to produce probability distributions for the object state which are multi-modal and therefore poorly modeled by the Kalman filter. The condensation algorithm in its most general form requires no assumptions about the probability distributions of the object or measurements. Algorithm overview The condensation algorithm seeks to solve the problem of estimating the conformation of an object described by a vector at time , given observations of the detected features in the images up to and including the current time. The algorithm outputs an estimate to the state conditional probability density by appl
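A minimal sketch of the particle-filter idea underlying Condensation, reduced to a one-dimensional state for illustration. This is a generic bootstrap particle filter under simplifying assumptions, not Isard and Blake's full contour tracker (which propagates contour-shape parameters); all function names are invented:

```python
import math
import random

def condensation_step(particles, weights, dynamics_noise, likelihood):
    """One iteration of a 1-D bootstrap particle filter: resample,
    predict through stochastic dynamics, re-weight by the measurement."""
    n = len(particles)
    # 1. Resample: pick particles with probability proportional to weight.
    resampled = random.choices(particles, weights=weights, k=n)
    # 2. Predict: propagate each sample through noisy dynamics.
    predicted = [x + random.gauss(0.0, dynamics_noise) for x in resampled]
    # 3. Measure: re-weight by the observation likelihood and normalise.
    w = [likelihood(x) for x in predicted]
    total = sum(w)
    return predicted, [wi / total for wi in w]

# Track a stationary object at position 2.0 observed with Gaussian noise.
random.seed(0)
like = lambda x: math.exp(-0.5 * ((x - 2.0) / 0.5) ** 2)
ps = [random.uniform(-5.0, 5.0) for _ in range(500)]
ws = [1.0 / 500] * 500
for _ in range(20):
    ps, ws = condensation_step(ps, ws, 0.2, like)
estimate = sum(p * w for p, w in zip(ps, ws))
print(estimate)  # close to 2.0
```

The weighted mean concentrates near the likelihood peak; because the representation is a sample set rather than a single Gaussian, multi-modal distributions (the clutter case that defeats the Kalman filter) are handled naturally.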
https://en.wikipedia.org/wiki/Cunningham%20Project
The Cunningham Project is a collaborative effort started in 1925 to factor numbers of the form bn ± 1 for b = 2, 3, 5, 6, 7, 10, 11, 12 and large n. The project is named after Allan Joseph Champneys Cunningham, who published the first version of the table together with Herbert J. Woodall. There are three printed versions of the table, the most recent published in 2002, as well as an online version by Samuel Wagstaff. The current limits of the exponents are: Factors of Cunningham number Two types of factors can be derived from a Cunningham number without having to use a factorization algorithm: algebraic factors of binomial numbers (e.g. difference of two squares and sum of two cubes), which depend on the exponent, and aurifeuillean factors, which depend on both the base and the exponent. Algebraic factors From elementary algebra, bn − 1 divides bkn − 1 for all k, and bn + 1 divides bkn + 1 for odd k. In addition, b2n − 1 = (bn − 1)(bn + 1). Thus, when m divides n, bm − 1 and bm + 1 are factors of bn − 1 if the quotient of n over m is even; only the first number is a factor if the quotient is odd. bm + 1 is a factor of bn + 1 if m divides n and the quotient is odd. Aurifeuillean factors When the number is of a particular form (the exact expression varies with the base), aurifeuillean factorization may be used, which gives a product of two or three numbers F, L and M. Let b = s2 × k with k squarefree; if certain congruence conditions relating n to k (depending on k modulo 4) hold, then bn ± 1 has an aurifeuillean factorization. Other factors Once the algebraic and aurifeuillean factors are removed, the other factors of bn ± 1 are always of the form 2kn + 1, since they are all factors of Φn(b), the nth cyclotomic polynomial evaluated at b. When n is prime, neither algebraic nor aurifeuillean factors are possible, except for the trivial factors (b − 1 for bn − 1 and b + 1 for bn + 1). For Mersenne numbers, the trivial factors are not poss
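The algebraic divisibility facts and the 2kn + 1 shape of the remaining factors can be checked numerically on small cases; a sketch (the helper name is invented, and trial division is used only because the examples are tiny):

```python
def smallest_factor(n):
    """Smallest prime factor of n by trial division (fine for small n)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

# Algebraic factor: b^m - 1 divides b^n - 1 whenever m divides n.
b = 2
assert (b**10 - 1) % (b**5 - 1) == 0        # 31 divides 1023

# For prime n, non-trivial factors of b^n - 1 have the form 2kn + 1.
n = 11
p = smallest_factor(2**n - 1)               # 2^11 - 1 = 2047 = 23 * 89
print(p, (p - 1) % (2 * n) == 0)            # 23 = 2*1*11 + 1
```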
https://en.wikipedia.org/wiki/Plant%20senescence
Plant senescence is the process of aging in plants. Plants have both stress-induced and age-related developmental aging. Chlorophyll degradation during leaf senescence reveals the carotenoids, such as anthocyanin and xanthophylls, which are the cause of autumn leaf color in deciduous trees. Leaf senescence has the important function of recycling nutrients, mostly nitrogen, to growing and storage organs of the plant. Unlike animals, plants continually form new organs and older organs undergo a highly regulated senescence program to maximize nutrient export. Hormonal regulation of senescence Programmed senescence seems to be heavily influenced by plant hormones. The hormones abscisic acid, ethylene, jasmonic acid and salicylic acid are accepted by most scientists as promoters of senescence, but at least one source lists gibberellins, brassinosteroids and strigolactone as also being involved. Cytokinins help to maintain the plant cell and expression of cytokinin biosynthesis genes late in development prevents leaf senescence. A withdrawal of or inability of the cell to perceive cytokinin may cause it to undergo apoptosis or senescence. In addition, mutants that cannot perceive ethylene show delayed senescence. Genome-wide comparison of mRNAs expressed during dark-induced senescence versus those expressed during age-related developmental senescence demonstrate that jasmonic acid and ethylene are more important for dark-induced (stress-related) senescence while salicylic acid is more important for developmental senescence. Annual versus perennial benefits Some plants have evolved into annuals which die off at the end of each season and leave seeds for the next, whereas closely related plants in the same family have evolved to live as perennials. This may be a programmed "strategy" for the plants. The benefit of an annual strategy may be genetic diversity, as one set of genes does continue year after year, but a new mix is produced each year. Secondly, being annual m
https://en.wikipedia.org/wiki/Electrostatic%20generator
An electrostatic generator, or electrostatic machine, is an electrical generator that produces static electricity, or electricity at high voltage and low continuous current. The knowledge of static electricity dates back to the earliest civilizations, but for millennia it remained merely an interesting and mystifying phenomenon, without a theory to explain its behavior and often confused with magnetism. By the end of the 17th century, researchers had developed practical means of generating electricity by friction, but the development of electrostatic machines did not begin in earnest until the 18th century, when they became fundamental instruments in the studies about the new science of electricity. Electrostatic generators operate by using manual (or other) power to transform mechanical work into electric energy, or using electric currents. Manual electrostatic generators develop electrostatic charges of opposite signs rendered to two conductors, using only electric forces, and work by using moving plates, drums, or belts to carry electric charge to a high potential electrode. Description Electrostatic machines are typically used in science classrooms to safely demonstrate electrical forces and high voltage phenomena. The elevated potential differences achieved have been also used for a variety of practical applications, such as operating X-ray tubes, particle accelerators, spectroscopy, medical applications, sterilization of food, and nuclear physics experiments. Electrostatic generators such as the Van de Graaff generator, and variations as the Pelletron, also find use in physics research. Electrostatic generators can be divided into categories depending on how the charge is generated: Friction machines use the triboelectric effect (electricity generated by contact or friction) Influence machines use electrostatic induction Others Friction machines History The first electrostatic generators are called friction machines because of the friction in the genera
https://en.wikipedia.org/wiki/Hilbert%27s%20fourth%20problem
In mathematics, Hilbert's fourth problem in the 1900 list of Hilbert's problems is a foundational question in geometry. In one statement derived from the original, it was to find — up to an isomorphism — all geometries that have an axiomatic system of the classical geometry (Euclidean, hyperbolic and elliptic), with those axioms of congruence that involve the concept of the angle dropped, and `triangle inequality', regarded as an axiom, added. If one assumes the continuity axiom in addition, then, in the case of the Euclidean plane, we come to the problem posed by Jean Gaston Darboux: "To determine all the calculus of variation problems in the plane whose solutions are all the plane straight lines." There are several interpretations of the original statement of David Hilbert. Nevertheless, a solution was sought, with the German mathematician Georg Hamel being the first to contribute to the solution of Hilbert's fourth problem. A recognized solution was given by Soviet mathematician Aleksei Pogorelov in 1973.<ref name="Pogorelov1973">А. В. Погорелов, Полное решение IV проблемы Гильберта, ДАН СССР № 208, т.1 (1973), 46–49. English translation: {{cite journal | last1=Pogorelov | first1=A. V. | title=A complete solution of "Hilbert's fourth problem| journal=Doklady Akademii Nauk SSSR | volume=208 | issue=1 | date=1973 | pages=48–52}}</ref> In 1976, Armenian mathematician Rouben V. Ambartzumian proposed another proof of Hilbert's fourth problem. Original statement Hilbert discusses the existence of non-Euclidean geometry and non-Archimedean geometry ...a geometry in which all the axioms of ordinary euclidean geometry hold, and in particular all the congruence axioms except the one of the congruence of triangles (or all except the theorem of the equality of the base angles in the isosceles triangle), and in which, besides, the proposition that in every triangle the sum of two sides is greater than the third is assumed as a particular axiom. Due to the idea that a
https://en.wikipedia.org/wiki/Hamilton%20O.%20Smith
Hamilton Othanel Smith (born August 23, 1931) is an American microbiologist and Nobel laureate. Smith graduated from University Laboratory High School of Urbana, Illinois. He attended the University of Illinois at Urbana-Champaign, but in 1950 transferred to the University of California, Berkeley, where he earned his B.A. in Mathematics in 1952. He received his medical degree from Johns Hopkins School of Medicine in 1956. Between 1956 and 1957 Smith worked for the Washington University in St. Louis Medical Service. In 1975, he was awarded a Guggenheim Fellowship, which he spent at the University of Zurich. In 1970, Smith and Kent W. Wilcox discovered the first type II restriction enzyme, now called HindII. Smith went on to discover DNA methylases that constitute the other half of the bacterial host restriction and modification systems, as hypothesized by Werner Arber of Switzerland. He was awarded the Nobel Prize in Physiology or Medicine in 1978 for discovering type II restriction enzymes, with Werner Arber and Daniel Nathans as co-recipients. He later became a leading figure in the nascent field of genomics, when in 1995 he and a team at The Institute for Genomic Research sequenced the first bacterial genome, that of Haemophilus influenzae. H. influenzae was the same organism in which Smith had discovered restriction enzymes in the late 1960s. He subsequently played a key role in the sequencing of many of the early genomes at The Institute for Genomic Research, and in the assembly of the human genome at Celera Genomics, which he joined when it was founded in 1998. More recently, he has directed a team at the J. Craig Venter Institute that works towards creating a partially synthetic bacterium, Mycoplasma laboratorium. In 2003 the same group synthetically assembled the genome of a virus, Phi X 174 bacteriophage. Smith is scientific director of privately held Synthetic Genomics, which was founded in 2005 by Craig Venter to continue this work. Synthetic Gen
https://en.wikipedia.org/wiki/BioBlitz
A BioBlitz, also written without capitals as bioblitz, is an intense period of biological surveying in an attempt to record all the living species within a designated area. Groups of scientists, naturalists, and volunteers conduct an intensive field study over a continuous time period (e.g., usually 24 hours). There is a public component to many BioBlitzes, with the goal of getting the public interested in biodiversity. To encourage more public participation, these BioBlitzes are often held in urban parks or nature reserves close to cities. Research into the best practices for a successful BioBlitz has found that collaboration with local natural history museums can improve public participation. As well, BioBlitzes have been shown to be a successful tool in teaching post-secondary students about biodiversity. Features A BioBlitz has different opportunities and benefits than a traditional, scientific field study. Some of these potential benefits include: Enjoyment – Instead of a highly structured and measured field survey, this sort of event has the atmosphere of a festival. The short time frame makes the search more exciting. Local – The concept of biodiversity tends to be associated with coral reefs or tropical rainforests. A BioBlitz offers the chance for people to visit a nearby setting and see that local parks have biodiversity and are important to conserve. Science – These one-day events gather basic taxonomic information on some groups of species. Meet the Scientists – A BioBlitz encourages people to meet working scientists and ask them questions. Identifying rare and unique species/groups – When volunteers and scientists work together, they are able to identify uncommon or special habitats for protection and management and, in some cases, rare species may be uncovered. Documenting species occurrence – BioBlitzes do not provide a complete species inventory for a site, but they provide a species list which makes a basis for a more complete inventory and will of
https://en.wikipedia.org/wiki/Cardinality%20of%20the%20continuum
In set theory, the cardinality of the continuum is the cardinality or "size" of the set of real numbers , sometimes called the continuum. It is an infinite cardinal number and is denoted by (lowercase Fraktur "c") or . The real numbers are more numerous than the natural numbers . Moreover, has the same number of elements as the power set of Symbolically, if the cardinality of is denoted as , the cardinality of the continuum is This was proven by Georg Cantor in his uncountability proof of 1874, part of his groundbreaking study of different infinities. The inequality was later stated more simply in his diagonal argument in 1891. Cantor defined cardinality in terms of bijective functions: two sets have the same cardinality if, and only if, there exists a bijective function between them. Between any two real numbers a < b, no matter how close they are to each other, there are always infinitely many other real numbers, and Cantor showed that they are as many as those contained in the whole set of real numbers. In other words, the open interval (a,b) is equinumerous with This is also true for several other infinite sets, such as any n-dimensional Euclidean space (see space filling curve). That is, The smallest infinite cardinal number is (aleph-null). The second smallest is (aleph-one). The continuum hypothesis, which asserts that there are no sets whose cardinality is strictly between and means that . The truth or falsity of this hypothesis is undecidable and cannot be proven within the widely used Zermelo–Fraenkel set theory with axiom of choice (ZFC). Properties Uncountability Georg Cantor introduced the concept of cardinality to compare the sizes of infinite sets. He famously showed that the set of real numbers is uncountably infinite. That is, is strictly greater than the cardinality of the natural numbers, : In practice, this means that there are strictly more real numbers than there are integers. Cantor proved this statement in several different
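Cantor's diagonal argument can be illustrated on a finite stand-in: given any list of binary sequences, flipping the k-th digit of the k-th row produces a sequence that differs from every row in at least one place, so it cannot appear in the list. The function name below is invented for the sketch:

```python
def diagonal_escape(rows):
    """Build a binary sequence that differs from the k-th row in the
    k-th position -- the heart of Cantor's diagonal argument."""
    return [1 - rows[k][k] for k in range(len(rows))]

rows = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
d = diagonal_escape(rows)
print(d)  # [1, 0, 1, 0] -- not equal to any row
```

In the infinite case the same construction shows that no list indexed by the natural numbers can exhaust all infinite binary sequences, hence the continuum is uncountable.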
https://en.wikipedia.org/wiki/Shear%20modulus
In materials science, shear modulus or modulus of rigidity, denoted by G, or sometimes S or μ, is a measure of the elastic shear stiffness of a material and is defined as the ratio of shear stress to the shear strain: G = τ_xy / γ_xy = (F/A) / (Δx/l) = Fl / (A Δx), where τ_xy = F/A is the shear stress, F is the force which acts, A is the area on which the force acts, and γ_xy is the shear strain. In engineering, γ_xy = Δx/l = tan θ; elsewhere, γ_xy = θ. Here Δx is the transverse displacement and l is the initial length of the area. The derived SI unit of shear modulus is the pascal (Pa), although it is usually expressed in gigapascals (GPa) or in thousand pounds per square inch (ksi). Its dimensional form is M1L−1T−2, replacing force by mass times acceleration. Explanation The shear modulus is one of several quantities for measuring the stiffness of materials. All of them arise in the generalized Hooke's law: Young's modulus E describes the material's strain response to uniaxial stress in the direction of this stress (like pulling on the ends of a wire or putting a weight on top of a column, with the wire getting longer and the column losing height), the Poisson's ratio ν describes the response in the directions orthogonal to this uniaxial stress (the wire getting thinner and the column thicker), the bulk modulus K describes the material's response to (uniform) hydrostatic pressure (like the pressure at the bottom of the ocean or a deep swimming pool), the shear modulus G describes the material's response to shear stress (like cutting it with dull scissors). These moduli are not independent, and for isotropic materials they are connected via the equations E = 2G(1 + ν) = 3K(1 − 2ν). The shear modulus is concerned with the deformation of a solid when it experiences a force parallel to one of its surfaces while its opposite face experiences an opposing force (such as friction). In the case of an object shaped like a rectangular prism, it will deform into a parallelepiped. Anisotropic materials such as wood, paper and also essentially all single crystals exhibit differing material response to stress or st
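For isotropic materials the moduli are linked by the standard relation E = 2G(1 + ν), so G can be computed from Young's modulus and Poisson's ratio. A small sketch (the function name and the material values are illustrative):

```python
def shear_modulus_from_E_nu(E_gpa, nu):
    """Shear modulus of an isotropic material from Young's modulus E and
    Poisson's ratio nu, via the standard relation G = E / (2 * (1 + nu))."""
    return E_gpa / (2.0 * (1.0 + nu))

# Typical structural steel: E ~ 200 GPa, nu ~ 0.30  ->  G ~ 76.9 GPa
print(shear_modulus_from_E_nu(200.0, 0.30))
```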
https://en.wikipedia.org/wiki/Series%20expansion
In mathematics, a series expansion is a technique that expresses a function as an infinite sum, or series, of simpler functions. It is a method for calculating a function that cannot be expressed by just elementary operators (addition, subtraction, multiplication and division). The resulting so-called series often can be limited to a finite number of terms, thus yielding an approximation of the function. The fewer terms of the sequence that are used, the simpler this approximation will be. Often, the resulting inaccuracy (i.e., the partial sum of the omitted terms) can be described by an equation involving Big O notation (see also asymptotic expansion). The series expansion on an open interval will also be an approximation for non-analytic functions. Types of series expansions There are several kinds of series expansions, listed below. Taylor series A Taylor series is a power series based on a function's derivatives at a single point. More specifically, if a function $f$ is infinitely differentiable around a point $a$, then the Taylor series of f around this point is given by $\sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^{n}$ under the convention $0^{0} := 1$. The Maclaurin series of f is its Taylor series about $a = 0$. Laurent series A Laurent series is a generalization of the Taylor series, allowing terms with negative exponents; it takes the form $\sum_{k=-\infty}^{\infty} c_{k}(z-a)^{k}$ and converges in an annulus. In particular, a Laurent series can be used to examine the behavior of a complex function near a singularity by considering the series expansion on an annulus centered at the singularity. Dirichlet series A general Dirichlet series is a series of the form $\sum_{n=1}^{\infty} a_{n} e^{-\lambda_{n} s}$. One important special case of this is the ordinary Dirichlet series $\sum_{n=1}^{\infty} \frac{a_{n}}{n^{s}}$, used in number theory. Fourier series A Fourier series is an expansion of periodic functions as a sum of many sine and cosine functions. More specifically, the Fourier series of a function $f(x)$ of period $2L$ is given by the expression $\frac{a_{0}}{2} + \sum_{n=1}^{\infty}\left(a_{n}\cos\frac{n\pi x}{L} + b_{n}\sin\frac{n\pi x}{L}\right)$, where the coefficients are given by the formulae $a_{n} = \frac{1}{L}\int_{-L}^{L} f(x)\cos\frac{n\pi x}{L}\,dx$ and $b_{n} = \frac{1}{L}\int_{-L}^{L} f(x)\sin\frac{n\pi x}{L}\,dx$. Other series In acoustics, e.g., the fundamental tone and
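To illustrate truncating a series to finitely many terms, the sketch below approximates exp(x) by partial sums of its Maclaurin series; the function and the term counts are arbitrary choices for illustration.

```python
import math

def taylor_exp(x, n_terms):
    """Partial sum of the Maclaurin series of exp(x): sum of x**k / k!."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

# Truncating the infinite series yields successively better approximations.
approx_4 = taylor_exp(1.0, 4)     # 1 + 1 + 1/2 + 1/6
approx_10 = taylor_exp(1.0, 10)
error_4 = abs(approx_4 - math.e)
error_10 = abs(approx_10 - math.e)
```

As expected, using more terms shrinks the error of the approximation.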
https://en.wikipedia.org/wiki/Stereotype%20%28UML%29
A stereotype is one of three types of extensibility mechanisms in the Unified Modeling Language (UML), the other two being tags and constraints. They allow designers to extend the vocabulary of UML in order to create new model elements, derived from existing ones, but that have specific properties that are suitable for a particular domain or otherwise specialized usage. The nomenclature is derived from the original meaning of stereotype, used in printing. For example, when modeling a network you might need to have symbols for representing routers and hubs. By using stereotyped nodes you can make these things appear as primitive building blocks. Graphically, a stereotype is rendered as a name enclosed by guillemets (« » or, if guillemets proper are unavailable, << >>) and placed above the name of another element. In addition or alternatively it may be indicated by a specific icon. The icon image may even replace the entire UML symbol. For instance, in a class diagram stereotypes can be used to classify method behavior, e.g. with «constructor» and «getter» and refine the classifier itself, e.g. with «interface». One alternative to stereotypes, suggested by Peter Coad in his book Java Modeling in Color with UML: Enterprise Components and Process is the use of colored archetypes. The archetypes indicated by different-colored UML boxes can be used in combination with stereotypes. This added definition of meaning indicates the role that the UML object plays within the larger software system. Stereotype attributes From version 2.0 the previously independent tagged value is considered to be a stereotype attribute. The name tagged value is still kept. Each stereotype has zero or more tag definitions, and all stereotyped UML elements have the corresponding number of tagged values. UML-defined stereotypes Become In UML, become is a keyword for a specific UML stereotype, and applies to a dependency (modeled as a dashed arrow). Become shows that the source modeling el
https://en.wikipedia.org/wiki/R%C3%B6ssler%20attractor
The Rössler attractor is the attractor for the Rössler system, a system of three non-linear ordinary differential equations originally studied by Otto Rössler in the 1970s. These differential equations define a continuous-time dynamical system that exhibits chaotic dynamics associated with the fractal properties of the attractor. Rössler interpreted it as a formalization of a taffy-pulling machine. Some properties of the Rössler system can be deduced via linear methods such as eigenvectors, but the main features of the system require non-linear methods such as Poincaré maps and bifurcation diagrams. The original Rössler paper states the Rössler attractor was intended to behave similarly to the Lorenz attractor, but also be easier to analyze qualitatively. An orbit within the attractor follows an outward spiral close to the $x, y$ plane around an unstable fixed point. Once the graph spirals out enough, a second fixed point influences the graph, causing a rise and twist in the $z$-dimension. In the time domain, it becomes apparent that although each variable is oscillating within a fixed range of values, the oscillations are chaotic. This attractor has some similarities to the Lorenz attractor, but is simpler and has only one manifold. Otto Rössler designed the Rössler attractor in 1976, but the originally theoretical equations were later found to be useful in modeling equilibrium in chemical reactions. Definition The defining equations of the Rössler system are: $\frac{dx}{dt} = -y - z$, $\frac{dy}{dt} = x + ay$, $\frac{dz}{dt} = b + z(x - c)$. Rössler studied the chaotic attractor with $a = 0.2$, $b = 0.2$, and $c = 5.7$, though properties of $a = 0.1$, $b = 0.1$, and $c = 14$ have been more commonly used since. Another line of the parameter space was investigated using the topological analysis. It corresponds to $b = 2$, $c = 4$, and $a$ was chosen as the bifurcation parameter. How Rössler discovered this set of equations was investigated by Letellier and Messager. Stability analysis Some of the Rössler attractor's elegance is due to two of its equations being linear; setting $z = 0$ allows examination of the behavior on th
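A minimal numerical sketch of the system defined above, integrating the three equations with a fourth-order Runge-Kutta step at the originally studied parameter values; the step size and initial condition are arbitrary choices.

```python
# Fourth-order Runge-Kutta integration of the Rössler system
# with the originally studied parameters a = 0.2, b = 0.2, c = 5.7.
a, b, c = 0.2, 0.2, 5.7

def roessler(state):
    x, y, z = state
    return (-y - z, x + a * y, b + z * (x - c))

def rk4_step(state, dt):
    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = roessler(state)
    k2 = roessler(add(state, k1, dt / 2))
    k3 = roessler(add(state, k2, dt / 2))
    k4 = roessler(add(state, k3, dt))
    return tuple(s + dt / 6 * (w1 + 2 * w2 + 2 * w3 + w4)
                 for s, w1, w2, w3, w4 in zip(state, k1, k2, k3, k4))

state = (1.0, 1.0, 1.0)
for _ in range(10000):           # integrate 100 time units
    state = rk4_step(state, 0.01)
```

Each variable oscillates within a bounded range even though the trajectory never repeats, consistent with the chaotic behavior described above.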
https://en.wikipedia.org/wiki/Incidence%20geometry
In mathematics, incidence geometry is the study of incidence structures. A geometric structure such as the Euclidean plane is a complicated object that involves concepts such as length, angles, continuity, betweenness, and incidence. An incidence structure is what is obtained when all other concepts are removed and all that remains is the data about which points lie on which lines. Even with this severe limitation, theorems can be proved and interesting facts emerge concerning this structure. Such fundamental results remain valid when additional concepts are added to form a richer geometry. It sometimes happens that authors blur the distinction between a study and the objects of that study, so it is not surprising to find that some authors refer to incidence structures as incidence geometries. Incidence structures arise naturally and have been studied in various areas of mathematics. Consequently, there are different terminologies to describe these objects. In graph theory they are called hypergraphs, and in combinatorial design theory they are called block designs. Besides the difference in terminology, each area approaches the subject differently and is interested in questions about these objects relevant to that discipline. Using geometric language, as is done in incidence geometry, shapes the topics and examples that are normally presented. It is, however, possible to translate the results from one discipline into the terminology of another, but this often leads to awkward and convoluted statements that do not appear to be natural outgrowths of the topics. In the examples selected for this article we use only those with a natural geometric flavor. A special case that has generated much interest deals with finite sets of points in the Euclidean plane and what can be said about the number and types of (straight) lines they determine. Some results of this situation can extend to more general settings since only incidence properties are considered. Incidence stru
https://en.wikipedia.org/wiki/Sir%20Dystic
Josh Buchbinder, better known as Sir Dystic, has been a member of Cult of the Dead Cow (cDc) since May 1997, and is the author of Back Orifice. He has also written several other hacker tools, including SMBRelay, NetE, and NBName. Sir Dystic has appeared at multiple hacker conventions, both as a member of panels and speaking on his own. He has also been interviewed on several television and radio programs and in an award-winning short film about hacker culture in general and cDc in particular. Dystic's pseudonym is taken from a somewhat obscure 1930s bondage comic character named "Sir Dystic D'Arcy." According to the cDc's Sir Dystic, his namesake "tried to do evil things but always bungles it and ends up doing good inadvertently." Software Back Orifice Back Orifice (often shortened to BO) is a controversial computer program designed for remote system administration. It enables a user to control a computer running the Microsoft Windows operating system from a remote location. The name is a pun on Microsoft BackOffice Server software. The program debuted at DEF CON 6 on August 1, 1998. It was the brainchild of Sir Dystic, a member of the U.S. hacker organization Cult of the Dead Cow. According to the group, its purpose was to demonstrate the lack of security in Microsoft's operating system Windows 98. According to Sir Dystic, "BO was supposed to be a statement about the fact that people feel secure and safe, although there are wide, gaping holes in both the operating system they're using and the means of defense they're using against hostile code. I mean, that was my message and BO2K really has a different message." Vnunet.com reported Sir Dystic's claim that this message was privately commended by employees of Microsoft. SMBRelay & SMBRelay2 SMBRelay and SMBRelay2 are computer programs that can be used to carry out SMB man in the middle (mitm) attacks on Windows machines. They were written by Sir Dystic and released 21 March 2001 at the @lantacon convention in
https://en.wikipedia.org/wiki/Enlightenment%20Foundation%20Libraries
The Enlightenment Foundation Libraries (EFL) are a set of graphics libraries that grew out of the development of Enlightenment, a window manager and Wayland compositor. The project's focus is to make the EFL a flexible yet powerful and easy to use set of tools to extend the capabilities of both the Enlightenment window manager and other software projects based on the EFL. The libraries are meant to be portable and optimized to be functional even on mobile devices such as smart phones and tablets. The libraries were created for version 0.17 of the window manager. EFL is developed by Enlightenment.org with some sponsorship from Samsung, ProFUSION and Free.fr. EFL is free and open source software. Core components Evas Evas is the EFL canvas library, for creating areas, or windows, that applications can draw on in an X Window System. The EFL uses hardware-acceleration where possible to allow it to work faster, but is also designed to work on lower-end hardware, falling back to lower color and quality for graphics if necessary. Unlike most canvas libraries, it is primarily image-based (as opposed to vector-based) and fully state-aware (the vast majority of canvases are stateless, requiring the programmer to keep track of state). Edje Edje is a library that attempts to separate the user interface from the application. It allows applications to be skinnable, so that it is possible to change the GUI of an application without changing the application itself. Edje-based applications use files which contain the specifications for the GUI layout that is to be used. Edje themes are contained using EET generated files. Ecore Ecore is an event abstraction, and modular convenience library, intended to simplify a number of common tasks. It is modular, so applications need only call the minimal required libraries for a job. Ecore simplifies working with X, Evas, and also a few other things, such as network communications and threads. Embryo Embryo implements a scripting la
https://en.wikipedia.org/wiki/Mike%20%26%20Mike
Mike & Mike (formerly Mike & Mike in the Morning) was an American sports-talk radio show that was hosted by Mike Greenberg and Mike Golic on ESPN networks from 2000–2017. The show aired on ESPN Radio, and was simulcast on television, first on ESPNews starting in 2004, and later moving to ESPN2 in 2006. The show primarily focused on the day's biggest sports topics and the humorous banter between the Mikes. It acted as the morning show for both the radio and television sides of the production. Outside of a few radio stations that are able to move or decline carriage of the show for their own local morning productions (or for daytime-only operations, may not be able to carry), Mike & Mike was effectively a compulsory element of the ESPN Radio schedule, which all affiliates of the network were required to carry. On May 7, 2007, the show moved from its longtime radio studio home to the television studio used for Sunday NFL Countdown and Baseball Tonight, and began broadcasting in high-definition in 2007 as well. A daily "best-of" show aired daily on ESPN2 and a weekly radio recap aired Saturday mornings at 6:00 a.m. to 7:00 a.m. and then moved to 5 a.m. ET before being discontinued in October 2009. The radio version of "best-of" returned in February 2010 in the 5am timeslot. In addition, there was a "best-of" podcast distributed every weekday as well. The show reaired on ESPNEWS immediately after the live simulcast. On March 6, 2015, the duo celebrated 15 years of doing the show together. Mike & Mike were inducted into the NAB Broadcasting Hall of Fame on April 19, 2016. On May 16, 2017, and after reports of acrimony between the hosts, it was announced that the show would be replaced by a new morning drive show hosted by Golic and Trey Wingo, with Greenberg moving to the main ESPN channel, where he hosts a morning show called Get Up!, which premiered on April 2, 2018. Greenberg's final day as co-host was November 17, 2017. On November 27, 2017, Mike & Mikes successo
https://en.wikipedia.org/wiki/Electronic%20Frontier%20Canada
Electronic Frontier Canada (EFC) was a Canadian on-line civil rights organization founded to ensure that the principles embodied in the Canadian Charter of Rights and Freedoms remain protected as new computing, communications, and information technologies are introduced into Canadian society. As of 2005, the organization is no longer active. EFC was founded in January 1994 and later became incorporated under the Canada Corporations Act as a Federal non-profit corporation. The letters patent were submitted December 29, 1994, and recorded on January 18, 1995. EFC is not formally affiliated with the Electronic Frontier Foundation (EFF), which is based in San Francisco, although it shares many of the EFF's goals, about which the groups communicate from time to time. EFC is focused on issues directly affecting Canadians, whereas the EFF has a clear American focus. Briefly, EFC's mandate is to conduct research into issues and promote public awareness in Canada regarding the application of the Charter of Rights and Freedoms to new computer, communication, and information technologies, such as the Internet. The aim is to protect freedom of expression and the right to privacy in cyberspace. As of 2011, its work has included a submission to the Canadian government concerning the Internet Service Provider wiretapping legislation reforms known as the Lawful Access proposals, and intervention in the BMG Canada Inc. and others v. Doe and others file-sharing case, in which an Ontario Court refused to allow the Canadian Recording Industry Association and several major record labels to obtain the subscriber information of ISP customers alleged to have been infringing copyright. The EFC appears to have been inactive since 2004 (based on the last updates to their website). Similar organizations OpenMedia.ca Online Rights Canada External links Foundations based in Canada Politics and technology Computer law organizations Internet privacy organizations Privacy organizations Organizations es
https://en.wikipedia.org/wiki/Network%20planning%20and%20design
Network planning and design is an iterative process, encompassing topological design, network-synthesis, and network-realization, and is aimed at ensuring that a new telecommunications network or service meets the needs of the subscriber and operator. The process can be tailored according to each new network or service. A network planning methodology A traditional network planning methodology in the context of business decisions involves five layers of planning, namely: need assessment and resource assessment; short-term network planning; IT resource; long-term and medium-term network planning; and operations and maintenance. Each of these layers incorporates plans for different time horizons, i.e. the business planning layer determines the planning that the operator must perform to ensure that the network will perform as required for its intended life-span. The operations and maintenance layer, however, examines how the network will run on a day-to-day basis. The network planning process begins with the acquisition of external information. This includes: forecasts of how the new network/service will operate; the economic information concerning costs; and the technical details of the network's capabilities. Planning a new network/service involves implementing the new system across the first four layers of the OSI Reference Model. Choices must be made for the protocols and transmission technologies. The network planning process involves three main steps: Topological design: This stage involves determining where to place the components and how to connect them. The (topological) optimization methods that can be used in this stage come from an area of mathematics called graph theory. These methods involve determining the costs of transmission and the cost of switching, and thereby determining the optimum connection matrix and location of switches and concentrators. Network-synthesis: This stage involves determining the size of the components used, subject
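As a sketch of the topological-design step, the example below picks the cheapest set of links connecting all sites using Prim's minimum-spanning-tree algorithm, one of the graph-theory methods alluded to above; the site names and link costs are hypothetical.

```python
import heapq

# Hypothetical link costs between four sites.
costs = {
    ("A", "B"): 4, ("A", "C"): 3, ("B", "C"): 2,
    ("B", "D"): 5, ("C", "D"): 6,
}
graph = {}
for (u, v), w in costs.items():
    graph.setdefault(u, []).append((w, v))
    graph.setdefault(v, []).append((w, u))

def prim(graph, start):
    """Total cost of a minimum spanning tree, grown greedily from `start`."""
    visited, total = {start}, 0
    heap = list(graph[start])
    heapq.heapify(heap)
    while heap:
        w, v = heapq.heappop(heap)
        if v not in visited:
            visited.add(v)
            total += w
            for edge in graph[v]:
                heapq.heappush(heap, edge)
    return total

mst_cost = prim(graph, "A")   # cheapest tree: A-C (3), C-B (2), B-D (5)
```

Real planning tools weigh far more than link cost (capacity, reliability, switch placement), but the optimization skeleton is the same.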
https://en.wikipedia.org/wiki/The%20Herd%20with%20Colin%20Cowherd
The Herd with Colin Cowherd is an American sports talk radio show hosted by Colin Cowherd on Fox Sports Radio and Fox Sports 1. The show features commentary on the day's sports news, perspective on other news stories, and interviews with celebrities, sports analysts and sports figures. History KFXX AM, ESPN Radio & ESPNU (2001-2015) The Herd first aired on KFXX AM in 2001. The show joined ESPN Radio in 2004, rebranding as The Herd With Colin Cowherd, and four years later in 2008 would later be simulcast on ESPNU and ESPNews. During its run on ESPN, Cowherd was joined by on-air by producers Vincent Kates, David Fisch and Tom Wassell, and guest hosted by personalities such as Doug Gottlieb. ESPN Radio SportsCenter updates during the show were performed by Dan Davis. The show was heavily sponsored by Subway, with the guest caller line being dubbed the "Subway Fresh Take Hotline". On his March 5, 2010 show, Colin Cowherd announced that Amanda Gifford would be leaving The Herd to become a "suit". Additionally, the show was cut back one hour, airing three hours, from 10:00 a.m. to 1:00 p.m. Fox Sports Radio & FS1 (2015-Present) Following controversial statements regarding the level of intelligence needed to understand the game of baseball and the education level of players from countries like the Dominican Republic, The Herd pulled from ESPN Radio and ESPNU on July 24, 2015, as Cowherd exited the network. After Cowherd joined Fox Sports, The Herd moved to the Premiere Networks-distributed Fox Sports Radio network, airing from 12:00 to 3:00 p.m. ET. Its television simulcast also moved to FS1. Fox Sports 1 airs a daily highlight show, The Best Thing I Herd, while a weekly highlight show, The Best Thing I Herd This Week, is posted on the program's YouTube channel. On April 25, 2018, co-host Kristine Leahy announced her departure from the show to host her own program on FS1. Her final episode was April 26, 2018. Following her departure, Joy Taylor became the full-time c
https://en.wikipedia.org/wiki/Buell%20dryer
The Buell dryer, also known as the "turbo shelf" dryer, is an indirect heated industrial dryer once widely used in the Cornwall and Devon china clay mining industry. The Buell dryer was introduced to the china clay industry by English Clays Lovering Pochin & Co. Ltd for their china clay drying plants in Cornwall and Devon, as part of the mechanization and modernization of the industry, which up to that point had been using the same primitive processing methods for almost 100 years. History The industry's first attempt to mechanize its drying process, an oil-fired rotary dryer installed at Rockhill near Stenalees in 1939, had been halted before it could be commissioned by the outbreak of war, with the Board of Trade exercising its wartime powers to place restrictions on the industry, rationing in particular the use of oil and steel. To circumvent these restrictions, in 1944 a Buell dryer was purchased second hand from a Fluorspar mine in Derbyshire, and was installed in an existing building at ECLP's Drinnick site in Nanpean, heated by exhaust steam from Drinnick power plant. As such, it became the first operating mechanical dryer in the Cornish china clay industry, despite not being the first to be constructed. A 1948 Board Of Trade Working Party report into the China Clay industry concluded restrictions on the industry should be relaxed to allow mechanization to begin. The 1948 report led to Parliament ordering an end to Board Of Trade restrictions on the china clay industry, leading to a period of rapid mechanization. In the 25 years that followed, additional Buell dryers were constructed at Kernick, Drinnick, Rocks, Blackpool near Burngullow, Marsh Mills, Parkandillack, Par Harbour, and Goonvean & Rostowrack Ltd's Trelavour site. The Drinnick dryer site was also expanded to include several more steam-heated dryers. Construction The dryer itself is composed of a large upright cylindrical chamber, inside of which are 25 to 30 layers of trays or "hearths". Indi
https://en.wikipedia.org/wiki/Symmetric%20graph
In the mathematical field of graph theory, a graph G is symmetric (or arc-transitive) if, given any two pairs of adjacent vertices u1–v1 and u2–v2 of G, there is an automorphism f : V(G) → V(G) such that f(u1) = u2 and f(v1) = v2. In other words, a graph is symmetric if its automorphism group acts transitively on ordered pairs of adjacent vertices (that is, upon edges considered as having a direction). Such a graph is sometimes also called 1-arc-transitive or flag-transitive. By definition (ignoring u1 and u2), a symmetric graph without isolated vertices must also be vertex-transitive. Since the definition above maps one edge to another, a symmetric graph must also be edge-transitive. However, an edge-transitive graph need not be symmetric, since the edge a–b might map to c–d, but not to d–c. Star graphs are a simple example of being edge-transitive without being vertex-transitive or symmetric. As a further example, semi-symmetric graphs are edge-transitive and regular, but not vertex-transitive. Every connected symmetric graph must thus be both vertex-transitive and edge-transitive, and the converse is true for graphs of odd degree. However, for even degree, there exist connected graphs which are vertex-transitive and edge-transitive, but not symmetric. Such graphs are called half-transitive. The smallest connected half-transitive graph is Holt's graph, with degree 4 and 27 vertices. Confusingly, some authors use the term "symmetric graph" to mean a graph which is vertex-transitive and edge-transitive, rather than an arc-transitive graph. Such a definition would include half-transitive graphs, which are excluded under the definition above. A distance-transitive graph is one where instead of considering pairs of adjacent vertices (i.e. vertices a distance of 1 apart), the definition covers two pairs of vertices, each pair the same distance apart. Such graphs are automatically symmetric, by definition. A t-arc is defined to be a sequence of t + 1 vertices, such that any two consecutive vertices in the sequence are adjacent, and with any repeated
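The definition can be checked by brute force on small graphs. The sketch below enumerates all automorphisms of the 4-cycle and verifies that the orbit of a single arc (directed edge) covers every arc, i.e. that C4 is arc-transitive.

```python
from itertools import permutations

# The 4-cycle C4 as a set of undirected edges.
n = 4
edges = {frozenset((i, (i + 1) % n)) for i in range(n)}

# Automorphisms: permutations of the vertices that preserve the edge set.
autos = [p for p in permutations(range(n))
         if {frozenset((p[u], p[v])) for u, v in (tuple(e) for e in edges)} == edges]

# Arcs: each undirected edge taken in both directions.
arcs = []
for e in edges:
    u, v = tuple(e)
    arcs += [(u, v), (v, u)]

# Orbit of one arc under the automorphism group.
u0, v0 = arcs[0]
orbit = {(p[u0], p[v0]) for p in autos}
is_arc_transitive = set(arcs) <= orbit
```

C4 has 8 automorphisms (the dihedral group of the square) and 8 arcs, and the orbit covers them all; replacing `edges` with those of a star graph would give edge-transitivity but fail this arc-transitivity test.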
https://en.wikipedia.org/wiki/Hille%E2%80%93Yosida%20theorem
In functional analysis, the Hille–Yosida theorem characterizes the generators of strongly continuous one-parameter semigroups of linear operators on Banach spaces. It is sometimes stated for the special case of contraction semigroups, with the general case being called the Feller–Miyadera–Phillips theorem (after William Feller, Isao Miyadera, and Ralph Phillips). The contraction semigroup case is widely used in the theory of Markov processes. In other scenarios, the closely related Lumer–Phillips theorem is often more useful in determining whether a given operator generates a strongly continuous contraction semigroup. The theorem is named after the mathematicians Einar Hille and Kōsaku Yosida who independently discovered the result around 1948. Formal definitions If X is a Banach space, a one-parameter semigroup of operators on X is a family of operators {T(t)} t ∈ [0, ∞) indexed on the non-negative real numbers such that T(0) = I, the identity operator on X, and T(s + t) = T(s) T(t) for all s, t ≥ 0. The semigroup is said to be strongly continuous, also called a (C0) semigroup, if and only if the mapping t ↦ T(t)x is continuous for all x ∈ X, where [0, ∞) has the usual topology and X has the norm topology. The infinitesimal generator of a one-parameter semigroup T is an operator A defined on a possibly proper subspace of X as follows: The domain of A is the set of x ∈ X such that h^{-1}(T(h)x − x) has a limit as h approaches 0 from the right. The value of Ax is the value of the above limit. In other words, Ax is the right-derivative at 0 of the function t ↦ T(t)x. The infinitesimal generator of a strongly continuous one-parameter semigroup is a closed linear operator defined on a dense linear subspace of X. The Hille–Yosida theorem provides a necessary and sufficient condition for a closed linear operator A on a Banach space to be the infinitesimal generator of a strongly continuous one-parameter semigroup. Statement of the theorem Let A be a linear operator defined on a linear subspace D(A) of the Banach space X, ω a real number, and M > 0. Then A generates a strongly
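In the finite-dimensional case X = R^n, every matrix A generates the semigroup T(t) = exp(tA). The sketch below builds exp(tA) from a truncated power series (an assumption adequate for small, well-behaved matrices, not a general-purpose method) and checks the semigroup law T(s + t) = T(s)T(t) from the definition above.

```python
def mat_mul(X, Y):
    """Plain matrix product of two lists-of-lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def expm(A, t, terms=30):
    """exp(tA) via a truncated power series: sum of (tA)^k / k!."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        scaled = [[t * entry / k for entry in row] for row in A]
        term = mat_mul(term, scaled)       # now (tA)^k / k!
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

A = [[0.0, 1.0], [-1.0, 0.0]]              # generator of a rotation semigroup
lhs = expm(A, 0.8)                          # T(s + t) with s = 0.3, t = 0.5
rhs = mat_mul(expm(A, 0.3), expm(A, 0.5))   # T(s) T(t)
max_diff = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
```

For this A the semigroup is the family of plane rotations, and `max_diff` is at floating-point noise level, consistent with T(s + t) = T(s)T(t).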
https://en.wikipedia.org/wiki/Hill%27s%20muscle%20model
In biomechanics, Hill's muscle model refers to the 3-element model consisting of a contractile element (CE) in series with a lightly-damped elastic spring element (SE) and in parallel with a lightly-damped elastic parallel element (PE). Within this model, the estimated force-velocity relation for the CE element is usually modeled by what is commonly called Hill's equation, which was based on careful experiments involving tetanized muscle contraction where various muscle loads and associated velocities were measured. They were derived by the famous physiologist Archibald Vivian Hill, who had already won the Nobel Prize for Physiology by 1938, when he introduced this model and equation. He continued to publish in this area through 1970. There are many forms of the basic "Hill-based" or "Hill-type" models, with hundreds of publications having used this model structure for experimental and simulation studies. Most major musculoskeletal simulation packages make use of this model. AV Hill's force-velocity equation for tetanized muscle This is a popular state equation applicable to skeletal muscle that has been stimulated to show tetanic contraction. It relates tension to velocity with regard to the internal thermodynamics. The equation is $(v + b)(F + a) = b(F_0 + a)$, where $F$ is the tension (or load) in the muscle, $v$ is the velocity of contraction, $F_0$ is the maximum isometric tension (or load) generated in the muscle, $a$ is the coefficient of shortening heat, $b = a \cdot v_0 / F_0$, and $v_0$ is the maximum velocity, when $F = 0$. Although Hill's equation looks very much like the van der Waals equation, the former has units of energy dissipation, while the latter has units of energy. Hill's equation demonstrates that the relationship between F and v is hyperbolic. Therefore, the higher the load applied to the muscle, the lower the contraction velocity. Similarly, the higher the contraction velocity, the lower the tension in the muscle. This hyperbolic form has been found to fit the empirical constant only during isotonic contractions near resting
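Solving Hill's equation (v + b)(F + a) = b(F0 + a) for the contraction velocity gives v = b(F0 − F)/(F + a); the sketch below evaluates this with hypothetical, normalized parameter values to show the hyperbolic trade-off between load and velocity.

```python
# Hill's relation (v + b) * (F + a) = b * (F0 + a), solved for v.
# Parameter values are hypothetical and normalized, for illustration only.
F0 = 1.0           # maximum isometric tension
a = 0.25 * F0      # coefficient of shortening heat (a/F0 ~ 0.25 is typical)
v0 = 1.0           # maximum (unloaded) shortening velocity
b = a * v0 / F0    # rate constant, per the relation b = a * v0 / F0

def hill_velocity(F):
    """Contraction velocity at load F: v = b * (F0 - F) / (F + a)."""
    return b * (F0 - F) / (F + a)

v_unloaded = hill_velocity(0.0)       # recovers v0
v_half = hill_velocity(0.5 * F0)      # heavier load -> slower shortening
```

At F = F0 the velocity is zero (isometric contraction), and the decline of v with F is hyperbolic rather than linear, as the text notes.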
https://en.wikipedia.org/wiki/Gabor%20filter
In image processing, a Gabor filter, named after Dennis Gabor, is a linear filter used for texture analysis; Gabor first proposed it as a 1D filter, and it was later generalized to 2D by Gösta Granlund, who added a reference direction. The filter essentially analyzes whether there is any specific frequency content in the image in specific directions in a localized region around the point or region of analysis. Frequency and orientation representations of Gabor filters are claimed by many contemporary vision scientists to be similar to those of the human visual system. They have been found to be particularly appropriate for texture representation and discrimination. In the spatial domain, a 2D Gabor filter is a Gaussian kernel function modulated by a sinusoidal plane wave (see Gabor transform). Some authors claim that simple cells in the visual cortex of mammalian brains can be modeled by Gabor functions. Thus, image analysis with Gabor filters is thought by some to be similar to perception in the human visual system. Definition Its impulse response is defined by a sinusoidal wave (a plane wave for 2D Gabor filters) multiplied by a Gaussian function. Because of the multiplication-convolution property (Convolution theorem), the Fourier transform of a Gabor filter's impulse response is the convolution of the Fourier transform of the harmonic function (sinusoidal function) and the Fourier transform of the Gaussian function. The filter has a real and an imaginary component representing orthogonal directions. The two components may be formed into a complex number or used individually. Complex: $g(x, y; \lambda, \theta, \psi, \sigma, \gamma) = \exp\left(-\frac{x'^2 + \gamma^2 y'^2}{2\sigma^2}\right)\exp\left(i\left(2\pi\frac{x'}{\lambda} + \psi\right)\right)$. Real: $\exp\left(-\frac{x'^2 + \gamma^2 y'^2}{2\sigma^2}\right)\cos\left(2\pi\frac{x'}{\lambda} + \psi\right)$. Imaginary: $\exp\left(-\frac{x'^2 + \gamma^2 y'^2}{2\sigma^2}\right)\sin\left(2\pi\frac{x'}{\lambda} + \psi\right)$, where $x' = x\cos\theta + y\sin\theta$ and $y' = -x\sin\theta + y\cos\theta$. In this equation, $\lambda$ represents the wavelength of the sinusoidal factor, $\theta$ represents the orientation of the normal to the parallel stripes of a Gabor function, $\psi$ is the phase offset, $\sigma$ is the sigma/standard deviation of the Gaussian envelope and $\gamma$ is the spatial aspect ratio, and specifies the ellipticity of the sup
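The real component above can be sampled directly; the sketch below builds a small real-valued Gabor kernel in pure Python with illustrative (hypothetical) parameter values.

```python
import math

def gabor(x, y, lam, theta, psi, sigma, gamma):
    """Real part of the 2D Gabor function: Gaussian envelope times cosine."""
    x_p = x * math.cos(theta) + y * math.sin(theta)     # rotated coordinates
    y_p = -x * math.sin(theta) + y * math.cos(theta)
    envelope = math.exp(-(x_p**2 + (gamma * y_p)**2) / (2 * sigma**2))
    return envelope * math.cos(2 * math.pi * x_p / lam + psi)

# Sample a 7x7 kernel centered at the origin; parameters are illustrative.
size = 7
kernel = [[gabor(x - size // 2, y - size // 2,
                 lam=4.0, theta=0.0, psi=0.0, sigma=2.0, gamma=0.5)
           for x in range(size)] for y in range(size)]
```

Convolving an image with a bank of such kernels at several values of θ and λ yields the localized, orientation-selective frequency responses described above.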
https://en.wikipedia.org/wiki/OpenServer
Xinuos OpenServer, previously SCO UNIX and SCO Open Desktop (SCO ODT), is a closed source computer operating system developed by Santa Cruz Operation (SCO), later acquired by SCO Group, and now owned by Xinuos. Early versions of OpenServer were based on UNIX System V, while the later OpenServer 10 is based on FreeBSD 10. However, OpenServer 10 has not received any updates since 2018 and is no longer marketed on Xinuos's website, while OpenServer 5 Definitive and 6 Definitive are still supported. History SCO UNIX/SCO Open Desktop SCO UNIX was the successor to the Santa Cruz Operation's variant of Microsoft Xenix, derived from UNIX System V Release 3.2 with an infusion of Xenix device drivers and utilities. SCO UNIX System V/386 Release 3.2.0 was released in 1989, as the commercial successor to SCO Xenix. The base operating system did not include TCP/IP networking or X Window System graphics; these were available as optional extra-cost add-on packages. Shortly after the release of this bare OS, SCO shipped an integrated product under the name of SCO Open Desktop, or ODT. 1994 saw the release of SCO MPX, an add-on SMP package. At the same time, AT&T completed its merge of Xenix, BSD, SunOS, and UNIX System V Release 3 features into UNIX System V Release 4. SCO UNIX remained based on System V Release 3, but eventually added home-grown versions of most of the features of Release 4. The 1992 releases of SCO UNIX 3.2v4.0 and Open Desktop 2.0 added support for long file names and symbolic links. The next major version, OpenServer Release 5.0.0, released in 1995, added support for ELF executables and dynamically linked shared objects, and made many kernel structures dynamic. SCO OpenServer SCO OpenServer 5, released in 1995, would become SCO's primary product and serve as the basis for products like PizzaNet (the first Internet-based food delivery system done in partnership with Pizza Hut) and SCO Global Access, an Internet gateway server based on Open Desktop Lite. Due
https://en.wikipedia.org/wiki/Grandmother%20hypothesis
The grandmother hypothesis is a hypothesis to explain the existence of menopause in human life history by identifying the adaptive value of extended kin networking. It builds on the previously postulated "mother hypothesis", which states that as mothers age, the costs of reproducing become greater, and energy devoted to those activities would be better spent helping their existing offspring in their reproductive efforts. It suggests that by redirecting their energy onto those of their offspring, grandmothers can better ensure the survival of their genes through younger generations. By providing sustenance and support to their kin, grandmothers not only ensure that their genetic interests are met, but they also enhance their social networks, which could translate into better immediate resource acquisition. This effect could extend past kin into larger community networks and benefit wider group fitness. Background One explanation for this was presented by G. C. Williams, who was the first to posit that menopause might be an adaptation. Williams suggested that at some point it became more advantageous for women to redirect reproductive efforts into increased support of existing offspring. Since a female's dependent offspring would die as soon as she did, he argued, older mothers should stop producing new offspring and focus on those existing. In so doing, they would avoid the age-related risks associated with reproduction and thereby eliminate a potential threat to the continued survival of current offspring. The evolutionary reasoning behind this is driven by related theories. Kin selection Kin selection provides the framework for an adaptive strategy by which altruistic behavior is bestowed on closely related individuals because easily identifiable markers exist to indicate them as likely to reciprocate. Kin selection is implicit in theories regarding the successful propagation of genetic material through reproduction, as helping an individual more likely to share one's genetic
https://en.wikipedia.org/wiki/Merge%20%28software%29
Merge is a software system which allows a user to run DOS/Windows 3.1 on SCO UNIX, in an 8086 virtual machine. History Merge was originally developed to run DOS under UNIX System V Release 2 on an AT&T 6300 Plus personal computer. Development of the virtual machine began in late 1984, and AT&T announced the availability of the machine on 9 October 1985, referring to the bundled Merge software as Simultask. (The PC 6300 Plus shipped with MS-DOS in 1985 though, because its Unix System V distribution was not ready until the end of March 1986.) Merge was developed by engineers at Locus Computing Corporation, with collaboration from AT&T hardware and software engineers, particularly on aspects of the system that were specific to the 6300 Plus (in contrast to a standard IBM PC/AT). The AT&T 6300 Plus contained an Intel 80286 processor, which did not include the support for 8086 virtual machines (virtual 8086 mode) found in the 80386 and later processors in the x86 family. On the 80286, the DOS program had to run in real mode. The 6300 Plus was designed with special hardware on the bus that would suppress and capture bus cycles from the DOS program if they were directed toward addresses not assigned for direct access by the DOS virtual machine. Various system registers, such as the programmable interrupt controller (PIC), and the video controller, had to be emulated in software for the DOS process, and a watchdog timer was implemented to recover from DOS programs that would clear the interrupt flag and then hang for too long. The hardware used the non-maskable interrupt (NMI) to take control back to the emulation code. Later, Merge was enhanced to make use of the virtual 8086 mode provided by the 80386 processor; that version was offered with Microport SVR3 starting in 1987, and subsequently with SCO Unix. There was also a Merge/286 version that ran on an unmodified PC/AT (without any special I/O trapping hardware); it ran as long as the PC program was reasonably well-b
https://en.wikipedia.org/wiki/File%20synchronization
File synchronization (or syncing) in computing is the process of ensuring that computer files in two or more locations are updated via certain rules. In one-way file synchronization, also called mirroring, updated files are copied from a source location to one or more target locations, but no files are copied back to the source location. In two-way file synchronization, updated files are copied in both directions, usually with the purpose of keeping the two locations identical to each other. In this article, the term synchronization refers exclusively to two-way file synchronization. File synchronization is commonly used for home backups on external hard drives or updating for transport on USB flash drives. BitTorrent Sync, Dropbox, SKYSITE, Nextcloud, OneDrive, Google Drive and iCloud are prominent products. Some backup software also supports real-time file sync. The automatic process avoids copying files that are already identical, and is thus faster and less error-prone than a manual copy. However, this approach is limited by the requirement that the synchronized files physically fit on the portable storage device. Synchronization software that only keeps a list of files and the changed files eliminates this problem (e.g. the "snapshot" feature in Beyond Compare or the "package" feature in Synchronize It!). It is especially useful for mobile workers, or others who work on multiple computers. It is possible to synchronize multiple locations by synchronizing them one pair at a time. The Unison Manual describes how to do this: If you need to do this, the most reliable way to set things up is to organize the machines into a "star topology," with one machine designated as the "hub" and the rest as "spokes," and with each spoke machine synchronizing only with the hub. The big advantage of the star topology is that it eliminates the possibility of confusing "spurious conflicts" arising from the fact that a separate archive is maintained by Unison for every
https://en.wikipedia.org/wiki/Quentin%20Stafford-Fraser
James Quentin Stafford-Fraser is a computer scientist and entrepreneur based in Cambridge, England. He was one of the team that created the first webcam, the Trojan room coffee pot. Quentin pointed a camera at the coffee pot and wrote the XCoffee client program which allowed the image of the pot to be displayed on a workstation screen. When web browsers gained the ability to display images, the system was modified to make the coffee pot images available over HTTP and thus became the first webcam. Quentin wrote the original VNC client (viewer) and server for the Windows operating system, while at the Olivetti Research Laboratory. He is a regular public speaker and his work has attracted significant media coverage. Quentin is also a part-time Senior Research Associate at the University of Cambridge Computer Lab. In 2013 he was a member of the winning team on Christmas University Challenge, representing Gonville & Caius College, Cambridge. Companies founded Quentin has founded or co-founded various companies and other organisations including: Newnham Research (now DisplayLink) Exbiblio The Ndiyo project Telemarq Ltd (of which he is currently CEO) Earlier history Quentin was educated at Haileybury before studying Computer Science at the University of Cambridge and in 1989 became the first Cambridge college Computer Officer, at his old college, Gonville and Caius College, before joining the Systems Research Group in the university's Computer Lab. Quentin is credited with operating the first web-server in the University of Cambridge, in 1992. He created the Brightboard Interactive whiteboard project at Xerox EuroPARC in Cambridge, as part of his Ph.D. thesis. References External links Personal website Computer programmers Quentin Stafford-Fraser Alumni of Gonville and Caius College, Cambridge People educated at Haileybury and Imperial Service College Living people Year of birth missing (living people)
https://en.wikipedia.org/wiki/Vark
Vark (also varak, waraq or warq) is a fine filigree foil sheet of pure metal, typically silver but sometimes gold, used to decorate South Asian sweets and food. The silver and gold are edible, though flavorless. Vark is made by pounding silver into sheets less than one micrometre (μm) thick, typically 0.2-0.8 μm. The silver sheets are typically packed between layers of paper for support; this paper is peeled away before use. It is fragile and breaks into smaller pieces if handled with direct skin contact. Leaf that is 0.2 μm thick tends to stick to skin if handled directly. Vark sheets are laid or rolled over some South Asian sweets, confectionery, dry fruits and spices. It is also placed onto mounds of saffron rice on platters. For safety and ethical reasons, the government of India has issued food safety and product standards guidelines for manufacturers of silver foil. History Etymology Varaka means cloth, cloak or a thing that covers something else. Vark is sometimes spelled varaq, varq, vark, varkh, varakh, varkha, or waraq. In Persian, varaqa (borrowed from Arabic waraq) means a sheet, leaf or foil. Product Manufacturing Vark is made by placing the pure metal dust between parchment sheets, then pounding the sheets until the metal dust molds into a foil, usually less than one micrometre (μm) thick, typically 0.2-0.8 μm. The sheets are typically packed with paper for support; this paper is peeled away before use. It generally takes 2 hours to pound the silver particles into foils. The particles were traditionally pounded manually between layers of ox gut or cow hide. It is easier to separate the silver leaf from the animal tissue than to separate it from the paper. Due to the concerns of the vegetarian population of India, manufacturers have switched to modern technologies that have evolved for the production of silver leaf in India, Germany, Russia and China. Modern technologies include beating over sheets of black special treated pap
https://en.wikipedia.org/wiki/Ideographic%20Research%20Group
The Ideographic Research Group (IRG), formerly called the Ideographic Rapporteur Group, is a subgroup of the ISO/IEC Joint Technical Committee, responsible for developing aspects of The Unicode Standard pertaining to CJK unified ideographs. The IRG is composed of representatives from the Unicode Consortium, as well as experts from China, Japan, South Korea, Vietnam, and other regions that have historically used Chinese characters. The group holds two meetings every year lasting 4-5 days each, subsequently reporting its activities to its parent ISO/IEC JTC 1/SC 2 (WG2) committee. History The precursor to the IRG was the CJK Joint Research Group (CJK-JRG), established in 1990. In October 1993, this group was re-established with its present initials as a subgroup of WG2. In June 2019, the subgroup acquired its current name. The IRG rapporteur from 1993 to 2004 was Zhang Zhoucai, who had been convenor and chief editor of CJK-JRG from 1990 to 1993. Since 2004, the IRG rapporteur has been Hong Kong Polytechnic University professor Lu Qin. In June 2018, the title of "rapporteur" was changed to "convenor". Overview The IRG is responsible for reviewing proposals to add new CJK unified ideographs to the Universal Multiple-Octet Coded Character Set (ISO/IEC 10646), and equivalently the Unicode Standard, and submitting consolidated proposals for sets of unified ideographs to WG2, which are then processed for encoding in the respective standards by SC2 and the Unicode Technical Committee. National and liaison bodies represented in IRG include China, Hong Kong and Macau, Japan, North and South Korea, Singapore, the Taipei Computer Association as representatives on Taiwan, the United Kingdom, Vietnam, and the Unicode Consortium. As of Unicode version 15.1, the IRG has been responsible for submitting the following blocks of CJK unified and compatibility ideographs for encoding: CJK Unified Ideographs and CJK Compatibility Ideographs (version 1.0) CJ
https://en.wikipedia.org/wiki/SECG
In cryptography, the Standards for Efficient Cryptography Group (SECG) is an international consortium founded by Certicom in 1998. The group exists to develop commercial standards for efficient and interoperable cryptography based on elliptic curve cryptography (ECC). Links and documents SECG home page SEC 1: Elliptic Curve Cryptography (Version 1.0 - Superseded by Version 2.0) SEC 1: Elliptic Curve Cryptography (Version 2.0) SEC 2: Recommended Elliptic Curve Domain Parameters (Version 1.0 - Superseded by Version 2.0) SEC 2: Recommended Elliptic Curve Domain Parameters (Version 2.0) Certicom Patent Letter See also Elliptic curve cryptography Cryptography organizations Cryptography standards
https://en.wikipedia.org/wiki/Kubuntu
Kubuntu is an official flavor of the Ubuntu operating system that uses the KDE Plasma Desktop instead of the GNOME desktop environment. As part of the Ubuntu project, Kubuntu uses the same underlying systems. Kubuntu shares the same repositories as Ubuntu and is released regularly on the same schedule as Ubuntu. Kubuntu was sponsored by Canonical Ltd. until 2012, and then directly by Blue Systems. Now, employees of Blue Systems contribute upstream to KDE and Debian, and Kubuntu development is led by community contributors. During the changeover, Kubuntu retained the use of Ubuntu project servers and existing developers. Name "Kubuntu" is a registered trademark held by Canonical. It is derived from the name Ubuntu, prefixing a K to represent the KDE platform that Kubuntu is built upon (following a widespread naming convention of prefixing K to the name of any software released for use on KDE platforms), as well as the KDE community. Ubuntu is a Bantu term translating roughly to 'humanity'. Since Bantu grammar involves prefixes to form noun classes, and the prefix ku- has the meaning 'toward' in Bemba, kubuntu is therefore also a meaningful Bemba word or phrase translating to 'toward humanity'. Reportedly, the same word, by coincidence, also takes the meaning of 'free' (in the sense of 'without payment') in Kirundi. Comparison with Ubuntu Kubuntu typically differs from Ubuntu in graphical applications and tools: History Development started in December 2004 at the Ubuntu Mataró Conference in Mataró, Spain, when Andreas Mueller, a Canonical employee from Gnoppix, had the idea to make an Ubuntu KDE variant and got the approval from Mark Shuttleworth to start the first Ubuntu variant, called Kubuntu. On the same evening, Chris Halls from the OpenOffice.org project and Jonathan Riddell from KDE started volunteering on the newborn project. Shortly after Ubuntu was started, Mark Shuttleworth stated in an interview that he recognized the need for a KDE-based distribution in order
https://en.wikipedia.org/wiki/Protected%20Extensible%20Authentication%20Protocol
The Protected Extensible Authentication Protocol, also known as Protected EAP or simply PEAP, is a protocol that encapsulates the Extensible Authentication Protocol (EAP) within an encrypted and authenticated Transport Layer Security (TLS) tunnel. The purpose was to correct deficiencies in EAP; EAP assumed a protected communication channel, such as that provided by physical security, so facilities for protection of the EAP conversation were not provided. PEAP was jointly developed by Cisco Systems, Microsoft, and RSA Security. PEAPv0 was the version included with Microsoft Windows XP and was nominally defined in draft-kamath-pppext-peapv0-00. PEAPv1 and PEAPv2 were defined in different versions of draft-josefsson-pppext-eap-tls-eap. PEAPv1 was defined in draft-josefsson-pppext-eap-tls-eap-00 through draft-josefsson-pppext-eap-tls-eap-05, and PEAPv2 was defined in versions beginning with draft-josefsson-pppext-eap-tls-eap-06. The protocol only specifies chaining multiple EAP mechanisms and not any specific method; however, the EAP-MSCHAPv2 and EAP-GTC methods are the most commonly supported. Overview PEAP is similar in design to EAP-TTLS, requiring only a server-side PKI certificate to create a secure TLS tunnel to protect user authentication, and uses server-side public key certificates to authenticate the server. It then creates an encrypted TLS tunnel between the client and the authentication server. In most configurations, the keys for this encryption are transported using the server's public key. The ensuing exchange of authentication information inside the tunnel to authenticate the client is then encrypted, and user credentials are safe from eavesdropping. As of May 2005, there were two PEAP sub-types certified for the updated WPA and WPA2 standards. They are: PEAPv0/EAP-MSCHAPv2 PEAPv1/EAP-GTC PEAPv0 and PEAPv1 both refer to the outer authentication method and are the mechanisms that create t
https://en.wikipedia.org/wiki/Barnaviridae
Barnaviridae is a family of non-enveloped, positive-strand RNA viruses. Cultivated mushrooms serve as natural hosts. The family has one genus, Barnavirus, which contains one species: Mushroom bacilliform virus. Diseases associated with this family include La France disease. Structure Viruses in Barnaviridae are non-enveloped, with icosahedral and bacilliform geometries, and T=1 symmetry. These viruses are about 50 nm long. Genome Genomes are linear, around 4 kb in length. The genome has 4 open reading frames. Genomic RNA serves as both the genome and viral messenger RNA. ORF2 encodes a polyprotein, which is possibly auto-cleaved by the ORF2 viral protease. ORF3 encodes the RNA-dependent RNA polymerase and may be translated by ribosomal frameshifting as an ORF2-ORF3 polyprotein. The single capsid protein (ORF4) is translated from a subgenomic RNA. Life cycle Viral replication is cytoplasmic. Entry is achieved by penetration into the host cell. Replication follows the positive-stranded RNA virus replication model. Positive-stranded RNA virus transcription is the method of transcription. The virus is transmitted horizontally via mycelium and basidiospores. The cultivated mushroom, Agaricus bisporus, serves as the natural host. References External links Viralzone: Barnaviridae ICTV Mycology Virus families Riboviria
https://en.wikipedia.org/wiki/Comparison%20of%20platform%20virtualization%20software
Platform virtualization software, specifically emulators and hypervisors, comprises software packages that emulate the whole physical computer machine, often providing multiple virtual machines on one physical platform. The table below compares basic information about platform virtualization hypervisors. General Features Providing any virtual environment usually incurs some overhead. "Native" usually means that the virtualization technique does not perform CPU-level emulation (as Bochs does), which executes code more slowly than when it is directly executed by a CPU. Some other products such as VMware and Virtual PC use similar approaches to Bochs and QEMU; however, they use a number of advanced techniques to shortcut most of the calls directly to the CPU (similar to the process a JIT compiler uses) to bring the speed to near native in most cases. However, some products such as coLinux, Xen, and z/VM (in real mode) do not suffer the cost of CPU-level slowdowns, as the CPU-level instructions are not proxied or executed against an emulated architecture, since the guest OS or hardware provides the environment for the applications to run under. However, access to many of the other resources on the system, such as devices and memory, may be proxied or emulated in order to broker those shared services out to all the guests, which may cause some slowdown compared to running outside of virtualization. OS-level virtualization is described as "native" speed; however, some groups have found overhead as high as 3% for some operations, though figures generally come in under 1%, so long as secondary effects do not appear. See for a paper comparing performance of paravirtualization approaches (e.g. Xen) with OS-level virtualization. Notes: Requires patches/recompiling. Exceptional for lightweight, paravirtualized, single-user VM/CMS interactive shell: largest customers run several thousand users on even single prior models. For multiprogramming OSes like Linux
https://en.wikipedia.org/wiki/Advanced%20Dungeons%20%26%20Dragons%3A%20Heroes%20of%20the%20Lance
Advanced Dungeons & Dragons: Heroes of the Lance is a video game released in 1988 for various home computer systems and consoles. The game is based on the first Dragonlance campaign module for the Dungeons & Dragons fantasy role-playing game, Dragons of Despair, and the first Dragonlance novel Dragons of Autumn Twilight. Heroes of the Lance focuses on the journey of eight heroes through the ruined city of Xak Tsaroth, where they must face the ancient dragon Khisanth and retrieve the relic, the Disks of Mishakal. Gameplay Heroes of the Lance is a side-scrolling action game. Although it is a faithful representation of a portion of the novel Dragons of Autumn Twilight, it was a departure from the role-playing game module Dragons of Despair that the book itself is based on. The eight heroes from the Dragonlance series are assembled for the quest, but only one is visible on the screen at a time; when the on-screen hero dies, the next in line appears. Heroes of the Lance uses Dungeons & Dragons game statistics, with character statistics taken exactly from the rule books. Three characters have special abilities (healing magic, wizard magic, and trap removal), but the other five merely act as "lives" for the player, as in traditional action-platforming games. Plot Characters The eight heroes that make up the party are: Goldmoon, a princess who brandishes the Blue Crystal Staff, an artifact whose powers she seeks to fully understand. Sturm Brightblade, a powerful and solemn knight. Caramon Majere, a warrior who makes up for his lack of intelligence with pure strength and fighting prowess. Raistlin Majere, Caramon's twin brother; a sly and brilliant, but frail, mage. Tanis Half-Elven, the 'natural leader' of the heroes, and good with a bow. Tasslehoff Burrfoot, a kender pickpocket. He fights with a sling weapon. Riverwind, Goldmoon's betrothed. He is a noble and wise warrior. Flint Fireforge, a grizzled dwarven warrior. Development Heroes of the Lance was based on
https://en.wikipedia.org/wiki/Lunar%20Pool
Lunar Pool (known as in Japan) is a sports video game. It was developed by Compile for the Nintendo Entertainment System and MSX. The game combines pool (pocket billiards) with aspects of miniature golf. The object is to knock each ball into a pocket using a cue ball. The game offers sixty levels, and the friction of the table is adjustable (thus the lunar reference in the title, along with Moon-related background imagery within the game). The game was re-released for the Wii on the North American Virtual Console on October 22, 2007. Gameplay Lunar Pool is played in boards of different shapes. The player has to shoot the cue ball to knock other colored balls into the pockets. One life is lost whenever the player either fails to pocket a ball on three consecutive shots or pockets the cue ball. Completing a level awards one extra life, or two if the player has pocketed at least one ball on every shot. The value of each ball is determined by its number and the displayed "Rate" value, which starts at 1 and increases after every shot in which the player pockets at least one ball. Failing to do so resets the Rate to 1. Bonus points are awarded for completing a level without a miss. The game ends after all lives are lost or 60 levels have been completed, whichever occurs first. Modes Lunar Pool can either be played alone, against another player, or against the computer. If the game is played against another player or the computer, players take turns shooting the cue ball. If one player fails to knock at least one of the colored balls into a pocket, or pockets their own cue ball, then it becomes the opponent's turn. The game includes an adjustable friction setting, which determines the rate at which balls slow down after being hit. Legacy In the Mexican soap opera María la del Barrio, José María (Roberto Blandón) plays Lunar Pool on the NES. References External links 1985 video games Compile (company) games Cue sports video games Fantasy sports video games Miniat
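The "Rate" rule described above (starts at 1, increases after every shot that pockets at least one ball, resets to 1 on a miss) can be sketched in Python. The assumption that a ball's point value is its number multiplied by the current Rate is illustrative only, since the exact formula is not stated here; `score_run` is our own helper name:

```python
def score_run(shots):
    """Score a sequence of shots under Lunar Pool's 'Rate' rule.

    Each shot is a list of the numbers of the balls pocketed on that shot
    (an empty list means a miss).  Assumed value formula: ball number
    multiplied by the Rate in effect when the shot is taken.
    """
    score = 0
    rate = 1
    for pocketed in shots:
        if pocketed:
            # Score the shot at the current Rate, then raise the Rate.
            score += sum(pocketed) * rate
            rate += 1
        else:
            # A miss resets the Rate to 1.
            rate = 1
    return score
```

Under these assumptions, pocketing balls on consecutive shots compounds quickly, which matches the game's incentive to never miss.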
https://en.wikipedia.org/wiki/Extensible%20Authentication%20Protocol
Extensible Authentication Protocol (EAP) is an authentication framework frequently used in network and internet connections. It is defined in an IETF RFC, which made the original EAP specification obsolete, and has since been updated. EAP is an authentication framework for providing the transport and usage of keying material and parameters generated by EAP methods. There are many methods defined by RFCs, and a number of vendor-specific methods and new proposals exist. EAP is not a wire protocol; instead it only defines the information from the interface and the formats. Each protocol that uses EAP defines a way to encapsulate EAP messages within that protocol's messages. EAP is in wide use. For example, in IEEE 802.11 (WiFi) the WPA and WPA2 standards have adopted IEEE 802.1X (with various EAP types) as the canonical authentication mechanism. Methods EAP is an authentication framework, not a specific authentication mechanism. It provides some common functions and negotiation of authentication methods called EAP methods. There are currently about 40 different methods defined. Methods defined in IETF RFCs include EAP-MD5, EAP-POTP, EAP-GTC, EAP-TLS, EAP-IKEv2, EAP-SIM, EAP-AKA, and EAP-AKA'. Additionally, a number of vendor-specific methods and new proposals exist. Commonly used modern methods capable of operating in wireless networks include EAP-TLS, EAP-SIM, EAP-AKA, LEAP and EAP-TTLS. Requirements for EAP methods used in wireless LAN authentication are described in a separate RFC. The list of type and packet codes used in EAP is available from the IANA EAP Registry. The standard also describes the conditions under which the relevant AAA key management requirements can be satisfied. Lightweight Extensible Authentication Protocol (LEAP) The Lightweight Extensible Authentication Protocol (LEAP) method was developed by Cisco Systems prior to the IEEE ratification of the 802.11i security standard. Cisco distributed the protocol through the CCX (Cisco Certified Extensions) as part of getting 802.1X and dynamic WEP ad
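The EAP framing that carrier protocols encapsulate is simple: a 1-byte Code, a 1-byte Identifier, a 2-byte network-order Length covering the whole packet, and, for Request/Response packets, a 1-byte Type. A minimal Python sketch of a header parser follows (field layout per RFC 3748; the function name `parse_eap_header` is ours):

```python
import struct

# EAP packet codes defined by the framework itself (RFC 3748).
EAP_CODES = {1: "Request", 2: "Response", 3: "Success", 4: "Failure"}

def parse_eap_header(packet: bytes) -> dict:
    """Parse the fixed EAP header from raw bytes.

    Layout: Code (1 byte), Identifier (1 byte), Length (2 bytes,
    network byte order, includes the entire packet).  Request and
    Response packets carry a 1-byte Type immediately after the header.
    """
    code, identifier, length = struct.unpack("!BBH", packet[:4])
    info = {
        "code": EAP_CODES.get(code, "Unknown"),
        "identifier": identifier,
        "length": length,
    }
    if code in (1, 2) and length > 4:
        # The method Type, e.g. 1 = Identity, 13 = EAP-TLS.
        info["type"] = packet[4]
    return info
```

This is the per-packet structure that carriers such as IEEE 802.1X (EAPOL) or RADIUS wrap inside their own messages; the method-specific payload begins after the Type byte.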
https://en.wikipedia.org/wiki/Smart%20Battery%20System
Smart Battery System (SBS) is a specification for managing a smart battery, usually for a portable computer. It allows an operating system to perform power-management operations via a smart battery charger, using accurate state-of-charge readings to estimate remaining run time. Through this communication, the system also controls the battery charge rate. Communication is carried over an SMBus two-wire communication bus. The specification originated with the Duracell and Intel companies in 1994, but was later adopted by several battery and semiconductor makers. The Smart Battery System defines the SMBus connection, the data that can be sent over the connection (Smart Battery Data or SBD), the Smart Battery Charger, and a computer BIOS interface for control. In principle, any battery-operated product can use SBS. A special integrated circuit in the battery pack (called a fuel gauge or battery management system) monitors the battery and reports information to the SMBus. This information might include battery type, model number, manufacturer, characteristics, charge/discharge rate, predicted remaining capacity, an almost-discharged alarm so that the PC or other device can shut down gracefully, and temperature and voltage to provide safe fast-charging. See also List of battery types Power Management Bus (PMBus) References External links SBS-IF Smart Battery System Implementers Forum Battery Firmware Hacking Inside the innards of a Smart Battery Rechargeable batteries Battery charging
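The Smart Battery Data values mentioned above are read as 16-bit SMBus word registers at well-known addresses. The sketch below uses register addresses and units as published in the SBS specification (verify against your battery's datasheet before relying on them); `read_word` is a caller-supplied stand-in for a real SMBus word-read transaction, which keeps the sketch hardware-independent:

```python
# Selected Smart Battery Data registers from the SBS specification.
SBS_REGISTERS = {
    "Temperature":           0x08,  # units of 0.1 K
    "Voltage":               0x09,  # mV
    "Current":               0x0A,  # mA, signed 16-bit
    "RelativeStateOfCharge": 0x0D,  # percent
    "RunTimeToEmpty":        0x11,  # minutes
}

def read_battery_status(read_word):
    """Poll a few SBS values via a read_word(register) callable and
    convert the raw SMBus words into conventional units."""
    raw = {name: read_word(reg) for name, reg in SBS_REGISTERS.items()}
    current = raw["Current"]
    if current > 32767:            # decode two's-complement discharge current
        current -= 65536
    return {
        "temperature_c": raw["Temperature"] / 10.0 - 273.15,
        "voltage_v": raw["Voltage"] / 1000.0,
        "current_ma": current,
        "charge_pct": raw["RelativeStateOfCharge"],
        "minutes_left": raw["RunTimeToEmpty"],
    }
```

On Linux, `read_word` could wrap an `i2c`/SMBus word read to the battery's bus address; in a test it can simply index a dictionary of fake register values.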
https://en.wikipedia.org/wiki/Test%20plan
A test plan is a document detailing the objectives, resources, and processes for a specific test session for a software or hardware product. The plan typically contains a detailed understanding of the eventual workflow. Test plans A test plan documents the strategy that will be used to verify and ensure that a product or system meets its design specifications and other requirements. A test plan is usually prepared by or with significant input from test engineers. Depending on the product and the responsibility of the organization to which the test plan applies, a test plan may include a strategy for one or more of the following: Design verification or compliance test – to be performed during the development or approval stages of the product, typically on a small sample of units. Manufacturing test or production test – to be performed during preparation or assembly of the product in an ongoing manner for purposes of performance verification and quality control. Acceptance test or commissioning test – to be performed at the time of delivery or installation of the product. Service and repair test – to be performed as required over the service life of the product. Regression test – to be performed on an existing operational product, to verify that existing functionality was not negatively affected when other aspects of the environment were changed (e.g., upgrading the platform on which an existing application runs). A complex system may have a high-level test plan to address the overall requirements and supporting test plans to address the design details of subsystems and components. Test plan document formats can be as varied as the products and organizations to which they apply. There are three major elements that should be described in the test plan: test coverage, test methods, and test responsibilities. These are also used in a formal test strategy. Test coverage Test coverage in the test plan states what requirements will be verified during what stag
https://en.wikipedia.org/wiki/Novel%20food
A novel food is a type of food that does not have a significant history of consumption or is produced by a method that has not previously been used for food. Designer food Designer food is a type of novel food that has not existed on any regional or global consumer market before. Instead it has been "designed" using biotechnological / bioengineering methods (e.g. genetically modified food) or "enhanced" using engineered additives. Examples like designer egg, designer milk, designer grains, probiotics, and enrichment with micro- and macronutrients and designer proteins have been cited. The enhancement process is called food fortification or nutrification. Designer novel food often comes with sometimes unproven health claims ("superfoods"). Designer food is distinguished from food design, the aesthetic arrangement of food items for marketing purposes. European Union Novel foods or novel food ingredients have no history of "significant" consumption in the European Union prior to 15 May 1997. Any food or food ingredient that falls within this definition must be authorised according to the Novel Food legislation, Regulation (EC) No 258/97 of the European Parliament and of the Council. Applicants can consult the guidance document compiled by the European Commission, which highlights the scientific information and the safety assessment report required in each case. The Novel Food regulation stipulates that foods and food ingredients falling within the scope of this regulation must not: present a danger for the consumer; mislead the consumer; or differ from foods or food ingredients which they are intended to replace to such an extent that their normal consumption would be nutritionally disadvantageous for the consumer. There are two possible routes for authorization under the Novel Food legislation: a full application and a simplified application. The simplified application route is only applicable where the EU member national competent authority, e.g. Food Standard
https://en.wikipedia.org/wiki/Vessel%20emergency%20codes
In addition to distress signals like Mayday and pan-pan, most vessels, especially passenger ships, use some emergency signals to alert the crew on board. In some cases, the signals may alert the passengers to danger, but, in others, the objective is to conceal the emergency from unaffected passengers so as to avoid panic or undue alarm. Signals can be in the form of blasts on alarm bells, sounds on the ship's whistle or code names paged over the PA system. Alpha, Alpha, Alpha is the code for a medical emergency aboard Royal Caribbean ships. Alpha Team, Alpha Team, Alpha Team is the code for a fire emergency aboard Carnival Cruise Line ships. Assemble at Muster Stations (General Emergency Signal): seven short blasts followed by one long blast of the ship's horn and internal alarm bell system. Bravo, Bravo, Bravo is used by many cruise lines to alert crew to a fire or other serious incident on board without alarming passengers. Operation Brightstar designates a medical emergency, such as a cardiac event or stroke, on Carnival and Disney Cruise Line vessels. It may only be announced at the request of one of the medical team or an officer with advanced medical training. The spoken word Brightstar over the PA, sometimes supplemented by a group signal on the pager system, will alert the medical team including all doctors and nurses to attend the location. The ventilation officer (VO) is also alerted during a Brightstar. The VO will start the power to the cooling in the morgue (presuming it is not already in use) as a precaution. Charlie, Charlie, Charlie is the code for a security threat aboard Royal Caribbean ships and the code for upcoming helicopter winch operations aboard c-bed accommodation vessels. Code Blue usually means a medical emergency. Delta, Delta, Delta is the code for a possible bio-hazard among some cruise lines. It is more commonly used to alert crew to hull damage on board some lines as well. Echo, Echo, Echo is the code for a possible collision with another ship or t
https://en.wikipedia.org/wiki/Anethole
Anethole (also known as anise camphor) is an organic compound that is widely used as a flavoring substance. It is a derivative of the aromatic compound allylbenzene and occurs widely in plants in essential oils. It is in the class of phenylpropanoid organic compounds. It contributes a large component of the odor and flavor of anise and fennel (both in the botanical family Apiaceae), anise myrtle (Myrtaceae), liquorice (Fabaceae), magnolia blossoms, and star anise (Schisandraceae). Closely related to anethole is its isomer estragole, which is abundant in tarragon (Asteraceae) and basil (Lamiaceae), and has a flavor reminiscent of anise. It is a colorless, fragrant, mildly volatile liquid. Anethole is only slightly soluble in water but exhibits high solubility in ethanol. This trait causes certain anise-flavored liqueurs to become opaque when diluted with water: the ouzo effect. Structure and production Anethole is an aromatic, unsaturated ether related to lignols. It exists as both cis and trans isomers (see also E–Z notation), the isomerism involving the double bond outside the ring. The more abundant isomer, and the one preferred for use, is the trans or E isomer. Like related compounds, anethole is poorly soluble in water. Historically, this property was used to detect adulteration in samples. Most anethole is obtained from turpentine-like extracts from trees. Of only minor commercial significance, anethole can also be isolated from essential oils. Currently, Banwari Chemicals Pvt Ltd, situated in Bhiwadi, Rajasthan, India, is the leading manufacturer of anethole. It is prepared commercially from 4-methoxypropiophenone, which is prepared from anisole. Uses Flavoring Anethole is distinctly sweet, measuring 13 times sweeter than sugar. It is perceived as being pleasant to the taste even at higher concentrations. It is used in the alcoholic drinks ouzo, rakı, anisette and absinthe, among others. It is also used in seasoning and confectionery applications, such as German Lebkuchen, ora
https://en.wikipedia.org/wiki/Smart%20key
A smart key is an electronic access and authorization system that is available either as standard equipment, or as an option in several car designs. It was first developed by Siemens in 1995 and introduced by Mercedes-Benz under the name "Keyless-Go" in 1998 on the W220 S-Class, after the design patent was filed by Daimler-Benz on May 17, 1997. Operation The smart key allows the driver to keep the key fob pocketed when unlocking, locking and starting the vehicle. The key is identified via one of several antennas in the car's bodywork and a radio pulse generator in the key housing. Depending on the system, the vehicle is automatically unlocked when a button or sensor on the door handle or trunk release is pressed. Vehicles with a smart-key system have a mechanical backup, usually in the form of a spare key blade supplied with the vehicle. Some manufacturers hide the backup lock behind a cover for styling. Vehicles with a smart-key system can disengage the immobilizer and activate the ignition without inserting a key in the ignition, provided the driver has the key inside the car. On most vehicles, this is done by pressing a starter button or twisting an ignition switch. When leaving a vehicle that is equipped with a smart-key system, the vehicle is locked by either pressing a button on a door handle, touching a capacitive area on a door handle, or simply walking away from the vehicle. The method of locking varies across models. Some vehicles automatically adjust settings based on the smart key used to unlock the car. User preferences such as seat positions, steering wheel position, exterior mirror settings, climate control (e.g. temperature) settings, and stereo presets are popular adjustments. Some models, such as the Ford Escape, even have settings to prevent the vehicle from exceeding a maximum speed if it has been started with a certain key. Insurance standard In 2005, the UK motor insurance research expert Thatcham introduced a standard for keyless entry,
https://en.wikipedia.org/wiki/Vendange%20tardive
Vendange tardive ("VT") means "late harvest" in French. The phrase refers to a style of dessert wine where the grapes are allowed to hang on the vine until they start to dehydrate. This process, called passerillage, concentrates the sugars in the juice and changes the flavours within it. The name is sometimes written in the plural form, vendanges tardives, referring to the fact that several runs through the vineyard are often necessary to produce such wines. In other countries such as Germany or Austria the term Spätlese is used to describe wine made using the same process. Alsace wines were the first to be described as vendange tardive but the term is now used in other regions of France. Since 1984, the term has been legally defined in Alsace and may only be applied to wines that exceed a minimum must weight and pass a blind tasting by the INAO. Sélection de Grains Nobles ("SGN") is an even sweeter category, for grapes affected by noble rot. Vendange tardive is also an official wine designation in Luxembourg. Hugel's Law Jean Hugel first described a wine as vendange tardive after the long hot summer of 1976. He drafted rules for vendange tardive wine that were eventually accepted by the INAO on 1 March 1984, and are known unofficially as Hugel's Law in recognition of his crusade. The minimum sugar levels were increased in 2001. The criteria are: A declaration to the INAO in advance of the intention to harvest late, with the vineyards specified. A physical check of the grapes and of the quality of the juice. A minimum must weight equivalent to 15.3% potential alcohol for Gewurztraminer and Pinot gris, and 14% potential alcohol for Riesling and Muscat. For Sélection de Grains Nobles, the minimums are 18.2% and 16.4%, respectively. No chaptalisation or acidification. Certification of the wine by INAO officials. The wine may not be released without a blind tasting by the INAO, at least 18 months after it is made. Between 1981 and 1989, the number of producers rose from 11 to
https://en.wikipedia.org/wiki/Beer%20distribution%20game
The beer distribution game (also known as the beer game) is an educational game that is used to experience typical coordination problems of a supply chain process. It is a role-play simulation in which several participants play with each other. The game represents a supply chain with a non-coordinated process where problems arise due to lack of information sharing. The game outlines the importance of information sharing, supply chain management and collaboration throughout a supply chain process. Due to lack of information, suppliers, manufacturers, sales people and customers often have an incomplete understanding of what the real demand of an order is. A notable feature of the game is that each group has no control over any other part of the supply chain; each group has significant control only over its own part. Each group can strongly influence the entire supply chain by ordering too much or too little, which can lead to the bullwhip effect; the orders a group receives therefore also depend heavily on the decisions of the other groups. History The Beer Game was invented by Jay Wright Forrester at the MIT Sloan School of Management in 1960. The beer game was a result of his work on system dynamics. Rules In the beer game participants enact a four-stage supply chain. The task is to produce and deliver units of beer: the factory produces, and the other three stages deliver the beer units until the beer reaches the customer at the downstream end of the chain. The goal of the game is to meet customer demand with minimal expenditure on back orders and inventory. The game is played in 24 rounds, and in each round the following four steps have to be performed: Check deliveries: How many units of beer are being delivered to the player from its upstream supplier. Check orders: How many units the downstream customer has ordered. Deliver beer: Deliver as much beer as a player can to satisfy the demand (in this game the step is performed auto
https://en.wikipedia.org/wiki/Young%20symmetrizer
In mathematics, a Young symmetrizer is an element of the group algebra of the symmetric group, constructed in such a way that, for the homomorphism from the group algebra to the endomorphisms of a vector space $V^{\otimes n}$ obtained from the action of $S_n$ on $V^{\otimes n}$ by permutation of indices, the image of the endomorphism determined by that element corresponds to an irreducible representation of the symmetric group over the complex numbers. A similar construction works over any field, and the resulting representations are called Specht modules. The Young symmetrizer is named after British mathematician Alfred Young. Definition Given a finite symmetric group $S_n$ and a specific Young tableau $\lambda$ corresponding to a numbered partition of $n$, consider the action of $S_n$ given by permuting the boxes of $\lambda$. Define two permutation subgroups $P_\lambda$ and $Q_\lambda$ of $S_n$ as follows: $P_\lambda = \{ g \in S_n : g \text{ preserves each row of } \lambda \}$ and $Q_\lambda = \{ g \in S_n : g \text{ preserves each column of } \lambda \}$. Corresponding to these two subgroups, define two vectors in the group algebra $\mathbb{C}[S_n]$ as $a_\lambda = \sum_{g \in P_\lambda} e_g$ and $b_\lambda = \sum_{g \in Q_\lambda} \operatorname{sgn}(g)\, e_g$, where $e_g$ is the unit vector corresponding to $g$, and $\operatorname{sgn}(g)$ is the sign of the permutation. The product $c_\lambda := a_\lambda b_\lambda$ is the Young symmetrizer corresponding to the Young tableau $\lambda$. Each Young symmetrizer corresponds to an irreducible representation of the symmetric group, and every irreducible representation can be obtained from a corresponding Young symmetrizer. (If we replace the complex numbers by more general fields the corresponding representations will not be irreducible in general.) Construction Let $V$ be any vector space over the complex numbers. Consider then the tensor product vector space $V^{\otimes n} = V \otimes \cdots \otimes V$ ($n$ times). Let $S_n$ act on this tensor product space by permuting the indices. One then has a natural group algebra representation $\mathbb{C}[S_n] \to \operatorname{End}(V^{\otimes n})$ (i.e. $V^{\otimes n}$ is a right $\mathbb{C}[S_n]$-module). Given a partition $\lambda$ of $n$, so that $n = \lambda_1 + \lambda_2 + \cdots + \lambda_j$, then the image of $a_\lambda$ is $\operatorname{Im}(a_\lambda) := V^{\otimes n} a_\lambda \cong \operatorname{Sym}^{\lambda_1} V \otimes \cdots \otimes \operatorname{Sym}^{\lambda_j} V$. For instance, if $n = 4$ and $\lambda = (2, 2)$, with the canonical Young tableau $\{\{1, 2\}, \{3, 4\}\}$, then the corresponding $a_\lambda$ is given by $a_\lambda = e_{\mathrm{id}} + e_{(1\,2)} + e_{(3\,4)} + e_{(1\,2)(3\,4)}$. For any product vector $v_{1,2,3,4} := v_1 \otimes v_2 \otimes v_3 \otimes v_4$ of $V^{\otimes 4}$ we then have $v_{1,2,3,4}\, a_\lambda = v_{1,2,3,4} + v_{2,1,3,4} + v_{1,2,4,3} + v_{2,1,4,3}$. Thus the set of all $v_{1,2,3,4}\, a_\lambda$ clearly spans $\operatorname{Sym}^2 V \otimes \operatorname{Sym}^2 V$ and since the $v_{1,2,3,4}$ span $V^{\otimes 4}$ we obtain $V^{\otimes 4} a_\lambda = \operatorname{Sym}^2 V \otimes \operatorname{Sym}^2 V$. Notice also how t
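The definition of the Young symmetrizer can be checked computationally. The sketch below is an illustrative plain-Python construction for $S_3$ and an assumed standard tableau of the partition (2, 1) (rows {1, 2} and {3}); it builds the row symmetrizer, the column antisymmetrizer, and their product, and verifies the classical quasi-idempotence identity $c_\lambda^2 = (n!/\dim)\, c_\lambda$, which here reads $c_\lambda^2 = 3 c_\lambda$:

```python
from itertools import permutations

n = 3
# Young tableau for the partition (2, 1): rows {1, 2} and {3}, columns {1, 3} and {2}
rows = [{1, 2}, {3}]
cols = [{1, 3}, {2}]

def compose(p, q):
    # (p o q)(i) = p(q(i)); a permutation is a tuple with p[i-1] = p(i)
    return tuple(p[q[i - 1] - 1] for i in range(1, n + 1))

def sign(p):
    inversions = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return -1 if inversions % 2 else 1

def preserves(p, blocks):
    # True if p maps each block (row or column) to itself setwise
    return all({p[i - 1] for i in block} == block for block in blocks)

perms = list(permutations(range(1, n + 1)))
a = {p: 1 for p in perms if preserves(p, rows)}        # a_lambda: row symmetrizer
b = {p: sign(p) for p in perms if preserves(p, cols)}  # b_lambda: signed column sum

def mult(x, y):
    # product in the group algebra C[S_n], elements stored as {permutation: coefficient}
    out = {}
    for p, cp in x.items():
        for q, cq in y.items():
            r = compose(p, q)
            out[r] = out.get(r, 0) + cp * cq
    return {k: v for k, v in out.items() if v}

c = mult(a, b)    # the Young symmetrizer c_lambda = a_lambda * b_lambda
# For this tableau, c = e + (1 2) - (1 3) - (1 3 2), and c*c = 3c since dim = 2.
assert mult(c, c) == {k: 3 * v for k, v in c.items()}
```

The representation of group-algebra elements as coefficient dictionaries is an implementation choice; any faithful multiplication of formal sums gives the same check.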
https://en.wikipedia.org/wiki/Heawood%20conjecture
In graph theory, the Heawood conjecture or Ringel–Youngs theorem gives a lower bound for the number of colors that are necessary for graph coloring on a surface of a given genus. For surfaces of genus 0, 1, 2, 3, 4, 5, 6, 7, ..., the required number of colors is 4, 7, 8, 9, 10, 11, 12, 12, ..., the chromatic number or Heawood number. The conjecture was formulated in 1890 by Percy John Heawood and proven in 1968 by Gerhard Ringel and Ted Youngs. One case, the non-orientable Klein bottle, proved an exception to the general formula. An entirely different approach was needed for the much older problem of finding the number of colors needed for the plane or sphere, solved in 1976 as the four color theorem by Haken and Appel. On the sphere the lower bound is easy, whereas for higher genera the upper bound is easy and was proved in Heawood's original short paper that contained the conjecture. In other words, Ringel, Youngs and others had to construct extreme examples for every genus g = 1, 2, 3, .... If g = 12s + k, the genera fall into 12 cases according as k = 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11. To simplify, suppose that case k has been established if only a finite number of g's of the form 12s + k are in doubt. Then the years in which the twelve cases were settled and by whom are the following: 1954, Ringel: case 5 1961, Ringel: cases 3, 7, 10 1963, Terry, Welch, Youngs: cases 0, 4 1964, Gustin, Youngs: case 1 1965, Gustin: case 9 1966, Youngs: case 6 1967, Ringel, Youngs: cases 2, 8, 11 The last seven sporadic exceptions were settled as follows: 1967, Mayer: cases 18, 20, 23 1968, Ringel, Youngs: cases 30, 35, 47, 59, and the conjecture was proved. Formal statement Percy John Heawood conjectured in 1890 that for a given genus g > 0, the minimum number of colors necessary to color all graphs drawn on an orientable surface of that genus (or equivalently to color the regions of any partition of the surface into simply connected regions) is given by $\chi(g) = \left\lfloor \frac{7 + \sqrt{1 + 48g}}{2} \right\rfloor$, where $\lfloor \cdot \rfloor$ is the floor function. Re
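The Heawood bound $\lfloor (7 + \sqrt{1 + 48g})/2 \rfloor$ is easy to evaluate exactly; the sketch below uses an integer square root so the floor is computed without floating-point error, and reproduces the sequence of chromatic numbers quoted above:

```python
from math import isqrt

def heawood_number(g):
    """Colors sufficient (and, by Ringel-Youngs, necessary) for an orientable
    surface of genus g: floor((7 + sqrt(1 + 48*g)) / 2), computed exactly.
    Since isqrt returns floor(sqrt(D)), (7 + isqrt(D)) // 2 equals the floor
    of (7 + sqrt(D)) / 2 for every non-negative integer D."""
    return (7 + isqrt(1 + 48 * g)) // 2

# Genus 0 (the sphere) gives 4, the four color theorem case.
print([heawood_number(g) for g in range(8)])    # -> [4, 7, 8, 9, 10, 11, 12, 12]
```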
https://en.wikipedia.org/wiki/Matt%20Mullenweg
Matthew Charles Mullenweg (born January 11, 1984) is an American entrepreneur and web developer living in Houston. He is known for developing and founding the free and open-source web software WordPress, and its parent company Automattic. After dropping out of the University of Houston, he worked at CNET Networks from 2004 to 2006, when he quit and founded Automattic, an internet company whose brands include WordPress.com, Akismet, Gravatar, VaultPress, IntenseDebate, Crowdsignal, and Tumblr. Early life and education Mullenweg was born in Houston, Texas, and attended the High School for the Performing and Visual Arts, where he studied jazz saxophone. He studied at the University of Houston, majoring in political science, before he dropped out in 2004 to pursue a job at CNET Networks. Mullenweg was raised Catholic. Career In January 2003, Mullenweg and Mike Little started WordPress as a fork of b2. They were soon joined by original b2 developer Michel Valdrighi. Mullenweg was 19 years old, and a freshman at the University of Houston, at the time. In March 2004, he co-founded the Global Multimedia Protocols Group (GMPG) with Eric Meyer and Tantek Çelik. GMPG wrote the first of the Microformats. In April 2004, Mullenweg and fellow WordPress developers launched Ping-O-Matic, a hub for notifying or "pinging" blog search engines, like Technorati, about blog updates. The following month, WordPress competitor Movable Type announced a radical price change, driving thousands of users to seek another blogging platform; this is widely seen as the tipping point in WordPress's popularity. In October 2004, he was recruited by CNET to work on WordPress for them and help them with blogs and new media offerings. He dropped out of college and moved from Houston to San Francisco the following month. Mullenweg announced bbPress that December. Mullenweg and the WordPress team released WordPress 1.5 "Strayhorn" in February 2005, which had over 900,000 downloads. The release introduc
https://en.wikipedia.org/wiki/RL%20circuit
A resistor–inductor circuit (RL circuit), or RL filter or RL network, is an electric circuit composed of resistors and inductors driven by a voltage or current source. A first-order RL circuit is composed of one resistor and one inductor, either in series driven by a voltage source or in parallel driven by a current source. It is one of the simplest analogue infinite impulse response electronic filters. Introduction The fundamental passive linear circuit elements are the resistor (R), capacitor (C) and inductor (L). These circuit elements can be combined to form an electrical circuit in four distinct ways: the RC circuit, the RL circuit, the LC circuit and the RLC circuit, with the abbreviations indicating which components are used. These circuits exhibit important types of behaviour that are fundamental to analogue electronics. In particular, they are able to act as passive filters. In practice, however, capacitors (and RC circuits) are usually preferred to inductors since they can be more easily manufactured and are generally physically smaller, particularly for higher values of components. Both RC and RL circuits form a single-pole filter. Whether the reactive element (C or L) is in series with the load or in parallel with the load dictates whether the filter is low-pass or high-pass. Frequently RL circuits are used as DC power supplies for RF amplifiers, where the inductor is used to pass DC bias current and block the RF from getting back into the power supply. Complex impedance The complex impedance $Z_L$ (in ohms) of an inductor with inductance $L$ (in henrys) is $Z_L = Ls$. The complex frequency $s$ is a complex number, $s = \sigma + j\omega$, where $j$ represents the imaginary unit ($j^2 = -1$), $\sigma$ is the exponential decay constant (in nepers per second), and $\omega$ is the angular frequency (in radians per second). Eigenfunctions The complex-valued eigenfunctions of any linear time-invariant (LTI) system are of the following forms: $\mathbf{V}(s) e^{st} = \mathbf{V}(s) e^{(\sigma + j\omega) t}$. From Euler's formula, the real-part of these eigenfunctions are expo
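The inductor impedance and the single-pole behaviour described above can be sketched numerically. The component values below (1 kΩ, 0.1 H) are arbitrary examples, and the circuit assumed is a series RL divider with the output taken across the resistor, which behaves as a low-pass filter with corner frequency R/(2πL):

```python
from math import pi

R = 1_000.0    # resistance in ohms (example value)
L = 0.1        # inductance in henrys (example value)

def z_inductor(omega, sigma=0.0):
    """Inductor impedance Z_L = s*L evaluated at complex frequency s = sigma + j*omega."""
    s = complex(sigma, omega)
    return s * L

def rl_lowpass_gain(omega):
    """Magnitude of the voltage divider with output across R: |R / (R + jwL)|."""
    return abs(R / (R + z_inductor(omega)))

f_c = R / (2 * pi * L)    # corner frequency in hertz, about 1591.5 Hz here
print(round(f_c, 1))
print(round(rl_lowpass_gain(R / L), 3))    # gain at the corner is 1/sqrt(2), about 0.707
```

At the corner angular frequency ω = R/L the resistive and inductive magnitudes are equal, so the gain drops to 1/√2 (the −3 dB point), which is the defining property of a single-pole filter.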
https://en.wikipedia.org/wiki/Boarding%20pass
A boarding pass or boarding card is a document provided by an airline during airport check-in, giving a passenger permission to enter the restricted area of an airport (also known as the airside portion of the airport) and to board the airplane for a particular flight. At a minimum, it identifies the passenger, the flight number, the date, and scheduled time for departure. A boarding pass may also indicate details of the perks a passenger is entitled to (e.g., lounge access, priority boarding) and is thus presented at the entrance of such facilities to show eligibility. In some cases, flyers can check in online and print the boarding passes themselves. There are also codes that can be saved to an electronic device or from the airline's app that are scanned during boarding. A boarding pass may be required for a passenger to enter a secure area of an airport. Generally, a passenger with an electronic ticket will only need a boarding pass. If a passenger has a paper airline ticket, that ticket (or flight coupon) may be required to be attached to the boarding pass for the passenger to board the aircraft. For "connecting flights", a boarding pass is required for each new leg (distinguished by a different flight number), regardless of whether a different aircraft is boarded or not. The paper boarding pass (and ticket, if any), or portions thereof, are sometimes collected and counted for cross-check of passenger counts by gate agents, but more frequently are scanned (via barcode or magnetic strip) and returned to the passengers in their entirety. The standards for bar codes and magnetic stripes on boarding passes are published by the IATA. The bar code standard (Bar Coded Boarding Pass) defines the 2D bar code printed on paper boarding passes or sent to mobile phones for electronic boarding passes. The magnetic stripe standard (ATB2) expired in 2010. Most airports and airlines have automatic readers that will verify the validity of the boarding pass at the jetwa
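The Bar Coded Boarding Pass (BCBP) payload mentioned above is a fixed-width text record. The sketch below reads the mandatory items of a single-leg "M1" pass; the field widths follow the commonly documented single-leg layout and should be treated as an illustrative assumption (consult the IATA BCBP standard for the authoritative definition), and the sample string is constructed here to match those widths:

```python
# (field name, width in characters) after the "M1" prefix; assumed layout
FIELDS = [
    ("passenger_name", 20),
    ("eticket_indicator", 1),
    ("pnr", 7),                 # booking reference
    ("origin", 3),              # IATA airport codes
    ("destination", 3),
    ("carrier", 3),
    ("flight_number", 5),
    ("julian_date", 3),         # day of year of departure
    ("compartment", 1),
    ("seat", 4),
    ("checkin_sequence", 5),
    ("passenger_status", 1),
    ("conditional_size", 2),    # hex length of the variable-size section
]

def parse_bcbp(data: str) -> dict:
    """Slice the mandatory fixed-width fields out of a single-leg BCBP string."""
    if not data.startswith("M1"):
        raise ValueError("only single-leg 'M1' passes are handled in this sketch")
    out, pos = {}, 2
    for name, width in FIELDS:
        out[name] = data[pos:pos + width].strip()
        pos += width
    return out

sample = "M1DESMARAIS/LUC       EABC123 YULFRAAC 0834 326J001A0025 100"
leg = parse_bcbp(sample)
print(leg["origin"], leg["destination"], leg["seat"])    # YUL FRA 001A
```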
https://en.wikipedia.org/wiki/Traffic%20policing%20%28communications%29
In communications, traffic policing is the process of monitoring network traffic for compliance with a traffic contract and taking steps to enforce that contract. Traffic sources which are aware of a traffic contract may apply traffic shaping to ensure their output stays within the contract and is thus not discarded. Traffic exceeding a traffic contract may be discarded immediately, marked as non-compliant, or left as-is, depending on administrative policy and the characteristics of the excess traffic. Effects The recipient of traffic that has been policed will observe packet loss distributed throughout periods when incoming traffic exceeded the contract. If the source does not limit its sending rate (for example, through a feedback mechanism), this will continue, and may appear to the recipient as if link errors or some other disruption is causing random packet loss. The received traffic, which has experienced policing en route, will typically comply with the contract, although jitter may be introduced by elements in the network downstream of the policer. With reliable protocols, such as TCP as opposed to UDP, the dropped packets will not be acknowledged by the receiver, and therefore will be resent by the emitter, thus generating more traffic. Impact on congestion-controlled sources Sources with feedback-based congestion control mechanisms (for example TCP) typically adapt rapidly to static policing, converging on a rate just below the policed sustained rate. Co-operative policing mechanisms, such as packet-based discard facilitate more rapid convergence, higher stability and more efficient resource sharing. As a result, it may be hard for endpoints to distinguish TCP traffic that has been merely policed from TCP traffic that has been shaped. Impact in the case of ATM Where cell-level dropping is enforced (as opposed to that achieved through packet-based policing) the impact is particularly severe on longer packets. Since cells are typically much shorter
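One common way to implement the compliance test described above is a token bucket: tokens accumulate at the contracted rate up to a burst allowance, and a packet conforms only if enough tokens are available. The sketch below is a minimal single-rate policer; the class and parameter names are illustrative, not taken from any particular vendor, and a real policer might re-mark excess traffic instead of dropping it:

```python
import time

class TokenBucketPolicer:
    """Drop-on-exceed policer for a (rate, burst) traffic contract."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0          # token refill rate, bytes per second
        self.capacity = float(burst_bytes)  # maximum burst, bytes
        self.tokens = self.capacity         # bucket starts full
        self.last = time.monotonic()

    def allow(self, packet_bytes, now=None):
        """Return True if the packet conforms to the contract (and consume tokens)."""
        now = time.monotonic() if now is None else now
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True     # conforming: forward unchanged
        return False        # exceeding: drop (or, in other policies, mark)

policer = TokenBucketPolicer(rate_bps=8_000, burst_bytes=1_500)   # 1 kB/s, 1500 B burst
print(policer.allow(1_500, now=policer.last))    # True: fits in the initial burst
print(policer.allow(1_500, now=policer.last))    # False: bucket now empty
```

Because the policer never queues packets, it adds no delay of its own, which is the key operational difference from traffic shaping.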
https://en.wikipedia.org/wiki/Domain%20privacy
Domain privacy (often called Whois privacy) is a service offered by a number of domain name registrars. A user buys privacy from the company, who in turn replaces the user's information in the WHOIS with the information of a forwarding service (for email and sometimes postal mail, it is done by a proxy server). Level of anonymity Registrars typically collect personal information to provide the service. Some registrars take little persuasion to release the so-called 'private' information to the world, requiring only a phone request or a cease and desist letter. Others, however, handle privacy with more precaution, using measures including hosting domain names offshore and accepting cryptocurrencies for payment so that the registrar has no knowledge of the domain name owner's personal information (which would otherwise be transmitted with credit card transactions). It is debatable whether or not this practice is at odds with the domain registration requirement of the Internet Corporation for Assigned Names and Numbers (ICANN). Privacy by default Some top-level domains have privacy caveats: .al: No information about the owner is disclosed. .at, .co.at, .or.at: Since May 21, 2010, contact data (defined as phone number, fax number, e-mail address) is hidden by the registrar and must be explicitly made public. .ca: Since June 10, 2008, the Canadian Internet Registration Authority no longer posts registration details of individuals associated with .ca domains. .ch and .li : Since 1st January 2021 Whois information is private by default and can be obtained only in limited cases .de: Since May 25, 2018, the German Internet Registration Authority denic put extensive changes into force for the Whois Lookup Service. With a few exceptions, third parties can no longer access domain ownership data. .eu: If the registrant is a natural person, only the e-mail address is shown in the public whois records unless specified otherwise. .fi: Individual persons' data is not publis
https://en.wikipedia.org/wiki/Metabolic%20engineering
Metabolic engineering is the practice of optimizing genetic and regulatory processes within cells to increase the cell's production of a certain substance. These processes are chemical networks that use a series of biochemical reactions and enzymes that allow cells to convert raw materials into molecules necessary for the cell's survival. Metabolic engineering specifically seeks to mathematically model these networks, calculate a yield of useful products, and pinpoint parts of the network that constrain the production of these products. Genetic engineering techniques can then be used to modify the network in order to relieve these constraints. This modified network can then be modeled again to calculate the new product yield. The ultimate goal of metabolic engineering is to be able to use these organisms to produce valuable substances on an industrial scale in a cost-effective manner. Current examples include producing beer, wine, cheese, pharmaceuticals, and other biotechnology products. Some of the common strategies used for metabolic engineering are (1) overexpressing the gene encoding the rate-limiting enzyme of the biosynthetic pathway, (2) blocking the competing metabolic pathways, (3) heterologous gene expression, and (4) enzyme engineering. Since cells use these metabolic networks for their survival, changes can have drastic effects on the cells' viability. Trade-offs in metabolic engineering therefore arise between the cell's ability to produce the desired substance and its natural survival needs. Instead of directly deleting and/or overexpressing the genes that encode for metabolic enzymes, the current focus is to target the regulatory networks in a cell to efficiently engineer the metabolism. History and applications In the past, to increase the productivity of a desired metabolite, a microorganism was genetically modified by chemically induced mutation, and the mutant strain that overexpressed the desired metabolite was then chosen. Ho
https://en.wikipedia.org/wiki/Cis-3-Hexen-1-ol
cis-3-Hexen-1-ol, also known as (Z)-3-hexen-1-ol and leaf alcohol, is a colorless oily liquid with an intense grassy-green odor of freshly cut green grass and leaves. It is produced in small amounts by most plants and it acts as an attractant to many predatory insects. cis-3-Hexen-1-ol is a very important aroma compound that is used in fruit and vegetable flavors and in perfumes. The yearly production is about 30 tonnes. cis-3-Hexen-1-ol is an alcohol and its esters are also important flavor and fragrance raw materials. The related aldehyde cis-3-hexenal (leaf aldehyde) has a similar and even stronger smell but is relatively unstable and isomerizes into the conjugated trans-2-hexenal. This compound has been recognized as a semiochemical involved in mechanisms and behaviors of attraction in diverse animals such as insects and mammals. However, there is no scientific evidence of its aphrodisiac effects in humans. The popular Mexican alcoholic beverage, mezcal, is found to have enhanced concentrations of this compound when a maguey worm is served in the glass. Human odor perception A pair of two single-nucleotide polymorphisms, both in the gene for the OR2J3 odor receptor, strongly reduce sensitivity to this odorant. References External links Pheromone database Molecule of the Month: Hexenal That Worm at the Bottom of Your Mezcal Isn’t a Total Lie Flavors Alkenols
https://en.wikipedia.org/wiki/Dessin%20d%27enfant
In mathematics, a dessin d'enfant is a type of graph embedding used to study Riemann surfaces and to provide combinatorial invariants for the action of the absolute Galois group of the rational numbers. The name of these embeddings is French for a "child's drawing"; its plural is either dessins d'enfant, "child's drawings", or dessins d'enfants, "children's drawings". A dessin d'enfant is a graph, with its vertices colored alternately black and white, embedded in an oriented surface that, in many cases, is simply a plane. For the coloring to exist, the graph must be bipartite. The faces of the embedding are required to be topological disks. The surface and the embedding may be described combinatorially using a rotation system, a cyclic order of the edges surrounding each vertex of the graph that describes the order in which the edges would be crossed by a path that travels clockwise on the surface in a small loop around the vertex. Any dessin can provide the surface it is embedded in with a structure as a Riemann surface. It is natural to ask which Riemann surfaces arise in this way. The answer is provided by Belyi's theorem, which states that the Riemann surfaces that can be described by dessins are precisely those that can be defined as algebraic curves over the field of algebraic numbers. The absolute Galois group transforms these particular curves into each other, and thereby also transforms the underlying dessins. History 19th century Early proto-forms of dessins d'enfants appeared as early as 1856 in the icosian calculus of William Rowan Hamilton; in modern terms, these are Hamiltonian paths on the icosahedral graph. Recognizable modern dessins d'enfants and Belyi functions were used by Felix Klein. Klein called these diagrams Linienzüge (German, plural of Linienzug "line-track", also used as a term for polygon); he used a white circle for the preimage of 0 and a '+' for the preimage of 1, rather than
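The rotation-system description can be made concrete: a dessin is encoded by two permutations of its edge set, one cycling the edges around each black vertex and one around each white vertex, following the cyclic orders of the rotation system. Vertices and faces are then cycle counts, and the Euler formula V − E + F = 2 − 2g recovers the genus of the surface. The sketch below uses an illustrative representation (edges 0..n−1, permutations as Python lists) and one common convention for the face permutation:

```python
def cycle_count(perm):
    """Number of cycles of a permutation given as a list perm[i] = image of i."""
    seen, count = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            count += 1
            i = start
            while i not in seen:
                seen.add(i)
                i = perm[i]
    return count

def genus(sigma0, sigma1):
    """Genus of the surface carrying the dessin encoded by (sigma0, sigma1):
    sigma0 rotates edges around black vertices, sigma1 around white ones."""
    n = len(sigma0)                                  # number of edges
    face_perm = [sigma1[sigma0[i]] for i in range(n)]
    vertices = cycle_count(sigma0) + cycle_count(sigma1)
    euler = vertices - n + cycle_count(face_perm)    # V - E + F = 2 - 2g
    return (2 - euler) // 2

# Star: one black vertex joined to three white leaves -- a planar tree, genus 0.
print(genus([1, 2, 0], [0, 1, 2]))    # -> 0
# A 4-edge example whose Euler characteristic forces an embedding on the torus.
print(genus([1, 2, 3, 0], [2, 3, 0, 1]))    # -> 1
```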
https://en.wikipedia.org/wiki/Menstrual%20synchrony
Menstrual synchrony, also called the McClintock effect, or the Wellesley effect, is a contested process whereby women who begin living together in close proximity would experience their menstrual cycle onsets (the onset of menstruation or menses) becoming more synchronized together in time than when previously living apart. "For example, the distribution of onsets of seven female lifeguards was scattered at the beginning of the summer, but after 3 months spent together, the onset of all seven cycles fell within a 4-day period." Martha McClintock's 1971 paper, published in Nature, says that menstrual cycle synchronization happens when the menstrual cycle onsets of two or more women become closer together in time than they were several months earlier. After the initial studies, several papers were published reporting methodological flaws in studies reporting menstrual synchrony including McClintock's study. In addition, other studies were published that failed to find synchrony. The proposed mechanisms have also received scientific criticism. Reviews in 2006 and 2013 concluded that menstrual synchrony likely does not exist. Overview Original study by Martha McClintock Martha McClintock published the first study on menstrual synchrony among women living together in dormitories at Wellesley College, a women's liberal arts college in Massachusetts, US. Proposed causes McClintock hypothesized that pheromones could cause menstrual cycle synchronization. However, other mechanisms have been proposed, most prominently synchronization with lunar phases. Efforts to replicate McClintock's results No scientific evidence supports the lunar hypothesis, and doubt has been cast on pheromone mechanisms. After the initial studies reporting menstrual synchrony began to appear in the scientific literature, other researchers began reporting the failure to find menstrual synchrony. These studies were followed by critiques of the methods used in early studies, which argued that bias
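One of the statistical points raised by the methodological critiques can be illustrated by simulation: with onsets placed uniformly at random on a 28-day cycle, two women's onsets are already expected to be about 7 days apart on average, so observing onsets "closer than a week" is not by itself evidence of synchrony. The cycle length and trial count below are example choices for the illustration:

```python
import random

random.seed(1)
CYCLE = 28    # nominal cycle length in days (example value)

def onset_gap():
    """Circular distance (in days, at most CYCLE/2) between two random onsets."""
    a, b = random.uniform(0, CYCLE), random.uniform(0, CYCLE)
    d = abs(a - b)
    return min(d, CYCLE - d)

# Monte Carlo estimate of the expected gap under pure chance.
mean_gap = sum(onset_gap() for _ in range(100_000)) / 100_000
print(round(mean_gap, 1))    # close to the theoretical expectation of 7.0 days
```

Because the circular distance between two uniform points on a 28-day cycle is itself uniform on [0, 14], its mean is exactly 7 days, which is the baseline any claimed synchrony effect must beat.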
https://en.wikipedia.org/wiki/Fr%C3%A9chet%20filter
In mathematics, the Fréchet filter, also called the cofinite filter, on a set X is a certain collection of subsets of X (that is, it is a particular subset of the power set of X). A subset A of X belongs to the Fréchet filter F if and only if the complement of A in X is finite. Any such set A is said to be cofinite in X, which is why it is alternatively called the cofinite filter on X. The Fréchet filter is of interest in topology, where filters originated, and relates to order and lattice theory because a set's power set is a partially ordered set under set inclusion (more specifically, it forms a lattice). The Fréchet filter is named after the French mathematician Maurice Fréchet (1878–1973), who worked in topology. Definition A subset A of a set X is said to be cofinite in X if its complement in X (that is, the set X \ A) is finite. If the empty set is allowed to be in a filter, the Fréchet filter on X, denoted by F, is the set of all cofinite subsets of X. That is: F = {A ⊆ X : X \ A is finite}. If X is an infinite set, then every cofinite subset of X is necessarily not empty, so that in this case, it is not necessary to make the empty-set assumption made before. This makes F a filter on the lattice (℘(X), ⊆), the power set of X ordered by set inclusion. Given that X \ A denotes the complement of a set A in X, the following two conditions hold: Intersection condition: if two sets are finitely complemented in X, then so is their intersection, since X \ (A ∩ B) = (X \ A) ∪ (X \ B) and the union of two finite sets is finite. Upper-set condition: if a set is finitely complemented in X, then so are its supersets in X. Properties If the base set X is finite, then F = ℘(X), since every subset of X, and in particular every complement, is then finite. This case is sometimes excluded by definition or else called the improper filter on X. Allowing X to be finite creates a single exception to the Fréchet filter's being free and non-principal, since a filter on a finite set cannot be free and a non-principal filter cannot contain any singletons as members. If X is infinite, then every member of F is infinite, since it is simply X minus finitely many of its members
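The intersection condition above can be made concrete in code. A minimal sketch (the class name and representation are illustrative, not from any library): since a cofinite subset of an infinite ground set X is determined by its finite complement, intersecting two cofinite sets amounts to taking the union of their complements, which stays finite.

```python
# Represent a cofinite subset A of some fixed infinite ground set X by its
# finite complement X \ A. The filter axioms reduce to facts about finite sets.

class CofiniteSet:
    """A cofinite subset of an infinite ground set X, stored as its
    finite complement."""

    def __init__(self, complement):
        self.complement = frozenset(complement)  # X \ A, always finite

    def intersect(self, other):
        # X \ (A ∩ B) = (X \ A) ∪ (X \ B): the union of two finite sets is
        # finite, so the intersection of two cofinite sets is again cofinite.
        return CofiniteSet(self.complement | other.complement)

    def contains(self, x):
        return x not in self.complement


a = CofiniteSet({1, 2})   # X minus {1, 2}
b = CofiniteSet({2, 3})   # X minus {2, 3}
c = a.intersect(b)        # X minus {1, 2, 3} — still cofinite
```

The upper-set condition is equally direct in this representation: a superset of A has a complement that is a subset of X \ A, hence also finite.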
https://en.wikipedia.org/wiki/Weighting%20curve
A weighting curve is a graph of a set of factors that are used to 'weight' measured values of a variable according to their importance in relation to some outcome. An important example is frequency weighting in sound level measurement, where a specific set of weighting curves known as A-, B-, C- and D-weighting, as defined in IEC 61672, are used. Unweighted measurements of sound pressure do not correspond to perceived loudness because the human ear is less sensitive at low and high frequencies, with the effect more pronounced at lower sound levels. The four curves are applied to the measured sound level, for example by the use of a weighting filter in a sound level meter, to arrive at readings of loudness in phons or in decibels (dB) above the threshold of hearing (see A-weighting). Weighting curves in electronic engineering, audio and broadcasting Although A-weighting with a slow RMS detector, as commonly used in sound level meters, is frequently used when measuring noise in audio circuits, a different method, ITU-R 468 weighting, uses a psophometric weighting curve and a quasi-peak detector. This method, formerly known as CCIR weighting, is preferred by the telecommunications industry, broadcasters, and some equipment manufacturers as it reflects more accurately the audibility of pops and short bursts of random noise as opposed to pure tones. Psophometric weighting is used in telephony and telecommunications where narrow-band circuits are common. Hearing weighting curves are also used for sound in water. Other applications of weighting Acoustics is by no means the only subject that makes use of weighting curves; they are widely used in deriving measures of effect for sun exposure, gamma radiation exposure, and many other things. In the measurement of gamma rays or other ionising radiation, a radiation monitor or dosimeter will commonly use a filter to attenuate those energy levels or wavelengths that cause the least damage to the human body,
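The A-weighting curve mentioned above has a closed-form frequency response, derived from the analogue filter definition in IEC 61672. A small sketch, assuming plain floating-point arithmetic is adequate for illustration:

```python
import math

def a_weighting_db(f):
    """Approximate A-weighting gain in dB at frequency f (Hz),
    per the analogue filter definition in IEC 61672."""
    f2 = f * f
    ra = (12194.0**2 * f2 * f2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.00  # +2.00 dB normalizes to 0 dB at 1 kHz

# The curve is roughly flat near 1 kHz and rolls off steeply at low
# frequencies, matching the ear's reduced low-frequency sensitivity.
```

A sound level meter applies this gain, per frequency band, to the measured spectrum before summing to a single A-weighted level.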
https://en.wikipedia.org/wiki/Vaalharts%20Irrigation%20Scheme
The Vaalharts Irrigation Scheme is one of the largest irrigation schemes in the world, covering 369.50 square kilometres in the Northern Cape Province of South Africa. It is named after the Vaal River and the Harts River, the Harts River being a tributary of the Vaal. Water from a diversion weir in the Vaal River, near Warrenton, flows through a 1,176 km long network of canals. This system provides irrigation water to a total of 39,820 ha of scheduled land, and industrial water to six towns and other industrial water users. This farmland is divided into individual blocks, each identified by its own letter or letter group. The blocks are divided into streets, which are numbered consecutively outward from the first. The canals branch into all of the blocks and streets. There are a total of six plots per street, and each plot also has a number from one to six. To reference a specific plot, the plot number is given first, followed by the block letter and then the street number, e.g. 3 G 13. Each plot draws water from the canal through its own hatch into its own dam. The water is then pumped and sprinkled across the farmland, most commonly with centre pivots. See also Hartswater Windsorton References External links Vaalharts Irrigation Scheme Study, South Africa Northern Cape Irrigation projects Irrigation in South Africa Vaal River
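The plot-referencing convention described above can be sketched as a small helper. The function names and validation here are illustrative assumptions, not part of any official scheme documentation:

```python
# Compose and parse plot references of the form "<plot> <block> <street>",
# e.g. "3 G 13" for plot 3 in block G, street 13.

def make_reference(plot, block, street):
    if not 1 <= plot <= 6:
        raise ValueError("each street has plots numbered 1 to 6")
    return f"{plot} {block} {street}"

def parse_reference(ref):
    plot, block, street = ref.split()
    return int(plot), block, int(street)
```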
https://en.wikipedia.org/wiki/Neal%20Koblitz
Neal I. Koblitz (born December 24, 1948) is a Professor of Mathematics at the University of Washington. He is also an adjunct professor with the Centre for Applied Cryptographic Research at the University of Waterloo. He is the creator of hyperelliptic curve cryptography and the independent co-creator of elliptic curve cryptography. Biography Koblitz received his B.A. in mathematics from Harvard University in 1969. While at Harvard, he was a Putnam Fellow in 1968. He received his Ph.D. from Princeton University in 1974 under the direction of Nick Katz. From 1975 to 1979 he was an instructor at Harvard University. In 1979 he began working at the University of Washington. Koblitz's 1981 article "Mathematics as Propaganda" criticized the misuse of mathematics in the social sciences and helped motivate Serge Lang's successful challenge to the nomination of political scientist Samuel P. Huntington to the National Academy of Sciences. In The Mathematical Intelligencer, Koblitz, Steven Weintraub, and Saunders Mac Lane later criticized the arguments of Herbert A. Simon, who had attempted to defend Huntington's work. He co-invented elliptic-curve cryptography in 1985, independently of Victor S. Miller, and for this was awarded the Levchin Prize in 2021. With his wife Ann Hibner Koblitz, he founded the Kovalevskaia Prize in 1985, to honor women scientists in developing countries. It was financed from the royalties of Ann Hibner Koblitz's 1983 biography of Sofia Kovalevskaia. Although the awardees have ranged over many fields of science, one of the 2011 winners was a Vietnamese mathematician, Lê Thị Thanh Nhàn. Koblitz is an atheist. See also List of University of Waterloo people Gross–Koblitz formula Selected publications References External links Neal Koblitz's home page 1948 births Living people 20th-century American mathematicians 21st-century American mathematicians American atheists Modern cryptographers Public-key cryptographers Putnam Fellows Number theorists Harvard
https://en.wikipedia.org/wiki/Virtual%20private%20database
A virtual private database or VPD masks data in a larger database so that only a subset of the data appears to exist, without actually segregating data into different tables, schemas or databases. A typical application is constraining sites, departments, individuals, etc. to operate only on their own records, while at the same time allowing more privileged users and operations (e.g. reports, data warehousing, etc.) to operate on the whole table. The term is typical of the Oracle DBMS, where the implementation is very general: tables can be associated with SQL functions, which return a predicate as a SQL expression. Whenever a query is executed, the relevant predicates for the involved tables are transparently collected and used to filter rows. SELECT, INSERT, UPDATE and DELETE can have different rules. External links Using Virtual Private Database to Implement Application Security Policies http://www.oracle-base.com/articles/8i/VirtualPrivateDatabases.php Data security Types of databases
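The predicate-appending behaviour described above can be sketched conceptually. This is not Oracle's actual VPD API (Oracle registers PL/SQL policy functions via its DBMS_RLS package); it is only an illustration of how a policy function's predicate is transparently joined onto each query before execution:

```python
# Conceptual sketch of VPD-style row filtering: a policy function returns a
# predicate string that is appended to every query against the table.

def dept_policy(context):
    # Hypothetical policy: users may only see rows of their own department.
    return f"department_id = {context['department_id']}"

def rewrite_query(base_query, policy, context):
    predicate = policy(context)
    joiner = " AND " if " where " in base_query.lower() else " WHERE "
    return base_query + joiner + predicate

q = rewrite_query("SELECT * FROM employees", dept_policy, {"department_id": 42})
```

The same rewrite step would be applied to UPDATE and DELETE statements, which is how different rules per statement type become possible.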
https://en.wikipedia.org/wiki/Z1%20%28computer%29
The Z1 was a motor-driven mechanical computer designed by Konrad Zuse from 1936 to 1937, which he built in his parents' home from 1936 to 1938. It was a binary electrically driven mechanical calculator with limited programmability, reading instructions from punched celluloid film. The Z1 was the first freely programmable computer in the world that used Boolean logic and binary floating-point numbers; however, it was unreliable in operation. It was completed in 1938 and financed completely by private funds. This computer was destroyed in the bombardment of Berlin in December 1943, during World War II, together with all construction plans. The Z1 was the first in a series of computers that Zuse designed. Its original name was "V1" for Versuchsmodell 1 (meaning Experimental Model 1). After World War II, it was renamed "Z1" to differentiate it from the flying bombs designed by Robert Lusser. The Z2 and Z3 were follow-ups based on many of the same ideas as the Z1. Design The Z1 contained almost all the parts of a modern computer, i.e. control unit, memory, micro sequences, floating-point logic, and input-output devices. The Z1 was freely programmable via punched tape and a punched tape reader. There was a clear separation between the punched tape reader, the control unit for supervising the whole machine and the execution of the instructions, the arithmetic unit, and the input and output devices. The input tape unit read perforations in 35-millimeter film. The Z1 was a 22-bit floating-point value adder and subtractor, with some control logic to make it capable of more complex operations such as multiplication (by repeated additions) and division (by repeated subtractions). The Z1's instruction set had eight instructions and it took between one and twenty-one cycles per instruction. The Z1 had a 16-word floating point memory, where each word of memory could be read from – and written to – the control unit. The mechanical memory units were unique in their design and were
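The arithmetic approach described above — multiplication by repeated addition and division by repeated subtraction — can be sketched for non-negative integers. (The Z1 itself worked on 22-bit floating-point words, so this is a deliberate simplification of the control-logic idea, not a reconstruction of the machine.)

```python
# The Z1's hardware had only an adder/subtractor; control logic reduced the
# more complex operations to repeated use of it.

def multiply(a, b):
    result = 0
    for _ in range(b):
        result += a          # repeated addition
    return result

def divide(a, b):
    quotient = 0
    while a >= b:
        a -= b               # repeated subtraction
        quotient += 1
    return quotient, a       # quotient and remainder
```

This also makes the cycle counts plausible: an instruction implemented as a loop of additions naturally takes many more cycles than a single add.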
https://en.wikipedia.org/wiki/Pain%20asymbolia
Pain asymbolia, also called pain dissociation, is a condition in which pain is experienced without unpleasantness. This usually results from injury to the brain, lobotomy, cingulotomy or morphine analgesia. Preexisting lesions of the insula may abolish the aversive quality of painful stimuli while preserving the location and intensity aspects. Typically, patients report that they have pain but are not bothered by it; they recognize the sensation of pain but are mostly or completely immune to suffering from it. The pathophysiology of this condition revolves around a disconnect, secondary to damage, between the insular cortex and the limbic system, specifically the cingulate gyrus, whose prime response to the pain perceived by the insular cortex is to couple it with an aversive emotional response, thus signaling to the individual its propensity to inflict actual harm. However, such a disconnect is not the only causative factor, as damage to these aforementioned cortical structures themselves also results in the same symptomatology. See also Physical pain Psychological pain Suffering Congenital insensitivity to pain References External links http://www.archipel.uqam.ca/2996/1/Frak.PDF Pain Neuroscience
https://en.wikipedia.org/wiki/SNDCP
SNDCP, the Subnetwork Dependent Convergence Protocol, is part of layer 3 of a GPRS protocol specification. SNDCP interfaces to the Internet Protocol at the top, and to the GPRS-specific Logical Link Control (LLC) protocol at the bottom. In the spirit of the GPRS specifications, there can be many implementations of SNDCP, supporting protocols such as X.25. However, in reality, IP (Internet Protocol) is such an overwhelming standard that X.25 has become irrelevant for modern applications, so all implementations of SNDCP for GPRS only support IP as the payload type. The SNDCP layer is relevant to the protocol stack of the mobile station and that of the SGSN, and operates once a PDP context is established and the quality of service has been negotiated. Services offered by SNDCP The SNDCP layer primarily converts, encapsulates and segments external network formats (like Internet Protocol datagrams) into sub-network formats (called SN-PDUs). It also performs compression of N-PDUs to make for efficient data transmission. It handles PDU transfers for multiple PDP contexts, and it also ensures that N-PDUs from each PDP context are transmitted to the LLC layer in sufficient time to maintain the QoS. SNDCP provides services to the higher layers which may include connectionless and connection-oriented mode, compression, multiplexing and segmentation. References 3GPP TS 44.065 "Mobile Station (MS) - Serving GPRS Support Node (SGSN); Subnetwork Dependent Convergence Protocol (SNDCP)" Network protocols
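The segmentation service described above can be sketched as follows. The field names and segment size here are illustrative assumptions, not taken from 3GPP TS 44.065:

```python
# Segment an N-PDU (e.g. an IP packet) into SN-PDUs small enough for the
# LLC layer, then reassemble them in order.

MAX_SEGMENT = 500  # assumed maximum LLC payload size, for illustration

def segment_npdu(npdu: bytes, max_segment: int = MAX_SEGMENT):
    segments = []
    for offset in range(0, len(npdu), max_segment):
        chunk = npdu[offset:offset + max_segment]
        more = offset + max_segment < len(npdu)  # "more segments follow" flag
        segments.append({"more": more, "data": chunk})
    return segments

def reassemble(segments):
    assert not segments[-1]["more"], "last segment must clear the flag"
    return b"".join(s["data"] for s in segments)

packet = bytes(1200)          # a 1200-byte N-PDU
parts = segment_npdu(packet)  # three SN-PDUs of 500, 500 and 200 bytes
```

A real implementation would additionally carry an NSAPI identifying the PDP context, so the peer can demultiplex segments from several contexts on the same LLC link.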
https://en.wikipedia.org/wiki/Combinatorial%20principles
In proving results in combinatorics several useful combinatorial rules or combinatorial principles are commonly recognized and used. The rule of sum, rule of product, and inclusion–exclusion principle are often used for enumerative purposes. Bijective proofs are utilized to demonstrate that two sets have the same number of elements. The pigeonhole principle often ascertains the existence of something or is used to determine the minimum or maximum number of something in a discrete context. Many combinatorial identities arise from double counting methods or the method of distinguished element. Generating functions and recurrence relations are powerful tools that can be used to manipulate sequences, and can describe if not resolve many combinatorial situations. Rule of sum The rule of sum is an intuitive principle stating that if there are a possible outcomes for an event (or ways to do something) and b possible outcomes for another event (or ways to do another thing), and the two events cannot both occur (or the two things can't both be done), then there are a + b total possible outcomes for the events (or total possible ways to do one of the things). More formally, the sum of the sizes of two disjoint sets is equal to the size of their union. Rule of product The rule of product is another intuitive principle stating that if there are a ways to do something and b ways to do another thing, then there are a · b ways to do both things. Inclusion–exclusion principle The inclusion–exclusion principle relates the size of the union of multiple sets, the size of each set, and the size of each possible intersection of the sets. The smallest example is when there are two sets: the number of elements in the union of A and B is equal to the sum of the number of elements in A and B, minus the number of elements in their intersection. Generally, according to this principle, if A1, …, An are finite sets, then |A1 ∪ A2 ∪ ⋯ ∪ An| = Σ|Ai| − Σ|Ai ∩ Aj| + Σ|Ai ∩ Aj ∩ Ak| − ⋯ + (−1)^(n+1) |A1 ∩ A2 ∩ ⋯ ∩ An|, where the successive sums run over all single indices i, all pairs i < j, all triples i < j < k, and so on. Rule of division The rule of division states that there are n/d ways to do a task if it can be done using a procedure that can be carried out in n ways, and for every way w, exactly d of the n ways correspond to way w.
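The inclusion–exclusion principle can be checked directly in code; a short sketch using Python sets:

```python
from itertools import combinations

def union_size(sets):
    """Size of the union of finite sets via inclusion–exclusion:
    add single-set sizes, subtract pairwise intersections, add triple
    intersections, and so on with alternating signs."""
    total = 0
    n = len(sets)
    for k in range(1, n + 1):
        sign = (-1) ** (k + 1)
        for combo in combinations(sets, k):
            inter = set.intersection(*map(set, combo))
            total += sign * len(inter)
    return total

A = {1, 2, 3, 4}
B = {3, 4, 5}
C = {4, 5, 6}
# |A ∪ B ∪ C| = 4 + 3 + 3 − 2 − 1 − 2 + 1 = 6
```

When the sets are pairwise disjoint, every intersection term vanishes and the formula collapses to the rule of sum.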
https://en.wikipedia.org/wiki/Unimodality
In mathematics, unimodality means possessing a unique mode. More generally, unimodality means there is only a single highest value, somehow defined, of some mathematical object. Unimodal probability distribution In statistics, a unimodal probability distribution or unimodal distribution is a probability distribution which has a single peak. The term "mode" in this context refers to any peak of the distribution, not just to the strict definition of mode which is usual in statistics. If there is a single mode, the distribution function is called "unimodal". If it has more modes, it is "bimodal" (2), "trimodal" (3), etc., or in general, "multimodal". Figure 1 illustrates normal distributions, which are unimodal. Other examples of unimodal distributions include the Cauchy distribution, Student's t-distribution, chi-squared distribution and exponential distribution. Among discrete distributions, the binomial distribution and Poisson distribution can be seen as unimodal, though for some parameters they can have two adjacent values with the same probability. Figure 2 and Figure 3 illustrate bimodal distributions. Other definitions Other definitions of unimodality in distribution functions also exist. In continuous distributions, unimodality can be defined through the behavior of the cumulative distribution function (cdf). If the cdf is convex for x < m and concave for x > m, then the distribution is unimodal, m being the mode. Note that under this definition the uniform distribution is unimodal, as well as any other distribution in which the maximum of the distribution is achieved over a range of values, e.g. the trapezoidal distribution. Usually this definition allows for a discontinuity at the mode; usually in a continuous distribution the probability of any single value is zero, while this definition allows for a non-zero probability, or an "atom of probability", at the mode. Criteria for unimodality can also be defined through the characteristic function of the distribution
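The notion of counting peaks can be sketched for a discrete distribution. This is an illustrative helper, not a statistical-library routine; it treats a flat run of equal adjacent maxima as a single mode, matching the remark above about binomial and Poisson distributions with two adjacent equally probable values:

```python
def count_modes(probs):
    """Count the peaks (local maxima) of a discrete probability sequence,
    treating a plateau of equal maximal neighbours as one peak."""
    # Collapse runs of equal values so plateaus count once.
    collapsed = [probs[0]]
    for p in probs[1:]:
        if p != collapsed[-1]:
            collapsed.append(p)
    peaks = 0
    for i, p in enumerate(collapsed):
        left = collapsed[i - 1] if i > 0 else float("-inf")
        right = collapsed[i + 1] if i + 1 < len(collapsed) else float("-inf")
        if p > left and p > right:
            peaks += 1
    return peaks

# A unimodal (binomial-like) sequence has one peak; a bimodal one has two.
```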
https://en.wikipedia.org/wiki/Cassini%20and%20Catalan%20identities
Cassini's identity (sometimes called Simson's identity) and Catalan's identity are mathematical identities for the Fibonacci numbers. Cassini's identity, a special case of Catalan's identity, states that for the nth Fibonacci number, F(n−1)F(n+1) − F(n)^2 = (−1)^n. Note here F(0) is taken to be 0, and F(1) is taken to be 1. Catalan's identity generalizes this: F(n)^2 − F(n−r)F(n+r) = (−1)^(n−r) F(r)^2. Vajda's identity generalizes this: F(n+i)F(n+j) − F(n)F(n+i+j) = (−1)^n F(i)F(j). History Cassini's formula was discovered in 1680 by Giovanni Domenico Cassini, then director of the Paris Observatory, and independently proven by Robert Simson (1753). However Johannes Kepler presumably knew the identity already in 1608. Catalan's identity is named after Eugène Catalan (1814–1894). It can be found in one of his private research notes, entitled "Sur la série de Lamé" and dated October 1879. However, the identity did not appear in print until December 1886 as part of his collected works. This explains why some give 1879 and others 1886 as the date for Catalan's identity. The Hungarian-British mathematician Steven Vajda (1901–95) published a book on Fibonacci numbers (Fibonacci and Lucas Numbers, and the Golden Section: Theory and Applications, 1989) which contains the identity carrying his name. However, the identity was already published in 1960 by Dustan Everman as problem 1396 in The American Mathematical Monthly. Proof of Cassini identity Proof by matrix theory A quick proof of Cassini's identity may be given by recognising the left side of the equation as a determinant of a 2×2 matrix of Fibonacci numbers. The result is almost immediate when the matrix is seen to be the nth power of a matrix with determinant −1: F(n−1)F(n+1) − F(n)^2 = det [[F(n+1), F(n)], [F(n), F(n−1)]] = det ([[1, 1], [1, 0]]^n) = (det [[1, 1], [1, 0]])^n = (−1)^n. Proof by induction Consider the induction statement: F(n−1)F(n+1) − F(n)^2 = (−1)^n. The base case n = 1 is true: F(0)F(2) − F(1)^2 = 0·1 − 1 = −1 = (−1)^1. Assume the statement is true for n. Then: F(n)F(n+2) − F(n+1)^2 = F(n)(F(n) + F(n+1)) − F(n+1)^2 = F(n)^2 + F(n)F(n+1) − F(n+1)^2 = F(n)^2 − F(n+1)(F(n+1) − F(n)) = F(n)^2 − F(n+1)F(n−1) = −(−1)^n = (−1)^(n+1), so the statement is true for all integers n. Proof of Catalan identity We use Binet's formula, that F(n) = (φ^n − ψ^n)/√5, where φ = (1+√5)/2 and ψ = (1−√5)/2. Hence, 5F(n)^2 = (φ^n − ψ^n)^2 and 5F(n−r)F(n+r) = (φ^(n−r) − ψ^(n−r))(φ^(n+r) − ψ^(n+r)). So, 5(F(n)^2 − F(n−r)F(n+r)) = −2(φψ)^n + φ^(n−r)ψ^(n+r) + φ^(n+r)ψ^(n−r). Using φψ = −1, this equals (−1)^(n−r)(φ^(2r) + ψ^(2r)) − 2(−1)^n = (−1)^(n−r)(φ^(2r) + ψ^(2r) − 2(−1)^r). The Lucas number L(n) is defined as L(n) = φ^n + ψ^n, so this equals (−1)^(n−r)(L(2r) − 2(−1)^r). Because L(2r) − 2(−1)^r = φ^(2r) − 2(φψ)^r + ψ^(2r) = (φ^r − ψ^r)^2 = 5F(r)^2, cancelling the 5's gives the result: F(n)^2 − F(n−r)F(n+r) = (−1)^(n−r) F(r)^2.
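The identities can be spot-checked numerically; a short sketch with F(0) = 0 and F(1) = 1:

```python
# Numerical spot-check of Cassini's, Catalan's and Vajda's identities.

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def cassini(n):
    # F(n-1) F(n+1) - F(n)^2, which should equal (-1)^n
    return fib(n - 1) * fib(n + 1) - fib(n) ** 2

def catalan(n, r):
    # F(n)^2 - F(n-r) F(n+r), which should equal (-1)^(n-r) F(r)^2
    return fib(n) ** 2 - fib(n - r) * fib(n + r)

def vajda(n, i, j):
    # F(n+i) F(n+j) - F(n) F(n+i+j), which should equal (-1)^n F(i) F(j)
    return fib(n + i) * fib(n + j) - fib(n) * fib(n + i + j)
```

Setting r = 1 in the Catalan checker reproduces Cassini's identity up to sign, and setting i = j = r in Vajda's recovers Catalan's, mirroring the chain of generalizations above.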
https://en.wikipedia.org/wiki/Cusp%20neighborhood
In mathematics, a cusp neighborhood is defined as a set of points near a cusp singularity. Cusp neighborhood for a Riemann surface The cusp neighborhood for a hyperbolic Riemann surface can be defined in terms of its Fuchsian model. Suppose that the Fuchsian group G contains a parabolic element g. For example, the element t ∈ SL(2,Z), where t is the matrix with top row (1, 1) and bottom row (0, 1), is a parabolic element. Note that all parabolic elements of SL(2,C) are conjugate to this element. That is, if g ∈ SL(2,Z) is parabolic, then g = h t h^(−1) for some h ∈ SL(2,Z). The set U = {z ∈ H : Im z > 1}, where H is the upper half-plane, has γ(U) ∩ U = ∅ for any γ ∈ G with γ ∉ ⟨g⟩, where ⟨g⟩ is understood to mean the group generated by g. That is, ⟨g⟩ acts properly discontinuously on U. Because of this, it can be seen that the projection of U onto H/G is thus E = U/⟨g⟩. Here, E is called the neighborhood of the cusp corresponding to g. Note that the hyperbolic area of E is exactly 1, when computed using the canonical Poincaré metric. This is most easily seen by example: consider the intersection of U defined above with the fundamental domain of the modular group, as would be appropriate for the choice of t as the parabolic element. This region is the strip −1/2 < Re z < 1/2 with Im z > 1, and when integrated over the volume element dx dy / y^2 the result is trivially 1. Areas of all cusp neighborhoods are equal to this, by the invariance of the area under conjugation. References Hyperbolic geometry Riemann surfaces
https://en.wikipedia.org/wiki/Inversion%20of%20control
In software engineering, inversion of control (IoC) is a design pattern in which custom-written portions of a computer program receive the flow of control from a generic framework. The term "inversion" is historical: a software architecture with this design "inverts" control as compared to procedural programming. In procedural programming, a program's custom code calls reusable libraries to take care of generic tasks, but with inversion of control, it is the framework that calls the custom code. Inversion of control has been widely used by application development frameworks since the rise of GUI environments and continues to be used both in GUI environments and in web server application frameworks. Inversion of control makes the framework extensible by the methods defined by the application programmer. Event-driven programming is often implemented using IoC so that the custom code need only be concerned with the handling of events, while the event loop and dispatch of events/messages is handled by the framework or the runtime environment. In web server application frameworks, dispatch is usually called routing, and handlers may be called endpoints. The phrase "inversion of control" has separately also come to be used in the community of Java programmers to refer specifically to the patterns of injecting objects' dependencies that occur with "IoC containers" in Java frameworks such as the Spring framework. In this different sense, "inversion of control" refers to granting the framework control over the implementations of dependencies that are used by application objects rather than to the original meaning of granting the framework control flow (control over the time of execution of application code e.g. callbacks). Overview As an example, with traditional programming, the main function of an application might make function calls into a menu library to display a list of available commands and query the user to select one. The library thus would return the chosen
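The contrast described above can be sketched in a few lines. The framework class here is hypothetical, purely to show the framework, not the application, driving control flow:

```python
# Inversion of control: the framework owns the event loop and calls back
# into application-supplied handlers, instead of the application calling
# into a library and driving the flow itself.

class MenuFramework:
    def __init__(self):
        self.handlers = {}

    def register(self, command, handler):
        self.handlers[command] = handler   # extension point for custom code

    def run(self, events):
        # The framework, not the application, decides when handlers run.
        return [self.handlers[e]() for e in events if e in self.handlers]

# Custom application code only supplies handlers:
framework = MenuFramework()
framework.register("open", lambda: "opening file")
framework.register("quit", lambda: "shutting down")

results = framework.run(["open", "quit"])
```

In a GUI or web framework the `run` loop would be driven by user input or incoming requests, and `register` corresponds to wiring up event handlers or routes.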
https://en.wikipedia.org/wiki/Nemawashi
Nemawashi (根回し) is an informal Japanese business process of quietly laying the foundation for some proposed change or project by talking to the people concerned and gathering support and feedback before a formal announcement. It is considered an important element in any major change in the Japanese business environment before any formal steps are taken. Successful nemawashi enables changes to be carried out with the consent of all sides, avoiding embarrassment. Nemawashi literally translates as "turning the roots", from ne (根, "root") and mawasu (回す, "to turn something, to put something around something else"). Its original meaning was literal: in preparation for transplanting a tree, one would carefully dig around a tree some time before transplanting, and trim the roots to encourage the growth of smaller roots that will help the tree become established in its new location. Nemawashi is often cited as an example of a Japanese word which is difficult to translate effectively, because it is tied so closely to Japanese culture itself, although it is often translated as "laying the groundwork." In Japan, high-ranking people expect to be let in on new proposals prior to an official meeting. If they find out about something for the first time during the meeting, they will feel that they have been ignored, and they may reject it for that reason alone. Thus, it is important to approach these people individually before the meeting. This provides an opportunity to introduce the proposal to them and gauge their reaction. This is also a good chance to hear their input. This process is referred to as nemawashi. See also Lobbying Toyota Production System References External links Kirai, a geek in Japan: Nemawashi Japanese words and phrases Japanese business terms Economy of Japan Lean manufacturing
https://en.wikipedia.org/wiki/Insulating%20concrete%20form
Insulating concrete form or insulated concrete form (ICF) is a system of formwork for reinforced concrete, usually made with a rigid thermal insulation that stays in place as a permanent interior and exterior substrate for walls, floors, and roofs. The forms are interlocking modular units that are dry-stacked (without mortar) and filled with concrete. The units lock together somewhat like Lego bricks and create a form for the structural walls or floors of a building. ICF construction has become commonplace for both low-rise commercial and high-performance residential construction as more stringent energy-efficiency and natural-disaster-resistant building codes are adopted. Development The first expanded polystyrene ICF wall forms were developed in the late 1960s with the expiration of the original patent and the advent of modern foam plastics by BASF. Canadian contractor Werner Gregori filed the first patent for a foam concrete form in 1966, with a block "measuring 16 inches high by 48 inches long with a tongue-and-groove interlock, metal ties, and a waffle-grid core." An early precursor of ICF formwork, however, dates back to 1907, as evidenced by the patent entitled “building-block” by inventor L. R. Franklin. This patent claimed a parallelepiped-shaped brick having a central cylindrical cavity, connected to the upper and lower faces by countersinks. The adoption of ICF construction has steadily increased since the 1970s, though it was initially hampered by lack of awareness, building codes, and confusion caused by many different manufacturers selling slightly different ICF designs rather than focusing on industry standardization. ICF construction is now part of most building codes and accepted in most jurisdictions in the developed world. Construction Insulating concrete forms are manufactured from any of the following materials: Polystyrene foam (most commonly expanded or extruded) Polyurethane foam (including soy-based foam) Cement-bonded wood