https://en.wikipedia.org/wiki/Intel%20LANSpool
LANSPool was network printer administration software developed by Intel. The package was designed specifically for the Novell NetWare network operating system. The software allowed users to share printers and faxes, and allowed administrators to modify LAN printing operations. The software took its name from the acronym for local area network (LAN) and the spooling technique by which computers send information to slow peripherals such as printers. In March 1992, Intel announced that users of version 3.01 of the software might be at risk from the Michelangelo virus, as the manufacturer had found the virus on master copies of the 5¼-inch floppy disks.
https://en.wikipedia.org/wiki/History%20of%20computer%20hardware%20in%20Yugoslavia
The Socialist Federal Republic of Yugoslavia (SFRY) was a socialist country that existed in the second half of the 20th century. Being socialist meant that strict technology import rules and regulations shaped the development of computing in the country, unlike in the Western world. However, since it was a non-aligned country, it had no ties to the Soviet Bloc either. One of the major ideas contributing to the development of any technology in the SFRY was the perceived need to be independent of foreign suppliers for spare parts, which fueled domestic computer development. Development Early computers In the former Yugoslavia there were 30 installed electronic computers at the end of 1962, 56 in 1966, and 95 in 1968. Having received training in European computer centres (Paris 1954 and 1955, Darmstadt 1959, Vienna 1960, Cambridge 1961 and London 1964), engineers from the BK Institute in Vinča and the Mihailo Pupin Institute in Belgrade, led by Prof. Tihomir Aleksić, started a project at the end of the 1950s to design the first "domestic" digital computer. This became the CER line, starting with the model CER-10 in 1960, a computer based primarily on vacuum tubes and electronic relays. By 1964, the CER-20 had been designed and completed as an "electronic bookkeeping machine", as the manufacturer recognized a growing need in the accounting market. This special-purpose trend continued with the release of the CER-22 in 1967, which was intended for on-line "banking" applications. There were more CER models, such as the CER-11, CER-12, and CER-200, but little information is currently available on them. In the late 1970s, "Ei-Niš Računarski Centar" from Niš, Serbia, started assembling H6000 mainframe computers under Honeywell license, mainly for the banking business. The computer initially enjoyed great success, which later led to limited local parts production. In addition, the company produced models such as the H6 and H66 and was alive as late as early 2
https://en.wikipedia.org/wiki/Intel%20Hub%20Architecture
Intel Hub Architecture (IHA), also known as Accelerated Hub Architecture (AHA), was Intel's architecture for the 8xx family of chipsets, starting in 1999 with the Intel 810. It uses a memory controller hub (MCH) that is connected to an I/O controller hub (ICH) via a 266 MB/s bus. The MCH chip supports memory and AGP (replaced by PCI Express in the 9xx series chipsets), while the ICH chip provides connectivity for PCI (revision 2.2 before the ICH5 series and revision 2.3 since the ICH5 series), USB (version 1.1 before the ICH4 series and version 2.0 since the ICH4 series), sound (originally AC'97, with Azalia added in the ICH6 series), IDE hard disks (supplemented by Serial ATA since the ICH5 series, which fully replaced IDE since the ICH8 series for desktops and the ICH9 series for notebooks) and LAN (uncommonly activated on desktop motherboards and notebooks; usually an independent LAN controller was fitted instead of a PHY chip). Intel claimed that, because of the high-speed channel between the sections, the IHA was faster than the earlier northbridge/southbridge design, which hooked all low-speed ports to the PCI bus. The IHA also optimized data transfer based on data type. Next generation Intel Hub Interface 2.0 was employed in Intel's line of E7xxx server chipsets. This new revision allowed for dedicated data paths transferring more than 1.0 GB/s of data to and from the MCH, supporting I/O segments with greater reliability and faster access to high-speed networks. Current status IHA is now considered obsolete and is no longer used, having been superseded by the Direct Media Interface architecture. The Platform Controller Hub (PCH), introduced with the Intel 5 Series chipsets in 2009, provides most of the features previously found in the ICH chips while moving the memory, graphics and PCI Express controllers onto the CPU. This chipset architecture is still used in desktops; in some notebooks it is being replaced by SoC processor designs.
https://en.wikipedia.org/wiki/Loudness%20monitoring
Loudness monitoring of programme levels is needed in radio and television broadcasting, as well as in audio post production. Traditional methods of measuring signal levels, such as the peak programme meter and VU meter, do not give the subjectively valid measure of loudness that many would argue is needed to optimise the listening experience when changing channels or swapping disks. The need for proper loudness monitoring is apparent in the loudness war now found throughout the audio field, and in the extreme compression now applied to programme levels. Loudness meters Meters have been introduced that aim to measure human-perceived loudness by taking account of the equal-loudness contours and other factors, such as audio spectrum, duration, compression and intensity. One such device was developed by CBS Laboratories in the 1980s. Complaints to broadcasters about the intrusive level of interstitial programming (advertisements, commercials) have resulted in projects to develop such meters. Based on loudness metering, many manufacturers have developed real-time audio processors that adjust the audio signal to match a specified target loudness level, preserving volume consistency for listeners at home. EBU Mode meters In August 2010, the European Broadcasting Union published a new metering specification, EBU Tech 3341, which builds on ITU-R BS.1770. To make sure meters from different manufacturers provide the same reading in LUFS units, EBU Tech 3341 specifies the EBU Mode, which includes a momentary (400 ms), a short-term (3 s) and an integrated (from start to stop) meter, together with a set of audio signals to test the meters. See also Audio normalization
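A rough illustration of how an EBU Mode style momentary reading can be computed from block mean squares is sketched below. It assumes a mono signal that has already been K-weighted (the pre-filter stage of ITU-R BS.1770) and omits channel weighting and the gating used for the integrated measurement; the test tone and the block/hop sizes are illustrative choices, not part of the specification text above.

import numpy as np

def momentary_loudness(samples, rate, block_s=0.4, hop_s=0.1):
    # Mean-square loudness of overlapping 400 ms blocks, reported in LUFS as
    # -0.691 + 10*log10(mean square). Assumes the input is already K-weighted.
    block, hop = int(block_s * rate), int(hop_s * rate)
    out = []
    for start in range(0, len(samples) - block + 1, hop):
        z = np.mean(samples[start:start + block] ** 2)   # mean square of the block
        out.append(-0.691 + 10 * np.log10(z + 1e-12))    # loudness of the block in LUFS
    return np.array(out)

# Example: a steady 997 Hz test tone produces a flat momentary-loudness track.
rate = 48000
t = np.arange(0, 2.0, 1.0 / rate)
tone = 0.1 * np.sin(2 * np.pi * 997 * t)
print(momentary_loudness(tone, rate)[:5])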
https://en.wikipedia.org/wiki/Default%20constructor
In computer programming languages, the term default constructor can refer to a constructor that is automatically generated by the compiler in the absence of any programmer-defined constructors (e.g. in Java), and is usually a nullary constructor. In other languages (e.g. in C++) it is a constructor that can be called without having to provide any arguments, irrespective of whether the constructor is auto-generated or user-defined. Note that a constructor with formal parameters can still be called without arguments if default arguments were provided in the constructor's definition. C++ In C++, the standard describes the default constructor for a class as a constructor that can be called with no arguments (this includes a constructor whose parameters all have default arguments). For example:

class MyClass {
public:
    MyClass();  // constructor declared
private:
    int x;
};

MyClass::MyClass() : x(100)  // constructor defined
{
}

int main() {
    MyClass m;  // at runtime, object m is created, and the default constructor is called
}

When allocating memory dynamically, the constructor may be called by adding parentheses after the class name. In a sense, this is an explicit call to the constructor:

int main() {
    MyClass* pointer = new MyClass();  // at runtime, an object is created, and the
                                       // default constructor is called
}

If the constructor does have one or more parameters, but they all have default values, then it is still a default constructor. Remember that each class can have at most one default constructor, either one without parameters, or one all of whose parameters have default values, such as in this case:

#include <string>  // for std::string used below

class MyClass {
public:
    MyClass(int i = 0, std::string s = "");  // constructor declared
private:
    int x;
    int y;
    std::string z;
};

MyClass::MyClass(int i, std::string s)  // constructor defined
{
    x = 100;
    y = i;
    z = s;
}

In C++, default constructors are significant because
https://en.wikipedia.org/wiki/Refinable%20function
In mathematics, in the area of wavelet analysis, a refinable function is a function which fulfils some kind of self-similarity. A function is called refinable with respect to a mask if it satisfies the corresponding refinement equation, also called the dilation equation or two-scale equation. Using the convolution (denoted by a star, *) of a function with a discrete mask and the dilation operator, one can write this more concisely: one obtains the function again if one convolves it with the discrete mask and then scales it back. There is a similarity to iterated function systems and de Rham curves. The operator is linear. A refinable function is an eigenfunction of that operator. Its absolute value is not uniquely defined: if a function is refinable, then every scalar multiple of it is refinable, too. These functions play a fundamental role in wavelet theory as scaling functions. Properties Values at integral points A refinable function is defined only implicitly. It may also be that there are several functions which are refinable with respect to the same mask. If the refinable function has finite support and its values at integer arguments are wanted, then the two-scale equation becomes a system of simultaneous linear equations. Taking the minimum and maximum indices of the non-zero elements of the mask, one obtains a finite system which, using the discretization operator and the transfer matrix of the mask, can be written concisely as a fixed-point equation. This fixed-point equation can be considered as an eigenvector-eigenvalue problem: a finitely supported refinable function can exist only if the transfer matrix has the eigenvalue 1 (although this condition is not sufficient). Values at dyadic points From the values at integral points one can derive the values at dyadic points, i.e. points whose coordinates are integer multiples of a power of 1/2. The star denotes the convolution of a discrete filter with a function. With this step one can compute the values at the half-integer points. By replacing iterated
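As a concrete illustration of the integer-value computation described above, the sketch below sets up a matrix whose entries are mask coefficients at index 2n - m and extracts its eigenvector for eigenvalue 1. It assumes the normalization convention phi(x) = sum_k a_k phi(2x - k) with the mask coefficients summing to 2; the hat-function (linear B-spline) mask in the example is an illustrative choice, not the only possible convention.

import numpy as np

def integer_values(mask):
    # Values of a finitely supported refinable function at the integers,
    # obtained as the eigenvector (eigenvalue 1) of M[n, m] = a[2n - m].
    n = len(mask) - 1                       # support of phi is taken as [0, n]
    a = {k: c for k, c in enumerate(mask)}  # mask coefficients indexed from 0
    M = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        for j in range(n + 1):
            M[i, j] = a.get(2 * i - j, 0.0)
    w, v = np.linalg.eig(M)
    k = np.argmin(abs(w - 1.0))             # pick the eigenvalue closest to 1
    vec = np.real(v[:, k])
    return vec / vec.sum()                  # normalize so the values sum to 1 (one common convention)

print(integer_values([0.5, 1.0, 0.5]))      # hat function: phi(0), phi(1), phi(2) = 0, 1, 0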
https://en.wikipedia.org/wiki/Essentially%20surjective%20functor
In mathematics, specifically in category theory, a functor is essentially surjective (or dense) if each object of is isomorphic to an object of the form for some object of . Any functor that is part of an equivalence of categories is essentially surjective. As a partial converse, any full and faithful functor that is essentially surjective is part of an equivalence of categories. Notes
https://en.wikipedia.org/wiki/Eikonal%20equation
An eikonal equation (from Greek εἰκών, image) is a non-linear first-order partial differential equation that is encountered in problems of wave propagation. The classical eikonal equation in geometric optics is a differential equation of the form where lies in an open subset of , is a positive function, denotes the gradient, and is the Euclidean norm. The function is given and one seeks solutions . In the context of geometric optics, the function is the refractive index of the medium. More generally, an eikonal equation is an equation of the form where is a function of variables. Here the function is given, and is the solution. If , then equation () becomes (). Eikonal equations naturally arise in the WKB method and the study of Maxwell's equations. Eikonal equations provide a link between physical (wave) optics and geometric (ray) optics. One fast computational algorithm to approximate the solution to the eikonal equation is the fast marching method. History The term "eikonal" was first used in the context of geometric optics by Heinrich Bruns. However, the actual equation appears earlier in the seminal work of William Rowan Hamilton on geometric optics. Physical interpretation Continuous shortest-path problems Suppose that is an open set with suitably smooth boundary . The solution to the eikonal equation can be interpreted as the minimal amount of time required to travel from to , where is the speed of travel, and is an exit-time penalty. (Alternatively this can be posed as a minimal cost-to-exit by making the right-side and an exit-cost penalty.) In the special case when , the solution gives the signed distance from . By assuming that exists at all points, it is easy to prove that corresponds to a time-optimal control problem using Bellman's optimality principle and a Taylor expansion. Unfortunately, it is not guaranteed that exists at all points, and more advanced techniques are necessary to prove this. This led to the d
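Since the fast marching method is mentioned above as a standard way to approximate eikonal solutions numerically, here is a minimal first-order sketch of it on a uniform 2D grid, solving |grad T| = 1/f with the usual upwind update and a min-heap of trial values. The grid size, speed field and source point in the example are illustrative assumptions, not taken from the article.

import heapq
import numpy as np

def fast_marching(speed, sources, h=1.0):
    # Minimal first-order fast marching sketch: T = 0 at the source cells,
    # standard two-neighbour upwind update, Dijkstra-like acceptance order.
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    accepted = np.zeros((ny, nx), dtype=bool)
    heap = []
    for (i, j) in sources:
        T[i, j] = 0.0
        heapq.heappush(heap, (0.0, i, j))

    def update(i, j):
        # smallest current neighbour value along each axis
        a = min(T[i - 1, j] if i > 0 else np.inf, T[i + 1, j] if i < ny - 1 else np.inf)
        b = min(T[i, j - 1] if j > 0 else np.inf, T[i, j + 1] if j < nx - 1 else np.inf)
        a, b = min(a, b), max(a, b)
        hf = h / speed[i, j]
        if b - a >= hf:                      # only one direction contributes
            return a + hf
        return 0.5 * (a + b + np.sqrt(2 * hf * hf - (a - b) ** 2))

    while heap:
        t, i, j = heapq.heappop(heap)
        if accepted[i, j]:
            continue
        accepted[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx and not accepted[ni, nj]:
                new_t = update(ni, nj)
                if new_t < T[ni, nj]:
                    T[ni, nj] = new_t
                    heapq.heappush(heap, (new_t, ni, nj))
    return T

# Unit speed from a single source: T approximates the Euclidean distance.
T = fast_marching(np.ones((50, 50)), sources=[(25, 25)])
print(T[25, 45])   # roughly 20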
https://en.wikipedia.org/wiki/Alignment%20level
The alignment level in an audio signal chain or on an audio recording is a defined anchor point that represents a reasonable or typical level. Analogue In analogue systems, alignment level is commonly 0 dBu (0.775 volts RMS) in broadcast chains, and in professional audio it is commonly 0 VU, which is +4 dBu or 1.228 volts RMS. Under normal situations, the 0 VU reference allows for a headroom of 18 dB or more above the reference level without significant distortion. This is largely due to the use of slow-responding VU meters in almost all analogue professional audio equipment, which by design and by specification respond to an average level, not peak levels. Digital In digital systems alignment level is commonly −18 dBFS (18 dB below digital full scale), in accordance with EBU recommendations. Digital equipment must use peak-reading metering systems to avoid severe digital distortion caused by the signal going beyond digital full scale. 24-bit original or master recordings commonly have an alignment level of −24 dBFS to allow extra headroom, which can then be reduced to match the available headroom of the final medium by audio level compression. FM broadcasts usually have only 9 dB of headroom, as recommended by the EBU, but digital broadcasts, which could operate with 18 dB of headroom given their low noise floor even in difficult reception areas, currently operate in a state of confusion, with some transmitting at maximum level while others operate at a much lower level even though they carry material that has been compressed for compatibility with the lower dynamic range of FM transmissions. EBU In EBU documents, alignment level is defined as −18 dBFS and is the level of the alignment signal, a 1 kHz sine tone for analogue applications and a 997 Hz tone for digital applications. Motivation Using alignment level rather than maximum permitted level as the reference point allows more sensible headroom management throughout the audio signal chain; compression happens only whe
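The dBu figures quoted above follow directly from the 0.775 V reference; a small sketch of the arithmetic (the function names and rounding are illustrative only):

def dbu_to_volts(dbu):
    # Convert a dBu level to volts RMS (0 dBu = 0.775 V RMS).
    return 0.775 * 10 ** (dbu / 20)

def headroom_db(alignment_dbfs, max_dbfs=0.0):
    # Headroom between an alignment level and digital full scale, in dB.
    return max_dbfs - alignment_dbfs

print(round(dbu_to_volts(0), 3))    # 0.775 V RMS  (broadcast alignment level)
print(round(dbu_to_volts(4), 3))    # 1.228 V RMS  (0 VU in professional audio)
print(headroom_db(-18))             # 18 dB of headroom below full scale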
https://en.wikipedia.org/wiki/Pindjur
Pindjur or pinjur or pinđur is a relish, commonly used as a summer spread. Pindjur is commonly prepared in Bosnia and Herzegovina, Croatia, Bulgaria, Serbia and North Macedonia. The traditional ingredients include red bell peppers, tomatoes, garlic, vegetable oil, salt, and often eggplant. Pindjur is similar to ajvar, but the latter is smoother, usually has a stronger taste, and is rarely made with eggplant. In some regions the words are used interchangeably. The creation of this traditional relish is a rather long process which involves baking some of the ingredients for hours, as well as roasting the peppers and peeling them. See also: Kyopolou, a similar relish in Bulgarian and Turkish cuisines; Ljutenica, a similar relish in Bulgarian, Macedonian and Serbian cuisines; Zacuscă, a similar relish in Romanian cuisine; Malidzano; List of eggplant dishes; List of dips; List of sauces; List of spreads
https://en.wikipedia.org/wiki/Statistical%20machine%20translation
Statistical machine translation (SMT) was a machine translation approach that superseded the earlier rule-based approach, which required an explicit description of each and every linguistic rule, was costly to produce, and often did not generalize to other languages. Since 2003, the statistical approach itself has been gradually superseded by the deep learning-based neural network approach. The first ideas of statistical machine translation were introduced by Warren Weaver in 1949, including the idea of applying Claude Shannon's information theory. Statistical machine translation was re-introduced in the late 1980s and early 1990s by researchers at IBM's Thomas J. Watson Research Center. Basis The idea behind statistical machine translation comes from information theory. A document is translated according to the probability distribution p(e|f) that a string e in the target language (for example, English) is the translation of a string f in the source language (for example, French). The problem of modeling the probability distribution has been approached in a number of ways. One approach which lends itself well to computer implementation is to apply Bayes' theorem, that is p(e|f) ∝ p(f|e) p(e), where the translation model p(f|e) is the probability that the source string is the translation of the target string, and the language model p(e) is the probability of seeing that target-language string. This decomposition is attractive as it splits the problem into two subproblems. Finding the best translation is done by picking the one that gives the highest probability, i.e. the e that maximizes p(f|e) p(e). For a rigorous implementation of this one would have to perform an exhaustive search by going through all strings in the native language. Performing the search efficiently is the work of a machine translation decoder that uses the foreign string, heuristics and other methods to limit the search space while keeping acceptable quality. This trade-off between quality and time usage can also be found in speech recogn
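To make the noisy-channel decomposition concrete, the toy sketch below scores candidate translations by log p(f|e) + log p(e) and picks the argmax. The candidate strings and probability values are invented purely for illustration; a real decoder searches a vast hypothesis space using phrase tables and much larger language models.

import math

def best_translation(source, candidates, tm, lm):
    # Pick the target-language candidate e maximizing log p(f|e) + log p(e).
    def score(e):
        return math.log(tm.get((source, e), 1e-12)) + math.log(lm.get(e, 1e-12))
    return max(candidates, key=score)

tm = {("maison bleue", "blue house"): 0.6,       # p(f|e): translation model (toy values)
      ("maison bleue", "house blue"): 0.4}
lm = {"blue house": 0.01, "house blue": 0.0001}  # p(e): language model (toy values)

print(best_translation("maison bleue", ["blue house", "house blue"], tm, lm))
# -> "blue house": the language model prefers the fluent target-language order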
https://en.wikipedia.org/wiki/Tacet
Tacet is Latin, translating literally into English as "(it) is silent". It is a musical term indicating that an instrument or voice does not sound, also known as a rest. In vocal polyphony and in orchestral scores, it usually indicates a long period of time, typically an entire movement. In more modern music such as jazz, tacet tends to mark considerably shorter breaks. Multirests, or multiple-measure rests, are rests which last multiple measures (or multiple rests, each of which lasts an entire measure). It was common for early symphonies to leave out the brass or percussion in certain movements, especially in slow (second) movements, and this is the instruction given in the parts for the player to wait until the end of the movement. It is also commonly used in accompaniment music to indicate that the instrument does not play on a certain run through a portion of the music, e.g. "Tacet 1st time." The phrase tacet al fine is used to indicate that the performer should remain silent for the remainder of the piece (or portion thereof), and need not, for example, count rests. Tacet may be appropriate when a particular instrument/voice/section "is to rest for an entire section, movement, or composition." "Partial rests, of course, in every case must be written in. Even though it means 'silent,' the term tacet...is not a wise substitution for a lengthy rest within a movement...The term tacet, therefore, should be used only to indicate that a player rests throughout an entire movement." "N.C." ("no chord") is often used in guitar tablature or chord charts to indicate tacets, rests, or caesuras in the accompaniment. Uses of tacet The earliest known usage of the term is 1724. A unique usage of this term is in John Cage's 1952 composition 4′33″. Tacet is indicated for all three movements, for all instruments. The piece's first performance lasted a total of 4 minutes and 33 seconds, without a note being played. See also Latin influence in English
https://en.wikipedia.org/wiki/Turgay%20Uzer
Ahmet Turgay Uzer is a Turkish-born American theoretical physicist and nature photographer. He is Regents' Professor Emeritus at the Georgia Institute of Technology, succeeding the physicist Joseph Ford. He has contributed significantly to the fields of atomic and molecular physics, nonlinear dynamics and chaos. His research on the interplay between quantum dynamics and classical mechanics, in the context of chaos, is considered novel in molecular and theoretical physics and chemistry. Academic career Turgay Uzer completed his bachelor's degree at Turkey's prestigious Middle East Technical University. According to the Harvard University Library, his doctoral thesis was entitled "Photon and electron interactions with diatomic molecules." He defended his dissertation and graduated from Harvard University in 1979. Before joining Georgia Tech in 1985 as an associate professor, he worked as a research fellow at the University of Oxford (1979–81) and Caltech (1982–83), and as a research associate at the University of Colorado (1983–85). Currently, he is a faculty member with the Center for Nonlinear Science and a full professor of physics at Georgia Tech. His research areas are quite broad, but he has focused on the dynamics of intermolecular energy transfer, reaction dynamics, quantal manifestations of classical mechanics, quantization of nonlinear systems, computational physics, molecular physics, and applied mathematics. Awards Uzer was an Alexander von Humboldt Foundation Fellow in 1993–1994 at the Max Planck Institute, Munich. He was also awarded the Science Award of the Scientific and Technological Research Council of Turkey (TÜBİTAK) in 1998 for his contributions to physics. Selected publications Books The Physics and Chemistry of Wave Packets, with John Yeazell; Lecture Notes on Atomic and Molecular Physics, with Şakir Erkoç. Some of the seminal papers Uzer has more than 80 refereed journal articles, in a number of highly r
https://en.wikipedia.org/wiki/Hovm%C3%B6ller%20diagram
A Hovmöller diagram is a common way of plotting meteorological data to highlight the behavior of waves, particularly tropical waves. The axes of Hovmöller diagrams depict changes over time of scalar quantities such as temperature, density, and other values of constituents in the atmosphere or ocean, such as depth, height, or pressure. Typically in that case, time is recorded along the abscissa, or x-axis, while 'vertical' values (of depth, height, pressure, etc.) are plotted along the ordinate, or y-axis. The alternate orientation of axes may also be used, as a Hovmöller diagram may be plotted for longitude or latitude on the abscissa and for (advancing) time on the ordinate; then the contour values of a named physical field may be presented through color or shading. The Hovmöller diagram was introduced by Ernest Aabo Hovmöller (1912-2008), a Danish meteorologist, in a paper published in 1949. See also Tropical wave Heat map
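A minimal plotting sketch of the longitude-time orientation described above is given below. The synthetic field (an eastward-moving wavenumber-4 wave) and the axis ranges are invented for illustration, not real observational data.

import numpy as np
import matplotlib.pyplot as plt

lon = np.linspace(0, 360, 181)                   # degrees of longitude (abscissa)
days = np.linspace(0, 30, 121)                   # time in days (ordinate)
LON, T = np.meshgrid(lon, days)
field = np.sin(2 * np.pi * (LON / 90 - T / 10))  # wave moving east at 9 degrees/day

plt.contourf(lon, days, field, levels=21, cmap="RdBu_r")
plt.xlabel("Longitude (degrees east)")
plt.ylabel("Time (days)")
plt.title("Hovmoller diagram of a synthetic eastward-propagating wave")
plt.colorbar(label="Anomaly (arbitrary units)")
plt.show()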
https://en.wikipedia.org/wiki/Lysimeter
A lysimeter (from Greek λύσις (loosening) and the suffix -meter) is a measuring device which can be used to measure the amount of actual evapotranspiration which is released by plants (usually crops or trees). By recording the amount of precipitation that an area receives and the amount lost through the soil, the amount of water lost to evapotranspiration can be calculated. Lysimeters are of two types: weighing and non-weighing. General Usage A lysimeter is most accurate when vegetation is grown in a large soil tank which allows the rainfall input and water lost through the soil to be easily calculated. The amount of water lost by evapotranspiration can be worked out by calculating the difference between the weight before and after the precipitation input. For trees, lysimeters can be expensive and are a poor representation of conditions outside of a laboratory or orchard, as it would be impossible to use a lysimeter to calculate the water balance for a whole forest. But for farm crops, a lysimeter can represent field conditions well since the device is installed and used outside the laboratory. A weighing lysimeter, for example, reveals the amount of water crops use by constantly weighing a huge block of soil in a field to detect losses of soil moisture (as well as any gains from precipitation). An example of their use is in the development of new xerophytic apple tree cultivars in order to adapt to changing climate patterns of reduced rainfall in traditional apple growing regions. The University of Arizona's Biosphere 2 built the world's largest weighing lysimeters using a mixture of thirty 220,000 and 333,000 lb-capacity column load cells from Honeywell, Inc. as part of its Landscape Evolution Observatory project. Use in whole plant physiological phenotyping systems To date, physiology-based, high-throughput phenotyping systems (also known as plant functional phenotyping systems), which, used in combination with soil–plant–atmosphere continuum (SPAC) meas
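The water-balance calculation described above can be written out directly. The sketch below follows the usual bookkeeping ET = precipitation + irrigation - drainage - change in storage, with the storage change taken from the weighing record (a 1 kg change in mass over 1 m² of surface corresponds to 1 mm of water). The numbers in the example are invented for illustration.

def evapotranspiration_mm(precip_mm, irrigation_mm, drainage_mm,
                          mass_change_kg, area_m2):
    # Change in stored water, in mm: 1 kg of water per m^2 equals 1 mm depth.
    storage_change_mm = mass_change_kg / area_m2
    return precip_mm + irrigation_mm - drainage_mm - storage_change_mm

# Example: 5 mm of rain, no irrigation, 1 mm drained, and the 2 m^2 lysimeter
# lost 6 kg of mass over the day (a storage change of -3 mm).
print(evapotranspiration_mm(5.0, 0.0, 1.0, -6.0, 2.0))   # -> 7.0 mm/day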
https://en.wikipedia.org/wiki/Melanie%20Mitchell
Melanie Mitchell is an American scientist. She is the Davis Professor of Complexity at the Santa Fe Institute. Her major work has been in the areas of analogical reasoning, complex systems, genetic algorithms and cellular automata, and her publications in those fields are frequently cited. She received her PhD in 1990 from the University of Michigan under Douglas Hofstadter and John Holland, for which she developed the Copycat cognitive architecture. She is the author of "Analogy-Making as Perception", essentially a book about Copycat. She has also critiqued Stephen Wolfram's A New Kind of Science and showed that genetic algorithms could find better solutions to the majority problem for one-dimensional cellular automata. She is the author of An Introduction to Genetic Algorithms, a widely known introductory book published by MIT Press in 1996. She is also author of Complexity: A Guided Tour (Oxford University Press, 2009), which won the 2010 Phi Beta Kappa Science Book Award, and Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus, and Giroux). Life Melanie Mitchell was born and raised in Los Angeles, California. She attended Brown University in Providence, Rhode Island, where she studied physics, astronomy and mathematics. Her interest in artificial intelligence was spurred in college when she read Douglas Hofstadter's Gödel, Escher, Bach. After graduating, she worked as a high school math teacher in New York City. Deciding she "needed to be" in artificial intelligence, Mitchell tracked down Douglas Hofstadter, repeatedly asking to become one of his graduate students. After finding Hofstadter's phone number at MIT, a determined Mitchell made several calls, all of which went unanswered. She was ultimately successful in reaching Hofstadter after calling at 11 p.m., and secured an internship working on the development of Copycat. In the fall of 1984, Mitchell followed Hofstadter to the University of Michigan, submitting a "last minute" applica
https://en.wikipedia.org/wiki/Victor%20Kolyvagin
Victor Alexandrovich Kolyvagin (born 11 March 1955) is a Russian mathematician who wrote a series of papers on Euler systems, leading to breakthroughs on the Birch and Swinnerton-Dyer conjecture and Iwasawa's conjecture for cyclotomic fields. His work also influenced Andrew Wiles's work on Fermat's Last Theorem. Career Kolyvagin received his Ph.D. in Mathematics in 1981 from Moscow State University, where his advisor was Yuri I. Manin. He then worked at the Steklov Institute of Mathematics in Moscow until 1994. Since 1994 he has been a professor of mathematics in the United States. He was a professor at Johns Hopkins University until 2002, when he became the first person to hold the Mina Rees Chair in mathematics at the Graduate Center of the City University of New York. Awards In 1990 he received the of the USSR Academy of Sciences.
https://en.wikipedia.org/wiki/Frederick%20Jelinek
Frederick Jelinek (18 November 1932 – 14 September 2010) was a Czech-American researcher in information theory, automatic speech recognition, and natural language processing. He is well known for his oft-quoted statement, "Every time I fire a linguist, the performance of the speech recognizer goes up". Jelinek was born in Czechoslovakia before World War II and emigrated with his family to the United States in the early years of the communist regime. He studied engineering at the Massachusetts Institute of Technology and taught for 10 years at Cornell University before accepting a job at IBM Research. In 1961, he married the Czech screenwriter Milena Jelinek. At IBM, his team advanced approaches to computer speech recognition and machine translation. After IBM, he went on to head the Center for Language and Speech Processing at Johns Hopkins University for 17 years, where he was still working on the day he died. Personal life Jelinek was born on November 18, 1932, as Bedřich Jelínek in Kladno to Vilém and Trude Jelínek. His father was Jewish; his mother was born in Switzerland to Czech Catholic parents and had converted to Judaism. Jelínek senior, a dentist, had planned early to escape the Nazi occupation and flee to England; he arranged for a passport, visa, and the shipping of his dentistry materials. The couple planned to send their son to an English private school. However, Vilém decided to stay at the last minute and was eventually sent to the Theresienstadt concentration camp, where he died in 1945. The family was forced to move to Prague in 1941, but Frederick, his sister and mother, thanks to the latter's background, escaped the concentration camps. After the war, Jelinek entered the gymnasium, despite having missed several years of schooling because the education of Jewish children had been forbidden since 1942. His mother, anxious that her son should get a good education, made great efforts for their emigration, especially when it became clear he would not be allowed t
https://en.wikipedia.org/wiki/Shannon%20number
The Shannon number, named after the American mathematician Claude Shannon, is a conservative lower bound of the game-tree complexity of chess of 10^120, based on an average of about 10^3 possibilities for a pair of moves consisting of a move for White followed by a move for Black, and a typical game lasting about 40 such pairs of moves. Shannon's calculation Shannon showed a calculation for the lower bound of the game-tree complexity of chess, resulting in about 10^120 possible games, to demonstrate the impracticality of solving chess by brute force, in his 1950 paper "Programming a Computer for Playing Chess". (This influential paper introduced the field of computer chess.) Shannon also estimated the number of possible positions, "of the general order of 64!/(32!(8!)^2(2!)^6), or roughly 10^43". This includes some illegal positions (e.g., pawns on the first rank, both kings in check) and excludes legal positions following captures and promotions. After each player has moved a piece five times (10 ply) there are 69,352,859,712,417 possible games that could have been played. Tighter bounds Upper Taking Shannon's numbers into account, Victor Allis calculated an upper bound of 5×10^52 for the number of positions, and estimated the true number to be about 10^50. Recent results improve that estimate, by proving an upper bound of 8.7×10^45, and showing an upper bound of 4×10^37 in the absence of promotions. Lower Allis also estimated the game-tree complexity to be at least 10^123, "based on an average branching factor of 35 and an average game length of 80". As a comparison, the number of atoms in the observable universe, to which it is often compared, is roughly estimated to be 10^80. Accurate estimates John Tromp and Peter Österlund estimated the number of legal chess positions with a 95% confidence level at , based on an efficiently computable bijection between integers and chess positions. Number of sensible chess games As a comparison to the Shannon number, if chess is analyz
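The arithmetic behind the bound is simple enough to check directly; the short sketch below just reproduces the figures quoted above (about 10^3 move pairs, about 40 pairs per game, and the 10^80 comparison value for atoms in the observable universe).

shannon_number = (10 ** 3) ** 40               # 10^3 move pairs, 40 pairs per game
atoms_in_observable_universe = 10 ** 80

print(shannon_number == 10 ** 120)                      # True
print(shannon_number // atoms_in_observable_universe)   # the bound is 10^40 times larger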
https://en.wikipedia.org/wiki/Cosmopolitan%20distribution
In biogeography, cosmopolitan distribution is the term for the range of a taxon that extends across all of (or most of) the world, in appropriate habitats; most cosmopolitan species are known to be highly adaptable to a range of climatic and environmental conditions, though this is not always so. Killer whales (orcas) are among the most well-known cosmopolitan species on the planet, as they maintain several different resident and transient (migratory) populations in every major oceanic body on Earth, from the Arctic Circle to Antarctica and every coastal and open-water region in-between. Such a taxon (usually a species) is said to have a cosmopolitan distribution, or exhibit cosmopolitanism, as a species; another example, the rock dove (commonly referred to as a 'pigeon'), in addition to having been bred domestically for centuries, now occurs in most urban areas across the world. The extreme opposite of a cosmopolitan species is an endemic (native) species, or one that is found only in a single geographical location. Endemism usually results in organisms with specific adaptations to one particular climate or region, and the species would likely face challenges if placed in a different environment. There are far more examples of endemic species than cosmopolitan species; one example being the snow leopard, a rare feline species found only in Central Asian mountain ranges, an environment the cats have adapted to over millennia. Qualification The caveat "in appropriate habitat" is used to qualify the term "cosmopolitan distribution", excluding in most instances polar regions, extreme altitudes, oceans, deserts, or small, isolated islands. For example, the housefly is highly cosmopolitan, yet is neither oceanic nor polar in its distribution. Related terms and concepts The term pandemism also is in use, but not all authors are consistent in the sense in which they use the term; some speak of pandemism mainly in referring to diseases and pandemics, and some as a term i
https://en.wikipedia.org/wiki/National%20Software%20Testing%20Laboratories
National Software Testing Laboratories (NSTL) was established by serial entrepreneur Joseph Segel in 1983 to test computer software. The company provides certification (such as WHQL and Microsoft Windows Mobile certification), quality assurance, and benchmarking services. NSTL was acquired by Intertek in 2007.
https://en.wikipedia.org/wiki/WIN-35428
(–)-2-β-Carbomethoxy-3-β-(4-fluorophenyl)tropane (β-CFT, WIN 35,428) is a stimulant drug used in scientific research. CFT is a phenyltropane based dopamine reuptake inhibitor and is structurally derived from cocaine. It is around 3-10x more potent than cocaine and lasts around 7 times longer based on animal studies. While the naphthalenedisulfonate salt is the most commonly used form in scientific research due to its high solubility in water, the free base and hydrochloride salts are known compounds and can also be produced. The tartrate is another salt form that is reported. Uses CFT was first reported by Clarke and co-workers in 1973. This drug is known to function as a "positive reinforcer" (although it is less likely to be self-administered by rhesus monkeys than cocaine). Tritiated CFT is frequently used to map binding of novel ligands to the DAT, although the drug also has some SERT affinity. Radiolabelled forms of CFT have been used in humans and animals to map the distribution of dopamine transporters in the brain. CFT was found to be particularly useful for this application as a normal fluorine atom can be substituted with the radioactive isotope 18F which is widely used in Positron emission tomography. Another radioisotope-substituted analog [11C]WIN 35,428 (where the carbon atom of either the N-methyl group, or the methyl from the 2-carbomethoxy group of CFT, has been replaced with 11C) is now more commonly used for this application, as it is quicker and easier in practice to make radiolabelled CFT by methylating nor-CFT or 2-desmethyl-CFT than by reacting methylecgonidine with parafluorophenylmagnesium bromide, and also avoids the requirement for a licence to work with the restricted precursor ecgonine. CFT is about as addictive as cocaine in animal studies, but is taken less often due to its longer duration of action. Potentially this could make it a suitable drug to be used as a substitute for cocaine, in a similar manner to how methadone is used a
https://en.wikipedia.org/wiki/Applied%20mechanics
Applied mechanics is the branch of science concerned with the motion of any substance that can be experienced or perceived by humans without the help of instruments. In short, when mechanics concepts move beyond the purely theoretical and are applied and executed, general mechanics becomes applied mechanics. It is this distinction that makes applied mechanics essential to understanding practical everyday life. It has numerous applications in a wide variety of fields and disciplines, including but not limited to structural engineering, astronomy, oceanography, meteorology, hydraulics, mechanical engineering, aerospace engineering, nanotechnology, structural design, earthquake engineering, fluid dynamics, planetary sciences, and other life sciences. Connecting research across numerous disciplines, applied mechanics plays an important role in both science and engineering. Pure mechanics describes the response of bodies (solids and fluids) or systems of bodies to external forces, whether the body begins in a state of rest or of motion. Applied mechanics bridges the gap between physical theory and its application to technology. Applied mechanics comprises two main categories: classical mechanics, the study of the mechanics of macroscopic solids, and fluid mechanics, the study of the mechanics of macroscopic fluids. Each branch contains its own subcategories as well. Classical mechanics, divided into statics and dynamics, is further subdivided, with statics split into the study of rigid bodies and rigid structures, and dynamics split into kinematics and kinetics. Like classical mechanics, fluid mechanics is also divided into two sections: statics and dynamics. Within the practical sciences, applied mechanics is useful in formulating new ideas and theories, discovering and interpreting phenomena, and developing experimental and computational tools.
https://en.wikipedia.org/wiki/Motion%20planning
Motion planning, also called path planning (and also known as the navigation problem or the piano mover's problem), is the computational problem of finding a sequence of valid configurations that moves an object from a source to a destination. The term is used in computational geometry, computer animation, robotics and computer games. For example, consider navigating a mobile robot inside a building to a distant waypoint. It should execute this task while avoiding walls and not falling down stairs. A motion planning algorithm would take a description of these tasks as input, and produce the speed and turning commands sent to the robot's wheels. Motion planning algorithms might address robots with a larger number of joints (e.g., industrial manipulators), more complex tasks (e.g. manipulation of objects), different constraints (e.g., a car that can only drive forward), and uncertainty (e.g. imperfect models of the environment or robot). Motion planning has several robotics applications, such as autonomy, automation, and robot design in CAD software, as well as applications in other fields, such as animating digital characters, video games, architectural design, robotic surgery, and the study of biological molecules. Concepts A basic motion planning problem is to compute a continuous path that connects a start configuration S and a goal configuration G, while avoiding collision with known obstacles. The robot and obstacle geometry is described in a 2D or 3D workspace, while the motion is represented as a path in (possibly higher-dimensional) configuration space. Configuration space A configuration describes the pose of the robot, and the configuration space C is the set of all possible configurations. For example: If the robot is a single point (zero-sized) translating in a 2-dimensional plane (the workspace), C is a plane, and a configuration can be represented using two parameters (x, y). If the robot is a 2D shape that can translate and rotate, the workspace is still 2-dimen
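As a toy illustration of planning over a discretized configuration space, the sketch below treats a point robot on a small occupancy grid: free cells are the valid configurations, and breadth-first search returns a shortest collision-free sequence of configurations from start to goal. The grid and the start/goal cells are invented for the example; practical planners work in continuous, higher-dimensional configuration spaces, often with sampling-based methods.

from collections import deque

def plan(grid, start, goal):
    # Shortest sequence of free cells from start to goal, 4-connected moves.
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None   # no collision-free path exists

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],   # 1 marks an obstacle cell
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(plan(grid, (0, 0), (3, 3)))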
https://en.wikipedia.org/wiki/Dolbear%27s%20law
Dolbear's law states the relationship between the air temperature and the rate at which crickets chirp. It was formulated by Amos Dolbear and published in 1897 in an article called "The Cricket as a Thermometer". Dolbear's observations on the relation between chirp rate and temperature were preceded by an 1881 report by Margarette W. Brooks, although this paper went unnoticed until after Dolbear's publication. Dolbear did not specify the species of cricket which he observed, although subsequent researchers assumed it to be the snowy tree cricket, Oecanthus niveus. However, the snowy tree cricket was misidentified as O. niveus in early reports, and the correct scientific name for this species is Oecanthus fultoni. The chirping of the more common field crickets is not as reliably correlated to temperature—their chirping rate varies depending on other factors such as age and mating success. In many cases, though, Dolbear's formula is a close enough approximation for field crickets, too. Dolbear expressed the relationship as the following formula, which provides a way to estimate the temperature T_F in degrees Fahrenheit from the number of chirps per minute N_60: T_F = 50 + (N_60 − 40) / 4. This formula is accurate to within a degree or so when applied to the chirping of the field cricket. Counting can be sped up by simplifying the formula and counting the number of chirps produced in 15 seconds, N_15: T_F = 40 + N_15. Reformulated to give the temperature T_C in degrees Celsius (°C), it is: T_C = 10 + (N_60 − 40) / 7. A shortcut method for degrees Celsius is to count the number of chirps in 8 seconds, N_8, and add 5 (this is fairly accurate between 5 and 30 °C): T_C = 5 + N_8. The above formulae are expressed in terms of integers to make them easier to remember—they are not intended to be exact. In math classes Math textbooks sometimes cite this as a simple example of where mathematical models break down, because at temperatures outside the range in which crickets live, the number of chirps is zero, as the crickets are dead. You can apply algebra to the e
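A quick implementation of the chirp-count rules as stated above; the function names and example counts are illustrative, and the formulas are rough field approximations rather than exact laws.

def fahrenheit_from_chirps_per_minute(n60):
    # T_F = 50 + (N_60 - 40) / 4
    return 50 + (n60 - 40) / 4

def fahrenheit_from_15s_count(n15):
    # T_F = 40 + N_15
    return 40 + n15

def celsius_from_8s_count(n8):
    # T_C = 5 + N_8
    return 5 + n8

print(fahrenheit_from_chirps_per_minute(120))   # 70.0 degrees F
print(fahrenheit_from_15s_count(30))            # 70 degrees F (120 chirps/min)
print(celsius_from_8s_count(16))                # 21 degrees C (about 70 F)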
https://en.wikipedia.org/wiki/Sclerotium
A sclerotium (plural: sclerotia) is a compact mass of hardened fungal mycelium containing food reserves. One role of sclerotia is to survive environmental extremes. In some higher fungi such as ergot, sclerotia become detached and remain dormant until favorable growth conditions return. Sclerotia initially were mistaken for individual organisms and described as separate species until Louis René Tulasne proved in 1853 that sclerotia are only a stage in the life cycle of some fungi. Further investigation showed that this stage appears in many fungi belonging to many diverse groups. Sclerotia are important in the understanding of the life cycle and reproduction of fungi, as a food source, as medicine (for example, ergotamine), and in agricultural blight management. Examples of fungi that form sclerotia are ergot (Claviceps purpurea), Polyporus tuberaster, Psilocybe mexicana, Agroathelia delphinii and many species in the Sclerotiniaceae. Although not fungal, the plasmodium of slime molds can form sclerotia in adverse environmental conditions. Description Sclerotia are often composed of a thick, dense shell with thick and dark cells and a core of thin colorless cells. The hyphae of sclerotia are rich in stored food reserves, especially oil. They contain a very small amount of water (5–10%) and can survive in a dry environment for several years without losing the ability to grow. In most cases, the sclerotium consists exclusively of fungal hyphae, whereas some may consist partly of a plexus of fungal hyphae and partly of intervening tissues of the substrate (ergot, Sclerotinia). In favorable conditions, sclerotia germinate to form fruiting bodies (in basidiomycetes) or mycelium with conidia (in imperfect fungi). Sclerotia range in size from a fraction of a millimeter to a few tens of centimeters; for example, Laccocephalum mylittae has sclerotia with diameters up to 30 cm and weighing up to 20 kg. Sclerotia resemble cleistothecia in both their morphology and the genetic control of
https://en.wikipedia.org/wiki/Allergist
An allergist is a physician specially trained to manage and treat allergies, asthma and other allergic diseases. They may also be called immunologists. Becoming an allergist Becoming an allergist/immunologist requires completion of at least nine years of training. After completing medical school and graduating with a medical degree, a physician will undergo three years of training in internal medicine (to become an internist) or pediatrics (to become a pediatrician). Once physicians have finished training in one of these specialties, they must pass the exam of either the American Board of Pediatrics (ABP) or the American Board of Internal Medicine (ABIM). Internists or pediatricians who wish to focus on the sub-specialty of allergy-immunology then complete at least an additional two years of study, called a fellowship, in an allergy/immunology training program. Allergist/immunologists who are listed as ABAI-certified have successfully passed the certifying examination of the American Board of Allergy and Immunology (ABAI) following their fellowship. In the United States, physicians who hold certification by the American Board of Allergy and Immunology (ABAI) have successfully completed an accredited educational program and an evaluation process, including a secure, proctored examination, to demonstrate the knowledge, skills, and experience required for the provision of patient care in allergy and immunology. In the United Kingdom, allergy is a subspecialty of general medicine or pediatrics. After obtaining postgraduate exams (MRCP or MRCPCH respectively), a doctor works for several years as a specialist registrar before qualifying for the General Medical Council specialist register. Allergy services may also be delivered by immunologists. Absence of allergists A 2003 Royal College of Physicians report presented a case for improvement of what were felt to be inadequate allergy services in the UK. In 2006, the House of Lords convened a subcommittee that reported i
https://en.wikipedia.org/wiki/Svara
Svara (Sanskrit: स्वर swara) is a word that connotes simultaneously a breath, a vowel, the sound of a musical note corresponding to its name, and the successive steps of the octave or saptaka. More comprehensively, it is the ancient Indian concept about the complete dimension of musical pitch. Most of the time a svara is identified as both musical note and tone, but a tone is a precise substitute for sur, related to tunefulness. Traditionally, Indians have just seven svaras/notes with short names, e.g. saa, re/ri, ga, ma, pa, dha, ni which Indian musicians collectively designate as saptak or saptaka. It is one of the reasons why svara is considered a symbolic expression for the number seven. Origins and history Etymology The word swara or svara (Sanskrit: स्वर) is derived from the root svr which means "to sound". To be precise, the svara is defined in the Sanskrit nirukta system as: svaryate iti svarah (स्वर्यते इति स्वरः, does breathing, shines, makes sound), svayam raajate iti svarah (स्वयं राजते इति स्वरः, appears on its own) and sva ranjayati iti svarah (स्व रञ्जयति इति स्वरः, that which colours itself in terms of appealing sound). The Kannada word swara and Tamil alphabet or letter suram do not represent a sound, but rather more generally the place of articulation (PoA) (பிறப்பிடம்), where one generates a sound, and the sounds made there can vary in pitch. In the Vedas The word is found in the Vedic literature, particularly the Samaveda, where it means accent and tone, or a musical note, depending on the context. The discussion there focusses on three accent pitch or levels: svarita (sounded, circumflex normal), udatta (high, raised) and anudatta (low, not raised). However, scholars question whether the singing of hymns and chants were always limited to three tones during the Vedic era. In the general sense swara means tone, and applies to chanting and singing. The basic swaras of Vedic chanting are udatta, anudatta and svarita. Vedic music has madhyama
https://en.wikipedia.org/wiki/Hordenine
Hordenine is an alkaloid of the phenethylamine class that occurs naturally in a variety of plants, taking its name from one of the most common, barley (Hordeum species). Chemically, hordenine is the N-methyl derivative of N-methyltyramine, and the N,N-dimethyl derivative of the well-known biogenic amine tyramine, from which it is biosynthetically derived and with which it shares some pharmacological properties (see below). Hordenine is widely sold as an ingredient of nutritional supplements, with the claims that it is a stimulant of the central nervous system and has the ability to promote weight loss by enhancing metabolism. In experimental animals given sufficiently large doses parenterally (by injection), hordenine does produce an increase in blood pressure, as well as other disturbances of the cardiovascular, respiratory, and nervous systems. These effects are generally not reproduced by oral administration of the drug in test animals, and virtually no scientific reports of the effects of hordenine in human beings have been published. Occurrence The first report of the isolation from a natural source of the compound which is now known as hordenine was made by Arthur Heffter in 1894, who extracted this alkaloid from the cactus Anhalonium fissuratus (now reclassified as Ariocarpus fissuratus), naming it "anhalin". Twelve years later, E. Léger independently isolated an alkaloid which he named hordenine from germinated barley (Hordeum vulgare) seeds. Ernst Späth subsequently showed that these alkaloids were identical and proposed the correct molecular structure for this substance, for which the name "hordenine" was ultimately retained. Hordenine is present in a fairly wide range of plants, notably amongst the cacti, but has also been detected in some algae and fungi. It occurs in grasses, and is found at significantly high concentrations in the seedlings of cereals such as barley (Hordeum vulgare) (about 0.2%, or 2000 μg/g), proso millet (Panicum miliaceum) (a
https://en.wikipedia.org/wiki/Surface%20subgroup%20conjecture
In mathematics, the surface subgroup conjecture of Friedhelm Waldhausen states that the fundamental group of every closed, irreducible 3-manifold with infinite fundamental group has a surface subgroup. By "surface subgroup" we mean the fundamental group of a closed surface other than the 2-sphere. This problem is listed as Problem 3.75 in Robion Kirby's problem list. Assuming the geometrization conjecture, the only open case was that of closed hyperbolic 3-manifolds. A proof of this case was announced in the summer of 2009 by Jeremy Kahn and Vladimir Markovic and outlined in a talk on August 4, 2009, at the FRG (Focused Research Group) Conference hosted by the University of Utah. A preprint appeared on the arXiv server in October 2009. Their paper was published in the Annals of Mathematics in 2012. In June 2012, Kahn and Markovic were given the Clay Research Awards by the Clay Mathematics Institute at a ceremony in Oxford. See also Virtually Haken conjecture Ehrenpreis conjecture
https://en.wikipedia.org/wiki/Out-of-band%20data
In computer networking, out-of-band data is the data transferred through a stream that is independent from the main in-band data stream. An out-of-band data mechanism provides a conceptually independent channel, which allows any data sent via that mechanism to be kept separate from in-band data. The out-of-band data mechanism should be provided as an inherent characteristic of the data channel and transmission protocol, rather than requiring a separate channel and endpoints to be established. The term "out-of-band data" probably derives from out-of-band signaling, as used in the telecommunications industry. Example case Consider a networking application that tunnels data from a remote data source to a remote destination. The data being tunneled may consist of any bit patterns. The sending end of the tunnel may at times have conditions that it needs to notify the receiving end about. However, it cannot simply insert a message to the receiving end because that end will not be able to distinguish the message from data sent by the data source. By using an out-of-band mechanism, the sending end can send the message to the receiving end out of band. The receiving end will be notified in some fashion of the arrival of out-of-band data, and it can read the out of band data and know that this is a message intended for it from the sending end, independent of the data from the data source. Implementations It is possible to implement out-of-band data transmission using a physically separate channel, but most commonly out-of-band data is a feature provided by a transmission protocol using the same channel as normal data. A typical protocol might divide the data to be transmitted into blocks, with each block having a header word that identifies the type of data being sent, and a count of the data bytes or words to be sent in the block. The header will identify the data as being in-band or out-of-band, along with other identification and routing information. At the rece
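As a concrete illustration of the block-and-header scheme just described, here is a small framing sketch; the header layout, type codes and messages are invented for this example rather than taken from any real protocol.

import struct

IN_BAND, OUT_OF_BAND = 0, 1
HEADER = struct.Struct("!BI")          # 1-byte block type, 4-byte big-endian length

def pack_block(kind, payload):
    # Prefix the payload with a header identifying it as in-band or out-of-band.
    return HEADER.pack(kind, len(payload)) + payload

def unpack_blocks(stream):
    # Walk the byte stream, yielding (type, payload) for each framed block.
    offset = 0
    while offset < len(stream):
        kind, length = HEADER.unpack_from(stream, offset)
        offset += HEADER.size
        yield kind, stream[offset:offset + length]
        offset += length

# Tunnelled application data interleaved with an out-of-band notification.
wire = (pack_block(IN_BAND, b"chunk of tunnelled data")
        + pack_block(OUT_OF_BAND, b"source paused")
        + pack_block(IN_BAND, b"more data"))

for kind, payload in unpack_blocks(wire):
    label = "OOB " if kind == OUT_OF_BAND else "data"
    print(label, payload)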
https://en.wikipedia.org/wiki/Ruby%20laser
A ruby laser is a solid-state laser that uses a synthetic ruby crystal as its gain medium. The first working laser was a ruby laser made by Theodore H. "Ted" Maiman at Hughes Research Laboratories on May 16, 1960. Ruby lasers produce pulses of coherent visible light at a wavelength of 694.3 nm, which is a deep red color. Typical ruby laser pulse lengths are on the order of a millisecond. Design A ruby laser most often consists of a ruby rod that must be pumped with very high energy, usually from a flashtube, to achieve a population inversion. The rod is often placed between two mirrors, forming an optical cavity in which the light produced by the ruby's fluorescence oscillates, causing stimulated emission. Ruby is one of the few solid-state laser media that produce light in the visible range of the spectrum, lasing at 694.3 nanometers, a deep red color, with a very narrow linewidth of 0.53 nm. The ruby laser is a three-level solid-state laser. The active laser medium (laser gain/amplification medium) is a synthetic ruby rod that is energized through optical pumping, typically by a xenon flashtube. Ruby has very broad and powerful absorption bands in the visual spectrum, at 400 and 550 nm, and a very long fluorescence lifetime of 3 milliseconds. This allows for very high energy pumping, since the pulse duration can be much longer than with other materials. While ruby has a very wide absorption profile, its conversion efficiency is much lower than that of other media. In early examples, the rod's ends had to be polished with great precision, such that the ends of the rod were flat to within a quarter of a wavelength of the output light, and parallel to each other to within a few seconds of arc. The finely polished ends of the rod were silvered; one end completely, the other only partially. The rod, with its reflective ends, then acts as a Fabry–Pérot etalon (or a Gires–Tournois etalon). Modern lasers often use rods with antireflection coatings, or with the ends cut and polish
https://en.wikipedia.org/wiki/The%20Rowett%20Institute
The Rowett Institute is a research centre for studies into food and nutrition, located in Aberdeen, Scotland. History The institute was founded in 1913 when the University of Aberdeen and the North of Scotland College of Agriculture agreed that an "Institute for Research into Animal Nutrition" should be established in Scotland. The first director was John Boyd Orr, later to become Lord Boyd Orr, who moved from Glasgow to "the wilds of Aberdeenshire" in 1914. Orr drew up plans for a nutrition research institute and donated £5000 for the building of a granite laboratory at Craibstone, not far from the Bucksburn site of the Rowett. At the outbreak of the Great War, Orr left the institute, but returned in 1919 with a staff of four to begin work in the new laboratory. Orr continued to push for a new research institute, and finally the Government agreed to pay half the costs but stipulated that the other half was to be found from other sources. The extra money was donated by Dr John Quiller Rowett, a businessman and director of a wine and spirits merchants in London. Rowett's donation allowed the purchase of 41 acres of land for the institute to be built on, and he also contributed £10,000 towards the cost of the buildings. The money was donated with one very important stipulation from Rowett: "if any work done at the institute on animal nutrition were found to have a bearing on human nutrition, the institute would be allowed to follow up this work." The institute was formally opened in 1922 by Queen Mary. In 1927, the Rowett was given £5000 to carry out an investigation to test whether health could be improved by the consumption of milk. After some further tests on other groups, a bill was passed in the House of Commons enabling local authorities in Scotland to provide cheap or free milk to all school children. It was soon applied in England too. This helped reduce the surplus of milk at the time and also helped rescue the milk industry, which was i
https://en.wikipedia.org/wiki/X-PLOR
X-PLOR is a computer software package for computational structural biology originally developed by Axel T. Brunger at Yale University. It was first published in 1987 as an offshoot of CHARMM - a similar program that ran on supercomputers made by Cray Inc. It is used in the fields of X-ray crystallography and nuclear magnetic resonance spectroscopy of proteins (NMR) analysis. X-PLOR is a highly sophisticated program that provides an interface between theoretical foundations and experimental data in structural biology, with specific emphasis on X-ray crystallography and nuclear magnetic resonance spectroscopy in solution of biological macro-molecules. It is intended mainly for researchers and students in the fields of computational chemistry, structural biology, and computational molecular biology. See also Comparison of software for molecular mechanics modeling Molecular mechanics
https://en.wikipedia.org/wiki/XPLOR-NIH
Xplor-NIH is a highly sophisticated and flexible biomolecular structure determination program which includes an interface to the legacy X-PLOR program. The main developers are Charles Schwieters and Marius Clore of the National Institutes of Health. Xplor-NIH is based on a C++ framework with an extensive Python interface enabling very powerful and easy scripting of complex structure determination and refinement protocols. Restraints derived from all current solution and many solid state nuclear magnetic resonance (NMR) and X-ray scattering experiments can be accommodated during structure calculations. Extensive facilities are also available for many types of ensemble calculations where the experimental data cannot be accounted for by a unique structure. Many of the structure calculation protocols involve the use of simulated annealing designed to overcome local minima on the path of the global minimum region of the target function. These calculations can be carried out using any combination of Cartesian, torsion angle and rigid body dynamics and minimization. Currently Xplor-NIH is the most versatile, comprehensive and widely used structure determination/refinement package in NMR structure determination.
https://en.wikipedia.org/wiki/Capacitance%E2%80%93voltage%20profiling
Capacitance–voltage profiling (or C–V profiling, sometimes CV profiling) is a technique for characterizing semiconductor materials and devices. The applied voltage is varied, and the capacitance is measured and plotted as a function of voltage. The technique uses a metal–semiconductor junction (Schottky barrier) or a p–n junction or a MOSFET to create a depletion region, a region which is empty of conducting electrons and holes, but may contain ionized donors and electrically active defects or traps. The depletion region with its ionized charges inside behaves like a capacitor. By varying the voltage applied to the junction it is possible to vary the depletion width. The dependence of the depletion width upon the applied voltage provides information on the semiconductor's internal characteristics, such as its doping profile and electrically active defect densities., Measurements may be done at DC, or using both DC and a small-signal AC signal (the conductance method , ), or using a large-signal transient voltage. Application Many researchers use capacitance–voltage (C–V) testing to determine semiconductor parameters, particularly in MOSCAP and MOSFET structures. However, C–V measurements are also widely used to characterize other types of semiconductor devices and technologies, including bipolar junction transistors, JFETs, III–V compound devices, photovoltaic cells, MEMS devices, organic thin-film transistor (TFT) displays, photodiodes, and carbon nanotubes (CNTs). These measurements' fundamental nature makes them applicable to a wide range of research tasks and disciplines. For example, researchers use them in university and semiconductor manufacturers' labs to evaluate new processes, materials, devices, and circuits. These measurements are extremely valuable to product and yield enhancement engineers who are responsible for improving processes and device performance. Reliability engineers also use these measurements to qualify the suppliers of the materials th
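The arithmetic behind a basic C–V doping extraction can be sketched in a few lines. The snippet below is a minimal illustration, assuming a one-sided abrupt junction so that the usual relations W = eps*A/C and N(W) = 2 / (q * eps * A^2 * d(1/C^2)/dV) apply; the device area, doping level and built-in voltage are made-up values, and the "measured" capacitance curve is synthesized from them rather than taken from a real instrument.

import numpy as np

# Physical constants and assumed device parameters (illustrative only)
q = 1.602e-19                # elementary charge, C
eps_si = 11.7 * 8.854e-12    # permittivity of silicon, F/m
area = 1e-7                  # diode area, m^2 (assumed)

# Hypothetical "measured" data: reverse bias (V) and capacitance (F),
# synthesized for a uniform 1e22 m^-3 profile with 0.7 V built-in potential
V = np.linspace(0.5, 5.0, 50)
C = area * np.sqrt(q * eps_si * 1e22 / (2 * (V + 0.7)))

# Depletion width from the parallel-plate relation W = eps*A/C
W = eps_si * area / C

# Doping from the slope of 1/C^2 versus V:  N = 2 / (q*eps*A^2 * d(1/C^2)/dV)
dinvC2_dV = np.gradient(1.0 / C**2, V)
N = 2.0 / (q * eps_si * area**2 * dinvC2_dV)

for w, n in list(zip(W, N))[::10]:
    print(f"W = {w*1e9:6.1f} nm   N = {n:.2e} m^-3")

Running it recovers the assumed 1e22 m^-3 doping at each depletion depth, which is the point of the exercise: the slope of 1/C^2 against V encodes the carrier concentration at the edge of the depletion region.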
https://en.wikipedia.org/wiki/Proceptive%20phase
In biology and sexology, the proceptive phase is the initial period in a relationship when organisms are "courting" each other, prior to the acceptive phase when copulation occurs. Behaviors that occur during the proceptive phase depend very much on the species, but may include visual displays, movements, sounds and odors. The term proceptivity was introduced into general sexological use by Frank A. Beach in 1976 and refers to behavior enacted by a female to initiate, maintain, or escalate a sexual interaction. There are large species differences in proceptive behavior. The term has also been used to describe women's roles in human courtship, with a meaning very close to Beach's. A near synonym is proception. The term proceptive phase refers to pre-consummatory, that is, pre-ejaculatory, behavior and focuses attention on the active role played by the female organism in creating, maintaining, and escalating the sexual interaction. See also Mating
https://en.wikipedia.org/wiki/Hamming%20space
In statistics and coding theory, a Hamming space (named after American mathematician Richard Hamming) is usually the set of all binary strings of length N. It is used in the theory of coding signals and transmission. More generally, a Hamming space can be defined over any alphabet (set) Q as the set of words of a fixed length N with letters from Q. If Q is a finite field, then a Hamming space over Q is an N-dimensional vector space over Q. In the typical, binary case, the field is thus GF(2) (also denoted by Z2). In coding theory, if Q has q elements, then any subset C (usually assumed of cardinality at least two) of the N-dimensional Hamming space over Q is called a q-ary code of length N; the elements of C are called codewords. In the case where C is a linear subspace of its Hamming space, it is called a linear code. A typical example of a linear code is the Hamming code. Codes defined via a Hamming space necessarily have the same length for every codeword, so they are called block codes when it is necessary to distinguish them from variable-length codes that are defined by unique factorization on a monoid. The Hamming distance endows a Hamming space with a metric, which is essential in defining basic notions of coding theory such as error detecting and error correcting codes. Hamming spaces over non-field alphabets have also been considered, especially over finite rings (most notably over Z4) giving rise to modules instead of vector spaces and ring-linear codes (identified with submodules) instead of linear codes. The typical metric used in this case is the Lee distance. There exists a Gray isometry between Z2^2m (i.e. GF(22m)) with the Hamming distance and Z4^m (also denoted as GR(4,m)) with the Lee distance.
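A Hamming space and the distance it carries are easy to make concrete. The short Python sketch below is illustrative only; the five-bit code shown is an arbitrary example, not one mentioned above. It enumerates a binary Hamming space of length 5 and computes the minimum Hamming distance of a small code inside it.

from itertools import product

def hamming_distance(u, v):
    """Number of coordinates in which two equal-length words differ."""
    assert len(u) == len(v)
    return sum(a != b for a, b in zip(u, v))

# A small binary code C inside the Hamming space {0,1}^5 (arbitrary example)
code = ["00000", "11100", "00111", "11011"]

# Minimum distance of the code = smallest pairwise Hamming distance
d_min = min(hamming_distance(c1, c2)
            for c1 in code for c2 in code if c1 != c2)
print("minimum distance:", d_min)

# The ambient Hamming space over Q = {0,1} with N = 5 has |Q|^N points
space = ["".join(bits) for bits in product("01", repeat=5)]
print("size of Hamming space:", len(space))   # 32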
https://en.wikipedia.org/wiki/Tameness%20theorem
In mathematics, the tameness theorem states that every complete hyperbolic 3-manifold with finitely generated fundamental group is topologically tame, in other words homeomorphic to the interior of a compact 3-manifold. The tameness theorem was conjectured by Albert Marden. It was proved by Ian Agol and, independently, by Danny Calegari and David Gabai. It is one of the fundamental properties of geometrically infinite hyperbolic 3-manifolds, together with the density theorem for Kleinian groups and the ending lamination theorem. It also implies the Ahlfors measure conjecture. History Topological tameness may be viewed as a property of the ends of the manifold, namely, having a local product structure. An analogous statement is well known in two dimensions, that is, for surfaces. However, as the example of the Alexander horned sphere shows, there are wild embeddings among 3-manifolds, so this property is not automatic. The conjecture was raised in the form of a question by Albert Marden, who proved that any geometrically finite hyperbolic 3-manifold is topologically tame. The conjecture was also called the Marden conjecture or the tame ends conjecture. There had been steady progress in understanding tameness before the conjecture was resolved. Partial results had been obtained by Thurston, Brock, Bromberg, Canary, Evans, Minsky, and Ohshika. An important sufficient condition for tameness in terms of splittings of the fundamental group had been obtained by Bonahon. The conjecture was proved in 2004 by Ian Agol, and independently, by Danny Calegari and David Gabai. Agol's proof relies on the use of manifolds of pinched negative curvature and on Canary's trick of "diskbusting" that allows one to replace a compressible end with an incompressible end, for which the conjecture has already been proved. The Calegari–Gabai proof is centered on the existence of certain closed, non-positively curved surfaces that they call "shrinkwrapped". See also Tame topology
https://en.wikipedia.org/wiki/Energy%20landscape
An energy landscape is a mapping of possible states of a system. The concept is frequently used in physics, chemistry, and biochemistry, e.g. to describe all possible conformations of a molecular entity, or the spatial positions of interacting molecules in a system, or parameters and their corresponding energy levels, typically Gibbs free energy. Geometrically, the energy landscape is the graph of the energy function across the configuration space of the system. The term is also used more generally in geometric perspectives to mathematical optimization, when the domain of the loss function is the parameter space of some system. Applications The term is useful when examining protein folding; while a protein can theoretically exist in a nearly infinite number of conformations along its energy landscape, in reality proteins fold (or "relax") into secondary and tertiary structures that possess the lowest possible free energy. The key concept in the energy landscape approach to protein folding is the folding funnel hypothesis. In catalysis, when designing new catalysts or refining existing ones, energy landscapes are considered to avoid low-energy or high-energy intermediates that could halt the reaction or demand excessive energy to reach the final products. In glassing models, the local minima of an energy landscape correspond to metastable low temperature states of a thermodynamic system. In machine learning, artificial neural networks may be analyzed using analogous approaches. For example, a neural network may be able to perfectly fit the training set, corresponding to a global minimum of zero loss, but overfitting the model ("learning the noise" or "memorizing the training set"). Understanding when this happens can be studied using the geometry of the corresponding energy landscape. Formal definition Mathematically, an energy landscape is a continuous function associating each physical state with an energy, where is a topological space. In the continuou
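For a one-dimensional toy system the idea can be made concrete with a few lines of code. The sketch below is purely illustrative: it samples an invented double-well energy function on a grid and reports its local and global minima, mimicking how lowest-energy states are identified on a real landscape.

import numpy as np

# Toy one-dimensional energy landscape: a double well E(x) = x^4 - 3x^2 + x
def energy(x):
    return x**4 - 3 * x**2 + x

# Sample the configuration space on a grid and locate local minima,
# i.e. grid points lower than both neighbours (discrete analogue of dE/dx = 0)
x = np.linspace(-2.5, 2.5, 2001)
E = energy(x)
is_min = (E[1:-1] < E[:-2]) & (E[1:-1] < E[2:])
minima = x[1:-1][is_min]

for m in minima:
    print(f"local minimum near x = {m:+.3f}, E = {energy(m):.3f}")

# The global minimum is the lowest of the local minima
x_glob = minima[np.argmin(energy(minima))]
print("global minimum near x =", round(float(x_glob), 3))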
https://en.wikipedia.org/wiki/Transfer%20matrix
In applied mathematics, the transfer matrix is a formulation in terms of a block-Toeplitz matrix of the two-scale equation, which characterizes refinable functions. Refinable functions play an important role in wavelet theory and finite element theory. For the mask h, which is a vector with component indexes from a to b, the transfer matrix of h, which we call T_h here, is defined entrywise by (T_h)_{j,k} = h_{2j−k}, where h is taken to be zero outside the index range a to b. The effect of T_h can be expressed in terms of the downsampling operator "↓": T_h · x = ↓(h ∗ x). Properties See also Hurwitz determinant
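The definition is easy to check numerically. The sketch below builds the transfer matrix for a small mask using the entry rule (T_h)_{j,k} = h_{2j−k} quoted above (with the mask indexed from 0 for simplicity) and verifies that applying it agrees with convolving by the mask and then downsampling; the hat-function mask (1/2, 1, 1/2) is just an illustrative choice.

import numpy as np

def transfer_matrix(h, size):
    """Transfer matrix with entries T[j, k] = h[2*j - k] (zero outside the mask).

    Here h holds the mask coefficients h_0, ..., h_b, i.e. indexing starts at 0.
    """
    T = np.zeros((size, size))
    for j in range(size):
        for k in range(size):
            idx = 2 * j - k
            if 0 <= idx < len(h):
                T[j, k] = h[idx]
    return T

# Mask of the linear B-spline (hat function) refinement equation: (1/2, 1, 1/2)
h = np.array([0.5, 1.0, 0.5])

T = transfer_matrix(h, size=3)
print(T)

# Sanity check against the downsampling description T_h x = "down"(h * x):
x = np.array([1.0, 2.0, 3.0])
conv = np.convolve(h, x)        # full convolution h * x, length 5
downsampled = conv[0::2]        # keep every second sample, length 3
print(np.allclose(T @ x, downsampled))   # True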
https://en.wikipedia.org/wiki/Maudsley%20Bipolar%20Twin%20Study
The Maudsley Bipolar Twin Study is an ongoing twin study of bipolar disorder running at the Institute of Psychiatry, King's College London since 2003. The study is investigating possible differences between people with a diagnosis of bipolar disorder and people without the diagnosis. In particular it is investigating difference in cognition and brain structure/function. The Maudsley Study of bipolar disorder investigates different aspects of thinking, such as memory and attention, in twins with and without bipolar disorder. The tasks participants complete involve defining words and solving different kinds of problems. With adequate numbers of twins participating in the study, the hope is to understand any differences between these two groups. The eventual aim is to increase understanding of this complex mood disorder and to enhance future therapies for it.
https://en.wikipedia.org/wiki/509th%20Composite%20Group
The 509th Composite Group (509 CG) was a unit of the United States Army Air Forces created during World War II and tasked with the operational deployment of nuclear weapons. It conducted the atomic bombings of Hiroshima and Nagasaki, Japan, in August 1945. The group was activated on 17 December 1944 at Wendover Army Air Field, Utah. It was commanded by Lieutenant Colonel Paul W. Tibbets. Because it contained flying squadrons equipped with Boeing B-29 Superfortress bombers, C-47 Skytrain, and C-54 Skymaster transport aircraft, the group was designated as a "composite", rather than a "bombardment" formation. It operated Silverplate B-29s, which were specially configured to enable them to carry nuclear weapons. The 509th Composite Group began deploying to North Field on Tinian, Northern Mariana Islands, in May 1945. In addition to the two nuclear bombing raids, it carried out 15 practice missions against Japanese-held islands, and 12 combat missions against targets in Japan dropping high-explosive pumpkin bombs. In the postwar era, the 509th Composite Group was one of the original ten bombardment groups assigned to Strategic Air Command on 21 March 1946 and the only one equipped with Silverplate B-29 Superfortress aircraft capable of delivering atomic bombs. It was standardized as a bombardment group and redesignated the 509th Bombardment Group, Very Heavy, on 10 July 1946. History Organization, training, and security The 509th Composite Group was constituted on 9 December 1944, and activated on 17 December 1944, at Wendover Army Air Field, Utah. It was commanded by Lieutenant Colonel Paul W. Tibbets, who received promotion to full colonel in January 1945. It was initially assumed that the group would divide in two, with half going to Europe and half to the Pacific. In the first week of September Tibbets was assigned to organize a combat group to develop the means of delivering an atomic weapon by airplane against targets in Germany and Japan, then command it in
https://en.wikipedia.org/wiki/Anderson%27s%20rule
Anderson's rule is used for the construction of energy band diagrams of the heterojunction between two semiconductor materials. Anderson's rule states that when constructing an energy band diagram, the vacuum levels of the two semiconductors on either side of the heterojunction should be aligned (at the same energy). It is also referred to as the electron affinity rule, and is closely related to the Schottky–Mott rule for metal–semiconductor junctions. Anderson's rule was first described by R. L. Anderson in 1960. Constructing energy band diagrams Once the vacuum levels are aligned it is possible to use the electron affinity and band gap values for each semiconductor to calculate the conduction band and valence band offsets. The electron affinity (usually given by the symbol in solid state physics) gives the energy difference between the lower edge of the conduction band and the vacuum level of the semiconductor. The band gap (usually given the symbol ) gives the energy difference between the lower edge of the conduction band and the upper edge of the valence band. Each semiconductor has different electron affinity and band gap values. For semiconductor alloys it may be necessary to use Vegard's law to calculate these values. Once the relative positions of the conduction and valence bands for both semiconductors are known, Anderson's rule allows the calculation of the band offsets of both the valence band () and the conduction band (). After applying Anderson's rule and discovering the bands' alignment at the junction, Poisson’s equation can then be used to calculate the shape of the band bending in the two semiconductors. Example: straddling gap Consider a heterojunction between semiconductor 1 and semiconductor 2. Suppose the conduction band of semiconductor 2 is closer to the vacuum level than that of semiconductor 1. The conduction band offset would then be given by the difference in electron affinity (energy from upper conducting band to vacuum level)
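Applying the rule is a one-line calculation once the electron affinities and band gaps are known. The sketch below is illustrative only: the GaAs/Al0.3Ga0.7As numbers are approximate textbook values, and the code simply aligns the vacuum levels and subtracts, as described above.

def anderson_offsets(chi1, eg1, chi2, eg2):
    """Band offsets (in eV) from Anderson's rule: align the vacuum levels,
    then E_c = -chi and E_v = -(chi + E_g) on each side of the junction."""
    delta_ec = abs(chi1 - chi2)                  # conduction band offset
    delta_ev = abs((chi1 + eg1) - (chi2 + eg2))  # valence band offset
    return delta_ec, delta_ev

# Illustrative, approximate textbook values in eV:
# semiconductor 1 = GaAs, semiconductor 2 = Al0.3Ga0.7As
chi_gaas, eg_gaas = 4.07, 1.42
chi_algaas, eg_algaas = 3.74, 1.80

d_ec, d_ev = anderson_offsets(chi_gaas, eg_gaas, chi_algaas, eg_algaas)
print(f"Delta E_c = {d_ec:.2f} eV, Delta E_v = {d_ev:.2f} eV")

Note that offsets predicted this way often differ from measured values, which is one reason the rule is treated as a first approximation rather than a final answer.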
https://en.wikipedia.org/wiki/Brace%20notation
In several programming languages, such as Perl, brace notation is a faster way to extract bytes from a string variable. In pseudocode An example of brace notation using pseudocode which would extract the 82nd character from the string is: a_byte = a_string{82} The equivalent of this using a hypothetical function 'MID' is: a_byte = MID(a_string, 82, 1) In C In C, strings are normally represented as a character array rather than an actual string data type. The fact a string is really an array of characters means that referring to a string would mean referring to the first element in an array. Hence in C, the following is a legitimate example of brace notation: #include <stdio.h> #include <string.h> #include <stdlib.h> int main(int argc, char* argv[]) { char* a_string = "Test"; printf("%c", a_string[0]); // Would print "T" printf("%c", a_string[1]); // Would print "e" printf("%c", a_string[2]); // Would print "s" printf("%c", a_string[3]); // Would print "t" printf("%c", a_string[4]); // Would print the 'null' character (ASCII 0) for end of string return(0); } Note that each of a_string[n] would have a 'char' data type while a_string itself would return a pointer to the first element in the a_string character array. In C# C# handles brace notation differently. A string is a primitive type that returns a char when encountered with brace notation: String var = "Hello World"; char h = var[0]; char e = var[1]; String hehe = h.ToString() + e.ToString(); // string "he" hehe += hehe; // string "hehe" To change the char type to a string in C#, use the method ToString(). This allows joining individual characters with the addition symbol + which acts as a concatenation symbol when dealing with strings. In Python In Python, strings are immutable, so it's hard to modify an existing string, but it's easy to extract and concatenate strings to each other: Extracting characters is even easier: >>> var = 'hello world' >>> var[0] # Return the first c
https://en.wikipedia.org/wiki/Brahmagupta%27s%20problem
This problem was given in India by the mathematician Brahmagupta in 628 AD in his treatise Brahma Sphuta Siddhanta: Solve the Pell's equation x^2 − 92y^2 = 1 for integers x, y. Brahmagupta gave the smallest solution as x = 1151, y = 120. See also Brahmagupta Indian mathematics List of Indian mathematicians Pell's equation Indeterminate equation Diophantine equation External links Brahmagupta Diophantine equations
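The solution is easy to confirm by machine. The snippet below is a naive brute-force search over y, which is sufficient for this small case; continued-fraction or chakravala methods are far more efficient for general Pell equations.

from math import isqrt

def smallest_pell_solution(n):
    """Smallest positive (x, y) with x^2 - n*y^2 = 1, found by searching over y."""
    y = 1
    while True:
        x2 = n * y * y + 1
        x = isqrt(x2)
        if x * x == x2:          # x2 is a perfect square, so (x, y) solves it
            return x, y
        y += 1

print(smallest_pell_solution(92))   # (1151, 120): 1151^2 - 92*120^2 = 1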
https://en.wikipedia.org/wiki/Room%20modes
Room modes are the collection of resonances that exist in a room when the room is excited by an acoustic source such as a loudspeaker. Most rooms have their fundamental resonances in the 20 Hz to 200 Hz region, each frequency being related to one or more of the room's dimensions or a divisor thereof. These resonances affect the low-frequency low-mid-frequency response of a sound system in the room and are one of the biggest obstacles to accurate sound reproduction. Mechanism of room resonances The input of acoustic energy to the room at the modal frequencies and multiples thereof causes standing waves. The nodes and antinodes of these standing waves result in the loudness of the particular resonant frequency being different at different locations of the room. These standing waves can be considered a temporary storage of acoustic energy as they take a finite time to build up and a finite time to dissipate once the sound energy source has been removed. Minimizing effect of room resonances A room with generally hard surfaces will exhibit high-Q, sharply tuned resonances. Absorbent material can be added to the room to damp such resonances which work by more quickly dissipating the stored acoustic energy. In order to be effective, a layer of porous, absorbent material has to be of the order of a quarter-wavelength thick if placed on a wall, which at low frequencies with their long wavelengths requires very thick absorbers. Absorption occurs through friction of the air motion against individual fibres, with kinetic energy converted to heat, and so the material must be of just the right 'density' in terms of fibre packing. Too loose, and sound will pass through, but too firm and reflection will occur. Technically it is a matter of impedance matching between air motion and the individual fibres. Glass fibre, as used for thermal insulation, is very effective, but needs to be very thick (perhaps four to six inches) if the result is not to be a room that sounds unnatural
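For an idealized rectangular room the modal frequencies can be listed directly. The sketch below uses the standard rigid-wall formula f = (c/2) * sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2), which is not spelled out in the text above; the room dimensions are made-up, and real rooms with doors, furniture and non-parallel walls will deviate from these values.

from itertools import product
from math import sqrt

C_SOUND = 343.0  # speed of sound in air, m/s (approximate, near 20 degrees C)

def room_modes(lx, ly, lz, max_order=3):
    """Resonant frequencies of an ideal rectangular room with rigid walls:
    f = (c/2) * sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2)."""
    modes = []
    for nx, ny, nz in product(range(max_order + 1), repeat=3):
        if nx == ny == nz == 0:
            continue  # skip the trivial (0,0,0) case
        f = (C_SOUND / 2) * sqrt((nx / lx)**2 + (ny / ly)**2 + (nz / lz)**2)
        modes.append((f, (nx, ny, nz)))
    return sorted(modes)

# Example: a 5 m x 4 m x 2.5 m room (made-up dimensions)
for f, mode in room_modes(5.0, 4.0, 2.5)[:8]:
    print(f"{f:6.1f} Hz  mode {mode}")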
https://en.wikipedia.org/wiki/CTQ%20tree
CTQ trees (critical-to-quality trees) are the key measurable characteristics of a product or process whose performance standards or specification limits must be met in order to satisfy the customer. They align improvement or design efforts with customer requirements. CTQs are used to decompose broad customer requirements into more easily quantified elements. CTQ trees are often used as part of Six Sigma methodology to help prioritize such requirements. CTQs represent the product or service characteristics as defined by the customer/user. Customers may be surveyed to elicit quality, service and performance data. They may include upper and lower specification limits or any other factors. A CTQ must be an actionable, quantitative business specification. CTQs reflect the expressed needs of the customer. The CTQ practitioner converts them to measurable terms using tools such as DFMEA. Services and products are typically not monolithic. They must be decomposed into constituent elements (tasks in the cases of services). See also Business process Design for Six Sigma Total quality management Total productive maintenance External links Six Sigma CTQ
https://en.wikipedia.org/wiki/Kilner%20jar
A Kilner jar is a rubber-sealed, glass jar used for preserving (bottling) food. It was first produced in 1900 by John Kilner & Co., Yorkshire, England. History The Kilner Jar was originally invented by John Kilner (1792–1857) and associates, and made by a firm of glass bottlemakers from Yorkshire called Kilner which he set up. The original Kilner bottlemakers operated from 1842, when the company was first founded, until 1937, when the company went into liquidation. In 2003, The Rayware Group purchased the Ravenhead name, including the design, patent and trademark of the original Kilner jar and continues to produce them today in China. Company names The various names of the Kilner companies were: John Kilner and Co, Castleford, Yorkshire, 1842–44 John Kilner and Sons, Wakefield, Yorkshire, 1847–57 Kilner Brothers Glass Co, Thornhill Lees, Yorkshire, 1857–73 also at Conisbrough, Yorkshire, 1863–1873 Kilner Brothers Ltd, Thornhill Lees, Yorkshire 1873–1920 also at Conisbrough, Yorkshire, 1873–1937. See also Mason jar Weck jar Fowler's Vacola jar Food preservation Home canning Screw cap Sterilisation
https://en.wikipedia.org/wiki/Post%20hoc%20analysis
In a scientific study, post hoc analysis (from Latin post hoc, "after this") consists of statistical analyses that were specified after the data were seen. They are usually used to uncover specific differences between three or more group means when an analysis of variance (ANOVA) test is significant. This typically creates a multiple testing problem because each potential analysis is effectively a statistical test. Multiple testing procedures are sometimes used to compensate, but that is often difficult or impossible to do precisely. Post hoc analysis that is conducted and interpreted without adequate consideration of this problem is sometimes called data dredging by critics because the statistical associations that it finds are often spurious. Common post hoc tests Some common post hoc tests include: Holm-Bonferroni Procedure Newman-Keuls Rodger’s Method Scheffé’s Method Tukey’s Test (see also: Studentized Range Distribution) Causes Sometimes the temptation to engage in post hoc analysis is motivated by a desire to produce positive results or see a project as successful. In the case of pharmaceutical research, there may be significant financial consequences to a failed trial. See also HARKing Testing hypotheses suggested by the data Nemenyi test
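As a concrete illustration, the snippet below runs a one-way ANOVA on three made-up groups and then applies Tukey's HSD, one of the post hoc tests listed above, using SciPy (the tukey_hsd function requires SciPy 1.8 or later).

import numpy as np
from scipy.stats import f_oneway, tukey_hsd  # tukey_hsd needs SciPy >= 1.8

rng = np.random.default_rng(0)
# Three hypothetical treatment groups (made-up data)
a = rng.normal(10.0, 2.0, 30)
b = rng.normal(11.5, 2.0, 30)
c = rng.normal(10.2, 2.0, 30)

# Omnibus ANOVA first; a post hoc test is usually run only if this is significant
F, p = f_oneway(a, b, c)
print(f"ANOVA: F = {F:.2f}, p = {p:.4f}")

# Tukey's HSD compares every pair of group means while controlling
# the family-wise error rate across the three comparisons
result = tukey_hsd(a, b, c)
print(result)                  # table of pairwise differences, CIs and p-values
print(result.pvalue.round(4))  # matrix of pairwise p-values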
https://en.wikipedia.org/wiki/XQuery%20and%20XPath%20Data%20Model
The XQuery and XPath Data Model (XDM) is the data model shared by the XPath 2.0, XSLT 2.0, XQuery, and XForms programming languages. It is defined in a W3C recommendation. Originally, it was based on the XPath 1.0 data model which in turn is based on the XML Information Set. The XDM consists of flat sequences of zero or more items which can be typed or untyped, and are either atomic values or XML nodes (of seven kinds: document, element, attribute, text, namespace, processing instruction, and comment). Instances of the XDM can optionally be XML schema-validated.
https://en.wikipedia.org/wiki/Syntex
Laboratorios Syntex SA (later Syntex Laboratories, Inc.) was a pharmaceutical company formed in Mexico City in January 1944 by Russell Marker, Emeric Somlo, and Federico Lehmann to manufacture therapeutic steroids from the Mexican yams called cabeza de negro (Dioscorea mexicana) and Barbasco (Dioscorea composita). The demand for barbasco by Syntex initiated the Mexican barbasco trade. As the American Chemistry Society later explained: “In early 1944, the new Mexican company was chartered and named Syntex, S.A. (‘Synthesis and Mexico’). Russell Marker, shortly thereafter, left Syntex on account of his ruthless cofounder. Luis E. Miramontes, George Rosenkranz and Carl Djerassi's successful synthesis of norethisterone (also known as norethindrone) — which was later proven to be an effective pregnancy inhibitor — led to an infusion of capital into Syntex and the Mexican steroid pharmaceutical industry. George Rosenkranz and Carl Djerassi went on to synthesize cortisone from diosgenin, the same phytosteroid contained in Mexican yams used to synthesize progesterone and norethindrone. The synthesis was more economical than the previous Merck & Co. synthesis, which started with bile acids. In 1959, Syntex moved its operating headquarters to Palo Alto, California, United States, and evolved into a transnational corporation. Its foreign scientists had become frustrated with bureaucratic delays on the part of the Mexican government in granting work visas and approving necessary imports of pharmaceutical materials for their work. After 1959, Syntex was incorporated in Panama; its administration, research and marketing were conducted from Palo Alto; its manufacturing of bulk steroid intermediates remained in Mexico; and it also manufactured finished drugs at plants in Puerto Rico and the Bahamas. Syntex agreed to be acquired by the Roche group in May 1994. After the acquisition closed, Roche downsized Syntex's research and development facilities in the Stanford Research P
https://en.wikipedia.org/wiki/Salvage%20enzyme
Salvage enzymes are enzymes, nucleoside kinases, required during cell division to "salvage" nucleotides, present in body fluids, for the manufacture of DNA. They catalyze the phosphorylation of nucleosides to nucleoside - 5'-phosphates, that are further phosphorylated to triphosphates, that can be built into the growing DNA chain. The salvage enzymes are synthesized during the G1 phase in anticipation of DNA synthesis. After the cell division has been completed, the salvage enzymes, no longer required, are degraded. During interphase the cell derives its requirement of nucleoside-5'-phosphates by de novo synthesis, that leads directly to the 5'-monophosphate nucleotides. Cell cycle Enzymes
https://en.wikipedia.org/wiki/Pickled%20onion
Pickled onions are a food item consisting of onions (cultivars of Allium cepa) pickled in a solution of vinegar and salt, often with other preservatives and flavourings. There is a variety of small white pickled onions known as 'silverskin' onions; due to imperfections they are pickled instead of being wasted. They are frequently used as an essential component of the Martini cocktail variant known as a Gibson. Pickled onions are usually pickled in malt vinegar and the onions are about in diameter. Silverskin onions are pickled in white vinegar, and are much smaller. Full sized onions, e.g., Spanish onions, can be pickled if sliced first. By country In the United Kingdom, the onions are traditionally eaten alongside fish and chips and with a ploughman's lunch. In the Southern United States, pickled Vidalia onions can be served as a side dish. In Hong Kong, pickled onions are served in many Cantonese restaurants, especially around dinner time, as a small dish before the main course is served. In Switzerland, they are served to accompany raclette, along with pickled gherkins. In Italy, it is known as 'maggiolina'. In Mexican cuisine, one preparation, cebollas encurtidas, has sliced red onions pickled in a mixture of citrus juices and vinegar, which is served as a garnish or condiment. Sometimes cooked beets are added, producing a more strongly pink coloured dish. Pickled red onions in bitter orange juice are especially emblematic of Yucatán cuisine, where they are used as a garnish or condiment, especially for seafood. See also List of pickled foods
https://en.wikipedia.org/wiki/Grant%20Olney
Grant Olney Passmore (born October 18, 1983) is a singer-songwriter who has recorded on the Asian Man Records label. He is considered part of the New Weird America movement along with David Dondero, Devendra Banhart, Bright Eyes, and CocoRosie. His latest full-length album, Hypnosis for Happiness, was released in July 2013 on the Friendly Police UK label. His previous full-length album, Brokedown Gospel, was released on the Asian Man Records label in July 2004. He also releases music under the pseudonym Scout You Devil and as part of the songwriting duo Olney Clark. Alongside his music, Passmore is also a mathematician and theoretical computer scientist, formerly a student at the University of Texas at Austin, the Mathematical Research Institute in the Netherlands, and the University of Edinburgh, where he earned his PhD. He is a Life Member of Clare Hall, University of Cambridge and is cofounder of the artificial intelligence company Imandra Inc. (formerly known as Aesthetic Integration) which produces technology for the formal verification of algorithms. He was paired with artist Hito Steyerl in the 2016 Rhizome Seven on Seven. As a young child and early teenager, Passmore was involved in the development of the online Bulletin Board system scene, and under the name skaboy he was the author of many applications of importance to the Bulletin Board System community, including the Infusion Bulletin Board System, Empathy Image Editor, Avenger Packer Pro, and Impulse Tracker Tosser. Passmore was head programmer for ACiD Productions while working on many of these applications. Personal life Passmore married Barbara Galletly in 2014. They have three children. Discography Albums Hypnosis for Happiness – Grant Olney – (2013 · Friendly Police UK) Olney Clark – Olney Clark – (2010 · Friendly Police UK) Let Love Be (single) – Grant Olney – (2006 · Asian Man Records) Brokedown Gospel – Grant Olney – (2004 · Asian Man Records) Sweet Wine – Grant Olney – (2003 · MyAutomat
https://en.wikipedia.org/wiki/Chorleywood%20bread%20process
The Chorleywood bread process (CBP) is a method of efficient dough production to make yeasted bread quickly, producing a soft, fluffy loaf. Compared to traditional bread-making processes, CBP uses more yeast, added fats, chemicals, and high-speed mixing to allow the dough to be made with lower-protein wheat, and produces bread in a shorter time. It was developed by Bill Collins, George Elton and Norman Chamberlain of the British Baking Industries Research Association at Chorleywood in 1961. , 80% of bread made in the United Kingdom used the process. For millennia, bread had been made from wheat flour by manually kneading dough with a raising agent (typically yeast) leaving it to ferment before it was baked. In 1862 a cheaper industrial-scale process was developed by John Dauglish, using water with dissolved carbon dioxide instead of yeast. Dauglish's method, used by the Aerated Bread Company that he set up, dominated commercial bread baking for a century until the yeast-based Chorleywood process was developed. Some protein is lost during traditional bulk fermentation of bread; this does not occur to the same degree in mechanically developed doughs, allowing CBP to use lower-protein wheat. This feature had an important impact in the United Kingdom where, at the time, few domestic wheat varieties were of sufficient quality to make high-quality bread; the CBP permitted a much greater proportion of lower-protein domestic wheat to be used in the grist. Description The Chorleywood bread process allows the use of lower-protein wheats and reduces processing time, the system being able to produce a loaf of bread from flour to sliced and packaged form in about three and a half hours. This is achieved through the addition of Vitamin C, fat, yeast, and intense mechanical working by high-speed mixers, not feasible in a small-scale kitchen. Flour, water, yeast, salt, and fat (if used) are mixed together, along with minor ingredients common to many bread-making techniques, suc
https://en.wikipedia.org/wiki/Proximity%20marketing
Proximity marketing is the localized wireless distribution of advertising content associated with a particular place. Transmissions can be received by individuals in that location who wish to receive them and have the necessary equipment to do so. Distribution may be via a traditional localized broadcast, or more commonly is specifically targeted to devices known to be in a particular area. The location of a device may be determined by: A cellular phone being in a particular cell A Bluetooth- or Wi-Fi-enabled device being within range of a transmitter. An Internet enabled device with GPS enabling it to request localized content from Internet servers. A NFC enabled phone can read a RFID chip on a product or media and launch localized content from internet servers. Communications may be further targeted to specific groups within a given location, for example content in tourist hot spots may only be distributed to devices registered outside the local area. Communications may be both time and place specific, e.g. content at a conference venue may depend on the event in progress. Uses of proximity marketing include distribution of media at concerts, information (weblinks on local facilities), gaming and social applications, and advertising. Bluetooth-based systems Bluetooth, a short-range wireless system supported by many mobile devices, is one transmission medium used for proximity marketing. The process of Bluetooth-based proximity marketing involves setting up Bluetooth "broadcasting" equipment at a particular location and then sending information which can be text, images, audio or video to Bluetooth enabled devices within range of the broadcast server. These devices are often referred to as beacons. Other standard data exchange formats such as vCard can also be used. This form of proximity marketing is also referred to as close range marketing. It used to be the case that due to security fears, or a desire to save battery life, many users keep their Bluet
https://en.wikipedia.org/wiki/Acute%20pericarditis
Acute pericarditis is a type of pericarditis (inflammation of the sac surrounding the heart, the pericardium) usually lasting less than 6 weeks. It is the most common condition affecting the pericardium. Signs and symptoms Chest pain is one of the common symptoms of acute pericarditis. It is usually of sudden onset, occurring in the anterior chest and often has a sharp quality that worsens with breathing in or coughing, due to inflammation of the pleural surface at the same time. The pain may be reduced with sitting up and leaning forward while worsened with lying down, and also may radiate to the back, to one or both trapezius ridges. However, the pain can also be dull and steady, resembling the chest pain in an acute myocardial infarction. As with any chest pain, other causes must also be ruled out, such as GERD, pulmonary embolism, muscular pain, etc. A pericardial friction rub is a very specific sign of acute pericarditis, meaning the presence of this sign invariably indicates presence of disease. However, absence of this sign does not rule out disease. This rub can be best heard by the diaphragm of the stethoscope at the left sternal border arising as a squeaky or scratching sound, resembling the sound of leather rubbing against each other. This sound should be distinguished from the sound of a murmur, which is similar but sounds more like a "swish" sound than a scratching sound. The pericardial rub is said to be generated from the friction generated by the two inflamed layers of the pericardium; however, even a large pericardial effusion does not necessarily present a rub. The rub is best heard during the maximal movement of the heart within the pericardial sac, namely, during atrial systole, ventricular systole, and the filling phase of early ventricular diastole. Fever may be present since this is an inflammatory process. Causes There are several causes of acute pericarditis. In developed nations, the cause of most (80–90%) cases of acute pericarditis i
https://en.wikipedia.org/wiki/Spatial%20cutoff%20frequency
In optics, spatial cutoff frequency is a precise way to quantify the smallest object resolvable by an optical system. Due to diffraction at the image plane, all optical systems act as low pass filters with a finite ability to resolve detail. If it were not for the effects of diffraction, a 2" aperture telescope could theoretically be used to read newspapers on a planet circling Alpha Centauri, over four light-years distant. Unfortunately, the wave nature of light will never permit this to happen. The spatial cutoff frequency for a perfectly corrected incoherent optical system is given by f_c = 1/(λN), where λ is the wavelength expressed in millimeters and N is the lens' focal ratio (f-number). As an example, a telescope having an f/6 objective and imaging at 0.55 micrometers has a spatial cutoff frequency of 1/(0.00055 mm × 6) ≈ 303 cycles/millimeter. High-resolution black-and-white film is capable of resolving details on the film as small as 3 micrometers or smaller, thus its cutoff frequency is about 150 cycles/millimeter. So, the telescope's optical resolution is about twice that of high-resolution film, and a crisp, sharp picture would result (provided focus is perfect and atmospheric turbulence is at a minimum). This formula gives the best-case resolution performance and is valid only for perfect optical systems. The presence of aberrations reduces image contrast and can effectively reduce the system spatial cutoff frequency if the image contrast falls below the ability of the imaging device to discern. The coherent case is given by f_c = 1/(2λN). See also Modulation transfer function Superlens
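The formula is simple enough to tabulate. The snippet below evaluates the incoherent and coherent cutoff frequencies for a few focal ratios at 0.55 micrometers, reproducing the roughly 303 cycles/mm figure quoted above for an f/6 system.

def incoherent_cutoff(wavelength_mm, f_number):
    """Diffraction-limited incoherent cutoff frequency, in cycles per millimetre."""
    return 1.0 / (wavelength_mm * f_number)

def coherent_cutoff(wavelength_mm, f_number):
    """Coherent cutoff is half the incoherent value."""
    return 1.0 / (2.0 * wavelength_mm * f_number)

wl = 0.55e-3  # 0.55 micrometres expressed in millimetres
for N in (2.8, 6, 11):
    print(f"f/{N}: incoherent {incoherent_cutoff(wl, N):6.0f} cy/mm, "
          f"coherent {coherent_cutoff(wl, N):6.0f} cy/mm")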
https://en.wikipedia.org/wiki/False%20discovery%20rate
In statistics, the false discovery rate (FDR) is a method of conceptualizing the rate of type I errors in null hypothesis testing when conducting multiple comparisons. FDR-controlling procedures are designed to control the FDR, which is the expected proportion of "discoveries" (rejected null hypotheses) that are false (incorrect rejections of the null). Equivalently, the FDR is the expected ratio of the number of false positive classifications (false discoveries) to the total number of positive classifications (rejections of the null). The total number of rejections of the null include both the number of false positives (FP) and true positives (TP). Simply put, FDR = FP / (FP + TP). FDR-controlling procedures provide less stringent control of Type I errors compared to family-wise error rate (FWER) controlling procedures (such as the Bonferroni correction), which control the probability of at least one Type I error. Thus, FDR-controlling procedures have greater power, at the cost of increased numbers of Type I errors. History Technological motivations The modern widespread use of the FDR is believed to stem from, and be motivated by, the development in technologies that allowed the collection and analysis of a large number of distinct variables in several individuals (e.g., the expression level of each of 10,000 different genes in 100 different persons). By the late 1980s and 1990s, the development of "high-throughput" sciences, such as genomics, allowed for rapid data acquisition. This, coupled with the growth in computing power, made it possible to seamlessly perform a very high number of statistical tests on a given data set. The technology of microarrays was a prototypical example, as it enabled thousands of genes to be tested simultaneously for differential expression between two biological conditions. As high-throughput technologies became common, technological and/or financial constraints led researchers to collect datasets with relatively small sample si
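The most widely used FDR-controlling procedure is the Benjamini–Hochberg step-up procedure, and it is short enough to sketch directly. The implementation below is a minimal illustration (the p-values are made up): hypotheses are sorted by p-value and every hypothesis up to the largest rank k with p_(k) <= (k/m)*alpha is rejected.

import numpy as np

def benjamini_hochberg(pvalues, alpha=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns a boolean array marking which null hypotheses are rejected
    while controlling the expected false discovery rate at level alpha.
    """
    p = np.asarray(pvalues)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # largest rank k (1-based) with p_(k) <= k/m * alpha
    thresholds = (np.arange(1, m + 1) / m) * alpha
    below = ranked <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # index of largest qualifying rank
        reject[order[:k + 1]] = True       # reject all hypotheses up to rank k
    return reject

# Made-up p-values from 10 independent tests
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.36]
print(benjamini_hochberg(pvals, alpha=0.05))   # only the two smallest are rejected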
https://en.wikipedia.org/wiki/PIPES
PIPES is the common name for piperazine-N,N-bis(2-ethanesulfonic acid), and is a frequently used buffering agent in biochemistry. It is an ethanesulfonic acid buffer developed by Good et al. in the 1960s. Applications PIPES has two pKa values. One pKa (6.76 at 25 °C) is near the physiological pH which makes it useful in cell culture work. Its effective buffering range is 6.1-7.5 at 25 °C. The second pKa value is at 2.67 with a buffer range of from 1.5-3.5. PIPES has been documented minimizing lipid loss when buffering glutaraldehyde histology in plant and animal tissues. Fungal zoospore fixation for fluorescence microscopy and electron microscopy were optimized with a combination of glutaraldehyde and formaldehyde in PIPES buffer. It has a negligible capacity to bind divalent ions. See also MOPS HEPES MES Tris Common buffer compounds used in biology Good's buffers
https://en.wikipedia.org/wiki/Heishansaurus
Heishansaurus, meaning "Heishan lizard" after the area in China where it was discovered, is the name given to a dubious genus of herbivorous ornithischian dinosaur, probably belonging to the Ankylosauridae. In 1930, Swedish palaeontologist Anders Birger Bohlin discovered dinosaur fossils, in the context of the Swedish-Chinese expeditions headed by Sven Hedin, near Jiayuguan ("Chia-Yu-Kuan"), in the west of Gansu Province. In 1953, Bohlin named these as the type species Heishansaurus pachycephalus. The generic name refers to the Heishan, the "Black Mountains". The specific name pachycephalus, meaning "thick-headed", was inspired by Bohlin's identification of the taxon as a pachycephalosaur. Today this dinosaur is more probably considered an ankylosaur. The fossils, from the Minhe Formation dating from the Late Cretaceous (Campanian or Maastrichtian stage), were fragmentary. The type is the only known specimen. The material consisted of poorly preserved cranial and postcranial fragments plus some dermal scutes. It contained skull fragments including a maxilla, teeth, vertebrae from the neck, back and tail, osteoderms and spikes. Today, the specimen is lost. Of one dorsal vertebra a cast remains, preserved in the American Museum of Natural History with the inventory number AMNH 2062. Bohlin considered the species to be a member of the pachycephalosaurians because he mistook an osteoderm for the thick skull roof typical of this group. The material is probably ankylosaurid. It has been seen as a junior synonym of Pinacosaurus but the genus is more generally considered a nomen dubium, especially since Bohlin's description can only be checked by comparison with his published drawings. Notes
https://en.wikipedia.org/wiki/Wadge%20hierarchy
In descriptive set theory, within mathematics, Wadge degrees are levels of complexity for sets of reals. Sets are compared by continuous reductions. The Wadge hierarchy is the structure of Wadge degrees. These concepts are named after William W. Wadge. Wadge degrees Suppose A and B are subsets of Baire space ω^ω. Then A is Wadge reducible to B, or A ≤W B, if there is a continuous function f on ω^ω with A = f^-1[B]. The Wadge order is the preorder or quasiorder ≤W on the subsets of Baire space. Equivalence classes of sets under this preorder are called Wadge degrees, the degree of a set A is denoted by [A]W. The set of Wadge degrees ordered by the Wadge order is called the Wadge hierarchy. Properties of Wadge degrees include their consistency with measures of complexity stated in terms of definability. For example, if A ≤W B and B is a countable intersection of open sets, then so is A. The same works for all levels of the Borel hierarchy and the difference hierarchy. The Wadge hierarchy plays an important role in models of the axiom of determinacy. Further interest in Wadge degrees comes from computer science, where some papers have suggested Wadge degrees are relevant to algorithmic complexity. Wadge's lemma states that under the axiom of determinacy (AD), for any two subsets A and B of Baire space, A ≤W B or B ≤W ω^ω \ A. The assertion that the Wadge lemma holds for sets in Γ is the semilinear ordering principle for Γ or SLO(Γ). Any such semilinear order defines a linear order on the equivalence classes modulo complements. Wadge's lemma can be applied locally to any pointclass Γ, for example the Borel sets, Δ^1_n sets, Σ^1_n sets, or Π^1_n sets. It follows from determinacy of differences of sets in Γ. Since Borel determinacy is proved in ZFC, ZFC implies Wadge's lemma for Borel sets. Wadge's lemma is similar to the cone lemma from computability theory. Wadge's lemma via Wadge and Lipschitz games The Wadge game is a simple infinite game discovered by William Wadge (pronounced "wage"). It is used to investigate the notion of co
https://en.wikipedia.org/wiki/Write%20buffer
A write buffer is a type of data buffer that can be used to hold data being written from the cache to main memory or to the next cache in the memory hierarchy to improve performance and reduce latency. It is used in certain CPU cache architectures like Intel's x86 and AMD64. In multi-core systems, write buffers destroy sequential consistency. Some software disciplines, like C11's data-race-freedom, are sufficient to regain a sequentially consistent view of memory. A variation of write-through caching is called buffered write-through. Use of a write buffer in this manner frees the cache to service read requests while the write is taking place. It is especially useful for very slow main memory in that subsequent reads are able to proceed without waiting for long main memory latency. When the write buffer is full (i.e. all buffer entries are occupied), subsequent writes still have to wait until slots are freed. Subsequent reads could be served from the write buffer. To further mitigate this stall, one optimization called write buffer merge may be implemented. Write buffer merge combines writes that have consecutive destination addresses into one buffer entry. Otherwise, they would occupy separate entries which increases the chance of pipeline stall. A victim buffer is a type of write buffer that stores dirty evicted lines in write-back caches so that they get written back to main memory. Besides reducing pipeline stall by not waiting for dirty lines to write back as a simple write buffer does, a victim buffer may also serve as a temporary backup storage when subsequent cache accesses exhibit locality, requesting those recently evicted lines, which are still in the victim buffer. The store buffer was invented by IBM during Project ACS between 1964 and 1968, but it was first implemented in commercial products in the 1990s. Notes
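Write-buffer merging is easy to illustrate with a toy model. The class below is not a description of any particular CPU; it simply shows the bookkeeping: writes falling in the same entry-sized block are folded into an existing entry, and a full buffer forces a drain (the stall described above) before a new entry can be allocated.

class WriteBuffer:
    """Toy write buffer with merging: writes that fall in the same
    entry-sized block are combined into one entry instead of taking two."""

    def __init__(self, num_entries=4, entry_bytes=8):
        self.num_entries = num_entries
        self.entry_bytes = entry_bytes
        self.entries = {}   # block base address -> {offset: byte value}

    def write(self, addr, value):
        base = addr - (addr % self.entry_bytes)
        if base in self.entries:                 # merge into an existing entry
            self.entries[base][addr - base] = value
            return "merged"
        if len(self.entries) >= self.num_entries:
            self.drain()                         # buffer full: write back first
        self.entries[base] = {addr - base: value}
        return "new entry"

    def drain(self):
        # Model the write-back to main memory by simply clearing the buffer
        print("draining", len(self.entries), "entries to memory")
        self.entries.clear()

buf = WriteBuffer()
for a in (0x100, 0x101, 0x102, 0x200, 0x300, 0x400, 0x500):
    print(hex(a), "->", buf.write(a, 0xAB))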
https://en.wikipedia.org/wiki/Flower
A flower, sometimes known as a bloom or blossom, is the reproductive structure found in flowering plants (plants of the division Angiospermae). Flowers produce gametophytes, which in flowering plants consist of a few haploid cells which produce gametes. The "male" gametophyte, which produces non-motile sperm, is enclosed within pollen grains; the "female" gametophyte is contained within the ovule. When pollen from the anther of a flower is deposited on the stigma, this is called pollination. Some flowers may self-pollinate, producing seed using pollen from the same flower or a different flower of the same plant, but others have mechanisms to prevent self-pollination and rely on cross-pollination, when pollen is transferred from the anther of one flower to the stigma of another flower on a different individual of the same species. Self-pollination happens in flowers where the stamen and carpel mature at the same time, and are positioned so that the pollen can land on the flower's stigma. This pollination does not require an investment from the plant to provide nectar and pollen as food for pollinators. Some flowers produce diaspores without fertilization (parthenocarpy). Flowers contain sporangia and are the site where gametophytes develop. Most flowering plants depend on animals, such as bees, moths, and butterflies, to transfer their pollen between different flowers, and have evolved to attract these pollinators by various strategies, including brightly colored, conspicuous petals, attractive scents, and the production of nectar, a food source for pollinators. In this way, many flowering plants have co-evolved with pollinators to be mutually dependent on services they provide to one another—in the plant's case, a means of reproduction; in the pollinator's case, a source of food. After fertilization, the ovary of the flower develops into fruit containing seeds. Flowers have long been appreciated by humans for their beauty and pleasant scents, and also hold cultu
https://en.wikipedia.org/wiki/Brauer%27s%20theorem%20on%20forms
There also is Brauer's theorem on induced characters. In mathematics, Brauer's theorem, named for Richard Brauer, is a result on the representability of 0 by forms over certain fields in sufficiently many variables. Statement of Brauer's theorem Let K be a field such that for every integer r > 0 there exists an integer ψ(r) such that for n ≥ ψ(r) every equation has a non-trivial (i.e. not all xi are equal to 0) solution in K. Then, given homogeneous polynomials f1,...,fk of degrees r1,...,rk respectively with coefficients in K, for every set of positive integers r1,...,rk and every non-negative integer l, there exists a number ω(r1,...,rk,l) such that for n ≥ ω(r1,...,rk,l) there exists an l-dimensional affine subspace M of Kn (regarded as a vector space over K) satisfying An application to the field of p-adic numbers Letting K be the field of p-adic numbers in the theorem, the equation (*) is satisfied, since , b a natural number, is finite. Choosing k = 1, one obtains the following corollary: A homogeneous equation f(x1,...,xn) = 0 of degree r in the field of p-adic numbers has a non-trivial solution if n is sufficiently large. One can show that if n is sufficiently large according to the above corollary, then n is greater than r2. Indeed, Emil Artin conjectured that every homogeneous polynomial of degree r over Qp in more than r2 variables represents 0. This is obviously true for r = 1, and it is well known that the conjecture is true for r = 2 (see, for example, J.-P. Serre, A Course in Arithmetic, Chapter IV, Theorem 6). See quasi-algebraic closure for further context. In 1950 Demyanov verified the conjecture for r = 3 and p ≠ 3, and in 1952 D. J. Lewis independently proved the case r = 3 for all primes p. But in 1966 Guy Terjanian constructed a homogeneous polynomial of degree 4 over Q2 in 18 variables that has no non-trivial zero. On the other hand, the Ax–Kochen theorem shows that for any fixed degree Artin's conjecture is true for all but finitely ma
https://en.wikipedia.org/wiki/Plant%20collecting
Plant collecting is the acquisition of plant specimens for the purposes of research, cultivation, or as a hobby. Plant specimens may be kept alive, but are more commonly dried and pressed to preserve the quality of the specimen. Plant collecting is an ancient practice with records of a Chinese botanist collecting roses over 5000 years ago. Herbaria are collections of preserved plants samples and their associated data for scientific purposes. The largest herbarium in the world exist at the Muséum National d'Histoire Naturelle, in Paris, France. Plant samples in herbaria typically include a reference sheet with information about the plant and details of collection. This detailed and organized system of filing provides horticulturist and other researchers alike with a way to find information about a certain plant, and a way to add new information to an existing plant sample file. The collection of live plant specimens from the wild, sometimes referred to as plant hunting, is an activity that has occurred for centuries. The earliest recorded evidence of plant hunting was in 1495 BC when botanists were sent to Somalia to collect incense trees for Queen Hatshepsut. The Victorian era saw a surge in plant hunting activity as botanical adventurers explored the world to find exotic plants to bring home, often at considerable personal risk. These plants usually ended up in botanical gardens or the private gardens of wealthy collectors. Prolific plant hunters in this period included William Lobb and his brother Thomas Lobb, George Forrest, Joseph Hooker, Charles Maries and Robert Fortune. Sample gathering The first step of plant collection begins with the selection of the sample. When collecting a sample it is important to first make sure that land you are collecting on allows for the removal of natural specimens. The next step after finding a suitable plant for collection is to assign it with a number for record keeping purposes. This number system is up to the individual
https://en.wikipedia.org/wiki/Classification%20theorem
In mathematics, a classification theorem answers the classification problem "What are the objects of a given type, up to some equivalence?". It gives a non-redundant enumeration: each object is equivalent to exactly one class. A few issues related to classification are the following. The equivalence problem is "given two objects, determine if they are equivalent". A complete set of invariants, together with which invariants are realizable, solves the classification problem, and is often a step in solving it. A computable complete set of invariants (together with which invariants are realizable) solves both the classification problem and the equivalence problem. A canonical form solves the classification problem, and is more data: it not only classifies every class, but provides a distinguished (canonical) element of each class. There exist many classification theorems in mathematics, as described below. Geometry Classification of Euclidean plane isometries Classification theorems of surfaces Classification of two-dimensional closed manifolds Enriques–Kodaira classification of algebraic surfaces (complex dimension two, real dimension four) Nielsen–Thurston classification which characterizes homeomorphisms of a compact surface Thurston's eight model geometries, and the geometrization conjecture Berger classification Classification of Riemannian symmetric spaces Classification of 3-dimensional lens spaces Classification of manifolds Algebra Classification of finite simple groups Classification of Abelian groups Classification of Finitely generated abelian group Classification of Rank 3 permutation group Classification of 2-transitive permutation groups Artin–Wedderburn theorem — a classification theorem for semisimple rings Classification of Clifford algebras Classification of low-dimensional real Lie algebras Classification of Simple Lie algebras and groups Classification of simple complex Lie algebras Classification of simple real Lie algebras Classification of centerless simple Lie gro
https://en.wikipedia.org/wiki/Essential%20fatty%20acid%20interactions
There are many fatty acids found in nature. The two essential fatty acids are omega-3 and omega-6, which are necessary for good human health. However, the effects of the ω-3 (omega-3) and ω-6 (omega-6) essential fatty acids (EFAs) are characterized by their interactions. The interactions between these two fatty acids directly effect the signaling pathways and biological functions like inflammation, protein synthesis, neurotransmitters in our brain, and metabolic pathways in the human body. Arachidonic acid (AA) is a 20-carbon omega-6 essential fatty acid. It sits at the head of the "arachidonic acid cascade," which initiates 20 different signaling paths that control a wide array of biological functions, including inflammation, cell growth, and the central nervous system. Most AA in the human body is derived from dietary linoleic acid (18:2 ω-6), which is found in nuts, seeds, vegetable oils, and animal fats. During inflammation, two other groups of dietary essential fatty acids form cascade that compete with the arachidonic acid cascade. EPA (20:5 ω-3) provides the most important competing cascade. EPA is ingested from oily fish, algae oil, or alpha-linolenic acid (derived from walnuts, hemp oil, and flax oil). DGLA (20:3 ω-6) provides a third, less prominent cascade. It is derived from dietary GLA (18:3 ω-6) found in borage oil. These two parallel cascades soften the inflammatory-promoting effects of specific eicosanoids made from AA. The diet from a century ago had much less ω-3 than the diet of early hunter-gatherers but also much less pollution than today, which evokes the inflammatory response. We can also look at the ratio of ω-3 to ω-6 in comparison with their diets. These changes have been accompanied by increased rates of many diseases—the so-called diseases of civilization—that involve inflammatory processes. There is now very strong evidence that several of these diseases are ameliorated by increasing dietary ω-3. There is also more preliminary eviden
https://en.wikipedia.org/wiki/Prodynorphin
Prodynorphin, also known as proenkephalin B, is an opioid polypeptide hormone involved with chemical signal transduction and cell communication. The gene for prodynorphin is expressed in the endometrium and the striatum, and its gene map locus is 20pter-p12. Prodynorphin is a basic building-block of endorphins, the chemical messengers in the brain that appear most heavily involved in the anticipation and experience of pain and the formation of deep emotional bonds, and that are also critical in learning and memory. The gene is thought to influence perception, as well as susceptibility to drug dependence, and is expressed more readily in human beings than in other primates. Evolutionary implications Most humans have multiple copies of the regulatory gene sequence for prodynorphin, which is virtually identical among all primates, whereas other primates have only a single copy. In addition, most Asian populations have two copies of the gene sequence for prodynorphin, whereas East Africans, Middle Easterners, and Europeans tend to have three repetitions. The extent of regulatory gene disparities for prodynorphin, between humans and other primates, has gained the attention of scientists. There are very few genes known to be directly related to mankind's speciation from other great apes. According to computational biologist researcher Matthew W. Hahn of Indiana University, "this is the first documented instance of a neural gene that has had its regulation shaped by natural selection during human origins." The prodynorphin polypeptide is identical in humans and chimpanzees, but the regulatory promoter sequences have been shown to exhibit marked differences. According to Hahn, "humans have the ability to turn on this gene more easily and more intensely than other primates", a reason why regulation of this gene may have been important in the evolution of modern humans' mental capacity. See also Dynorphin Proenkephalin Proopiomelanocortin (POMC)
https://en.wikipedia.org/wiki/Convergence%20tests
In mathematics, convergence tests are methods of testing for the convergence, conditional convergence, absolute convergence, interval of convergence or divergence of an infinite series Σ a_n. List of tests Limit of the summand If the limit of the summand is undefined or nonzero, that is lim_{n→∞} a_n ≠ 0, then the series must diverge. In this sense, the partial sums are Cauchy only if this limit exists and is equal to zero. The test is inconclusive if the limit of the summand is zero. This is also known as the nth-term test, test for divergence, or the divergence test. Ratio test This is also known as d'Alembert's criterion. Suppose that there exists r such that lim_{n→∞} |a_{n+1}/a_n| = r. If r < 1, then the series is absolutely convergent. If r > 1, then the series diverges. If r = 1, the ratio test is inconclusive, and the series may converge or diverge. Root test This is also known as the nth root test or Cauchy's criterion. Let r = limsup_{n→∞} |a_n|^{1/n}, where limsup denotes the limit superior (possibly ∞; if the limit exists it is the same value). If r < 1, then the series converges absolutely. If r > 1, then the series diverges. If r = 1, the root test is inconclusive, and the series may converge or diverge. The root test is stronger than the ratio test: whenever the ratio test determines the convergence or divergence of an infinite series, the root test does too, but not conversely. Integral test The series can be compared to an integral to establish convergence or divergence. Let f be a non-negative and monotonically decreasing function such that f(n) = a_n. If ∫_1^∞ f(x) dx < ∞, then the series converges. But if the integral diverges, then the series does so as well. In other words, the series converges if and only if the integral converges. p-series test A commonly-used corollary of the integral test is the p-series test. Let p > 0. Then Σ 1/n^p converges if p > 1. The case of p = 1 yields the harmonic series, which diverges. The case of p = 2 is the Basel problem and the series converges to π²/6. In general, for p > 1, the series is equal to the Riemann zeta function applied to p, t
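The tests above are statements about limits, and a short numeric sketch can make them concrete. The Python snippet below (illustrative helper names, not from any library) estimates the ratio-test and root-test quantities for three familiar series; note that the harmonic series and the p = 2 series both give estimates near 1, matching the tests' inconclusive case even though one diverges and the other converges.

```python
# Numerical illustration of the ratio and root tests (not a proof:
# both tests are statements about limits, which finite samples only suggest).

def ratio_estimate(a, n):
    """Approximate lim |a(n+1)/a(n)| using the terms at index n and n+1."""
    return abs(a(n + 1) / a(n))

def root_estimate(a, n):
    """Approximate limsup |a(n)|^(1/n) using the term at index n."""
    return abs(a(n)) ** (1.0 / n)

series = {
    "geometric 1/2^n": lambda n: 1.0 / 2 ** n,   # limit 1/2 < 1: converges
    "harmonic 1/n":    lambda n: 1.0 / n,        # limit 1: inconclusive, diverges
    "p-series 1/n^2":  lambda n: 1.0 / n ** 2,   # limit 1: inconclusive, converges
}

for name, a in series.items():
    n = 1000
    print(f"{name:18s} ratio~{ratio_estimate(a, n):.4f}  root~{root_estimate(a, n):.4f}")
```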
https://en.wikipedia.org/wiki/Plurix
Plurix is a Unix-like operating system developed in Brazil in the early 1980s. Overview Plurix was developed in the Federal University of Rio de Janeiro (UFRJ), at the Electronic Computing Center (NCE). The NCE researchers, after returning from postgraduate courses in the USA, attempted to license the UNIX source code from AT&T in the late 1970s without success. In 1982, due to AT&T refusing to license the code, a development team led by Newton Faller decided to initiate the development of an alternative system, called Plurix (**), using as reference UNIX Version 7, the most recent at the time, that they had running on an old Motorola computer system. In 1985, the Plurix system was up and running on the Pegasus 32-X, a shared-memory, multi-processor computer also designed at NCE. Plurix was licensed to some Brazilian companies in 1988. Two other Brazilian universities also developed their own UNIX systems: Universidade Federal de Minas Gerais (UFMG) developed the DCC-IX operating system, and University of São Paulo (USP) developed the REAL operating system in 1987. The NCE/UFRJ also offered technical courses on OS design and implementation to local computer companies, some of which later produced their own proprietary UNIX systems. In fact, these Brazilian companies first created an organization of companies interested in UNIX (called API) and tried to license UNIX from AT&T. Their attempts were frustrated at the end of 1986, when AT&T canceled negotiations with API. Some of these companies, EDISA, COBRA, and SOFTEC, invested in the development of their own systems, EDIX, SOX and ANALIX, respectively. AT&T License When AT&T finally licensed their code to Brazilian companies, the majority of them decided to drop their local development, use the licensed code, and just "localize" the system for their purposes. COBRA and NCE/UFRJ kept developing, and tried to convince the Brazilian government to prohibit the further entrance of AT&T UNIX into Brazil, since the
https://en.wikipedia.org/wiki/Optical%20manufacturing%20and%20testing
Optical manufacturing and testing spans an enormous range of manufacturing procedures and optical test configurations. The manufacture of a conventional spherical lens typically begins with the generation of the optic's rough shape by grinding a glass blank. This can be done, for example, with ring tools. Next, the lens surface is polished to its final form. Typically this is done by lapping—rotating and rubbing the rough lens surface against a tool with the desired surface shape, with a mixture of abrasives and fluid in between. Typically a carved pitch tool is used to polish the surface of a lens. The mixture of abrasive is called slurry and it is typically made from cerium or zirconium oxide in water with lubricants added to facilitate pitch tool movement without sticking to the lens. The particle size in the slurry is adjusted to get the desired shape and finish. During polishing, the lens may be tested to confirm that the desired shape is being produced, and to ensure that the final shape has the correct form to within the allowed precision. The deviation of an optical surface from the correct shape is typically expressed in fractions of a wavelength, for some convenient wavelength of light (perhaps the wavelength at which the lens is to be used, or a visible wavelength for which a source is available). Inexpensive lenses may have deviations of form as large as several wavelengths (λ, 2λ, etc.). More typical industrial lenses would have deviations no larger than a quarter wavelength (λ/4). Precision lenses for use in applications such as lasers, interferometers, and holography have surfaces with a tenth of a wavelength (λ/10) tolerance or better. In addition to surface profile, a lens must meet requirements for surface quality (scratches, pits, specks, etc.) and accuracy of dimensions. Fabrication techniques Glass blank manufacturing Batch mixing Casting techniques Annealing schedules and equipment Physical characterization techniques Index of ref
https://en.wikipedia.org/wiki/Edge%20cover
In graph theory, an edge cover of a graph is a set of edges such that every vertex of the graph is incident to at least one edge of the set. In computer science, the minimum edge cover problem is the problem of finding an edge cover of minimum size. It is an optimization problem that belongs to the class of covering problems and can be solved in polynomial time. Definition Formally, an edge cover of a graph G is a set C of edges such that each vertex in G is incident with at least one edge in C. The set C is said to cover the vertices of G. The following figure shows examples of edge coverings in two graphs (the set C is marked with red). A minimum edge covering is an edge covering of smallest possible size. The edge covering number ρ(G) is the size of a minimum edge covering. The following figure shows examples of minimum edge coverings (again, the set C is marked with red). Note that the figure on the right is not only an edge cover but also a matching. In particular, it is a perfect matching: a matching M in which each vertex is incident with exactly one edge in M. A perfect matching (if it exists) is always a minimum edge covering. Examples The set of all edges is an edge cover, assuming that there are no degree-0 vertices. The complete bipartite graph K_{m,n} has edge covering number max(m, n). Algorithms A smallest edge cover can be found in polynomial time by finding a maximum matching and extending it greedily so that all vertices are covered. In the following figure, a maximum matching is marked with red; the extra edges that were added to cover unmatched nodes are marked with blue. (The figure on the right shows a graph in which a maximum matching is a perfect matching; hence it already covers all vertices and no extra edges were needed.) On the other hand, the related problem of finding a smallest vertex cover is an NP-hard problem. Looking at the image it already becomes obvious why, for a given minimum edge cover C and maximum matching M, the following is true: |C| + |M| = |V|. Countin
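As a rough sketch of the algorithm just described, the Python snippet below builds a minimum edge cover from a maximum matching by adding one incident edge per unmatched vertex. It assumes the networkx library is available and that max_weight_matching returns a set of vertex pairs (true of recent networkx releases); the function name here is illustrative, and recent networkx versions also ship a built-in min_edge_cover helper.

```python
# Build a minimum edge cover from a maximum matching, as described above.
import networkx as nx

def min_edge_cover_via_matching(G):
    if any(G.degree(v) == 0 for v in G.nodes):
        raise ValueError("graphs with isolated vertices have no edge cover")
    matching = nx.max_weight_matching(G, maxcardinality=True)  # set of (u, v) pairs
    cover = set(matching)
    matched = {v for edge in matching for v in edge}
    for v in G.nodes:
        if v not in matched:
            u = next(iter(G.neighbors(v)))   # any incident edge covers v
            cover.add((v, u))
    return cover

# Example: a path on 4 vertices; its perfect matching already covers every vertex.
G = nx.path_graph(4)
cover = min_edge_cover_via_matching(G)
print(sorted(tuple(sorted(e)) for e in cover))   # e.g. [(0, 1), (2, 3)]
```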
https://en.wikipedia.org/wiki/Elementary%20divisors
In algebra, the elementary divisors of a module over a principal ideal domain (PID) occur in one form of the structure theorem for finitely generated modules over a principal ideal domain. If R is a PID and M a finitely generated R-module, then M is isomorphic to a finite sum of the form M ≅ R^r ⊕ R/(q_1) ⊕ ⋯ ⊕ R/(q_l), where the (q_i) are nonzero primary ideals. The list of primary ideals is unique up to order (but a given ideal may be present more than once, so the list represents a multiset of primary ideals); the elements q_i are unique only up to associatedness, and are called the elementary divisors. Note that in a PID, the nonzero primary ideals are powers of prime ideals, so the elementary divisors can be written as powers of irreducible elements. The nonnegative integer r is called the free rank or Betti number of the module M. The module is determined up to isomorphism by specifying its free rank r, and for each class of associated irreducible elements p and each positive integer k the number of times that p^k occurs among the elementary divisors. The elementary divisors can be obtained from the list of invariant factors of the module by decomposing each of them as far as possible into pairwise relatively prime (non-unit) factors, which will be powers of irreducible elements. This decomposition corresponds to maximally decomposing each submodule corresponding to an invariant factor by using the Chinese remainder theorem for R. Conversely, knowing the multiset L of elementary divisors, the invariant factors can be found, starting from the final one (which is a multiple of all others), as follows. For each irreducible element p such that some power p^k occurs in L, take the highest such power, removing it from L, and multiply these powers together for all (classes of associated) p to give the final invariant factor; as long as L is non-empty, repeat to find the invariant factors before it. See also Invariant factors
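For R = Z, the passage between invariant factors and elementary divisors described above amounts to prime factorisation. The Python sketch below (hypothetical helper names, not from any standard library) decomposes a list of invariant factors into elementary divisors and reassembles the invariant factors by repeatedly collecting the highest remaining power of each prime.

```python
# Worked example over R = Z (a PID): invariant factors <-> elementary divisors.
from collections import Counter

def prime_power_factors(n):
    """Return the prime-power factors of n, e.g. 12 -> [4, 3]."""
    factors, p = Counter(), 2
    while p * p <= n:
        while n % p == 0:
            factors[p] += 1
            n //= p
        p += 1
    if n > 1:
        factors[n] += 1
    return [q ** k for q, k in factors.items()]

def smallest_prime_factor(n):
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p
        p += 1
    return n

def elementary_divisors(invariant_factors):
    """Split each invariant factor into pairwise coprime prime powers."""
    divs = []
    for d in invariant_factors:
        divs.extend(prime_power_factors(d))
    return sorted(divs)

def invariant_factors(elementary_divs):
    """Reverse direction: repeatedly combine the highest remaining power of each prime."""
    remaining = sorted(elementary_divs)
    result = []
    while remaining:
        picked, seen, leftover = 1, set(), []
        for q in reversed(remaining):            # largest powers first
            p = smallest_prime_factor(q)
            if p in seen:
                leftover.append(q)
            else:
                seen.add(p)
                picked *= q
        result.append(picked)
        remaining = sorted(leftover)
    return list(reversed(result))                # smallest factor first, each divides the next

# Z/6 + Z/12 (invariant factors 6 | 12) has elementary divisors 2, 3, 3, 4:
print(elementary_divisors([6, 12]))      # -> [2, 3, 3, 4]
print(invariant_factors([2, 3, 3, 4]))   # -> [6, 12]
```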
https://en.wikipedia.org/wiki/Geometric%20modeling
Geometric modeling is a branch of applied mathematics and computational geometry that studies methods and algorithms for the mathematical description of shapes. The shapes studied in geometric modeling are mostly two- or three-dimensional (solid figures), although many of its tools and principles can be applied to sets of any finite dimension. Today most geometric modeling is done with computers and for computer-based applications. Two-dimensional models are important in computer typography and technical drawing. Three-dimensional models are central to computer-aided design and manufacturing (CAD/CAM), and widely used in many applied technical fields such as civil and mechanical engineering, architecture, geology and medical image processing. Geometric models are usually distinguished from procedural and object-oriented models, which define the shape implicitly by an opaque algorithm that generates its appearance. They are also contrasted with digital images and volumetric models which represent the shape as a subset of a fine regular partition of space; and with fractal models that give an infinitely recursive definition of the shape. However, these distinctions are often blurred: for instance, a digital image can be interpreted as a collection of colored squares; and geometric shapes such as circles are defined by implicit mathematical equations. Also, a fractal model yields a parametric or implicit model when its recursive definition is truncated to a finite depth. Notable awards of the area are the John A. Gregory Memorial Award and the Bézier award. See also 2D geometric modeling Architectural geometry Computational conformal geometry Computational topology Computer-aided engineering Computer-aided manufacturing Digital geometry Geometric modeling kernel List of interactive geometry software Parametric equation Parametric surface Solid modeling Space partitioning
https://en.wikipedia.org/wiki/Tilt%20%28optics%29
In optics, tilt is a deviation in the direction a beam of light propagates. Overview Tilt quantifies the average slope in both the X and Y directions of a wavefront or phase profile across the pupil of an optical system. In conjunction with piston (the first Zernike polynomial term), X and Y tilt can be modeled using the second and third Zernike polynomials: X-Tilt: a_2 ρ cos θ; Y-Tilt: a_3 ρ sin θ, where ρ is the normalized radius with 0 ≤ ρ ≤ 1 and θ is the azimuthal angle with 0 ≤ θ ≤ 2π. The a_2 and a_3 coefficients are typically expressed as a fraction of a chosen wavelength of light. Piston and tilt are not actually true optical aberrations, as they do not represent or model curvature in the wavefront. Defocus is the lowest order true optical aberration. If piston and tilt are subtracted from an otherwise perfect wavefront, a perfect, aberration-free image is formed. Rapid optical tilts in both X and Y directions are termed jitter. Jitter can arise from three-dimensional mechanical vibration, and from rapidly varying 3D refraction in aerodynamic flowfields. Jitter may be compensated in an adaptive optics system by using a flat mirror mounted on a dynamic two-axis mount that allows small, rapid, computer-controlled changes in the mirror X and Y angles. This is often termed a "fast steering mirror", or FSM. A gimbaled optical pointing system cannot mechanically track an object or stabilize a projected laser beam to much better than several hundred microradians. Buffeting due to aerodynamic turbulence further degrades the pointing stability. Light, however, has no appreciable momentum, and by reflecting from a computer-driven FSM, an image or laser beam can be stabilized to single microradians, or even a few hundred nanoradians. This almost totally eliminates image blurring due to motion, and far-field laser beam jitter. Limitations on the degree of line-of-sight stabilization arise from the limited dynamic range of the FSM tilt, and the highest frequency the mirror tilt angle can be change
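Since piston and tilt carry no curvature, they are routinely fitted and subtracted before a wavefront is analysed for true aberrations. The sketch below (assuming numpy is available; function names are illustrative) removes piston and X/Y tilt from a sampled wavefront by least-squares fitting the terms 1, ρ cos θ and ρ sin θ over the unit pupil.

```python
# Remove piston and X/Y tilt from a sampled wavefront by least-squares fitting.
import numpy as np

def remove_piston_and_tilt(wavefront, x, y):
    """wavefront, x, y: 2-D arrays on a grid normalised so the pupil is rho <= 1."""
    pupil = np.hypot(x, y) <= 1.0
    basis = np.column_stack([
        np.ones(pupil.sum()),        # piston
        x[pupil],                    # rho*cos(theta) = x on normalised coordinates
        y[pupil],                    # rho*sin(theta) = y
    ])
    coeffs, *_ = np.linalg.lstsq(basis, wavefront[pupil], rcond=None)
    residual = wavefront.copy()
    residual[pupil] -= basis @ coeffs
    return residual, coeffs          # coeffs = [piston, x-tilt, y-tilt] in wavefront units

# A pure tilt of 0.25 waves across the pupil is removed almost completely:
x, y = np.meshgrid(np.linspace(-1, 1, 65), np.linspace(-1, 1, 65))
w = 0.25 * x
res, c = remove_piston_and_tilt(w, x, y)
print(np.round(c, 3), float(np.abs(res[np.hypot(x, y) <= 1]).max()))
```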
https://en.wikipedia.org/wiki/Apollonius%27s%20theorem
In geometry, Apollonius's theorem is a theorem relating the length of a median of a triangle to the lengths of its sides. It states that "the sum of the squares of any two sides of any triangle equals twice the square on half the third side, together with twice the square on the median bisecting the third side". Specifically, in any triangle ABC, if AD is a median, then AB² + AC² = 2(AD² + BD²). It is a special case of Stewart's theorem. For an isosceles triangle with AB = AC, the median AD is perpendicular to BC and the theorem reduces to the Pythagorean theorem for triangle ADB (or triangle ADC). From the fact that the diagonals of a parallelogram bisect each other, the theorem is equivalent to the parallelogram law. The theorem is named for the ancient Greek mathematician Apollonius of Perga. Proof The theorem can be proved as a special case of Stewart's theorem, or can be proved using vectors (see parallelogram law). The following is an independent proof using the law of cosines. Let the triangle have sides a, b, c with a median d drawn to side a. Let m be the length of the segments of a formed by the median, so m is half of a. Let the angles formed between a and d be θ and θ′, where θ includes b and θ′ includes c. Then θ′ is the supplement of θ and cos θ′ = −cos θ. The law of cosines for θ and θ′ states that b² = m² + d² − 2dm cos θ, c² = m² + d² − 2dm cos θ′ = m² + d² + 2dm cos θ. Add the first and third equations to obtain b² + c² = 2m² + 2d², as required. See also
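A quick numeric check, not a proof, can confirm the statement for any concrete triangle: the Python snippet below (arbitrary coordinates chosen for illustration) compares AB² + AC² with 2(AD² + BD²) for the median AD.

```python
# Numeric check of Apollonius's theorem: D is the midpoint of BC, AD a median.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

A, B, C = (0.3, 2.1), (-1.7, -0.4), (2.5, 0.9)       # arbitrary example triangle
D = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)           # midpoint of BC

lhs = dist(A, B) ** 2 + dist(A, C) ** 2
rhs = 2 * (dist(A, D) ** 2 + dist(B, D) ** 2)
print(lhs, rhs)   # the two values agree up to floating-point rounding
```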
https://en.wikipedia.org/wiki/Jan%20L.%20A.%20van%20de%20Snepscheut
Johannes Lambertus Adriana van de Snepscheut (12 September 1953 – 23 February 1994) was a computer scientist and educator. He was a student of Martin Rem and Edsger Dijkstra. At the time of his death he was the executive officer of the computer science department at the California Institute of Technology. He was also developing an editor for proving theorems called "Proxac". In the early morning hours of February 23, 1994, van de Snepscheut attacked his sleeping wife, Terre, with an axe. He then set their house on fire, and died as it burned around him. Terre and their three children escaped their burning home. Bibliography Jan L. A. Van De Snepscheut, Gerrit A. Slavenburg, Introducing the notion of processes to hardware, ACM SIGARCH Computer Architecture News, April 1979. Jan L. A. Van De Snepscheut, Trace Theory and VLSI Design, Lecture Notes in Computer Science, Volume 200, Springer, 1985. Jan L. A. Van De Snepscheut, What computing is all about. Springer, 1993.
https://en.wikipedia.org/wiki/Magnetix
Magnetix is a magnetic construction toy that combines plastic building pieces containing embedded neodymium magnets, and steel bearing balls that can be connected to form geometric shapes and structures. Designed to be a cheaper version of the Geomag magnetic construction set, Magnetix's image suffered severely when an early manufacturing defect caused a death. It was sold under various brands after the defect was corrected. Design The spheres are to in diameter (larger than Geomag), approximately in weight, and are prone to surface corrosion, unlike most other magnetic construction toys. The bars with magnets at each end are long, or , or and flexible, or short rigid curves. Panel shapes include two types of interlocking triangles, interlockable squares, and circle or disks. The triangles and squares identify the North-South polarity of one of their embedded magnets. The disks identify all four magnets. Popularity According to TD Monthly, a trade magazine for the toy industry, Magnetix sets were among the top 10 most-wanted building sets in 2005, and top sellers on web sites including Amazon.com, KB Toys.com, and Walmart.com. Product recalls On March 31, 2006, the Consumer Product Safety Commission (CPSC) ordered a recall of all Magnetix brand magnetic building sets. The official CPSC recall notice was issued after the death of a small child and four serious injuries requiring surgery. "Consumers should stop using recalled products immediately unless otherwise instructed," according to the recall notice. Other brands of magnetic builders, such as Geomag were not recalled. On April 19, 2007, CPSC ordered further Magnetix recalls, recalling over 4 million sets. "To date, CPSC and Mega Brands are aware of one death, one aspiration and 27 intestinal injuries. Emergency surgical intervention was needed in all but one case. At least 1,500 incidents of magnets separating from the building pieces have been reported. ... If a child swallows more than one tiny p
https://en.wikipedia.org/wiki/History%20of%20penicillin
The history of penicillin follows observations and discoveries of evidence of antibiotic activity of the mould Penicillium that led to the development of penicillins that became the first widely used antibiotics. Following the production of a relatively pure compound in 1942, penicillin was the first naturally-derived antibiotic. Ancient societies used moulds to treat infections, and in the following centuries many people observed the inhibition of bacterial growth by moulds. While working at St Mary's Hospital in London in 1928, Scottish physician Alexander Fleming was the first to experimentally determine that a Penicillium mould secretes an antibacterial substance, which he named "penicillin". The mould was found to be a variant of Penicillium notatum (now called Penicillium rubens), a contaminant of a bacterial culture in his laboratory. The work on penicillin at St Mary's ended in 1929. In 1939, a team of scientists at the Sir William Dunn School of Pathology at the University of Oxford, led by Howard Florey that included Edward Abraham, Ernst Chain, Mary Ethel Florey, Norman Heatley and Margaret Jennings, began researching penicillin. They developed a method for cultivating the mould and extracting, purifying and storing penicillin from it, together with an assay for measuring its purity. They carried out experiments with animals to determine penicillin's safety and effectiveness before conducting clinical trials and field tests. They derived its chemical structure and determined how it works. The private sector and the United States Department of Agriculture located and produced new strains and developed mass production techniques. During the Second World War penicillin became an important part of the Allied war effort, saving thousands of lives. Alexander Fleming, Howard Florey and Ernst Chain shared the 1945 Nobel Prize in Physiology or Medicine for the discovery and development of penicillin. After the end of the war in 1945, penicillin became widely a
https://en.wikipedia.org/wiki/Power%20module
A power module or power electronic module provides the physical containment for several power components, usually power semiconductor devices. These power semiconductors (so-called dies) are typically soldered or sintered on a power electronic substrate that carries the power semiconductors, provides electrical and thermal contact and electrical insulation where needed. Compared to discrete power semiconductors in plastic housings as TO-247 or TO-220, power packages provide a higher power density and are in many cases more reliable. Module Topologies Besides modules that contain a single power electronic switch (as MOSFET, IGBT, BJT, Thyristor, GTO or JFET) or diode, classical power modules contain multiple semiconductor dies that are connected to form an electrical circuit of a certain structure, called topology. Modules also contain other components such as ceramic capacitors to minimize switching voltage overshoots and NTC thermistors to monitor the module's substrate temperature. Examples of broadly available topologies implemented in modules are: switch (MOSFET, IGBT), with antiparallel Diode; bridge rectifier containing four (1-phase) or six (3-phase) diodes half bridge (inverter leg, with two switches and their corresponding antiparallel diodes) H-Bridge (four switches and the corresponding antiparallel diodes) boost or power factor correction (one (or two) switches with one (or two) high frequency rectifying diodes) ANPFC (power factor correction leg with two switches and their corresponding antiparallel diodes and four high frequency rectifying diodes) three level NPC (I-Type) (multilevel inverter leg with four switches and their corresponding antiparallel diodes) three level MNPC (T-Type) (multilevel inverter leg with four switches and their corresponding antiparallel diodes) three level ANPC (multilevel inverter leg with six switches and their corresponding antiparallel diodes) three level H6.5 - (consisting of six switches (four fast IGBTs/tw
https://en.wikipedia.org/wiki/Wireless%20site%20survey
A wireless site survey, sometimes called an RF (Radio Frequency) site survey or wireless survey, is the process of planning and designing a wireless network, to provide a wireless solution that will deliver the required wireless coverage, data rates, network capacity, roaming capability and quality of service (QoS). The survey usually involves a site visit to test for RF interference, and to identify optimum installation locations for access points. This requires analysis of building floor plans, inspection of the facility, and use of site survey tools. Interviews with IT management and the end users of the wireless network are also important to determine the design parameters for the wireless network. As part of the wireless site survey, the effective range boundary is set, which defines the area over which signal levels are sufficient to support the intended application. This involves determining the minimum signal-to-noise ratio (SNR) needed to support performance requirements. Wireless site survey can also mean the walk-testing, auditing, analysis or diagnosis of an existing wireless network, particularly one which is not providing the level of service required. Wireless site survey process Wireless site surveys are typically conducted using computer software that collects and analyses WLAN metrics and/or RF spectrum characteristics. Before a survey, a floor plan or site map is imported into a site survey application and calibrated to set scale. During a survey, a surveyor walks the facility with a portable computer that continuously records the data. The surveyor either marks the current position on the floor plan manually, by clicking on the floor plan, or uses a GPS receiver that automatically marks the current position if the survey is conducted outdoors. After a survey, data analysis is performed and survey results are documented in site survey reports generated by the application. All these data collection, analysis, and visualization tasks are highly automated
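In dB terms the SNR criterion used when setting the range boundary reduces to subtracting the noise floor from the received signal level. The Python sketch below is illustrative only; the 25 dB threshold is a placeholder assumption, not a value taken from any standard.

```python
# Minimal sketch of the SNR check behind the range-boundary decision.
def snr_db(signal_dbm, noise_floor_dbm):
    # In decibel units, SNR is the difference between signal and noise levels.
    return signal_dbm - noise_floor_dbm

def meets_requirement(signal_dbm, noise_floor_dbm, min_snr_db=25):
    """True if a survey point meets the assumed minimum SNR for the application."""
    return snr_db(signal_dbm, noise_floor_dbm) >= min_snr_db

print(meets_requirement(-62, -92))   # 30 dB SNR -> True
print(meets_requirement(-75, -92))   # 17 dB SNR -> False
```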
https://en.wikipedia.org/wiki/Notoceratops
Notoceratops (meaning "southern horned face") is a dubious genus of extinct ornithischian dinosaur based on an incomplete, toothless left dentary (now lost) from the Late Cretaceous of Patagonia (in Argentina), probably dating to the Campanian or Maastrichtian. It was most likely a ceratopsian and it was found in the Lago Colhué Huapi Formation. Discovery and naming In 1918, palaeontologist Augusto Tapia (1893–1966) discovered the genus holotype. He also named the type species, N. bonarellii (originally spelt as Notoceratops Bonarelli), in 1918. The generic name is derived from Greek notos, "the south", keras, "horn" and ops, "face". The specific name honours Guido Bonarelli (1871-1951), who advised Tapia in his study of the find. By present conventions the epithet is spelled bonarellii, thus without a capital B. In many later publications the specific name is misspelled "bonarelli", with a single "i", from the incorrect assumption it would be derived from a Latinised "Bonarell~ius". The fossil, found near the Lago Colhué Huapi in Chubut, was eventually described by Friedrich von Huene in 1929, but it has since been lost. Phylogeny Originally referred as a ceratopsian by Tapia in 1918, it was later dismissed because no other members of that group were known from the Southern Hemisphere. However, the 2003 discovery of another possible ceratopsian, Serendipaceratops, from Australia could change this view. Notoceratops has since been considered a nomen dubium and may have been a hadrosaur instead. An analysis published by Tom Rich et al. in 2014, which focused on the validity of Serendipaceratops, also examined the published material from Notoceratops. They concluded that the holotype had ceratopsian features and that the genus is probably valid.
https://en.wikipedia.org/wiki/Contactless%20smart%20card
A contactless smart card is a contactless credential whose dimensions are credit card size. Its embedded integrated circuits can store (and sometimes process) data and communicate with a terminal via NFC. Commonplace uses include transit tickets, bank cards and passports. There are two broad categories of contactless smart cards. Memory cards contain non-volatile memory storage components, and perhaps some specific security logic. Contactless smart cards contain read-only RFID called CSN (Card Serial Number) or UID, and a re-writeable smart card microchip that can be transcribed via radio waves. Overview A contactless smart card is characterized as follows: Dimensions are normally credit card size. The ID-1 of ISO/IEC 7810 standard defines them as 85.60 × 53.98 × 0.76 mm (3.370 × 2.125 × 0.030 in). Contains a security system with tamper-resistant properties (e.g. a secure cryptoprocessor, secure file system, human-readable features) and is capable of providing security services (e.g. confidentiality of information in the memory). Assets managed by way of a central administration systems, or applications, which receive or interchange information with the card, such as card hotlisting and updates for application data. Card data is transferred via radio waves to the central administration system through card read-write devices, such as point of sales devices, doorway access control readers, ticket readers, ATMs, USB-connected desktop readers, etc. Benefits Contactless smart cards can be used for identification, authentication, and data storage. They also provide a means of effecting business transactions in a flexible, secure, standard way with minimal human intervention. History Contactless smart cards were first used for electronic ticketing in 1995 in Seoul, South Korea. Since then, smart cards with contactless interfaces have been increasingly popular for payment and ticketing applications such as mass transit. Globally, contactless fare collection is bei
https://en.wikipedia.org/wiki/Myofascial%20pain%20syndrome
Myofascial pain syndrome (MPS), also known as chronic myofascial pain (CMP), is a syndrome characterized by chronic pain in multiple myofascial trigger points ("knots") and fascial (connective tissue) constrictions. It can appear in any body part. Symptoms of a myofascial trigger point include: focal point tenderness, reproduction of pain upon trigger point palpation, hardening of the muscle upon trigger point palpation, pseudo-weakness of the involved muscle, referred pain, and limited range of motion following approximately 5 seconds of sustained trigger point pressure. The cause is believed to be muscle tension or spasms within the affected musculature. Diagnosis is based on the symptoms and possible sleep studies. Treatment may include pain medication, physical therapy, mouth guards, and occasionally benzodiazepine. It is a relatively common cause of temporomandibular pain. Signs and symptoms Primary symptoms include: Localized muscle pain Trigger points that activate the pain (MTrPs) Generally speaking, the muscular pain is steady, aching, and deep. Depending on the case and location the intensity can range from mild discomfort to excruciating and "lightning-like". Knots may be visible or felt beneath the skin. The pain does not resolve on its own, even after typical first-aid self-care such as ice, heat, and rest. Electromyography (EMG) has been used to identify abnormal motor neuron activity in the affected region. A physical exam usually reveals palpable trigger points in affected muscles and taut bands corresponding to the contracted muscles. The trigger points are exquisitely tender spots on the taut bands. Causes The causes of MPS are not fully documented or understood. At least one study rules out trigger points: "The theory of myofascial pain syndrome (MPS) caused by trigger points (TrPs) ... has been refuted. This is not to deny the existence of the clinical phenomena themselves, for which scientifically sound and logically plausible explanat
https://en.wikipedia.org/wiki/Tarhana
Tarhana is a dried food ingredient, based on a fermented mixture of grain and yogurt or fermented milk, found in the cuisines of Central Asia, Southeast Europe and the Middle East. Dry tarhana has a texture of coarse, uneven crumbs, and it is usually made into a thick soup with water, stock, or milk. As it is both acidic and low in moisture, the milk proteins keep for long periods. Tarhana is very similar to some kinds of kashk. Regional variations of the name include Armenian թարխանա tarkhana; Greek τραχανάς trahanas or (ξυνό)χονδρος (xyno)hondros; Persian ترخینه، ترخانه، ترخوانه tarkhineh, tarkhāneh, tarkhwāneh; Kurdish tarxane; Albanian trahana; Bulgarian трахана or тархана; Serbo-Croatian tarana or trahana; Hungarian tarhonya; Turkish tarhana. The Armenian tarkhana is made up of matzoon and eggs mixed with equal amounts of wheat flour and starch. Small pieces of dough are prepared and dried and then kept in glass containers and used mostly in soups, dissolving in hot liquids. The Greek trahanas contains only cracked wheat or a couscous-like paste and fermented milk. The Turkish tarhana consists of cracked wheat (or flour), yoghurt, and vegetables, fermented and then dried. In Cyprus, it is considered a national specialty, and is often served with pieces of halloumi cheese in it. In Albania it is prepared with wheat, yoghurt and butter, and served with hot olive oil and feta cheese. Etymology Hill and Bryer suggest that the term tarhana is related to Greek τρακτόν (trakton, romanized as tractum), a thickener Apicius wrote about in the 1st century CE which most other authors consider to be a sort of cracker crumb. Dalby (1996) connects it to the Greek τραγός/τραγανός (tragos/traganos), described (and condemned) in Galen's Geoponica 3.8. Weaver (2002) also considers it of Western origin. Perry, on the other hand, considers that the phonetic evolution of τραγανός to tarhana is unlikely, and that it probably comes from tarkhwāneh. He considers the resemblance
https://en.wikipedia.org/wiki/Linear%20energy%20transfer
In dosimetry, linear energy transfer (LET) is the amount of energy that an ionizing particle transfers to the material traversed per unit distance. It describes the action of radiation into matter. It is identical to the retarding force acting on a charged ionizing particle travelling through the matter. By definition, LET is a positive quantity. LET depends on the nature of the radiation as well as on the material traversed. A high LET will slow down the radiation more quickly, generally making shielding more effective and preventing deep penetration. On the other hand, the higher concentration of deposited energy can cause more severe damage to any microscopic structures near the particle track. If a microscopic defect can cause larger-scale failure, as is the case in biological cells and microelectronics, the LET helps explain why radiation damage is sometimes disproportionate to the absorbed dose. Dosimetry attempts to factor in this effect with radiation weighting factors. Linear energy transfer is closely related to stopping power, since both equal the retarding force. The unrestricted linear energy transfer is identical to linear electronic stopping power, as discussed below. But the stopping power and LET concepts are different in the respect that total stopping power has the nuclear stopping power component, and this component does not cause electronic excitations. Hence nuclear stopping power is not contained in LET. The appropriate SI unit for LET is the newton, but it is most typically expressed in units of kiloelectronvolts per micrometre (keV/μm) or megaelectronvolts per centimetre (MeV/cm). While medical physicists and radiobiologists usually speak of linear energy transfer, most non-medical physicists talk about stopping power. Restricted and unrestricted LET The secondary electrons produced during the process of ionization by the primary charged particle are conventionally called delta rays, if their energy is large enough so that they thems
https://en.wikipedia.org/wiki/Intercostal%20space
The intercostal space (ICS) is the anatomic space between two ribs (Lat. costa). Since there are 12 ribs on each side, there are 11 intercostal spaces, each numbered for the rib superior to it. Structures in intercostal space several kinds of intercostal muscle intercostal arteries and intercostal veins intercostal lymph nodes intercostal nerves Order of components Muscles There are 3 muscular layers in each intercostal space, consisting of the external intercostal muscle, the internal intercostal muscle, and the thinner innermost intercostal muscle. These muscles help to move the ribs during breathing. Neurovascular bundles Neurovascular bundles are located between the internal intercostal muscle and the innermost intercostal muscle. The neurovascular bundle has a strict order of vein-artery-nerve (VAN), from top to bottom. This neurovascular bundle runs high in the intercostal space, and the smaller collateral neurovascular bundle runs just superior to the lower rib of the space (in the order NAV from superior to inferior). Invasive procedures such as thoracentesis are performed with oblique entry of the instrument, directly above the upper margin of the relevant rib, to avoid damaging the neurovascular bundles. Nerves In reference to the muscles of the thoracic wall, the intercostal nerves and vessels run just behind the internal intercostal muscles: therefore, they are generally covered on the inside by the parietal pleura, except when they are covered by the innermost intercostal muscles, innermost intercostal membrane, subcostal muscles or the transversus thoracis muscle.
https://en.wikipedia.org/wiki/Virtual%20Physiological%20Human
The Virtual Physiological Human (VPH) is a European initiative that focuses on a methodological and technological framework that, once established, will enable collaborative investigation of the human body as a single complex system. The collective framework will make it possible to share resources and observations formed by institutions and organizations, creating disparate but integrated computer models of the mechanical, physical and biochemical functions of a living human body. VPH is a framework which aims to be descriptive, integrative and predictive. Clapworthy et al. state that the framework should be descriptive by allowing laboratory and healthcare observations around the world "to be collected, catalogued, organized, shared and combined in any possible way." It should be integrative by enabling those observations to be collaboratively analyzed by related professionals in order to create "systemic hypotheses." Finally, it should be predictive by encouraging interconnections between extensible and scalable predictive models and "systemic networks that solidify those systemic hypotheses" while allowing observational comparison. The framework is formed by large collections of anatomical, physiological, and pathological data stored in digital format, typically by predictive simulations developed from these collections and by services intended to support researchers in the creation and maintenance of these models, as well as in the creation of end-user technologies to be used in the clinical practice. VPH models aim to integrate physiological processes across different length and time scales (multi-scale modelling). These models make possible the combination of patient-specific data with population-based representations. The objective is to develop a systemic approach which avoids a reductionist approach and seeks not to subdivide biological systems in any particular way by dimensional scale (body, organ, tissue, cells, molecules), by scientific discipline (
https://en.wikipedia.org/wiki/Physiome
The physiome of an individual's or species' physiological state is the description of its functional behavior. The physiome describes the physiological dynamics of the normal intact organism and is built upon information and structure (genome, proteome, and morphome). The term comes from "physio-" (nature) and "-ome" (as a whole). The concept of a physiome project was presented to the International Union of Physiological Sciences (IUPS) by its Commission on Bioengineering in Physiology in 1993. A workshop on designing the Physiome Project was held in 1997. At its world congress in 2001, the IUPS designated the project as a major focus for the next decade. The project is led by the Physiome Commission of the IUPS. Other research initiatives related to the physiome include: The EuroPhysiome Initiative The NSR Physiome Project of the National Simulation Resource (NSR) at the University of Washington, supporting the IUPS Physiome Project The Wellcome Trust Heart Physiome Project, a collaboration between the University of Auckland and the University of Oxford, part of the wider IUPS Physiome Project See also Physiomics Living Human Project Virtual Physiological Human Virtual Physiological Rat Cytome Human Genome Project List of omics topics in biology Cardiophysics
https://en.wikipedia.org/wiki/Retrotransposon%20marker
Retrotransposon markers are components of DNA which are used as cladistic markers. They assist in determining the common ancestry, or not, of related taxa. The "presence" of a given retrotransposon in related taxa suggests their orthologous integration, a derived condition acquired via a common ancestry, while the "absence" of particular elements indicates the plesiomorphic condition prior to integration in more distant taxa. The use of presence/absence analyses to reconstruct the systematic biology of mammals depends on the availability of retrotransposons that were actively integrating before the divergence of a particular species. Details The analysis of SINEs – Short INterspersed Elements – LINEs – Long INterspersed Elements – or truncated LTRs – Long Terminal Repeats – as molecular cladistic markers represents a particularly interesting complement to DNA sequence and morphological data. The reason for this is that retrotransposons are assumed to represent powerful noise-poor synapomorphies. The target sites are relatively unspecific so that the chance of an independent integration of exactly the same element into one specific site in different taxa is not large and may even be negligible over evolutionary time scales. Retrotransposon integrations are currently assumed to be irreversible events; this might change since no eminent biological mechanisms have yet been described for the precise re-excision of class I transposons, but see van de Lagemaat et al. (2005). A clear differentiation between ancestral and derived character state at the respective locus thus becomes possible as the absence of the introduced sequence can be with high confidence considered ancestral. In combination, the low incidence of homoplasy together with a clear character polarity make retrotransposon integration markers ideal tools for determining the common ancestry of taxa by a shared derived transpositional event. The "presence" of a given retrotransposon in related taxa suggests
https://en.wikipedia.org/wiki/Viroplasm
A viroplasm, sometimes called "virus factory" or "virus inclusion", is an inclusion body in a cell where viral replication and assembly occurs. They may be thought of as viral factories in the cell. There are many viroplasms in one infected cell, where they appear dense to electron microscopy. Very little is understood about the mechanism of viroplasm formation. Definition A viroplasm is a perinuclear or a cytoplasmic large compartment where viral replication and assembly occurs. The viroplasm formation is caused by the interactions between the virus and the infected cell, where viral products and cell elements are confined. Groups of viruses that form viroplasms Viroplasms have been reported in many unrelated groups of Eukaryotic viruses that replicate in cytoplasm, however, viroplasms from plant viruses have not been as studied as viroplasms from animal viruses. Viroplasms have been found in the cauliflower mosaic virus, rotavirus, vaccinia virus and the rice dwarf virus. These appear electron-dense under an electron microscope and are insoluble. Structure and formation Viroplasms are localized in the perinuclear area or in the cytoplasm of infected cells and are formed early in the infection cycle. The number and the size of viroplasms depend on the virus, the virus isolate, hosts species, and the stage of the infection. For example, viroplasms of mimivirus have a similar size to the nucleus of its host, the amoeba Acanthamoeba polyphaga. A virus can induce changes in composition and organization of host cell cytoskeletal and membrane compartments, depending on the step of the viral replication cycle. This process involves a number of complex interactions and signaling events between viral and host cell factors. Viroplasms are formed early during the infection; in many cases, the cellular rearrangements caused during virus infection lead to the construction of sophisticated inclusions —viroplasms— in the cell where the factory will be assembled. The
https://en.wikipedia.org/wiki/Molecular%20beacon
Molecular beacons, or molecular beacon probes, are oligonucleotide hybridization probes that can report the presence of specific nucleic acids in homogenous solutions. Molecular beacons are hairpin-shaped molecules with an internally quenched fluorophore whose fluorescence is restored when they bind to a target nucleic acid sequence. This is a novel non-radioactive method for detecting specific sequences of nucleic acids. They are useful in situations where it is either not possible or desirable to isolate the probe-target hybrids from an excess of the hybridization probes. Molecular beacon probes A typical molecular beacon probe is 25 nucleotides long. The middle 15 nucleotides are complementary to the target DNA or RNA and do not base pair with one another, while the five nucleotides at each terminus are complementary to each other rather than to the target DNA. A typical molecular beacon structure can be divided in 4 parts: 1) loop, an 18–30 base pair region of the molecular beacon that is complementary to the target sequence; 2) stem formed by the attachment to both termini of the loop of two short (5 to 7 nucleotide residues) oligonucleotides that are complementary to each other; 3) 5' fluorophore at the 5' end of the molecular beacon, a fluorescent dye is covalently attached; 4) 3' quencher (non fluorescent) dye that is covalently attached to the 3' end of the molecular beacon. When the beacon is in closed loop shape, the quencher resides in proximity to the fluorophore, which results in quenching the fluorescent emission of the latter. If the nucleic acid to be detected is complementary to the strand in the loop, the event of hybridization occurs. The duplex formed between the nucleic acid and the loop is more stable than that of the stem because the former duplex involves more base pairs. This causes the separation of the stem and hence of the fluorophore and the quencher. Once the fluorophore is no longer next to the quencher, illumination of the hybrid
https://en.wikipedia.org/wiki/Boreotropical%20flora
Boreotropical flora were plants that may have formed a belt of vegetation around the Northern Hemisphere during the Eocene epoch. These included forests composed of large, fast-growing trees (such as dawn redwoods) as far north as 80°N. See also Paleocene-Eocene Thermal Maximum
https://en.wikipedia.org/wiki/CER-200
CER ( – Digital Electronic Computer) model 200 is an early digital computer developed by Mihajlo Pupin Institute (Serbia) in 1966. See also CER Computers Mihajlo Pupin Institute History of computer hardware in the SFRY One-of-a-kind computers CER computers
https://en.wikipedia.org/wiki/Sun%20Cloud
Sun Cloud was an on-demand Cloud computing service operated by Sun Microsystems prior to its acquisition by Oracle Corporation. The Sun Cloud Compute Utility provided access to a substantial computing resource over the Internet for US$1 per CPU-hour. It was launched as Sun Grid in March 2006. It was based on and supported open source technologies such as Solaris 10, Sun Grid Engine, and the Java platform. Sun Cloud delivered enterprise computing power and resources over the Internet, enabling developers, researchers, scientists and businesses to optimize performance, speed time to results, and accelerate innovation without investment in IT infrastructure. In early 2010 Oracle announced it was discontinuing the Sun Cloud project. Since Sunday, March 7, 2010, the network.com web site has been inaccessible. Suitable applications A typical application that could run on the Compute Utility fit the following parameters: must be self-contained runs on the Solaris 10 Operating System (OS) is implemented with standard object libraries included with the Solaris 10 OS or user libraries packaged with the executable all executable code must be available on the Compute Utility at time of execution runs to completion under control of shell scripts (no requirement for interactive access) has a total maximum size of applications and data that does not exceed 10 gigabytes can be packaged for upload to Sun Cloud as one or more ZIP files of 300 megabytes or smaller Resources, jobs, and runs Resources are collections of files that contain the user's data and executable. Jobs are a Compute Utility concept that define the elements of the unit of work that is submitted to the Sun Cloud Compute Utility. The major elements of a job include the name of the shell script controlling program execution, required arguments to the shell script, and a list of resources that must be in place for the job to run. A run is a specific instantiation of a Job description submitted to the Su
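A hedged sketch of how the packaging limits quoted above could be checked before upload; the file paths, helper name and reliance on local file sizes are assumptions for illustration, not part of any Sun Cloud tooling.

```python
# Check a prospective job package against the documented limits: each resource
# ZIP at most 300 megabytes, total applications and data at most 10 gigabytes.
import os

MAX_ZIP_BYTES = 300 * 1024 ** 2        # 300 MB per uploaded ZIP
MAX_TOTAL_BYTES = 10 * 1024 ** 3       # 10 GB of applications and data

def check_job_resources(zip_paths):
    total = 0
    for path in zip_paths:
        size = os.path.getsize(path)
        total += size
        if size > MAX_ZIP_BYTES:
            return False, f"{path} exceeds the 300 MB per-ZIP limit"
    if total > MAX_TOTAL_BYTES:
        return False, "combined resources exceed the 10 GB limit"
    return True, "package is within the documented limits"

# ok, message = check_job_resources(["app.zip", "data.zip"])   # hypothetical files
```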
https://en.wikipedia.org/wiki/HRS-100
HRS-100, ХРС-100, GVS-100 or ГВС-100 (see Ref.#1, #2, #3 and #4) was a third generation hybrid computer developed by the Mihajlo Pupin Institute (Serbia, then SFR Yugoslavia) and engineers from the USSR in the period from 1968 to 1971. Three HRS-100 systems were deployed at the Academy of Sciences of the USSR in Moscow and Novosibirsk (Akademgorodok) in 1971 and 1978. More production was contemplated for use in Czechoslovakia and the German Democratic Republic (DDR), but that was not realised. HRS-100 was designed and developed to study dynamical systems in real and accelerated time scales and for efficient solving of a wide array of scientific tasks at the institutes of the Academy of Sciences of the USSR (in the fields: Aerospace-nautics, Energetics, Control engineering, Microelectronics, Telecommunications, Bio-medical investigations, Chemical industry etc.). Overview HRS-100 was composed of: Digital computer: central processor 16 kilowords of 0.9 μs 36-bit magnetic core primary memory, expandable to 64 kilowords. secondary disk storage peripheral devices (teleprinters, punched tape reader/punchers, parallel printers and punched card readers). multiple Analog computer modules Interconnection devices multiple analog and digital Peripheral devices Central processing unit HRS-100 had a 32-bit TTL MSI processor with the following capabilities: four basic arithmetic operations are implemented in hardware for both fixed point and floating point operations Addressing modes: immediate/literal, absolute/direct, relative, unlimited-depth multi-level memory indirect and relative-indirect 7 index registers and dedicated "index arithmetic" hardware 32 interrupt "channels" (10 from within the CPU, 10 from peripherals and 12 from interconnection devices and analog computer) Primary memory Primary memory was made up of 0.9 μs cycle time magnetic core modules. Each 36-bit word is organized as follows: 32 data bits 1 parity bit 3 program protection bits specifying which program (Operating Sys
https://en.wikipedia.org/wiki/TIM-100
The TIM-100 was a PTT teller microcomputer developed by the Mihajlo Pupin Institute (Serbia) in 1985 (Ref.lit. #1). It was based on Intel 80x86 microprocessors and VLSI circuitry. RAM capacity was a maximum of 8 MB, and external memory was provided by 5.25-inch or 3.5-inch floppy disks (Ref. literature #2, #3 and #4). Its multiuser, multitasking operating system was the real-time NRT, and also TRANOS (developed by the PTT office). See also Mihajlo Pupin Institute History of computer hardware in the SFRY Microcomputers
https://en.wikipedia.org/wiki/Biomarker
In biomedical contexts, a biomarker, or biological marker, is a measurable indicator of some biological state or condition. Biomarkers are often measured and evaluated using blood, urine, or soft tissues to examine normal biological processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention. Biomarkers are used in many scientific fields. Medicine Biomarkers used in the medical field, are a part of a relatively new clinical toolset categorized by their clinical applications. The four main classes are molecular, physiologic, histologic and radiographic biomarkers. All four types of biomarkers have a clinical role in narrowing or guiding treatment decisions and follow a sub-categorization of being either predictive, prognostic, or diagnostic. Predictive Predictive molecular, cellular, or imaging biomarkers that pass validation can serve as a method of predicting clinical outcomes. Predictive biomarkers are used to help optimize ideal treatments, and often indicate the likelihood of benefiting from a specific therapy. For example, molecular biomarkers situated at the interface of pathology-specific molecular process architecture and drug mechanism of action promise capturing aspects allowing assessment of an individual treatment response. This offers a dual approach to both seeing trends in retrospective studies and using biomarkers to predict outcomes. For example, in metastatic colorectal cancer predictive biomarkers can serve as a way of evaluating and improving patient survival rates and in the individual case by case scenario, they can serve as a way of sparing patients from needless toxicity that arises from cancer treatment plans. Common examples of predictive biomarkers are genes such as ER, PR and HER2/neu in breast cancer, BCR-ABL fusion protein in chronic myeloid leukaemia, c-KIT mutations in GIST tumours and EGFR1 mutations in NSCLC. Diagnostic Diagnostic biomarkers that meet a burden of proof can serve a role in narrowi