https://en.wikipedia.org/wiki/Jean-Yves%20B%C3%A9ziau
Jean-Yves Béziau (born January 15, 1965, in Orléans, France) is a Swiss professor of logic at the University of Brazil, Rio de Janeiro, and a researcher of the Brazilian Research Council. He is a permanent member and former president of the Brazilian Academy of Philosophy. Before going to Brazil, he was a professor of the Swiss National Science Foundation at the University of Neuchâtel in Switzerland and a researcher at Stanford University working with Patrick Suppes. Career Béziau works in the field of logic—in particular, paraconsistent logic, the square of opposition and universal logic. He holds a Maîtrise and a DEA in Philosophy from Pantheon-Sorbonne University, a PhD in Philosophy from the University of São Paulo, and an MSc and a PhD in Logic and Foundations of Computer Science from Paris Diderot University. Béziau is the editor-in-chief of the journal Logica Universalis and of the South American Journal of Logic—an online, open-access journal—as well as of the Springer book series Studies in Universal Logic. He is also the editor of College Publications' book series Logic PhDs. He has launched four major international series of events: UNILOG (World Congress and School on Universal Logic), SQUARE (World Congress on the Square of Opposition), WOCOLOR (World Congress on Logic and Religion), and LIQ (Logic in Question). Béziau created World Logic Day (January 14). Selected publications "What is paraconsistent logic?" In D. Batens et al. (eds.), Frontiers of Paraconsistent Logic, Research Studies Press, Baldock, 2000, pp. 95–111. Handbook of Paraconsistency (ed. with Walter Carnielli and Dov Gabbay). London: College Publications, 2007. "Semantic computation of truth based on associations already learned" (with Patrick Suppes), Journal of Applied Logic, 2 (2004), pp. 457–467. "From paraconsistent logic to universal logic", Sorites, 12 (2001), pp. 5–32. Logica Universalis: Towards a General Theory of Logic (ed.)
https://en.wikipedia.org/wiki/Bridge%20graft
A bridge graft is a grafting technique used to re-establish the supply of nutrients to the rootstock of a woody perennial when the full thickness of the bark has been removed from part of the trunk. Damage to the innermost layer of the bark, called the phloem, can interrupt the transport of sugars photosynthesised in the leaves down to the roots. Such wounds are often caused by rabbits or other rodents stripping the bark away and girdling the tree. The inability to transport sugars from the leaves down to the root system causes root death after the stored nutrients are exhausted, which eventually results in the plant's death. A bridge graft uses scions to 'bridge' the gap. Each scion is taper cut so that the cambium layers of the scion match those of the tree to which it is being grafted. It is also vital that the scions be placed so that the end which was closest to their own roots before they were cut is at the bottom of the graft, and the end which was closest to the growing tip is at the top. Incorrect placement (cells in the scion being upside down) will result in its death. Once in place, the graft wounds must be completely sealed, in order to facilitate joining together and prevent infection of the site. Where one-quarter or less of the trunk circumference has been girdled, it may not be necessary to use this technique. It is also difficult to apply on small-caliper tree trunks. Steps The Ontario Ministry of Agriculture Factsheet gives details and diagrams for the technique. However, modern arboriculture suggests that the application of pruning paints and wound dressings can inhibit the tree's natural defences, so a person attempting this technique may try it without the application of wound dressing prior to the graft insertion. Most trees will produce callus tissue to compartmentalize the wounded area. This natural defence is stimulated by environmental factors which may include the presence of the 'f
https://en.wikipedia.org/wiki/Dov%20Gabbay
Dov M. Gabbay (born October 26, 1945) is an Israeli logician. He is Augustus De Morgan Professor Emeritus of Logic at the Group of Logic, Language and Computation, Department of Computer Science, King's College London. Work Gabbay has authored over four hundred and fifty research papers and over thirty research monographs. He is editor of several international journals, and of many reference works and handbooks of logic, including the Handbook of Philosophical Logic (with Franz Guenthner), the Handbook of Logic in Computer Science (with Samson Abramsky and T. S. E. Maibaum), and the Handbook of Logic in Artificial Intelligence and Logic Programming (with C. J. Hogger and J. A. Robinson). He is well known for pioneering work on logic in computer science and artificial intelligence, especially the application of (executable) temporal logics in computer science, in particular formal verification, the logical foundations of non-monotonic reasoning and artificial intelligence, the introduction of fibring logics, and the theory of labelled deductive systems. He is chairman and founder of several international conferences, an executive of the European Foundation of Logic, Language and Information, and president of the International IGPL Logic Group. He is founder and joint president of the International Federation of Computational Logic. He is also one of the four founders, and was for many years a council member, of FoLLI, the Association of Logic, Language and Information, from which he is now retired. He remains a life member. He is co-founder, with Jane Spurr, of College Publications, a not-for-profit start-up academic publisher intended to compete with major expensive publishers at affordable prices, and not requiring copyright assignment from authors. A two-volume Festschrift in his honor was published in 2005 by College Publications. Regular positions 1968–1970 – Instructor, Hebrew University of Jerusalem 1970–1973 – Assistant Professor of Philosophy, Stanford Univers
https://en.wikipedia.org/wiki/Principle%20of%20minimum%20energy
The principle of minimum energy is essentially a restatement of the second law of thermodynamics. It states that for a closed system, with constant external parameters and entropy, the internal energy will decrease and approach a minimum value at equilibrium. External parameters generally means the volume, but may include other parameters which are specified externally, such as a constant magnetic field. In contrast, for isolated systems (and fixed external parameters), the second law states that the entropy will increase to a maximum value at equilibrium. An isolated system has a fixed total energy and mass. A closed system, on the other hand, is a system which is connected to another, and cannot exchange matter (i.e. particles), but can transfer other forms of energy (e.g. heat), to or from the other system. If, rather than an isolated system, we have a closed system in which the entropy rather than the energy remains constant, then it follows from the first and second laws of thermodynamics that the energy of that system will drop to a minimum value at equilibrium, transferring its energy to the other system. To restate: The maximum entropy principle: For a closed system with fixed internal energy (i.e. an isolated system), the entropy is maximized at equilibrium. The minimum energy principle: For a closed system with fixed entropy, the total energy is minimized at equilibrium. Mathematical explanation The total energy of the system is U(S, X₁, X₂, ...), where S is entropy and the Xᵢ are the other extensive parameters of the system (e.g. volume, particle number, etc.). The entropy of the system may likewise be written as a function of the other extensive parameters as S(U, X₁, X₂, ...). Suppose that X is one of the Xᵢ which varies as the system approaches equilibrium, and that it is the only such parameter which is varying. The principle of maximum entropy may then be stated as: (∂S/∂X)_U = 0 and (∂²S/∂X²)_U < 0 at equilibrium. The first condition states that entropy is at an extremum, and the second condition states that it is a maximum.
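To connect the two principles, here is a brief sketch of the standard intermediate step (an addition for clarity, not part of the excerpt above): at constant entropy, the triple-product rule and the positivity of the temperature T = (∂U/∂S)_X give

```latex
\left(\frac{\partial U}{\partial X}\right)_S
  = -\left(\frac{\partial S}{\partial X}\right)_U
    \left(\frac{\partial U}{\partial S}\right)_X
  = -T\left(\frac{\partial S}{\partial X}\right)_U = 0,
```

so the energy is at an extremum at exactly the equilibrium state where the entropy is maximized; a second-derivative calculation along the same lines shows that this extremum is a minimum.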
https://en.wikipedia.org/wiki/Capacitor-spring%20analogy
There are several formal analogies that can be made between electricity, which is invisible to the eye, and more familiar physical behaviors, such as the flowing of water or the motion of mechanical devices. In the case of capacitance, one analogy to a capacitor in mechanical rectilineal terms is a spring, where the compliance of the spring is analogous to the capacitance. Thus in electrical engineering, a capacitor may be defined as an ideal electrical component which satisfies the equation V = (1/C) ∫ i dt, where V = voltage measured at the terminals of the capacitor, C = the capacitance of the capacitor, i = current flowing between the terminals of the capacitor, and t = time. The equation quoted above has the same form as that describing an ideal massless spring: F = k ∫ v dt, where: F is the force applied between the two ends of the spring, k is the stiffness, or spring constant (inverse of compliance), defined as force/displacement, and v is the speed (or velocity) of one end of the spring, the other end being fixed. Note that in the electrical case, current (I) is defined as the rate of change of charge (Q) with respect to time: I = dQ/dt. In the mechanical case, velocity (v) is defined as the rate of change of displacement (x) with respect to time: v = dx/dt. Thus, in this analogy: charge is represented by linear displacement, current by linear velocity, voltage by force, and time by time. Also, these analogous relationships apply: Energy. Energy stored in a spring is F²/2k, while energy stored in a capacitor is CV²/2. Electric power. Here there is an analogy between the mechanical concept of power as the scalar product of force and velocity, and the electrical concept that in an AC circuit with sinusoidal excitation, power is the product VI cos(φ), where φ is the phase angle between V and I, measured in RMS terms. Electrical resistance (R) is analogous to mechanical viscous drag coefficient (force being proportional to velocity is analogous to Ohm's law - voltage being proportional to current
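As a quick numerical illustration of the formal identity of the two defining equations (a sketch with invented values, not taken from the article), drive a capacitor with a current waveform and a spring with the same waveform as a velocity, choosing the stiffness as the inverse of the capacitance:

```python
import numpy as np

# Hedged sketch: the parameter values are illustrative only. We check that
# V = (1/C) * integral(i dt) and F = k * integral(v dt) coincide when the
# capacitance C plays the role of the compliance 1/k.

C = 0.5            # capacitance in farads (invented value)
k = 1.0 / C        # spring stiffness chosen as the inverse compliance

t = np.linspace(0.0, 1.0, 1000)
rate = np.sin(2 * np.pi * t)       # current i(t) and, analogously, velocity v(t)

dt = t[1] - t[0]
V = np.cumsum(rate) * dt / C       # capacitor voltage from its defining integral
F = k * np.cumsum(rate) * dt       # spring force from its defining integral

assert np.allclose(V, F)           # identical: the two equations have the same form
print("capacitor voltage and spring force traces coincide")
```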
https://en.wikipedia.org/wiki/Zoophorus
Zoophorus (also Zophorus), meaning "bearing animals", was the Ancient Greek term for a decorated frieze between the architrave and cornice, typically with a continuous bas-relief. A zoophoric column is a pillar supporting the figure of an animal. The word is rarely used in modern English architectural writing.
https://en.wikipedia.org/wiki/HNCA%20experiment
HNCA is a 3D triple-resonance NMR experiment commonly used in the field of protein NMR. The name derives from the experiment's magnetization transfer pathway: the magnetization of the amide proton of an amino acid residue is transferred to the amide nitrogen, and then to the alpha carbons of both the starting residue and the previous residue in the protein's amino acid sequence. In contrast, the complementary HNCOCA experiment transfers magnetization only to the alpha carbon of the previous residue. The HNCA experiment is used, often in tandem with HNCOCA, to assign alpha carbon resonance signals to specific residues in the protein. This experiment requires a purified sample of protein prepared with 13C and 15N isotopic labelling, at a concentration greater than 0.1 mM, and is thus generally only applied to recombinant proteins. The spectrum produced by this experiment has three dimensions: a proton axis, a 15N axis and a 13C axis. For residue i, peaks will appear at {HN(i), N(i), Cα(i)} and {HN(i), N(i), Cα(i-1)}, while for the complementary HNCOCA experiment peaks appear only at {HN(i), N(i), Cα(i-1)}. Together, these two experiments reveal the alpha carbon chemical shift for each amino acid residue in a protein, and provide information linking adjacent residues in the protein's sequence.
https://en.wikipedia.org/wiki/HNCOCA%20experiment
HNCOCA is a 3D triple-resonance NMR experiment commonly used in the field of protein NMR. The name derives from the experiment's magnetization transfer pathway: the magnetization of the amide proton of an amino acid residue is transferred to the amide nitrogen, and then to the alpha carbon of the previous residue in the protein's amino acid sequence. In contrast, the complementary HNCA experiment transfers magnetization to the alpha carbons of both the starting residue and the previous residue in the sequence. The HNCOCA experiment is used, often in tandem with HNCA, to assign alpha carbon resonance signals to specific residues in the protein. This experiment requires a purified sample of protein prepared with 13C and 15N isotopic labelling, at a concentration greater than 0.1 mM, and is thus generally only applied to recombinant proteins. The spectrum produced by this experiment has three dimensions: a proton axis, a 15N axis and a 13C axis. For residue i, peaks will appear at {HN(i), N(i), Cα(i-1)} only, while for the complementary HNCA experiment peaks appear at {HN(i), N(i), Cα(i-1)} and {HN(i), N(i), Cα(i)}. Together, these two experiments reveal the alpha carbon chemical shift for each amino acid residue in a protein, and provide information linking adjacent residues in the protein's sequence.
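A hedged illustration, not from either article, of how the two peak patterns link residues during assignment: the HNCOCA peak of residue i identifies Cα(i-1), so matching it against the two HNCA peaks of residue i distinguishes the intra-residue Cα(i) from the sequential Cα(i-1). All chemical-shift values below are invented for the sketch.

```python
# Hypothetical peak lists keyed by residue number; values are Calpha shifts in ppm.
# HNCA shows two Calpha peaks per residue (own and preceding); HNCOCA shows one.
hnca = {
    2: [58.1, 54.3],   # Calpha(2) and Calpha(1); order unknown from the spectrum alone
    3: [61.7, 58.1],   # Calpha(3) and Calpha(2)
}
hncoca = {
    2: 54.3,           # Calpha(1) only
    3: 58.1,           # Calpha(2) only
}

for i, peaks in hnca.items():
    sequential = hncoca[i]                         # Calpha(i-1), from HNCOCA
    intra = [p for p in peaks if p != sequential]  # the remaining peak is Calpha(i)
    print(f"residue {i}: Calpha(i) = {intra[0]}, Calpha(i-1) = {sequential}")
```

Because the sequential peak of residue i is the intra-residue peak of residue i-1, repeating this match walks the assignment along the backbone.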
https://en.wikipedia.org/wiki/Hawaiian%20Trough
The Hawaiian Trough, otherwise known as the Hawaiian Deep, is a moat-like depression of the seafloor surrounding the Hawaiian Islands. The weight of the volcanic island chain depresses the plastic lithosphere, which is already weakened by the underlying thermal hotspot, causing subsidence to occur. The location with the greatest rate of subsidence is directly above the hotspot. The subsiding lithosphere is balanced out through the concept of isostasy: part of the crust surrounding the trough is levered upwards, creating the Hawaiian Arch, which rises above the surrounding ocean floor and contains tilted coral reefs. Trough Development The Hawaiian Trough is filled with a stratified sedimentary section more than 2 km thick that was deposited in four sequential stages. The first (bottom) stage is a basal layer composed of 50–100 m of primarily pelagic sediment. This layer was slowly deposited on the 80-Ma oceanic crust before the depression formed. The sediment in this layer is from a variety of different sources, including wind-blown material, slowly settling fine-grained sediment, and biogenic debris. The second stage is characterized by a bedded layer of volcaniclastic material that fills the trough as it quickly subsides under large influxes of material from nearby volcanoes. The majority of material in this layer was transported by turbidity currents that flow along the axis of the moat and carry material from mass wasting events. During the third stage, the depression fills with slumps and debris avalanches, which creates a layer of landslide debris. This layer contributes the bulk of the sediment in the trough. In the fourth and final stage, volcanic activity and subsidence effectively end, and sediment deposits brought by turbidity currents create a ponded unit in the deepest part of the trough. This top layer is primarily composed of turbidite a
https://en.wikipedia.org/wiki/GRE%20Mathematics%20Test
The GRE subject test in mathematics is a standardized test in the United States created by the Educational Testing Service (ETS), and is designed to assess a candidate's potential for graduate or post-graduate study in the field of mathematics. It contains questions from many fields of mathematics; about 50% of the questions come from calculus (including pre-calculus topics, multivariate calculus, and differential equations), 25% come from algebra (including linear algebra, abstract algebra, and number theory), and 25% come from a broad variety of other topics typically encountered in undergraduate mathematics courses, such as point-set topology, probability and statistics, geometry, and real analysis. Until the September 2023 administration, the GRE subject test in mathematics was paper-based, as opposed to the GRE general test, which is usually computer-based; since then, it has been administered online. The test contains approximately 66 multiple-choice questions, which are to be answered within 2 hours and 50 minutes. Scores on this exam are required for entrance to most math Ph.D. programs in the United States. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 920 and 400, which correspond to the 99th percentile and the 1st percentile, respectively. The mean score for all test takers from July 1, 2011 to June 30, 2014 was 659, with a standard deviation of 137. Prior to October 2001, a significant percentage of students were achieving perfect scores on the exam, which made it difficult for competitive programs to differentiate between students in the upper percentiles. As a result, the test was reworked and renamed "The Mathematics Subject Test (Rescaled)". According to ETS, "Scores earned on the test after October 2001 should not be compared to scores earned prior to that date." Tests generally take place three times per year, within an approximately 14-day windo
https://en.wikipedia.org/wiki/Lateralization%20of%20brain%20function
The lateralization of brain function (or hemispheric dominance/lateralisation) is the tendency for some neural functions or cognitive processes to be specialized to one side of the brain or the other. The median longitudinal fissure separates the human brain into two distinct cerebral hemispheres, connected by the corpus callosum. Although the macrostructure of the two hemispheres appears to be almost identical, different composition of neuronal networks allows for specialized function that is different in each hemisphere. Lateralization of brain structures is based on general trends expressed in healthy patients; however, there are numerous counterexamples to each generalization. Each human's brain develops differently, leading to unique lateralization in individuals. This is different from specialization, as lateralization refers only to the function of one structure divided between two hemispheres. Specialization is much easier to observe as a trend, since it has a stronger anthropological history. The best example of an established lateralization is that of Broca's and Wernicke's areas, both of which are often found exclusively in the left hemisphere. The lateralization of functions such as semantics, intonation, accentuation, and prosody has since been called into question, and such functions have largely been found to have a neuronal basis in both hemispheres. Another example is that each hemisphere in the brain tends to represent one side of the body. In the cerebellum, this is the same body side, but in the forebrain this is predominantly the contralateral side. Lateralized functions Language Language functions such as grammar, vocabulary and literal meaning are typically lateralized to the left hemisphere, especially in right-handed individuals. While language production is left-lateralized in up to 90% of right-handers, it is more bilateral, or even right-lateralized, in approximately 50% of left-handers. Broca's area and Wernicke's area, associated with the production of speech
https://en.wikipedia.org/wiki/Set-theoretic%20topology
In mathematics, set-theoretic topology is a subject that combines set theory and general topology. It focuses on topological questions that are independent of Zermelo–Fraenkel set theory (ZFC). Objects studied in set-theoretic topology Dowker spaces In the mathematical field of general topology, a Dowker space is a topological space that is T4 but not countably paracompact. Dowker conjectured that there were no Dowker spaces, and the conjecture was not resolved until M. E. Rudin constructed one in 1971. Rudin's counterexample is a very large space and is generally not well-behaved. Zoltán Balogh gave the first ZFC construction of a small (cardinality continuum) example, which was more well-behaved than Rudin's. Using PCF theory, M. Kojman and S. Shelah constructed a subspace of Rudin's Dowker space of smaller cardinality that is also Dowker. Normal Moore spaces A famous problem is the normal Moore space question, a question in general topology that was the subject of intense research. The answer to the normal Moore space question was eventually proved to be independent of ZFC. Cardinal functions Cardinal functions are widely used in topology as a tool for describing various topological properties. Below are some examples. (Note: some authors, arguing that "there are no finite cardinal numbers in general topology", prefer to define the cardinal functions listed below so that they never take on finite cardinal numbers as values; this requires modifying some of the definitions given below.) Perhaps the simplest cardinal invariants of a topological space X are its cardinality and the cardinality of its topology, denoted respectively by |X| and o(X). The weight w(X) of a topological space X is the smallest possible cardinality of a base for X. When w(X) ≤ ℵ₀, the space X is said to be second countable. The π-weight of a space X is the smallest cardinality of a π-base for X. (A π-base is
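As a concrete instance of the weight function (an illustrative addition, not part of the excerpt): the open intervals with rational endpoints form a countable base for the real line, so

```latex
w(\mathbb{R}) = \aleph_0 ,
```

and the real line is therefore second countable.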
https://en.wikipedia.org/wiki/True%20north
True north (also called geodetic north or geographic north) is the direction along Earth's surface towards the place where the imaginary rotational axis of the Earth intersects the surface of the Earth. That place is called the True North Pole. True south is the direction opposite to true north. North per se is one of the cardinal directions, a system of naming orientations on the Earth. There are multiple ways of determining north in different contexts. It is important to distinguish true north from magnetic north, the direction indicated by a compass and the magnetic field of the Earth, which points towards the Magnetic North Pole, a less steady location close to the True North Pole. Due to fundamental limitations in map projection, true north also differs from grid north, which is marked by the direction of the grid lines on a typical printed map. However, the longitude lines on a globe lead to the true poles, because the three-dimensional representation avoids those limitations. The celestial pole is the location on the imaginary celestial sphere where an imaginary extension of the rotational axis of the Earth intersects the celestial sphere. Within a margin of error of 1°, the true north direction can be approximated by the position of the pole star Polaris, which currently appears very close to that intersection, tracing a tiny circle in the sky each sidereal day. Due to the axial precession of Earth, the celestial north pole moves in an arc with respect to the stars that takes approximately 25,000 years to complete. Around 2101–2103, Polaris will make its closest approach to the celestial north pole (extrapolated from recent Earth precession). The visible star nearest the north celestial pole 5,000 years ago was Thuban. On maps published by the United States Geological Survey (USGS) and the United States Armed Forces, true north is marked with a line terminating in a five-pointed star. The east and west edges of the USGS topographic quadrangle maps of the United States ar
https://en.wikipedia.org/wiki/Zweikanalton
Zweikanalton ("two-channel sound") or A2 Stereo, is an analog television sound transmission system used in Germany, Austria, Australia, Switzerland, Netherlands and some other countries that use or used PAL-B or PAL-G. TV3 Malaysia formerly used Zweikanalton on its UHF analogue transmission frequency (Channel 29), while NICAM was instead used on its VHF analogue transmission frequency (Channel 12). South Korea also formerly utilised a modified version of Zweikanalton for its NTSC analogue television system until 31 December 2012. It relies on two separate FM carriers. This offers a relatively high separation between the channels (compared to a subcarrier-based multiplexing system) and can thus be used for bilingual broadcasts as well as stereo. Unlike the competing NICAM standard, Zweikanalton is an analog system. How it works A 2nd FM sound carrier containing the right channel for stereo is transmitted at a frequency 242 kHz higher than the existing mono FM sound carrier, and channel mixing is used in the receiver to derive the left channel. The second sound carrier also contains a 54.6875 kHz pilot tone to indicate whether the transmission is stereo or bilingual. The pilot tone is 50% amplitude-modulated with 117.5 Hz for stereo or 274.1 Hz for bilingual. a.The second sound carrier frequency of DK systems varies from country, and sometimes manufacturers divide them into DK1/DK2/DK3 systems. b.The video bandwidth is reduced. Zweikanalton can be adapted to any existing analogue television system, and modern PAL or SECAM television receivers generally include a sound detector IC that can decode both Zweikanalton and NICAM. Zweikanalton can carry either a completely separate audio program, or can be used for stereo sound transmission. In the latter case, the first FM carrier carries (L+R) for compatibility, while the second carrier carries R (not L-R.) After combining the two channels, this method improves the signal-to-noise ratio by reducing the correlated n
https://en.wikipedia.org/wiki/SMC%20protein
SMC complexes represent a large family of ATPases that participate in many aspects of higher-order chromosome organization and dynamics. SMC stands for Structural Maintenance of Chromosomes. Classification Eukaryotic SMCs Eukaryotes have at least six SMC proteins in individual organisms, and they form three distinct heterodimers with specialized functions: A pair of SMC1 and SMC3 constitutes the core subunits of the cohesin complexes involved in sister chromatid cohesion. SMC1 and SMC3 also have functions in the repair of DNA double-strand breaks in the process of homologous recombination. Likewise, a pair of SMC2 and SMC4 acts as the core of the condensin complexes implicated in chromosome condensation. SMC2 and SMC4 have a function in DNA repair as well. Condensin I plays a role in single-strand break repair but not in double-strand break repair. The opposite is true for condensin II, which plays a role in homologous recombination. A dimer composed of SMC5 and SMC6 functions as part of a yet-to-be-named complex implicated in DNA repair and checkpoint responses. Each complex contains a distinct set of non-SMC regulatory subunits. Some organisms have variants of SMC proteins. For instance, mammals have a meiosis-specific variant of SMC1, known as SMC1β. The nematode Caenorhabditis elegans has an SMC4 variant that has a specialized role in dosage compensation. The following table shows the SMC protein names for several model organisms and vertebrates: Prokaryotic SMCs SMC proteins are conserved from bacteria to humans. Most bacteria have a single SMC protein in individual species that forms a homodimer. Recently, SMC proteins have been shown to aid the segregation of the daughter cells' DNA at the origin of replication to guarantee proper partitioning. In a subclass of Gram-negative bacteria, including Escherichia coli, a distantly related protein known as MukB plays an equivalent role. Molecular structure Primary structure SMC proteins are 1,000–1,500 amino acids long. They h
https://en.wikipedia.org/wiki/Timeline%20of%20the%20UK%20electricity%20supply%20industry
This timeline outlines the key developments in the United Kingdom electricity industry from the start of electricity supplies in the 1870s to the present day. It identifies significant developments in technology for the generation, transmission and use of electricity; outlines developments in the structure of the industry including key organisations and facilities; and records the legislation and regulations that have governed the UK electricity industry.   The first part is a chronological table of significant events; the second part is a list of local acts of Parliament (1879–1948) illustrating the growth of electricity supplies. Significant events The following is a list of significant events in the history of the electricity sector in the United Kingdom. Local legislation timeline In addition to the Public General Acts on electricity supply given in the above table, there were also Local Acts. The Electric Lighting Acts 1882 to 1909 permitted local authorities and companies to apply to the Board of Trade for provisional orders and licences to supply electricity. The orders were confirmed by local Electric Lighting Orders Confirmation Acts. Local authorities and companies could also obtain Local Acts for electricity supply. A sample of Local Acts is given in the table below. Note that Local Acts have a chapter number for the relevant year in lower-case Roman numerals. See also Energy policy of the United Kingdom Energy use and conservation in the United Kingdom Energy switching services in the UK
https://en.wikipedia.org/wiki/Hypostatic%20abstraction
Hypostatic abstraction in mathematical logic, also known as hypostasis or subjectal abstraction, is a formal operation that transforms a predicate into a relation; for example, "Honey is sweet" is transformed into "Honey has sweetness". The relation is created between the original subject and a new term that represents the property expressed by the original predicate. Description Technical definition Hypostasis changes a propositional formula of the form X is Y to another one of the form X has the property of being Y or X has Y-ness. The logical functioning of the second object Y-ness consists solely in the truth-values of those propositions that have the corresponding abstract property Y as the predicate. The object of thought introduced in this way may be called a hypostatic object and in some senses an abstract object and a formal object. The above definition is adapted from the one given by Charles Sanders Peirce. As Peirce describes it, the main point about the formal operation of hypostatic abstraction, insofar as it operates on formal linguistic expressions, is that it converts a predicative adjective or predicate into an extra subject, thus increasing by one the number of "subject" slots—called the arity or adicity—of the main predicate. Application The grammatical trace of this hypostatic transformation is a process that extracts the adjective "sweet" from the predicate "is sweet", replacing it by a new, increased-arity predicate "possesses", and, as a by-product of the reaction, as it were, precipitating out the substantive "sweetness" as a second subject of the new predicate. The abstraction of hypostasis takes the concrete physical sense of "taste" found in "honey is sweet" and ascribes to it the formal metaphysical characteristics in "honey has sweetness". This is the fallacy of reification.
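A hedged sketch in code of the arity-increasing step (entirely illustrative; Peirce's operation is a formal-logic transformation, not a program): a one-place predicate sweet(x) becomes a two-place predicate possesses(x, sweetness).

```python
def hypostatic_abstraction(predicate_name: str):
    """Turn a one-place predicate 'is Y' into a two-place 'possesses(x, Y-ness)'."""
    quality = predicate_name + "ness"          # precipitate the new abstract subject

    def possesses(subject: str):
        # The original proposition "subject is Y" is re-expressed with one more
        # subject slot: possesses(subject, Y-ness).
        return ("possesses", subject, quality)

    return possesses

is_sweet = lambda x: ("is", x, "sweet")        # arity 1: "honey is sweet"
has_sweetness = hypostatic_abstraction("sweet")

print(is_sweet("honey"))        # ('is', 'honey', 'sweet')
print(has_sweetness("honey"))   # ('possesses', 'honey', 'sweetness')
```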
https://en.wikipedia.org/wiki/UAProf
The UAProf (User Agent Profile) specification is concerned with capturing capability and preference information for wireless devices. This information can be used by content providers to produce content in an appropriate format for the specific device. UAProf is related to the Composite Capability/Preference Profiles specification created by the World Wide Web Consortium. UAProf is based on RDF. UAProf files are an XML-based format, typically have the file extension rdf or xml, and are usually served with the mimetype application/xml. Because of the RDF format, the document schema is extensible. A UAProf file describes the capabilities of a mobile handset, including vendor, model, screen size, multimedia capabilities, character set support, and more. Recent UAProfiles have also begun to include data conforming to MMS, PSS5 and PSS6 schemas, which includes much more detailed data about video, multimedia, streaming and MMS capabilities. A mobile handset sends a header within an HTTP request containing the URL of its UAProf. The HTTP header is usually X-WAP-Profile:, but sometimes may look more like 19-Profile:, WAP-Profile: or a number of other similar headers. UAProf production for a device is voluntary: for GSM devices, the UAProf is normally produced by the vendor of the device (e.g. Nokia, Samsung, LG), whereas for CDMA/BREW devices it is more common for the UAProf to be produced by the telecommunications company. A content delivery system (such as a WAP site) can use UAProf to adapt content for display, or to decide what items to offer for download. However, there are drawbacks to relying solely on UAProf: Not all devices have UAProfs (including many new Windows Mobile devices, iDEN handsets, or legacy handsets) Not all advertised UAProfs are available (about 20% of links supplied by handsets are dead or unavailable, according to figures from UAProfile.com) UAProf can contain schema or data errors which can cause parsing to fail Retrieving
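A hedged sketch of the header-discovery step described above; the header names are the ones mentioned in the article, but the parsing code and the example profile URL are invented for illustration, not a production device-detection implementation.

```python
# Extract the UAProf URL from a WAP-style HTTP request's headers.
PROFILE_HEADERS = ("x-wap-profile", "wap-profile", "19-profile")

def uaprof_url(headers: dict) -> str | None:
    normalized = {k.lower(): v for k, v in headers.items()}
    for name in PROFILE_HEADERS:
        if name in normalized:
            return normalized[name].strip().strip('"')  # the value is a quoted URL
    return None

# Hypothetical request headers with an invented profile URL:
request = {"X-WAP-Profile": '"http://example.com/uaprof/device.rdf"'}
print(uaprof_url(request))   # http://example.com/uaprof/device.rdf
```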
https://en.wikipedia.org/wiki/Tonelli%E2%80%93Shanks%20algorithm
The Tonelli–Shanks algorithm (referred to by Shanks as the RESSOL algorithm) is used in modular arithmetic to solve for r in a congruence of the form r² ≡ n (mod p), where p is a prime: that is, to find a square root of n modulo p. Tonelli–Shanks cannot be used for composite moduli: finding square roots modulo composite numbers is a computational problem equivalent to integer factorization. An equivalent, but slightly more redundant, version of this algorithm was developed by Alberto Tonelli in 1891. The version discussed here was developed independently by Daniel Shanks in 1973, who explained: My tardiness in learning of these historical references was because I had lent Volume 1 of Dickson's History to a friend and it was never returned. According to Dickson, Tonelli's algorithm can take square roots of x modulo prime powers p^λ, not just primes. Core ideas Given a non-zero n and an odd prime p, Euler's criterion tells us that n has a square root (i.e., n is a quadratic residue) if and only if n^((p−1)/2) ≡ 1 (mod p). In contrast, if a number z has no square root (is a non-residue), Euler's criterion tells us that z^((p−1)/2) ≡ −1 (mod p). It is not hard to find such a z, because half of the integers between 1 and p−1 have this property. So we assume that we have access to such a non-residue. By (normally) dividing p−1 by 2 repeatedly, we can write p−1 = Q·2^S, where Q is odd. Note that if we try R ≡ n^((Q+1)/2) (mod p), then R² ≡ n^(Q+1) ≡ n·n^Q (mod p). If t ≡ n^Q ≡ 1 (mod p), then R is a square root of n. Otherwise, for M = S, we have R and t satisfying R² ≡ n·t (mod p), where t is a 2^(M−1)-th root of 1 (because t^(2^(S−1)) ≡ n^(Q·2^(S−1)) ≡ n^((p−1)/2) ≡ 1 (mod p)). If, given a choice of R and t for a particular M satisfying the above (where R is not a square root of n), we can easily calculate another R and t for M−1 such that the above relations hold, then we can repeat this until t becomes a 2⁰-th root of 1, i.e., t ≡ 1. At that point R is a square root of n. We can check whether t is a 2^(M−2)-th root of 1 by squaring it M−2 times and checking whether the result is 1. If it is, then we do not need to do anything, as the same choice of R and t works. But if it is not, t^(2^(M−2)) must
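A compact sketch of the procedure just described, reusing the article's variable names (n, p, Q, S, z, M, c, t, R); this is an illustrative implementation written for this note, not code taken from the article.

```python
def tonelli_shanks(n: int, p: int) -> int:
    """Return r with r*r % p == n % p, for odd prime p and quadratic residue n."""
    n %= p
    if n == 0:
        return 0
    if pow(n, (p - 1) // 2, p) != 1:
        raise ValueError("n is not a quadratic residue modulo p")
    # Write p - 1 = Q * 2^S with Q odd.
    Q, S = p - 1, 0
    while Q % 2 == 0:
        Q //= 2
        S += 1
    # Find a quadratic non-residue z (half of all candidates qualify).
    z = 2
    while pow(z, (p - 1) // 2, p) != p - 1:
        z += 1
    M, c, t, R = S, pow(z, Q, p), pow(n, Q, p), pow(n, (Q + 1) // 2, p)
    while t != 1:
        # Find the least i, 0 < i < M, with t^(2^i) == 1, by repeated squaring.
        i, t2 = 0, t
        while t2 != 1:
            t2 = t2 * t2 % p
            i += 1
        b = pow(c, 1 << (M - i - 1), p)
        M, c, t, R = i, b * b % p, t * b * b % p, R * b % p
    return R

# Example: the square roots of 5 modulo 41 are 13 and 28.
r = tonelli_shanks(5, 41)
assert r * r % 41 == 5
print(r)   # 28
```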
https://en.wikipedia.org/wiki/Millennium%20Mathematics%20Project
The Millennium Mathematics Project (MMP) was set up within the University of Cambridge in England as a joint project between the Faculties of Mathematics and Education in 1999. The MMP aims to support maths education for pupils of all abilities from ages 5 to 19 and promote the development of mathematical skills and understanding, particularly through enrichment and extension activities beyond the school curriculum, and to enhance the mathematical understanding of the general public. The project was directed by John Barrow from 1999 until September 2020. Programmes The MMP includes a range of complementary programmes: The NRICH website publishes free mathematics education enrichment material for ages 5 to 19. NRICH material focuses on problem-solving, building core mathematical reasoning and strategic thinking skills. In the academic year 2004/5 the website attracted over 1.7 million site visits (more than 49 million hits). Plus Magazine is a free online maths magazine for age 15+ and the general public. In 2004/5, Plus attracted over 1.3 million website visits (more than 31 million hits). The website won the Webby award in 2001 for the best Science site on the Internet. The Motivate video-conferencing project links university mathematicians and scientists to primary and secondary schools in areas of the UK from Jersey and Belfast to Glasgow and inner-city London, with international links to Pakistan, South Africa, India and Singapore. The project has also developed a Hands On Maths Roadshow presenting creative methods of exploring mathematics, and in 2004 took on the running of Simon Singh's Enigma schools workshops, exploring maths through cryptography and codebreaking. Both are taken to primary and secondary schools and public venues such as shopping centres across the UK and Ireland. James Grime is the Enigma Project Officer and gives talks in schools and to the general public about the history and mathematics of code breaking - including the demonstration of
https://en.wikipedia.org/wiki/Continuous%20symmetry
In mathematics, continuous symmetry is an intuitive idea corresponding to the concept of viewing some symmetries as motions, as opposed to discrete symmetry, e.g. reflection symmetry, which is invariant under a kind of flip from one state to another. However, a discrete symmetry can always be reinterpreted as a subset of some higher-dimensional continuous symmetry; e.g. reflection of a 2-dimensional object in 3-dimensional space can be achieved by continuously rotating that object 180 degrees through the third dimension. Formalization The notion of continuous symmetry has largely and successfully been formalised in the mathematical notions of topological group, Lie group and group action. For most practical purposes continuous symmetry is modelled by a group action of a topological group that preserves some structure. In particular, let f : X → Y be a function, and let G be a group that acts on X; then a subgroup H ⊆ G is a symmetry of f if f(h·x) = f(x) for all h ∈ H and all x ∈ X. One-parameter subgroups The simplest motions follow a one-parameter subgroup of a Lie group, such as the Euclidean group of three-dimensional space. For example, translation parallel to the x-axis by u units, as u varies, is a one-parameter group of motions. Rotation around the z-axis is also a one-parameter group. Noether's theorem Continuous symmetry has a basic role in Noether's theorem in theoretical physics, in the derivation of conservation laws from symmetry principles, specifically for continuous symmetries. The search for continuous symmetries only intensified with the further developments of quantum field theory. See also Goldstone's theorem Infinitesimal transformation Noether's theorem Sophus Lie Motion (geometry) Circular symmetry
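A hedged numeric illustration of the group-action definition (added here, not from the article): f(x, y) = x² + y² is symmetric under the rotation group SO(2), since f(h·x) = f(x) for every rotation h.

```python
import math

def f(x: float, y: float) -> float:
    return x * x + y * y          # invariant under rotations about the origin

def rotate(theta: float, x: float, y: float) -> tuple[float, float]:
    # Action of the rotation group SO(2) on the plane.
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

point = (3.0, 4.0)
for theta in (0.1, 1.0, math.pi / 2):
    assert math.isclose(f(*rotate(theta, *point)), f(*point))
print("f is invariant under the tested rotations")
```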
https://en.wikipedia.org/wiki/Application%20directory
An application directory is a grouping of software code, help files and resources that together comprise a complete software package but are presented to the user as a single object. They are currently used in RISC OS and the ROX Desktop, and also form the basis of the Zero Install application distribution system. Similar technology includes VMware ThinApp, and the NEXTSTEP/GNUstep/Mac OS X concept of application bundles. Their heritage lies in the system for automatically launching software stored on floppy disk on Acorn's earlier 8-bit micros such as the BBC Micro (the !BOOT file). Bundling various files in this manner allows tools for manipulating applications to be replaced by tools for manipulating the file system. Applications can often be "installed" simply by dragging them from a distribution medium to a hard disk, and "uninstalled" by deleting the application directory. Fixed contents In order to support user interaction with application directories, several files have special status. Application binaries Launching an application directory causes the included file AppRun (ROX Desktop) or !Run (RISC OS) to be launched. On RISC OS this is generally an Obey file (a RISC OS command script) which allocates memory and loads OS extension modules and shared libraries before executing the application binary, usually called !RunImage. Under the ROX Desktop, it is not uncommon for it to be a shell script that will launch the correct system binary if available or compile a suitable binary from source otherwise. Help files and icons Both RISC OS and the ROX Desktop allow the user to view help files associated with an application directory without launching the application. RISC OS relies on a file in the directory named !Help which is launched as if the user double-clicked on it when help is requested (and can be any format the system understands, but plain text and !Draw formats are common), while the ROX Desktop opens the application's Help subdirectory. Simila
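As a hedged sketch of the launch convention described above (illustrative only: an AppRun can be any executable, and the bin/runimage layout assumed here is invented rather than specified by either platform), a minimal AppRun written as a Python script might look like:

```python
#!/usr/bin/env python3
# Minimal, hypothetical AppRun sketch for a ROX-style application directory.
# The directory layout assumed here (a bundled bin/runimage) is illustrative only.
import os
import subprocess
import sys

app_dir = os.path.dirname(os.path.abspath(__file__))   # the application directory
binary = os.path.join(app_dir, "bin", "runimage")      # bundled program binary

# Run the bundled binary if present; otherwise report a broken bundle.
if os.path.exists(binary):
    sys.exit(subprocess.call([binary] + sys.argv[1:]))
sys.exit(f"{app_dir}: no bundled binary found")
```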
https://en.wikipedia.org/wiki/Feingold%20diet
The Feingold diet is an elimination diet initially devised by Benjamin Feingold following research in the 1970s that appeared to link food additives with hyperactivity; by eliminating these additives and various foods the diet was supposed to alleviate the condition. Popular in its day, the diet has since been referred to as an "outmoded treatment"; there is no good evidence that it is effective, and it is difficult for people to follow. Technique The diet was originally based on the elimination of salicylate, artificial food coloring, and artificial flavors; later in the 1970s, the preservatives BHA, BHT, and (somewhat later) TBHQ were eliminated. Besides foods with the eliminated additives, aspirin- or additive-containing drugs and toiletries were to be avoided. Even today, parents are advised to limit their purchases of mouthwash, toothpaste, cough drops, perfume, and various other nonfood products to those published in the Feingold Association's annual Foodlist and Shopping Guide. Some versions of the diet prohibit only artificial food coloring and additives. According to the Royal College of Psychiatrists, the diet prohibited a number of foods that contain salicylic acid, including apples, cucumbers and tomatoes. Feingold stressed that the diet must be followed strictly and for an entire lifetime, and that whole families – not just the subject being "treated" – must observe the diet's rules. Effectiveness Although the diet had a certain popular appeal, a 1983 meta-analysis found that research on it was of poor quality and that, overall, there was no good evidence that it was effective in fulfilling its claims. In common with other elimination diets, the Feingold diet can be costly and boring, and thus difficult for people to maintain. In general, there is no evidence to support broad claims that food coloring causes food intolerance and ADHD-like behavior in children. It is possible that certain food coloring may act as a trigger in those who are genet
https://en.wikipedia.org/wiki/System%20integration%20testing
System integration testing (SIT) involves the overall testing of a complete system of many subsystem components or elements. The system under test may be composed of hardware, or software, or hardware with embedded software, or hardware/software with human-in-the-loop testing. SIT consists, initially, of the "process of assembling the constituent parts of a system in a logical, cost-effective way, comprehensively checking system execution (all nominal & exceptional paths), and including a full functional check-out." Following integration, system test is a process of "verifying that the system meets its requirements, and validating that the system performs in accordance with the customer or user expectations." In technology product development, the beginning of system integration testing is often the first time that an entire system has been assembled such that it can be tested as a whole. In order to make system testing most productive, the many constituent assemblies and subsystems will typically have gone through a subsystem test that successfully verified that each subsystem meets its requirements at the subsystem interface level. In the context of software systems and software engineering, system integration testing is a testing process that exercises a software system's coexistence with others. With multiple integrated systems, assuming that each has already passed system testing, SIT proceeds to test their required interactions. Following this, the deliverables are passed on to acceptance testing. Software system integration testing For software, SIT is part of the software testing life cycle for collaborative projects. Usually, a round of SIT precedes the user acceptance test (UAT) round. Software providers usually run a pre-SIT round of tests before consumers run their SIT test cases. For example, if an integrator (company) is providing an enhancement to a customer's existing solution, then they integrate the new application layer and the new data
https://en.wikipedia.org/wiki/Carath%C3%A9odory%E2%80%93Jacobi%E2%80%93Lie%20theorem
The Carathéodory–Jacobi–Lie theorem is a theorem in symplectic geometry which generalizes Darboux's theorem. Statement Let M be a 2n-dimensional symplectic manifold with symplectic form ω. For p ∈ M and r ≤ n, let f1, f2, ..., fr be smooth functions defined on an open neighborhood V of p whose differentials are linearly independent at each point and which satisfy {fi, fj} = 0 for all i, j (in other words, they are pairwise in involution). Here {–,–} is the Poisson bracket. Then there are functions fr+1, ..., fn, g1, g2, ..., gn defined on an open neighborhood U ⊂ V of p such that (fi, gi) is a symplectic chart of M, i.e., ω is expressed on U as ω = Σᵢ dfi ∧ dgi. Applications As a direct application we have the following. Given a Hamiltonian system (M, ω, H), where M is a symplectic manifold with symplectic form ω and H is the Hamiltonian function, around every point where dH ≠ 0 there is a symplectic chart such that one of its coordinates is H.
https://en.wikipedia.org/wiki/Nilmanifold
In mathematics, a nilmanifold is a differentiable manifold which has a transitive nilpotent group of diffeomorphisms acting on it. As such, a nilmanifold is an example of a homogeneous space and is diffeomorphic to a quotient space N/H, the quotient of a nilpotent Lie group N modulo a closed subgroup H. This notion was introduced by Anatoly Mal'cev in 1951. In the Riemannian category, there is also a good notion of a nilmanifold. A Riemannian manifold is called a homogeneous nilmanifold if there exists a nilpotent group of isometries acting transitively on it. The requirement that the transitive nilpotent group acts by isometries leads to the following rigid characterization: every homogeneous nilmanifold is isometric to a nilpotent Lie group with left-invariant metric (see Wilson). Nilmanifolds are important geometric objects and often arise as concrete examples with interesting properties; in Riemannian geometry these spaces always have mixed curvature, almost flat spaces arise as quotients of nilmanifolds, and compact nilmanifolds have been used to construct elementary examples of collapse of Riemannian metrics under the Ricci flow. In addition to their role in geometry, nilmanifolds are increasingly being seen as having a role in arithmetic combinatorics (see Green–Tao) and ergodic theory (see, e.g., Host–Kra). Compact nilmanifolds A compact nilmanifold is a nilmanifold which is compact. One way to construct such spaces is to start with a simply connected nilpotent Lie group N and a discrete subgroup Γ. If the subgroup Γ acts cocompactly (via right multiplication) on N, then the quotient manifold will be a compact nilmanifold. As Mal'cev has shown, every compact nilmanifold is obtained this way. Such a subgroup Γ as above is called a lattice in N. It is well known that a nilpotent Lie group admits a lattice if and only if its Lie algebra admits a basis with rational structure constants: this is Mal'cev's criterion. Not all nilpotent Lie groups admit l
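The standard first example, added here for concreteness (it is not part of the excerpt): take N to be the 3-dimensional real Heisenberg group and Γ its subgroup of matrices with integer entries,

```latex
N = \left\{ \begin{pmatrix} 1 & x & z \\ 0 & 1 & y \\ 0 & 0 & 1 \end{pmatrix} : x, y, z \in \mathbb{R} \right\},
\qquad
\Gamma = \left\{ \begin{pmatrix} 1 & a & c \\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix} : a, b, c \in \mathbb{Z} \right\};
```

then Γ is a lattice in N, and the quotient is a compact 3-dimensional nilmanifold, the Heisenberg nilmanifold.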
https://en.wikipedia.org/wiki/The%20EMBO%20Journal
The EMBO Journal is a semi-monthly peer-reviewed scientific journal focusing on full-length papers describing original research of general interest in molecular biology and related areas. The editor-in-chief is Facundo D. Batista (Harvard Medical School). History The journal was established in 1982 and was published by Nature Publishing Group on behalf of the European Molecular Biology Organization until the launch of EMBO Press in 2013. Abstracting and indexing The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal has a 2021 impact factor of 13.783. See also EMBO Reports Molecular Systems Biology
https://en.wikipedia.org/wiki/Thermoanaerobacterales
The Thermoanaerobacterales is a polyphyletic order of bacteria placed within the polyphyletic class Clostridia, encompassing four families: the Thermoanaerobacteraceae, the Thermodesulfobiaceae, the Thermoanaerobacterales Family III. Incertae Sedis, and the Thermoanaerobacterales Family IV. Incertae Sedis, as well as various unplaced genera. This order is noted for its species' abilities to survive in extreme environments without oxygen and at temperatures that are relatively elevated for living organisms (up to 80–90 °C). An example organism in this order is Thermoanaerobacter ethanolicus. Phylogeny The Thermoanaerobacterales, as previously mentioned, is polyphyletic, and consists of numerous morphologically similar clades.
https://en.wikipedia.org/wiki/NoScript
NoScript (or NoScript Security Suite) is a free and open-source extension for Firefox- and Chromium-based web browsers, written and maintained by Giorgio Maone, an Italian software developer and member of the Mozilla Security Group. Features Active content blocking By default, NoScript blocks active (executable) web content, which can be wholly or partially unblocked by allowlisting a site or domain from the extension's toolbar menu or by clicking a placeholder icon. In the default configuration, active content is globally denied, although the user may turn this around and use NoScript to block specific unwanted content. The allowlist may be permanent or temporary (until the browser closes or the user revokes permissions). Active content may consist of JavaScript, web fonts, media codecs, WebGL, and Flash. The add-on also offers specific countermeasures against security exploits. Because many web browser attacks require active content that the browser normally runs without question, disabling such content by default and using it only to the degree that it is necessary reduces the chances of vulnerability exploitation. In addition, not loading this content saves significant bandwidth and defeats some forms of web tracking. NoScript is useful for developers to see how well their site works with JavaScript turned off. It can also remove many irritating web elements, such as in-page pop-up messages and certain paywalls, which require JavaScript in order to function. NoScript takes the form of a toolbar icon or status bar icon in Firefox, displayed on every website to denote whether NoScript has blocked, allowed, or partially allowed scripts to run on the web page being viewed. Clicking the NoScript icon, or hovering the mouse cursor over it (since version 2.0.3rc1), gives the user the option to allow or forbid script processing. NoScript's interface, whether accessed by right-clicking on the web page or the distinctive NoScript box at the bottom of the p
https://en.wikipedia.org/wiki/Implicit%20data%20structure
In computer science, an implicit data structure or space-efficient data structure is a data structure that stores very little information other than the main or required data: a data structure that requires low overhead. They are called "implicit" because the position of the elements carries meaning and relationship between elements; this is contrasted with the use of pointers to give an explicit relationship between elements. Definitions of "low overhead" vary, but generally mean constant overhead; in big O notation, O(1) overhead. A less restrictive definition is a succinct data structure, which allows greater overhead. Definition An implicit data structure is one with constant space overhead (above the information-theoretic lower bound). Historically, an implicit data structure (and algorithms acting on one) was defined as one "in which structural information is implicit in the way data are stored, rather than explicit in pointers." The definition is somewhat vague: most strictly, it is a single array with only the size retained (a single number of overhead), or more loosely a data structure with constant overhead. This latter definition is today more standard, and the still-looser notion of a data structure with non-constant but small overhead is now known as a succinct data structure; it has also been referred to as semi-implicit. A fundamental distinction is between static data structures (read-only) and dynamic data structures (which can be modified). Simple implicit data structures, such as representing a sorted list as an array, may be very efficient as a static data structure, but inefficient as a dynamic data structure, due to modification operations (such as insertion in the case of a sorted list) being inefficient. Examples A trivial example of an implicit data structure is an array data structure, which is an implicit data structure for a list, and requires only the constant overhead of the length; unlike a lin
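A classic illustration of position carrying structure (a sketch added here; the binary heap is a standard example of an implicit data structure, though this excerpt does not name it): in an array-backed binary heap, the children of the element at index i live at indices 2i+1 and 2i+2, so the tree shape is encoded entirely by index arithmetic, with no pointers stored.

```python
# Sketch: an array treated as an implicit complete binary tree (max-heap order).
# Parent/child relationships are carried entirely by index arithmetic.
heap = [9, 7, 8, 3, 1, 4, 5]

def children(i: int):
    return [j for j in (2 * i + 1, 2 * i + 2) if j < len(heap)]

for i in range(len(heap)):
    for j in children(i):
        assert heap[i] >= heap[j]   # heap property, with zero pointer overhead
print("valid max-heap; structure encoded purely by positions")
```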
https://en.wikipedia.org/wiki/Extent%20of%20reaction
In physical chemistry and chemical engineering, extent of reaction is a quantity that measures the extent to which a reaction has proceeded. Often, it refers specifically to the value of the extent of reaction when equilibrium has been reached. It is usually denoted by the Greek letter ξ. The extent of reaction is usually defined so that it has units of amount (moles). It was introduced by the Belgian scientist Théophile de Donder. Definition Consider the reaction A ⇌ 2 B + 3 C Suppose an infinitesimal amount dξ of the reactant A changes into B and C. This requires that all three mole numbers change according to the stoichiometry of the reaction, but they will not change by the same amounts. However, the extent of reaction can be used to describe the changes on a common footing as needed. The change of the number of moles of A can be represented by the equation dnA = −dξ, the change of B is dnB = 2 dξ, and the change of C is dnC = 3 dξ. The change in the extent of reaction is then defined as dξ = dni/νi, where ni denotes the number of moles of the i-th reactant or product and νi is the stoichiometric number of the i-th reactant or product (negative for reactants, positive for products). Although less common, we see from this expression that since the stoichiometric number can either be considered to be dimensionless or to have units of moles, conversely the extent of reaction can either be considered to have units of moles or to be a unitless mole fraction. The extent of reaction represents the amount of progress made towards equilibrium in a chemical reaction. Considering finite changes instead of infinitesimal changes, one can write the equation for the extent of a reaction as Δξ = Δni/νi. The extent of a reaction is generally defined as zero at the beginning of the reaction. Thus the change of ξ is the extent itself. Assuming that the system has come to equilibrium, the equilibrium extent is ξeq = (ni,eq − ni,0)/νi. Although in the example above the extent of reaction was positive since the system shifted in the forward direction, this usage implies that in general the extent of reaction can be positive or negative,
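A hedged worked example with invented numbers for the reaction above (A ⇌ 2 B + 3 C): starting from 2 mol of A and none of B or C, if 0.5 mol of A reacts, then ξ = 0.5 mol regardless of which species is used to compute it.

```python
# Stoichiometric numbers for A <=> 2 B + 3 C (negative for reactants).
nu = {"A": -1, "B": 2, "C": 3}

n0 = {"A": 2.0, "B": 0.0, "C": 0.0}           # initial moles (invented values)
n = {"A": 1.5, "B": 1.0, "C": 1.5}            # moles after 0.5 mol of A reacts

# xi = (n_i - n_i0) / nu_i must agree for every species.
extents = {s: (n[s] - n0[s]) / nu[s] for s in nu}
print(extents)                                 # {'A': 0.5, 'B': 0.5, 'C': 0.5}
assert len(set(extents.values())) == 1
```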
https://en.wikipedia.org/wiki/Edge%20device
In computer networking, an edge device is a device that provides an entry point into enterprise or service provider core networks. Examples include routers, routing switches, integrated access devices (IADs), multiplexers, and a variety of metropolitan area network (MAN) and wide area network (WAN) access devices. Edge devices also provide connections into carrier and service provider networks. An edge device that connects a local area network to a high speed switch or backbone (such as an ATM switch) may be called an edge concentrator. Functions In general, edge devices are normally routers that provide authenticated access (most commonly PPPoA and PPPoE) to faster, more efficient backbone and core networks. The trend is to make the edge device smart and the core device(s) "dumb and fast", so edge routers often include quality of service (QoS) and multi-service functions to manage different types of traffic. Consequently, core networks are often designed with switches that use routing protocols such as Open Shortest Path First (OSPF) or Multiprotocol Label Switching (MPLS) for reliability and scalability, allowing edge routers to have redundant links to the core network. Links between core networks are different—for example, Border Gateway Protocol (BGP) routers are often used for peering exchanges. Translation Edge devices may translate between one type of network protocol and another. For example, Ethernet or Token Ring types of local area networks (LANs) or xDSL equipment may use an Asynchronous Transfer Mode (ATM) backbone to other core networks. ATM networks send data in cells and use connection-oriented virtual circuits. An IP network is packet oriented; so if ATM is used as a core, packets must be encapsulated in cells and the destination address must be converted to a virtual circuit identifier. Some new types of optical fibre use a passive optical network subscriber loop such as GPON, with the edge device connecting to Ethernet for backhaul (telecommu
https://en.wikipedia.org/wiki/Fano%20factor
In statistics, the Fano factor, like the coefficient of variation, is a measure of the dispersion of a counting process. It was originally used to measure the Fano noise in ion detectors. It is named after Ugo Fano, an Italian-American physicist. The Fano factor after a time t is defined as F(t) = σ_t² / μ_t, where σ_t is the standard deviation and μ_t is the mean number of events of a counting process after some time t. The Fano factor can be viewed as a kind of noise-to-signal ratio; it is a measure of the reliability with which the waiting time random variable can be estimated after several random events. For a Poisson counting process, the variance in the count equals the mean count, so F(t) = 1. Definition For a counting process N(t), the Fano factor after a time t is defined as F(t) = Var(N(t)) / E[N(t)]. Sometimes, the long-term limit is also termed the Fano factor, F = lim_{t→∞} F(t). For a renewal process with holding times distributed like a random variable W, we have that lim_{t→∞} F(t) = Var(W) / E[W]². Since the right-hand side is equal to the square of the coefficient of variation of W, the right-hand side of this equation is sometimes referred to as the Fano factor as well. Interpretation When considered as the dispersion of the number of events, the Fano factor roughly corresponds to the width of the peak of the distribution of N(t). As such, the Fano factor is often interpreted as the unpredictability of the underlying process. Example: Constant Random Variable When the holding times are constant, then F = 0. As such, if F = 0 then we interpret the renewal process as being very predictable. Example: Poisson Counting Process When the likelihood of an event occurring in any time interval is equal for all time, then the holding times must be exponentially distributed, giving a Poisson counting process, for which F = 1. Use in particle detection In particle detectors, the Fano factor results from the energy loss in a collision not being purely statistical. The process giving rise to each individual charge carrier is not independent, as the number of ways an atom may be ionized is limited.
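As a quick illustration of F(t) = Var(N(t)) / E[N(t)], here is a hedged sketch (the parameters and simulation setup are my own, not from the article) that estimates the Fano factor of a simulated Poisson counting process; the estimate should come out close to 1.

```python
# Illustrative sketch: estimate the Fano factor of a Poisson counting process.
import numpy as np

rng = np.random.default_rng(seed=0)
rate, window, trials = 2.0, 10.0, 100_000   # assumed rate, window length, samples

# The number of events of a Poisson process in a window of length t
# is Poisson-distributed with mean rate * t.
counts = rng.poisson(rate * window, size=trials)

fano = counts.var() / counts.mean()          # F(t) = Var(N(t)) / E[N(t)]
print(f"estimated Fano factor: {fano:.3f}")  # close to 1 for a Poisson process
```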
https://en.wikipedia.org/wiki/Vacuum%20evaporation
Vacuum evaporation is the process of causing the pressure in a liquid-filled container to be reduced below the vapor pressure of the liquid, causing the liquid to evaporate at a lower temperature than normal. Although the process can be applied to any type of liquid at any vapor pressure, it is generally used to describe the boiling of water by lowering the container's internal pressure below standard atmospheric pressure and causing the water to boil at room temperature. The vacuum evaporation treatment process consists of reducing the interior pressure of the evaporation chamber below atmospheric pressure. This reduces the boiling point of the liquid to be evaporated, thereby reducing or eliminating the need for heat in both the boiling and condensation processes. There are other advantages, such as the ability to distill liquids with high boiling points and to avoid decomposing substances that are heat sensitive. Application Food When the process is applied to food and the water is evaporated and removed, the food can be stored for long periods without spoiling. It is also used when boiling a substance at normal temperatures would chemically change the consistency of the product, such as egg whites coagulating when attempting to dehydrate the albumen into a powder. This process was invented by Henri Nestlé, of Nestlé chocolate fame, in 1866, although the Shakers were already using a vacuum pan before that (see condensed milk). This process is used industrially to make such food products as evaporated milk for milk chocolate and tomato paste for ketchup. In the sugar industry vacuum evaporation is used in the crystallization of sucrose solutions. Traditionally this process was performed in batch mode, but nowadays continuous vacuum pans are available. Wastewater treatment Vacuum evaporators are used in a wide range of industrial sectors to treat industrial wastewater. The technology is clean, safe and very versatile, with low management costs
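To make the pressure/boiling-point relationship concrete, here is a rough sketch using the Clausius–Clapeyron relation with a constant enthalpy of vaporization; this model is an approximation I am introducing for illustration, not something the article prescribes.

```python
# Rough sketch, assuming the Clausius-Clapeyron relation with a constant
# enthalpy of vaporization; shows why lowering pressure lowers the boiling
# point, as described above. Not an engineering-grade model.
import math

R = 8.314          # J/(mol K), gas constant
dH_vap = 40_660.0  # J/mol, approximate enthalpy of vaporization of water
T1, P1 = 373.15, 101_325.0  # water boils at 373.15 K at 1 atm

def boiling_point(P2: float) -> float:
    """Estimate the boiling temperature (K) at pressure P2 (Pa)."""
    return 1.0 / (1.0 / T1 - R * math.log(P2 / P1) / dH_vap)

for P in (101_325.0, 20_000.0, 3_170.0):  # 1 atm, ~0.2 atm, ~0.03 atm
    print(f"{P:9.0f} Pa -> boils near {boiling_point(P) - 273.15:5.1f} C")
# At ~3.2 kPa the estimate lands near room temperature, matching the
# room-temperature boiling described in the text.
```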
https://en.wikipedia.org/wiki/Apache%20Directory
Apache Directory is an open source project of the Apache Software Foundation. The Apache Directory Server, originally written by Alex Karasulu, is an embeddable directory server written entirely in Java. It was certified LDAPv3-compatible by The Open Group in 2006. Besides LDAP, the server supports other protocols as well and includes a Kerberos server. The project has these subprojects: Apache Directory Studio - an LDAP browser/editor for data, schema, LDIF, and DSML written in an Eclipse-based framework. Apache SCIMple - an implementation of the SCIM v2.0 specification. Apache Fortress - a standards-based authorization system. Apache Kerby - a Kerberos implementation written in Java. Apache LDAP API - an SDK for directory access in Java. Apache Mavibot - a Multi Version Concurrency Control (MVCC) BTree in Java. See also List of LDAP software
https://en.wikipedia.org/wiki/Album%20graecum
Album græcum, or stercus canis officinale, is the dung of dogs or hyenas that has become white through exposure to air. It is used in dressing leather. White dog dung (often mixed with honey) was formerly used as a medicinal drug to treat inflammations of the throat, or as a plaster, spread on the skin to close and heal wounds. "Dogs white Dung, or Album Graecum, as it is commonly call'd. This is said to cleanse and deterge; but is used in little else than in Inflammations of the Throat, with Honey; and that outwardly as a Plaster more than any other way: but it seldom appears to any great purpose." At the time of its use, it was thought by some to be a form of witchcraft. "I inquired next how she became dumb. She told me, by reason of a sore swelling she took in her throat and tongue; but afterwards by the application of Album Graecum, "which I thought," said she, "was revealed to me, I recovered my speech." I asked her, how she came to the knowledge of witches and their practices."
https://en.wikipedia.org/wiki/Barakat%20syndrome
Barakat syndrome is a rare disease characterized by hypoparathyroidism, sensorineural deafness and renal disease, and hence is also known as HDR syndrome. It was first described by Amin J. Barakat et al. in 1977. Presentation It is a genetic developmental disorder with clinical diversity characterized by hypoparathyroidism, sensorineural deafness and renal disease. Affected people usually present with hypocalcaemia, tetany, or afebrile convulsions at any age. Hearing loss is usually bilateral and may range from mild to profound impairment. Renal disease includes nephrotic syndrome, cystic kidney, renal dysplasia, hypoplasia or aplasia, pelvicalyceal deformity, vesicoureteral reflux, chronic kidney disease, hematuria, proteinuria and renal scarring. Other reported features include: intellectual disability, polycystic ovaries, distinct facial characteristics, ischaemic stroke and retinitis pigmentosa. Genetics The defect in the majority of cases has been mapped to chromosome 10p (Gene Map Locus: 10pter-p13 or 10p14-p15.1). Haploinsufficiency (deletions) of the zinc-finger transcription factor GATA3, or mutations in the GATA3 gene, appears to be the underlying cause of this syndrome. It causes a failure in the specification of the prosensory domain and subsequently leads to increased cell death in the cochlear duct, thus causing deafness. Since the spectrum of phenotypic variation in affected people is quite large, Barakat (HDR) syndrome probably arises as a low-penetrance haploinsufficient disorder in which the genetic background plays a major role in the severity of the disease. Inheritance is probably autosomal dominant. Diagnosis A thorough diagnosis should be performed on every affected individual, and siblings should be studied for deafness, parathyroid and renal disease. The syndrome should be considered in infants who have been diagnosed prenatally with a chromosome 10p defect, and in those who have been diagnosed with well-defined phenotypes of urinary tract abnormalities
https://en.wikipedia.org/wiki/List%20of%20Python%20software
The Python programming language is actively used by many people, both in industry and academia, for a wide variety of purposes. Integrated Development Environments (IDEs) for Python Atom, an open source cross-platform IDE with autocomplete, help and more Python features under package extensions. Codelobster, a cross-platform IDE for various languages, including Python. EasyEclipse, an open source IDE for Python and other languages. Eclipse, with the Pydev plug-in; Eclipse supports many other languages as well. Emacs, with the built-in python-mode. Eric, an IDE for Python and Ruby. Geany, IDE for Python development and other languages. IDLE, a simple IDE bundled with the default implementation of the language. Jupyter Notebook, an IDE that supports markdown, Python, Julia, R and several other languages. Komodo IDE, an IDE for Python, Perl, PHP and Ruby. NetBeans, written in Java; runs everywhere a JVM is installed. Ninja-IDE, free software, written in Python and Qt; the name stands for "Ninja-IDE Is Not Just Another IDE". PIDA, open source IDE written in Python capable of embedding other text editors, such as Vim. PyCharm, a proprietary and open source IDE for Python development. PyScripter, a free and open-source Python IDE for Microsoft Windows. PythonAnywhere, an online IDE and web hosting service. Python Tools for Visual Studio, a free and open-source plug-in for Visual Studio. Spyder, IDE for scientific programming. Vim, with the "lang#python" layer enabled. Visual Studio Code, an open source IDE for various languages, including Python. Wing IDE, a cross-platform proprietary IDE for Python with some free versions/licenses. Replit, an online IDE that supports multiple languages. Unit testing frameworks Python package managers and Python distributions Anaconda, Python distribution with the conda package manager Enthought, Enthought Canopy Python with Python package manager pip, a package management system used to install and manage
https://en.wikipedia.org/wiki/RTEMS
Real-Time Executive for Multiprocessor Systems (RTEMS), formerly Real-Time Executive for Missile Systems, and then Real-Time Executive for Military Systems, is a real-time operating system (RTOS) designed for embedded systems. It is free and open-source software. Development began in the late 1980s, with early versions available via File Transfer Protocol (FTP) as early as 1993. OAR Corporation is currently managing the RTEMS project in cooperation with a steering committee which includes user representatives. Design RTEMS is designed for real-time, embedded systems and to support various open application programming interface (API) standards including Portable Operating System Interface (POSIX) and µITRON. The API now known as the Classic RTEMS API was originally based on the Real-Time Executive Interface Definition (RTEID) specification. RTEMS includes a port of the FreeBSD Internet protocol suite (TCP/IP stack) and support for various file systems including Network File System (NFS) and File Allocation Table (FAT). RTEMS provides extensive multi-processing and memory-management services and even a system database, alongside many other facilities. It has extensive documentation. Architectures RTEMS has been ported to various target processor architectures: ARM Atmel AVR Blackfin Freescale, now NXP ColdFire Texas Instruments – C3x/C4x DSPs Intel – x86 architecture members 80386, Pentium, and above LatticeMico32 68k MIPS Nios II OpenRISC PowerPC Renesas – H8/300, M32C, M32R, SuperH RISC-V RV32, RV64 using QEMU SPARC – ERC32, LEON, V9 Uses RTEMS is used in many application domains. The Experimental Physics and Industrial Control System (EPICS) community includes multiple people who are active RTEMS submitters. RTEMS is also popular for space uses since it supports multiple microprocessors developed for use in space, including SPARC ERC32 and LEON, MIPS Mongoose-V, ColdFire, and PowerPC architectures, which are available in space-hardened models. RT
https://en.wikipedia.org/wiki/Radeon%20R100%20series
The Radeon R100 is the first generation of Radeon graphics chips from ATI Technologies. The line features 3D acceleration based upon Direct3D 7.0 and OpenGL 1.3, and all but the entry-level versions offload host geometry calculations to a hardware transform and lighting (T&L) engine, a major improvement in features and performance compared to the preceding Rage design. The processors also include 2D GUI acceleration, video acceleration, and multiple display outputs. "R100" refers to the development codename of the initially released GPU of the generation. It is the basis for a variety of succeeding products. Development Architecture The first-generation Radeon GPU was launched in 2000, and was initially code-named Rage 6 (later R100), as the successor to ATI's aging Rage 128 Pro, which was unable to compete with the GeForce 256. The card had also been described as Radeon 256 in the months leading up to its launch, possibly to draw comparisons with the competing Nvidia card, although the moniker was dropped with the launch of the final product. The R100 was built on a 180 nm semiconductor manufacturing process. Like the GeForce, the Radeon R100 featured a hardware transform and lighting (T&L) engine to perform geometry calculations, freeing up the host computer's CPU. In 3D rendering the processor can write 2 pixels to the framebuffer and sample 3 texture maps per pixel per clock. This is commonly referred to as a 2×3 configuration, or a dual-pipeline design with 3 TMUs per pipe. As for Radeon's competitors, the GeForce 256 is 4×1, the GeForce2 GTS is 4×2 and the 3dfx Voodoo 5 5500 is a 2×1+2×1 SLI design. Unfortunately, the third texture unit did not get much use in games during the card's lifetime because software was not frequently performing more than dual texturing. In terms of rendering, its "Pixel Tapestry" architecture allowed for Environment Mapped Bump Mapping (EMBM) and Dot Product (Dot3) Bump Mapping support, offering the most complete bump mapping support.
https://en.wikipedia.org/wiki/Pop-up%20book
A pop-up book is any book with three-dimensional pages, often with elements that pop up as a page is turned. The terminology serves as an umbrella term for movable book, pop-ups, tunnel books, transformations, volvelles, flaps, pull-tabs, pop-outs, pull-downs, and other features each performing in a different manner. Three-dimensional greeting cards use the same principles. Interactive and pop-up types Design and creation of such books in arts is sometimes called "paper engineering". This usage should not be confused with traditional paper engineering, the engineering of systems to mass-produce paper products. Animated books Animated books combine three elements: story, colored illustrations which include text, and "two or more animated illustrations with their movement mechanisms working between a doubled page". In 1938, Julian Wehr's animations for children's books were patented as "moving illustrations" that move the picture up and down and horizontally at the same time with a single movement. Transformations Transformations show a scene made up of vertical slats. When a reader pulls a tab on the side, the slats slide under and over one another to "transform" into a totally different scene. Ernest Nister, one of the early English children's book authors, often produced books solely of transformations. Many of these have been reproduced by the Metropolitan Museum of Art. Tunnel books Tunnel books (also called peepshow books) consist of a set of pages bound with two folded concertina strips on each side and viewed through a hole in the cover. Openings in each page allow the viewer to see through the entire book to the back, and images on each page work together to create a dimensional scene inside. This type of book dates from the mid-18th century and was inspired by theatrical stage sets. Traditionally, these books were often created to commemorate special events or sold as souvenirs of tourist attractions. The term "tunnel book" derives from the fact th
https://en.wikipedia.org/wiki/List%20of%20Taiwanese%20flags
Taiwan has been controlled by various governments and has been associated with various flags throughout its history. The Republic of China has ruled the island since 1945, and Taiwan became the ROC's major territorial base in 1949, so the flag most commonly associated with the island is the flag of the Republic of China. The first national flag of Taiwan was used in 1663 during the Kingdom of Tungning; it was a plain white flag with the character 「鄭」 (zhèng) in a red-bordered circle. The flag of the Qing dynasty was also used from 1862 until 1895, when the Republic of Formosa was declared. The Formosan flag had a tiger on a plain blue field with azure clouds below it. During Japanese rule of Taiwan, the flag of Japan was flown on the island from 1895 to 1945. Following the transfer of Taiwan from Japan to China in 1945, the national flag was specified in Article Six of the 1947 Constitution of the Republic of China. After the Chinese Civil War in 1949, the government of Chiang Kai-shek relocated the Republic of China (ROC) to the island of Taiwan. Current flag Historical flags Royal flags Political divisions Below are the flags used in the political divisions of Taiwan. Provinces Special municipalities Provincial cities Counties Military flags Other state flags Political party flags Chinese Taipei sports flags Other historical flags See also List of Chinese flags Proposed flags of Taiwan
https://en.wikipedia.org/wiki/Ethnomycology
Ethnomycology is the study of the historical uses and sociological impact of fungi and can be considered a subfield of ethnobotany or ethnobiology. Although in theory the term includes fungi used for such purposes as tinder, medicine (medicinal mushrooms) and food (including yeast), it is often used in the context of the study of psychoactive mushrooms such as psilocybin mushrooms, the Amanita muscaria mushroom, and the ergot fungus. American banker Robert Gordon Wasson pioneered interest in this field of study in the late 1950s, when he and his wife became the first Westerners on record allowed to participate in a mushroom velada, held by the Mazatec curandera María Sabina. The biologist Richard Evans Schultes is also considered an ethnomycological pioneer. Later researchers in the field include Terence McKenna, Albert Hofmann, Ralph Metzner, Carl Ruck, Blaise Daniel Staples, Giorgio Samorini, Keewaydinoquay Peschel, John Marco Allegro, Clark Heinrich, John W. Allen, Jonathan Ott, Paul Stamets, Casey Brown and Juan Camilo Rodríguez Martínez. Besides mycological determination in the field, ethnomycology depends to a large extent on anthropology and philology. One of the major debates among ethnomycologists is Wasson's theory that the Soma mentioned in the Rigveda of the Indo-Aryans was the Amanita muscaria mushroom. Following his example, similar attempts have been made to identify psychoactive mushroom usage in many other (mostly ancient) cultures, with varying degrees of credibility. Another much-written-about topic is the content of the Kykeon, the sacrament used during the Eleusinian mysteries in ancient Greece between approximately 1500 BCE and 396 CE. Although not an ethnomycologist as such, the philologist John Allegro made an important contribution by suggesting, in a book controversial enough to destroy his academic career, that Amanita muscaria was not only consumed as a sacrament but was the main focus of worship in the more esoteric sects of Sumeri
https://en.wikipedia.org/wiki/Bochner%20integral
In mathematics, the Bochner integral, named for Salomon Bochner, extends the definition of the Lebesgue integral to functions that take values in a Banach space, as the limit of integrals of simple functions. Definition Let (X, Σ, μ) be a measure space, and B be a Banach space. The Bochner integral of a function f : X → B is defined in much the same way as the Lebesgue integral. First, define a simple function to be any finite sum of the form s(x) = Σ_i x_i χ_{E_i}(x), where the E_i are disjoint members of the σ-algebra Σ, the x_i are distinct elements of B, and χ_E is the characteristic function of E. If μ(E_i) is finite whenever x_i ≠ 0, then the simple function is integrable, and the integral is then defined by ∫ s dμ = Σ_i x_i μ(E_i), exactly as it is for the ordinary Lebesgue integral. A measurable function f : X → B is Bochner integrable if there exists a sequence of integrable simple functions s_n such that lim_{n→∞} ∫ ‖f − s_n‖ dμ = 0, where the integral on the left-hand side is an ordinary Lebesgue integral. In this case, the Bochner integral is defined by ∫ f dμ = lim_{n→∞} ∫ s_n dμ. It can be shown that the sequence (∫ s_n dμ) is a Cauchy sequence in the Banach space B, hence the limit on the right exists; furthermore, the limit is independent of the approximating sequence of simple functions. These remarks show that the integral is well-defined (i.e., independent of any choices). It can be shown that a function is Bochner integrable if and only if it lies in the Bochner space L¹(X; B). Properties Elementary properties Many of the familiar properties of the Lebesgue integral continue to hold for the Bochner integral. Particularly useful is Bochner's criterion for integrability, which states that if (X, Σ, μ) is a measure space, then a Bochner-measurable function f : X → B is Bochner integrable if and only if ∫ ‖f‖ dμ < ∞. Here, a function f is called Bochner measurable if it is equal μ-almost everywhere to a function g taking values in a separable subspace B₀ of B, and such that the inverse image g⁻¹(U) of every open set U in B belongs to Σ. Equivalently, f is the limit μ-almost everywhere of a sequence of countably-valued simple functions. Linear operators If T is a continuous
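Because the displayed formulas were lost in extraction, the reconstructed definition can be restated compactly in LaTeX; this is the standard presentation, not a verbatim quote of the article's own displays.

```latex
% Simple function and its integral:
\[
  s(x) = \sum_{i=1}^{n} x_i \,\chi_{E_i}(x),
  \qquad
  \int_X s \, d\mu = \sum_{i=1}^{n} x_i \,\mu(E_i).
\]
% f is Bochner integrable when simple functions s_n satisfy
\[
  \lim_{n\to\infty} \int_X \| f - s_n \| \, d\mu = 0,
  \qquad\text{and then}\qquad
  \int_X f \, d\mu = \lim_{n\to\infty} \int_X s_n \, d\mu .
\]
% Bochner's criterion: a Bochner-measurable f is Bochner integrable iff
\[
  \int_X \| f \| \, d\mu < \infty .
\]
```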
https://en.wikipedia.org/wiki/Vector-valued%20function
A vector-valued function, also referred to as a vector function, is a mathematical function of one or more variables whose range is a set of multidimensional vectors or infinite-dimensional vectors. The input of a vector-valued function could be a scalar or a vector (that is, the dimension of the domain could be 1 or greater than 1); the dimension of the function's domain has no relation to the dimension of its range. Example: Helix A common example of a vector-valued function is one that depends on a single real parameter t, often representing time, producing a vector v(t) as the result. In terms of the standard unit vectors i, j, k of Cartesian 3-space, these specific types of vector-valued functions are given by expressions such as r(t) = f(t) i + g(t) j + h(t) k, where f(t), g(t) and h(t) are the coordinate functions of the parameter t, and the domain of this vector-valued function is the intersection of the domains of the functions f, g, and h. It can also be referred to in a different notation: r(t) = ⟨f(t), g(t), h(t)⟩. The vector r(t) has its tail at the origin and its head at the coordinates evaluated by the function. The vector shown in the graph to the right is the evaluation of the function near t = 19.5 (between 6π and 6.5π; i.e., somewhat more than 3 rotations). The helix is the path traced by the tip of the vector as t increases from zero through 8π. In 2D, we can analogously speak about vector-valued functions as r(t) = f(t) i + g(t) j or r(t) = ⟨f(t), g(t)⟩. Linear case In the linear case the function can be expressed in terms of matrices: y = Ax, where y is an n × 1 output vector, x is a k × 1 vector of inputs, and A is an n × k matrix of parameters. Closely related is the affine case (linear up to a translation), where the function takes the form y = Ax + b, where in addition b is an n × 1 vector of parameters. The linear case arises often, for example in multiple regression, where for instance the n × 1 vector of predicted values of a dependent variable is expressed linearly in terms of a k × 1 vector (k < n) of estimated values of model parameters: in which
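A brief sketch evaluating a concrete helix; the coordinate functions f, g, h chosen here (cos t, sin t, t) are my own illustrative choice, not taken from the article's figure.

```python
# Illustrative sketch: evaluating the helix r(t) = cos(t) i + sin(t) j + t k.
import numpy as np

def r(t: np.ndarray) -> np.ndarray:
    """Vector-valued function of one real parameter; returns stacked (x, y, z)."""
    return np.stack([np.cos(t), np.sin(t), t], axis=-1)

t = np.linspace(0.0, 8.0 * np.pi, 5)   # parameter values from 0 through 8*pi
for ti, point in zip(t, r(t)):
    print(f"t = {ti:6.3f} -> r(t) = {np.round(point, 3)}")
```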
https://en.wikipedia.org/wiki/Executable-space%20protection
In computer security, executable-space protection marks memory regions as non-executable, such that an attempt to execute machine code in these regions will cause an exception. It makes use of hardware features such as the NX bit (no-execute bit), or in some cases software emulation of those features. However, technologies that emulate or supply an NX bit will usually impose a measurable overhead while using a hardware-supplied NX bit imposes no measurable overhead. The Burroughs 5000 offered hardware support for executable-space protection on its introduction in 1961; that capability remained in its successors until at least 2006. In its implementation of tagged architecture, each word of memory had an associated, hidden tag bit designating it code or data. Thus user programs cannot write or even read a program word, and data words cannot be executed. If an operating system can mark some or all writable regions of memory as non-executable, it may be able to prevent the stack and heap memory areas from being executable. This helps to prevent certain buffer overflow exploits from succeeding, particularly those that inject and execute code, such as the Sasser and Blaster worms. These attacks rely on some part of memory, usually the stack, being both writable and executable; if it is not, the attack fails. OS implementations Many operating systems implement or have an available executable space protection policy. Here is a list of such systems in alphabetical order, each with technologies ordered from newest to oldest. For some technologies, there is a summary which gives the major features each technology supports. The summary is structured as below. Hardware Supported Processors: (Comma separated list of CPU architectures) Emulation: (No) or (Architecture Independent) or (Comma separated list of CPU architectures) Other Supported: (None) or (Comma separated list of CPU architectures) Standard Distribution: (No) or (Yes) or (Comma separated list of dist
https://en.wikipedia.org/wiki/Augmented%20matrix
In linear algebra, an augmented matrix is a matrix obtained by appending the columns of two given matrices, usually for the purpose of performing the same elementary row operations on each of the given matrices. Given two matrices A and B with the same number of rows, the augmented matrix (A|B) is formed by appending the columns of B to the right of the columns of A. This is useful when solving systems of linear equations. For a given number of unknowns, the number of solutions to a system of linear equations depends only on the rank of the matrix representing the system and the rank of the corresponding augmented matrix. Specifically, according to the Rouché–Capelli theorem, any system of linear equations is inconsistent (has no solutions) if the rank of the augmented matrix is greater than the rank of the coefficient matrix; if, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters, where k is the difference between the number of variables and the rank; hence in such a case there are an infinitude of solutions. An augmented matrix may also be used to find the inverse of a matrix by combining it with the identity matrix. To find the inverse of a matrix Let C be a square 2×2 matrix. To find the inverse of C we create (C|I), where I is the 2×2 identity matrix. We then reduce the part of (C|I) corresponding to C to the identity matrix using only elementary row operations on (C|I); the right part of the result is the inverse of the original matrix. Existence and number of solutions Consider a system of equations in three unknowns whose coefficient matrix and augmented matrix both have rank 2. Since both ranks are equal, there exists at least one solution; and since the rank is less than the number of unknowns, the latter being 3, there are an infinite number of solutions. In contrast, consider a system of the same size whose coefficient matrix has rank 2 while its augmented matrix has rank 3; by the theorem above, such a system is inconsistent.
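The rank test described above is easy to carry out numerically. The following sketch uses NumPy with a hypothetical 3×3 system chosen to be consistent with rank 2 (the matrices are my own example, not the article's).

```python
# Sketch of the Rouche-Capelli rank test using an assumed 3x3 system.
import numpy as np

A = np.array([[1.0, 1.0, 2.0],     # hypothetical coefficient matrix
              [2.0, 2.0, 4.0],     # (second row = 2 * first row, so rank 2)
              [1.0, 0.0, 1.0]])
b = np.array([[3.0], [6.0], [1.0]])  # right-hand side, chosen consistent

aug = np.hstack([A, b])              # augmented matrix (A|b)
rank_A = np.linalg.matrix_rank(A)
rank_aug = np.linalg.matrix_rank(aug)

if rank_A < rank_aug:
    print("inconsistent: no solutions")
elif rank_A == A.shape[1]:
    print("unique solution")
else:
    free = A.shape[1] - rank_A
    print(f"infinitely many solutions with {free} free parameter(s)")
```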
https://en.wikipedia.org/wiki/Cray%20C90
The Cray C90 series (initially named the Y-MP C90) was a vector processor supercomputer launched by Cray Research in 1991. The C90 was a development of the Cray Y-MP architecture. Compared to the Y-MP, the C90 processor had a dual vector pipeline and a faster 4.1 ns clock cycle (244 MHz), which together gave three times the performance of the Y-MP processor. The maximum number of processors in a system was also doubled from eight to 16. The C90 series used the same Model E IOS (Input/Output Subsystem) and UNICOS operating system as the earlier Y-MP Model E. The C90 series included the C94, C98 and C916 models (configurations with a maximum of four, eight, and 16 processors respectively) and the C92A and C94A (air-cooled models). Maximum SRAM memory was between 1 and 8 GB, depending on model. The D92, D92A, D94 and D98 (also known as the C92D, C92AD, C94D and C98D respectively) variants were equipped with slower, but higher-density DRAM memory, allowing increased maximum memory sizes of up to 16 GB, depending on the model. The successor system was the Cray T90. External links Cray Research and Cray computers FAQ Part 5
https://en.wikipedia.org/wiki/Centre%20for%20Computational%20Geography
The Centre for Computational Geography (CCG) is an inter-disciplinary research centre based at the University of Leeds. The CCG was founded in 1993 by Stan Openshaw and Phil Rees, and builds on over 40 years' experience in spatial analysis and modelling within the School of Geography. CCG research is concerned with the development and application of tools for the analysis, visualisation and modelling of geographical systems.
https://en.wikipedia.org/wiki/Whonamedit%3F
Whonamedit? is an online English-language dictionary of medical eponyms and the people associated with their identification. Though it is a dictionary, many eponyms and persons are presented in extensive articles with comprehensive bibliographies. The dictionary is hosted in Norway and maintained by medical historian Ole Daniel Enersen.
https://en.wikipedia.org/wiki/Clinical%20pathology
Clinical pathology is a medical specialty that is concerned with the diagnosis of disease based on the laboratory analysis of bodily fluids, such as blood, urine, and tissue homogenates or extracts, using the tools of chemistry, microbiology, hematology, molecular pathology, and immunohaematology. This specialty requires a medical residency. Clinical pathology is a term used in the US, UK, Ireland, many Commonwealth countries, Portugal, Brazil, Italy, Japan, and Peru; countries using the equivalent in the home language of "laboratory medicine" include Austria, Germany, Romania, Poland and other Eastern European countries; other terms are "clinical analysis" (Spain) and "clinical/medical biology" (France, Belgium, Netherlands, North and West Africa). Licensing and subspecialities The American Board of Pathology certifies clinical pathologists, and recognizes the following secondary specialties of clinical pathology: Chemical pathology, also called clinical chemistry Hematopathology Blood banking - Transfusion medicine Clinical microbiology Cytogenetics Molecular genetics pathology In some countries other subspecialities fall under the responsibility of certified clinical biologists: Reproductive biology, including Assisted reproductive technology, Sperm bank and Semen analysis Immunopathology Organization Clinical pathologists are often medical doctors. In some countries in South America, Europe, Africa or Asia, this specialty can be practiced by non-physicians, such as a Ph.D. or Pharm.D., after a variable number of years of residency. In the United States of America, clinical pathologists work in close collaboration with clinical scientists (clinical biochemists, clinical microbiologists, etc.), medical technologists, hospital administrators, and referring physicians to ensure the accuracy and optimal utilization of laboratory testing. Clinical pathology is one of the two major divisions of pathology, the other being anatomical pathology. Often, pathologists pr
https://en.wikipedia.org/wiki/Aleksandr%20Kurosh
Aleksandr Gennadyevich Kurosh (; January 19, 1908 – May 18, 1971) was a Soviet mathematician, known for his work in abstract algebra. He is credited with writing The Theory of Groups, the first modern and high-level text on group theory, published in 1944. He was born in Yartsevo, in the Dukhovshchinsky Uyezd of the Smolensk Governorate of the Russian Empire and died in Moscow. He received his doctorate from the Moscow State University in 1936 under the direction of Pavel Alexandrov. In 1937 he became a professor there, and from 1949 until his death he held the Chair of Higher Algebra at Moscow State University. In 1938, he was the PhD thesis adviser to his fellow group theory scholar Sergei Chernikov, with whom he would develop important relationships between finite and infinite groups, discover the Kurosh-Chernikov class of groups, and publish several influential papers over the next decades. In all, he had 27 PhD students, including also Vladimir Andrunakievich, Mark Graev, and Anatoly Shirshov. Selected publications Teoriya Grupp (Теория групп), 2 vols., Nauk, 1944, 2nd edition 1953. German translation: Gruppentheorie. 2 vols., 1953, 1956, Akademie Verlag, Berlin, 2nd edition 1970, 1972. English translation: The Theory of Groups, 2 vols., Chelsea Publishing Company, the Bronx, tr. by K. A. Hirsch, 1950, 2nd edition 1955. Vorlesungen über Allgemeine Algebra. Verlag Harri Deutsch, Zürich 1964. Zur Theorie der Kategorien. Deutscher Verlag der Wissenschaften, Berlin 1963. Kurosch: Zur Zerlegung unendlicher Gruppen. Mathematische Annalen vol. 106, 1932. Kurosch: Über freie Produkte von Gruppen. Mathematische Annalen vol. 108, 1933. Kurosch: Die Untergruppen der freien Produkte von beliebigen Gruppen. Mathematische Annalen, vol. 109, 1934. A. G. Kurosh, S. N. Chernikov, “Solvable and nilpotent groups”, Uspekhi Mat. Nauk, 2:3(19) (1947), 18–59. A. G. Kurosh, "Curso de Álgebra Superior", Editorial Mir, Moscú 1997, traducción de Emiliano Aparicio Bernardo (in
https://en.wikipedia.org/wiki/Fossil%20wood
Fossil wood, also known as fossilized tree, is wood that is preserved in the fossil record. Over time the wood will usually be the part of a plant that is best preserved (and most easily found). Fossil wood may or may not be petrified, in which case it is known as petrified wood or petrified tree. The study of fossil wood is sometimes called palaeoxylology, a "palaeoxylologist" being somebody who studies fossil wood. The fossil wood may be the only part of the plant that has been preserved, with the rest of the plant completely unknown; therefore such wood may get a special kind of botanical name. This will usually include "xylon" and a term indicating its presumed (not necessarily certain) affinity, such as Araucarioxylon (wood similar to that of extant Araucaria or some related genus like Agathis or Wollemia), Palmoxylon (wood similar to that of modern Arecaceae), or Castanoxylon (wood similar to that of modern chinkapin or chestnut tree). Types Petrified wood Petrified wood is fossil wood that has turned to stone through the process of permineralization. All organic materials are replaced with minerals while the original structure of the wood is maintained. The most notable example is the petrified forest in Arizona. Mummified wood Mummified wood is fossil wood that has not permineralized. It is formed when trees are buried rapidly in dry cold or hot environments. It is valued in paleobotany because it retains original cells and tissues capable of being examined with the same techniques used with extant plants in dendrology. Notable examples include the mummified forests on Ellesmere Island and Axel Heiberg Island. Submerged forests Submerged forests are remains of trees submerged by marine transgression. They are important in determining sea level rise since the last glacial period. See also Amber Dendrochronology Paleobotany Xyloid lignite
https://en.wikipedia.org/wiki/Oral%20allergy%20syndrome
Oral allergy syndrome (OAS) or pollen-food allergy is a type of food allergy classified by a cluster of allergic reactions in the mouth and throat in response to eating certain (usually fresh) fruits, nuts, and vegetables. It typically develops in adults with hay fever. OAS is not a separate food allergy, but rather represents cross-reactivity between distant remnants of tree or weed pollen still found in certain fruits and vegetables. Therefore, OAS is only seen in people with seasonal pollen allergies, and mostly people who are allergic to tree pollen. It is usually limited to ingestion of uncooked fruits or vegetables. In adults, up to 60% of all food allergic reactions are due to cross-reactions between foods and inhalative allergens. OAS is a Type 1 or immunoglobulin E-mediated hypersensitivity, which is sometimes called a "true allergy". The body's immune system produces IgE antibodies against pollen; in OAS, these antibodies also bind to (or cross-react with) other structurally similar proteins found in botanically related plants. OAS can occur any time of the year, but is most prevalent during the pollen season. Individuals with OAS usually develop symptoms within minutes of eating the food. Signs and symptoms Individuals with OAS may have any of a number of allergic reactions that usually occur very rapidly, within minutes of eating a trigger food. The most common reaction is an itching or burning sensation in the lips, mouth, ear canal, or pharynx. Sometimes other reactions can be triggered in the eyes, nose, and skin. Swelling of the lips, tongue, and uvula, and a sensation of tightness in the throat may be observed. Once the allergen reaches the stomach, it is broken down by the acid, and the allergic reaction does not progress further. The natural course of the disease is that it usually persists. If an affected person swallows the raw food, and the allergen is not destroyed by the stomach acids, it is likely that there will be a reaction fr
https://en.wikipedia.org/wiki/United%20States%20Army%20Research%20Laboratory
The U.S. Army Combat Capabilities Development Command Army Research Laboratory (DEVCOM ARL) is the U.S. Army's foundational research laboratory. ARL is headquartered at the Adelphi Laboratory Center (ALC) in Adelphi, Maryland. Its largest single site is at Aberdeen Proving Ground, Maryland. Other major ARL locations include Research Triangle Park, North Carolina; White Sands Missile Range, New Mexico; Graces Quarters, Maryland; and NASA's Glenn Research Center, Ohio, and Langley Research Center, Virginia. ARL also has regional sites in Playa Vista, California (ARL West), Chicago (ARL Central), Austin, TX (ARL South), and Boston (ARL Northeast). DEVCOM ARL has three directorates: Army Research Office, located in Research Triangle Park Army Research Directorate Research Business Directorate History Before the forming of the ARL, the United States Army had research facilities dating back to 1820, when the laboratory at Watertown Arsenal, Massachusetts, studied pyrotechnics and waterproof paper cartridges. This facility would evolve into the Materials Technology Laboratory. Most pre-WWII military research occurred within the military by military personnel, but in 1945 the Army published a policy affirming the need for civilian scientific contributions in military planning and weapons production. Non-military involvement before this time was frequent; however, contributions to warfare technology were made on a limited and incidental basis. On June 11, 1946, a new research and development division of the War Department General Staff was created; however, due to internal forces within the military which supported the traditional technical service structure, the division was closed. A variety of reorganizations took place over the next four decades, which put many organizations in command of Army research and development. Often the commanders of these organizations were advocates of the reorganization, while some middle-level management was opposed to the change. Reor
https://en.wikipedia.org/wiki/Lloyds%20Bank%20coprolite
The Lloyds Bank coprolite is a large coprolite, or fossilised specimen of human faeces, recovered by the York Archaeological Trust while excavating the Viking settlement of Jórvík (present-day York) in northern England. Description The coprolite was found in 1972 beneath the site of what was to become the branch of Lloyds Bank on Pavement in York, and may be the largest example of fossilised human faeces (palaeofaeces) ever found, measuring long and wide. Analysis of the stool has indicated that its producer subsisted largely on meat and bread whilst the presence of several hundred parasitic eggs suggests they were riddled with intestinal worms. In 1991, Andrew Jones, a York Archaeological Trust employee and palaeoscatologist, made international news with his appraisal of the item for insurance purposes: "This is the most exciting piece of excrement I've ever seen ... In its own way, it's as irreplaceable as the Crown Jewels". The layers that covered the coprolite were moist and peaty. The archaeologists also preserved timber, textiles and leather from the site. Display The specimen was put on display at the Archaeological Resource Centre, an outreach and education centre run by the York Archaeological Trust. In 2003, the coprolite broke into three pieces after being dropped while being exhibited to a party of visitors, and efforts were undertaken to reconstruct it. It has been displayed at Jorvik Viking Centre since 2008.
https://en.wikipedia.org/wiki/Automatic%20watch
An automatic watch, also known as a self-winding watch or simply an automatic, is a mechanical watch where the natural motion of the wearer provides energy to wind the mainspring, making manual winding unnecessary if worn enough. It is distinguished from a manual watch in that a manual watch must have its mainspring wound by hand at regular intervals. Operation In a mechanical watch the watch's gears are turned by a spiral spring called a mainspring. In a manual watch, energy is stored in the mainspring by turning a knob, the crown, on the side of the watch. Then the energy from the mainspring powers the watch movement until it runs down, requiring the spring to be wound again. A self-winding watch movement has a mechanism which winds the mainspring using the natural motions of the wearer's body. The watch contains an oscillating weight that turns on a pivot. The normal movements of the watch in the user's pocket (for a pocketwatch) or on the user's arm (for a wristwatch) cause the rotor to pivot on its staff, which is attached to a ratcheted winding mechanism. The motion of the watch is thereby translated into circular motion of the weight which, through a series of reverser and reducing gears, eventually winds the mainspring. There are many different designs for modern self-winding mechanisms. Some designs allow winding of the watch to take place while the weight swings in only one direction while other, more advanced, mechanisms have two ratchets and wind the mainspring during both clockwise and anti-clockwise weight motions. The fully wound mainspring in a typical watch can store enough energy reserve for roughly two days, allowing the watch to keep running through the night while stationary. In many cases automatic wristwatches can also be wound manually by turning the crown, so the watch can be kept running when not worn, and in case the wearer's wrist motions are not sufficient to keep it wound automatically. Preventing overwinding Self-winding mechanism
https://en.wikipedia.org/wiki/Charles%20Hesterman%20Merz
Charles Hesterman Merz (5 October 1874 – 14 or 15 October 1940) was a British electrical engineer who pioneered the use of high-voltage three-phase AC power distribution in the United Kingdom, building a system in the North East of England in the early 20th century that became the model for the country's National Grid. Early life Merz was the eldest son of industrial chemist John Theodore Merz (a Quaker from Germany) and Alice Mary Richardson, a sister of John Wigham Richardson, the Tyneside shipbuilder. He was born in Gateshead and attended Bootham School, York. He attended Armstrong College in Newcastle, where his father was a part-time lecturer. Career He then entered an apprenticeship at the Newcastle Electric Supply Company (NESCo), which had been founded by his father in 1889. In 1898 Merz became the first Secretary and Chief Engineer of the Cork Electric Tramways and Lighting Company in Cork, Ireland. In 1899 Merz set up a consulting firm which, with the arrival of William McLellan in 1902, became Merz & McLellan. Merz and McLellan had first worked together in Cork. His next major project was the Neptune Bank Power Station in Wallsend near Newcastle. It was the first three-phase electricity supply system in Great Britain, and was opened by Lord Kelvin on 18 June 1901. In the same year he toured the US and Canada. Together with Bernard Price, he developed and patented one of the earliest forms of automatic mains protection. This system was successful and became known as the Merz-Price system. When Price was succeeded by Philip Vassar Hunter, Merz worked with him to develop an improved version, which became known as the Merz-Hunter system. He was known affectionately within the electricity industry as the "Grid King". He was a consultant to a local tramway company on the electrification of their horse-drawn routes and, subsequently, to the Tyneside local lines of the North Eastern Railway, a pioneer of British mainline railway electrification, whose elec
https://en.wikipedia.org/wiki/GRB7
Growth factor receptor-bound protein 7, also known as GRB7, is a protein that in humans is encoded by the GRB7 gene. Function The product of this gene belongs to a small family of adaptor proteins that are known to interact with a number of receptor tyrosine kinases and signaling molecules. This gene encodes a growth factor receptor-binding protein that interacts with the epidermal growth factor receptor (EGFR) and ephrin receptors. The protein plays a role in the integrin signaling pathway and cell migration by binding with focal adhesion kinase (FAK). Alternative splicing results in multiple transcript variants encoding different isoforms, although the full-length natures of only two of the variants have been determined to date. Clinical significance GRB7 is an SH2-domain adaptor protein that binds to receptor tyrosine kinases and provides a direct intracellular link to the Ras proto-oncogene. Human GRB7 is located on the long arm of chromosome 17, next to the ERBB2 (alias HER2/neu) proto-oncogene. These two genes are commonly co-amplified (present in excess copies) in breast cancers. GRB7, thought to be involved in cell migration, is well known to be over-expressed in testicular germ cell tumors, esophageal cancers, and gastric cancers. Interactions GRB7 has been shown to interact with: EPH receptor B1, Insulin receptor, PTK2, RET proto-oncogene, and Rnd1. Model organisms Model organisms have been used in the study of GRB7 function. A conditional knockout mouse line called Grb7tm1b(EUCOMM)Wtsi was generated at the Wellcome Trust Sanger Institute. Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion. Additional screens performed: in-depth immunological phenotyping
https://en.wikipedia.org/wiki/Procrustes%20analysis
In statistics, Procrustes analysis is a form of statistical shape analysis used to analyse the distribution of a set of shapes. The name Procrustes () refers to a bandit from Greek mythology who made his victims fit his bed either by stretching their limbs or cutting them off. In mathematics: an orthogonal Procrustes problem is a method which can be used to find out the optimal rotation and/or reflection (i.e., the optimal orthogonal linear transformation) for the Procrustes Superimposition (PS) of an object with respect to another. a constrained orthogonal Procrustes problem, subject to det(R) = 1 (where R is an orthogonal matrix), is a method which can be used to determine the optimal rotation for the PS of an object with respect to another (reflection is not allowed). In some contexts, this method is called the Kabsch algorithm. When a shape is compared to another, or a set of shapes is compared to an arbitrarily selected reference shape, Procrustes analysis is sometimes further qualified as classical or ordinary, as opposed to generalized Procrustes analysis (GPA), which compares three or more shapes to an optimally determined "mean shape". Introduction To compare the shapes of two or more objects, the objects must be first optimally "superimposed". Procrustes superimposition (PS) is performed by optimally translating, rotating and uniformly scaling the objects. In other words, both the placement in space and the size of the objects are freely adjusted. The aim is to obtain a similar placement and size, by minimizing a measure of shape difference called the Procrustes distance between the objects. This is sometimes called full, as opposed to partial PS, in which scaling is not performed (i.e. the size of the objects is preserved). Notice that, after full PS, the objects will exactly coincide if their shape is identical. For instance, with full PS two spheres with different radii will always coincide, because they have exactly the same shape. Conversely, wi
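A minimal sketch of ordinary Procrustes superimposition using SciPy's procrustes routine; the point sets are hypothetical, and the second is a rotated, scaled, translated copy of the first, so the residual disparity should be essentially zero, matching the claim above that identically shaped objects coincide after full PS.

```python
# Minimal sketch of ordinary (full) Procrustes superimposition with SciPy.
import numpy as np
from scipy.spatial import procrustes

shape1 = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])

theta = np.deg2rad(30.0)                        # assumed rotation angle
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
# A rotated, uniformly scaled, translated copy: the same shape.
shape2 = 2.5 * shape1 @ rot.T + np.array([4.0, -1.0])

mtx1, mtx2, disparity = procrustes(shape1, shape2)
print(f"Procrustes disparity: {disparity:.2e}")  # ~0: identical shapes
```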
https://en.wikipedia.org/wiki/Residential%20care
Residential care refers to long-term care given to adults or children who stay in a residential setting rather than in their own home or family home. There are various residential care options available, depending on the needs of the individual. People with disabilities, mental health problems, learning difficulties, Alzheimer's disease, dementia or who are frail aged are often cared for at home by paid or voluntary caregivers, such as family and friends, with additional support from home care agencies. However, if home-based care is not available or not appropriate for the individual, residential care may be required. Child care Children may be removed from abusive or unfit homes by government action, or they may be placed in various types of out-of-home care by parents who are unable to care for them or their special needs. In most jurisdictions the child is removed from the home only as a last resort, for their own safety and well-being or the safety of others, since out-of-home care is regarded as very disruptive to the child. They are moved to a place called a foster home. Residential schools A residential school is a school in which children generally stay 24 hours per day, 7 days per week (often called a boarding school). There is divided opinion about whether this type of schooling is beneficial for children. A case for residential special schooling has been advanced in the article "Residential special schooling: the inclusive option!" in the Scottish Journal of Residential Child Care, Volume 3(2), 17–32, 2004, by Robin Jackson. Residential child care This type of out-of-home care is for orphans, or for children whose parents cannot or will not look after them. Orphaned, abandoned or high-risk young people may live in small self-contained units established as home environments, for example within residential child care communities. Young people in this care are, if removed from home involuntarily, subject to government departmental evaluations that in
https://en.wikipedia.org/wiki/Selberg%20zeta%20function
The Selberg zeta-function was introduced by Atle Selberg in 1956. It is analogous to the famous Riemann zeta function ζ(s) = ∏_{p∈P} (1 − p^(−s))^(−1), where P is the set of prime numbers. The Selberg zeta-function uses the lengths of simple closed geodesics instead of the prime numbers. If Γ is a subgroup of SL(2,R), the associated Selberg zeta function is defined as follows, ζ_Γ(s) = ∏_p (1 − N(p)^(−s))^(−1), or Z(s) = ∏_p ∏_{n=0}^∞ (1 − N(p)^(−s−n)), where p runs over conjugacy classes of prime geodesics (equivalently, conjugacy classes of primitive hyperbolic elements of Γ), and N(p) denotes the length of p (equivalently, the square of the bigger eigenvalue of p). For any hyperbolic surface of finite area there is an associated Selberg zeta-function; this function is a meromorphic function defined in the complex plane. The zeta function is defined in terms of the closed geodesics of the surface. The zeros and poles of the Selberg zeta-function, Z(s), can be described in terms of spectral data of the surface. The zeros are at the following points: For every cusp form with eigenvalue s₀(1 − s₀) there exists a zero at the point s₀. The order of the zero equals the dimension of the corresponding eigenspace. (A cusp form is an eigenfunction to the Laplace–Beltrami operator which has Fourier expansion with zero constant term.) The zeta-function also has a zero at every pole of the determinant of the scattering matrix, φ(s). The order of the zero equals the order of the corresponding pole of the scattering matrix. The zeta-function also has poles at , and can have zeros or poles at the points . The Ihara zeta function is considered a p-adic (and a graph-theoretic) analogue of the Selberg zeta function. Selberg zeta-function for the modular group For the case where the surface is Γ\H, where Γ is the modular group, the Selberg zeta-function is of special interest. For this special case the Selberg zeta-function is intimately connected to the Riemann zeta-function. In this case the determinant of the scattering matrix is given by φ(s) = π^(1/2) Γ(s − 1/2) ζ(2s − 1) / (Γ(s) ζ(2s)). In particular, we see that if the Riemann zeta-function has a zero at s₀, then the determinant of the scattering matrix has a pole at s₀/2, and hence the Selberg zeta-function has a zero at s₀/2.
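For readability, the product formulas reconstructed above, plus the modular-group scattering determinant, restated in LaTeX (standard notation; a restatement, not a verbatim quote of the article's displays):

```latex
% The two product forms of the Selberg zeta-function:
\[
  \zeta_\Gamma(s) = \prod_{p} \bigl(1 - N(p)^{-s}\bigr)^{-1},
  \qquad
  Z(s) = \prod_{p} \prod_{n=0}^{\infty} \bigl(1 - N(p)^{-s-n}\bigr),
\]
% and, for the modular group, the scattering-matrix determinant:
\[
  \varphi(s) = \pi^{1/2}\,
  \frac{\Gamma\!\left(s - \tfrac{1}{2}\right)\,\zeta(2s - 1)}
       {\Gamma(s)\,\zeta(2s)} .
\]
```

Note that a zero of ζ at s₀ makes ζ(2s) in the denominator vanish at s = s₀/2, which is exactly the pole of φ(s), and hence the zero of Z(s), described in the text.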
https://en.wikipedia.org/wiki/Non-negative%20matrix%20factorization
Non-negative matrix factorization (NMF or NNMF), also non-negative matrix approximation, is a group of algorithms in multivariate analysis and linear algebra where a matrix V is factorized into (usually) two matrices W and H, with the property that all three matrices have no negative elements. This non-negativity makes the resulting matrices easier to inspect. Also, in applications such as processing of audio spectrograms or muscular activity, non-negativity is inherent to the data being considered. Since the problem is not exactly solvable in general, it is commonly approximated numerically. NMF finds applications in such fields as astronomy, computer vision, document clustering, missing data imputation, chemometrics, audio signal processing, recommender systems, and bioinformatics. History In chemometrics non-negative matrix factorization has a long history under the name "self modeling curve resolution". In this framework the vectors in the right matrix are continuous curves rather than discrete vectors. Also, early work on non-negative matrix factorizations was performed by a Finnish group of researchers in the 1990s under the name positive matrix factorization. It became more widely known as non-negative matrix factorization after Lee and Seung investigated the properties of the algorithm and published some simple and useful algorithms for two types of factorizations. Background Let matrix V be the product of the matrices W and H, V = WH. Matrix multiplication can be implemented as computing the column vectors of V as linear combinations of the column vectors in W using coefficients supplied by columns of H. That is, each column of V can be computed as v_i = W h_i, where v_i is the i-th column vector of the product matrix V and h_i is the i-th column vector of the matrix H. When multiplying matrices, the dimensions of the factor matrices may be significantly lower than those of the product matrix, and it is this property that forms the basis of NMF. NMF generates factors with significantly reduced dimensions compared to the original matrix.
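A short sketch of V ≈ WH using scikit-learn's NMF implementation, which is one of many NMF algorithms rather than the specific method discussed above; the data matrix and parameters are illustrative (the example matrix has rank 2, so a two-component factorization reconstructs it almost exactly).

```python
# Hedged sketch of V ~ W H with scikit-learn; V, r are illustrative choices.
import numpy as np
from sklearn.decomposition import NMF

V = np.array([[1.0, 1.0, 2.0, 4.0],
              [2.0, 1.0, 3.0, 5.0],
              [3.0, 1.5, 4.5, 7.5]])   # hypothetical non-negative data, rank 2

model = NMF(n_components=2, init="random", random_state=0, max_iter=1000)
W = model.fit_transform(V)             # 3 x 2 non-negative basis matrix
H = model.components_                  # 2 x 4 non-negative coefficient matrix

print("reconstruction error:", np.linalg.norm(V - W @ H))  # small residual
```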
https://en.wikipedia.org/wiki/Comparison%20of%20programming%20languages%20%28string%20functions%29
String functions are used in computer programming languages to manipulate a string or query information about a string (some do both). Most programming languages that have a string datatype will have some string functions, although there may be other low-level ways within each language to handle strings directly. In object-oriented languages, string functions are often implemented as properties and methods of string objects. In functional and list-based languages a string is represented as a list (of character codes), therefore all list-manipulation procedures could be considered string functions. However, such languages may implement a subset of explicit string-specific functions as well. For functions that manipulate strings, modern object-oriented languages, like C# and Java, have immutable strings and return a copy (in newly allocated dynamic memory), while others, like C, manipulate the original string unless the programmer copies data to a new string. See for example Concatenation below. The most basic example of a string function is the length(string) function, which returns the length of a string literal. For example, length("hello world") would return 11. Other languages may have string functions with similar or exactly the same syntax or parameters or outcomes. For example, in many languages the length function is usually represented as len(string). The below list of common functions aims to help programmers find the equivalent function in a language and limit this confusion. Note, string concatenation and regular expressions are handled in separate pages. Statements in guillemets (« … ») are optional. Common string functions (multi language reference) String functions common to many languages are listed below, including the different names used. CharAt
{ Example in Pascal }
var
  MyStr: string = 'Hello, World';
  MyChar: Char;
begin
  MyChar := MyStr[2]; // 'e'

# Example in ALGOL 68 #
"Hello, W
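For comparison with the Pascal fragment above, here are the same two operations (length and CharAt) in Python; this is my own illustrative addition, not part of the original comparison table.

```python
# Python equivalents of the length and CharAt operations discussed above.
my_str = "Hello, World"

print(len(my_str))   # 12 characters, including the comma and the space
print(my_str[1])     # 'e' -- Python indexes from 0, so [1] is the 2nd char,
                     # matching Pascal's 1-based MyStr[2]
```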
https://en.wikipedia.org/wiki/Unified%20Soil%20Classification%20System
The Unified Soil Classification System (USCS) is a soil classification system used in engineering and geology to describe the texture and grain size of a soil. The classification system can be applied to most unconsolidated materials, and is represented by a two-letter symbol. The first letter describes the dominant fraction or material: G (gravel), S (sand), M (silt), C (clay), O (organic), and Pt (peat, the one two-letter prefix). The second letter describes gradation or plasticity: W (well-graded), P (poorly graded), H (high plasticity), and L (low plasticity). If the soil has 5–12% by weight of fines passing a #200 sieve (5% < P#200 < 12%), both grain size distribution and plasticity have a significant effect on the engineering properties of the soil, and dual notation may be used for the group symbol. For example, GW-GM corresponds to "well-graded gravel with silt." If the soil has more than 15% by weight retained on a #4 sieve (R#4 > 15%), there is a significant amount of gravel, and the suffix "with gravel" may be added to the group name, but the group symbol does not change. For example, SP-SM could refer to "poorly graded SAND with silt" or "poorly graded SAND with silt and gravel." Symbol chart ASTM D-2487 See also AASHTO Soil Classification System AASHTO ASTM International
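The two threshold rules above are easy to express in code. The following Python sketch is illustrative only: the function name and its simplified inputs (percent passing the #200 sieve, percent retained on the #4 sieve) are assumptions, and a full ASTM D-2487 classification involves further criteria such as gradation coefficients and Atterberg limits.

def uscs_notes(pct_passing_200, pct_retained_4):
    # Apply two of the USCS threshold rules (simplified sketch).
    notes = []
    if 5 < pct_passing_200 < 12:
        notes.append("use dual group symbol, e.g. GW-GM or SP-SM")
    if pct_retained_4 > 15:
        notes.append('append "with gravel" to the group name')
    return notes

print(uscs_notes(pct_passing_200=8, pct_retained_4=20))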
https://en.wikipedia.org/wiki/Embalming%20chemicals
Embalming chemicals are a variety of preservatives, sanitising and disinfectant agents, and additives used in modern embalming to temporarily prevent decomposition and restore a natural appearance for viewing a body after death. A mixture of these chemicals is known as embalming fluid and is used to preserve bodies of deceased persons for both funeral purposes and in medical research in anatomical laboratories. How long a body remains preserved depends on the strength of the chemicals used, the expertise of the embalmer, and the intended duration and purpose of the preservation. Typically, embalming fluid contains a mixture of formaldehyde, glutaraldehyde, methanol, and other solvents. The formaldehyde content generally ranges from 5 to 37 percent and the methanol content may range from 9 to 56 percent. In the United States alone, about 20 million liters (roughly 5.3 million gallons) of embalming fluid are used every year. How they work Embalming fluid acts to fix (denature) cellular proteins, meaning that they cannot act as a nutrient source for bacteria; embalming fluid also kills the bacteria themselves. Formaldehyde or glutaraldehyde fixes tissue or cells by irreversibly connecting a primary amine group in a protein molecule with a nearby nitrogen in a protein or DNA molecule through a -CH2- cross-link, formed via an intermediate known as a Schiff base. The end result also simulates, through color changes, the appearance of blood flowing under the skin. Modern embalming is not done with a single fixative. Instead, various chemicals are used to create a mixture, called an arterial solution, which is uniquely generated for the needs of each case. For example, a body needing to be repatriated overseas needs a higher index (percentage of diluted preservative chemical) than one simply for viewing (known in the United States and Canada as a funeral visitation) at a funeral home before cremation or burial. Process Embalming fluid is injected into the arterial system of the deceased, and a trocar is inserted
https://en.wikipedia.org/wiki/Dot-com%20company
A dot-com company, or simply a dot-com (alternatively rendered dot.com, dot com, dotcom or .com), is a company that does most of its business on the Internet, usually through a website on the World Wide Web that uses the popular top-level domain ".com". As of 2021, .com is by far the most used TLD, with almost half of all registrations. The suffix .com in a URL usually (but not always) refers to a commercial or for-profit entity, as opposed to a non-commercial entity or non-profit organization, which usually uses .org. The name for the domain came from the word commercial, as that is the main intended use. Since dot-com companies are web-based, their products or services are often delivered via web-based mechanisms, even when physical products are involved. On the other hand, some dot-com companies do not offer any physical products. History Origin of the .com domain (1985-1991) The .com top-level domain (TLD) was one of the first seven created when the Internet was first implemented in 1985; the others were .mil, .gov, .edu, .net, .int, and .org. The United States Department of Defense originally controlled the domain, but control was later transferred to the National Science Foundation as it was mainly used for non-defense-related purposes. Beginning of online commerce and rise in valuation (1992-1999) With the creation of the World Wide Web in 1991, many companies began creating websites to sell their products. In 1994, the first secure online credit card transaction was made using the NetMarket platform. By 1995, over 40 million people were using the Internet. That same year, companies including Amazon.com and eBay were launched, paving the way for future e-commerce companies. At the time of Amazon's IPO in 1997, the company was recording a 900% increase in revenue over the previous year. By 1998, with a valuation of over $14 billion, it was still not making a profit. The same phenomenon occurred with many other internet companies: venture capitalists were eager
https://en.wikipedia.org/wiki/Plasmid%20preparation
A plasmid preparation is a method of DNA extraction and purification for plasmid DNA. It is an important step in many molecular biology experiments and is essential for the successful use of plasmids in research and biotechnology. Many methods have been developed to purify plasmid DNA from bacteria. During the purification procedure, the plasmid DNA is often separated from contaminating proteins and genomic DNA. These methods invariably involve three steps: growth of the bacterial culture, harvesting and lysis of the bacteria, and purification of the plasmid DNA. Purification of plasmids is central to molecular cloning. A purified plasmid can be used for many standard applications, such as sequencing and transfection into cells. Growth of the bacterial culture Plasmids are almost always purified from liquid bacterial cultures, usually E. coli, which have been transformed and isolated. Virtually all plasmid vectors in common use encode one or more antibiotic resistance genes as a selectable marker, for example a gene encoding ampicillin or kanamycin resistance, which allows bacteria that have been successfully transformed to multiply uninhibited. Bacteria that have not taken up the plasmid vector are assumed to lack the resistance gene, and thus only colonies representing successful transformations are expected to grow. Bacteria are grown under favourable conditions. Harvesting and lysis of the bacteria There are several methods for cell lysis, including alkaline lysis, mechanical lysis, and enzymatic lysis. Alkaline lysis The most common method is alkaline lysis, which involves the use of a high concentration of a basic solution, such as sodium hydroxide, to lyse the bacterial cells. When bacteria are lysed under alkaline conditions (pH 12.0–12.5), both chromosomal DNA and protein are denatured; the plasmid DNA, however, remains stable. Some scientists reduce the concentration of NaOH used to 0.1 M in order to reduce the occurrence of ssDNA. After the addition o
https://en.wikipedia.org/wiki/Seam%20sealant
Seam sealants are chemical coating compositions. Textiles Seam sealants are applied to waterproof seams of items such as rainwear, tents, backpacks, shoes, drysacks, and drysuits. They are often applied by the consumer post-purchase. Automotive industry Seam sealing was already being performed manually in the 1980s, successfully preventing perforation corrosion. Today, seam sealants are used in the OEM automotive industry primarily to seal against air leaks and to waterproof sheet-metal overlaps that occur in the assembly of a vehicle. Such overlaps are typically decorative rather than structurally supportive. Accordingly, they are usually only spot welded, and this process results in a closure that is not air- or watertight. Seam sealants are sprayed or extruded over the joined edges of these overlaps, and they then either cure to a flexible waterproof seal by drying (dehydrating), in the case of water-borne compositions, or thermoset irreversibly to a flexible adherent seam seal by going through an oven bake, in the case of plasticized polyvinyl chloride compositions. Most interior seam seals are not visible after the vehicle is finished, because they are covered by carpeting, interior roof headliner, or decorative trim panels. Exterior seam seals are always painted over and are referred to as "coach joint seals." Flat-stream application, using a variety of special nozzles, has increasingly been adopted as the most flexible application method.
https://en.wikipedia.org/wiki/Machine%20shop
A machine shop or engineering workshop is a room, building, or company where machining, a form of subtractive manufacturing, is done. In a machine shop, machinists use machine tools and cutting tools to make parts, usually of metal or plastic (but sometimes of other materials such as glass or wood). A machine shop can be a small business (such as a job shop) or a portion of a factory, whether a toolroom or a production area for manufacturing. The building construction and the layout of the place and equipment vary, and are specific to the shop; for instance, the flooring in one shop may be concrete, or even compacted dirt, while another shop may have asphalt floors. Some shops are air-conditioned and some are not; in certain shops it is necessary to maintain a tightly controlled climate. Each shop has its own tools and machinery, which differ from other shops in quantity, capability and focus of expertise. The parts produced can be the end product of the factory, to be sold to customers in the machine industry, the car industry, the aircraft industry, or others. The work may encompass the frequent machining of customized components. In other cases, companies in those fields have their own machine shops. The production can consist of cutting, shaping, drilling, finishing, and other processes, frequently those related to metalworking. The machine tools typically include metal lathes, milling machines, machining centers, multitasking machines, drill presses, or grinding machines, many controlled with computer numerical control (CNC). Other processes, such as heat treating, electroplating, or painting of the parts before or after machining, are often done in a separate facility. A machine shop can contain some raw materials (such as bar stock for machining) and an inventory of finished parts. These items are often stored in a warehouse. The control and traceability of the materials usually depend on the company's management and the industries that are served, standard certification of the
https://en.wikipedia.org/wiki/Alligation
Alligation is an old and practical method of solving arithmetic problems related to mixtures of ingredients. There are two types of alligation: alligation medial, used to find the quantity of a mixture given the quantities of its ingredients, and alligation alternate, used to find the amount of each ingredient needed to make a mixture of a given quantity. Alligation medial is merely a matter of finding a weighted mean. Alligation alternate is more complicated and involves organizing the ingredients into high and low pairs which are then traded off. Alligation alternate provides answers when an algebraic solution (e.g., using simultaneous equations) is not possible (e.g., you have three variables but only two equations). Note that in this class of problem, there may be multiple feasible answers. Two further variations on alligation occur: alligation partial and alligation total (see John King's Arithmetic Book of 1795, which includes worked examples). The technique is not used in schools, although it is still used in pharmacies for quick calculation of quantities. Examples Alligation medial Suppose you make a cocktail drink combination out of 1/2 Coke, 1/4 Sprite, and 1/4 orange soda. The Coke has 120 grams of sugar per liter, the Sprite has 100 grams of sugar per liter, and the orange soda has 150 grams of sugar per liter. How much sugar does the drink have? This is an example of alligation medial because you want to find the amount of sugar in the mixture given the amounts of sugar in its ingredients. The solution is just to find the weighted average by composition: (1/2)(120) + (1/4)(100) + (1/4)(150) = 122.5 grams of sugar per liter. Alligation alternate Suppose you like 1% milk, but you have only 3% whole milk and ½% low fat milk. How much of each should you mix to make an 8-ounce cup of 1% milk? This is an example of alligation alternate because you want to find the amount of two ingredients to mix to form a mixture with a given amount of fat. Since there are only two ingredients, there is only one possible wa
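Both procedures reduce to a few lines of code. A Python sketch using the two worked examples above (the function names are illustrative):

def alligation_medial(fractions, concentrations):
    # Weighted mean concentration of a mixture.
    return sum(f * c for f, c in zip(fractions, concentrations))

def alligation_alternate(c_high, c_low, c_target, total):
    # Two-ingredient case: pair off the high and low concentrations
    # against the target, then scale the parts to the total quantity.
    parts_high = c_target - c_low
    parts_low = c_high - c_target
    scale = total / (parts_high + parts_low)
    return parts_high * scale, parts_low * scale

# Cocktail: 1/2 Coke (120 g/L), 1/4 Sprite (100 g/L), 1/4 orange soda (150 g/L)
print(alligation_medial([0.5, 0.25, 0.25], [120, 100, 150]))  # 122.5 g/L

# Milk: 8 oz of 1% milk from 3% whole milk and 0.5% low-fat milk
high, low = alligation_alternate(3.0, 0.5, 1.0, 8.0)
print(high, low)  # 1.6 oz of 3% milk and 6.4 oz of 0.5% milk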
https://en.wikipedia.org/wiki/Color%20superconductivity
Color superconductivity is a phenomenon where matter carries color charge without loss, analogous to the way conventional superconductors can carry electric charge without loss. Color superconductivity is predicted to occur in quark matter if the baryon density is sufficiently high (i.e., well above the density and energies of an atomic nucleus) and the temperature is not too high (well below 10¹² kelvins). Color superconducting phases are to be contrasted with the normal phase of quark matter, which is just a weakly interacting Fermi liquid of quarks. In theoretical terms, a color superconducting phase is a state in which the quarks near the Fermi surface become correlated in Cooper pairs, which condense. In phenomenological terms, a color superconducting phase breaks some of the symmetries of the underlying theory, and has a very different spectrum of excitations and very different transport properties from the normal phase. Description Analogy with superconducting metals It is well known that at low temperature many metals become superconductors. A metal can be viewed in part as a Fermi liquid of electrons, and below a critical temperature, an attractive phonon-mediated interaction between the electrons near the Fermi surface causes them to pair up and form a condensate of Cooper pairs, which via the Anderson–Higgs mechanism makes the photon massive, leading to the characteristic behaviors of a superconductor: infinite conductivity and the exclusion of magnetic fields (Meissner effect). The crucial ingredients for this to occur are: a liquid of charged fermions; an attractive interaction between the fermions; and low temperature (below the critical temperature). These ingredients are also present in sufficiently dense quark matter, leading physicists to expect that something similar will happen in that context: quarks carry both electric charge and color charge; the strong interaction between two quarks is powerfully attractive; the critical temperature is expected
https://en.wikipedia.org/wiki/M23%20software%20distribution%20system
m23 is a software distribution and management system for the Debian, Ubuntu, Kubuntu, Xubuntu, Linux Mint, elementary OS, Fedora, CentOS and openSUSE distributions. m23 can partition and format clients and install a Linux operating system and any number of software packages, such as office packages, graphics tools, server applications or games, via the m23 system. The entire administration is done via a web browser and is possible from any computer with access to the m23 server. m23 has been developed predominantly by Hauke Goos-Habermann since the end of 2002. m23 differentiates between servers and clients. An m23 server is used for software distribution and the management of the clients. Computers which are administered (e.g. have software installed) through the m23 server are the clients. The client is booted over the network during the installation of the operating system. It is possible to start the client with a boot ROM on its network card, a boot disk or a boot CD. The client's hardware is detected and set up. The gathered hardware and partition information is sent to the m23 server. Afterwards, this information is shown in the m23 administration interface. The administrator then chooses how to partition and format the client. There are other settings, too, e.g. the distribution to be installed on the client. The m23 clients can be installed as workstations with the graphical user interfaces KDE 5.x, GNOME 3.x, Xfce, Unity, LXDE and pure X11, or as servers without a graphical subsystem. In most server setups, the server does not need a user interface because most server software runs in text mode. m23 is released under the GNU GPL. Features Three steps to a complete client: Installing a client via m23 is rather simple; only three steps are needed for a completely installed client. Integration of existing clients into m23: Existing Debian-based systems can be assimilated into the m23 system easily and administered like a normal client (installed wi
https://en.wikipedia.org/wiki/Zebrawood
The name zebrawood is used to describe several tree species and the wood derived from them. Zebrawood is characterized by a striped figure that is reminiscent of a zebra. The name originally applied to the wood of Astronium graveolens, a large tree native to Central America. In the 20th century, the most important source of zebrawood was Microberlinia brazzavillensis, also called zebrano, a tree native to Central Africa. Other sources include Brazilian Astronium fraxinifolium, African Brachystegia spiciformis, Pacific Guettarda speciosa, and Asian Pistacia integerrima. History Zebrawood was first recorded in the British Customs returns for 1773, when 180 pieces of zebrawood were imported from the Mosquito Coast, a British colony (now Republic of Honduras and Nicaragua). In his History of Jamaica (1774), Edward Long relates, "The species of zebra wood at present in esteem among the cabinet-makers is brought to Jamaica from the Mosquito shore; it is of a most lovely tint, and richly veined..." The Mosquito Coast thereafter exported zebrawood regularly until the Convention of London (1786) and the consequent expulsion of British settlers from this part of the Viceroyalty of New Spain. An alternative name which occurs in 18th century British sources is palmaletto or palmalatta, from palo mulatto, which was the local name for the wood. At the beginning of the 19th century, another source of zebrawood was found in Brazil. This species, Astronium fraxinifolium, is native to northern South America, especially north-eastern Brazil. It is now traded as goncalo alves, a Portuguese name used in Brazil. On the European and American markets, however, it was still called zebrawood, and commonly used in British furniture-making between about 1810 and 1860. For most of the 19th century, the botanical identity of zebrawood was unknown. For many years, it was thought to be the product of Omphalobium lambertii DC., later reclassified as Connarus guianensis Lamb ex DC., and
https://en.wikipedia.org/wiki/DIMACS
The Center for Discrete Mathematics and Theoretical Computer Science (DIMACS) is a collaboration between Rutgers University, Princeton University, and the research firms AT&T, Bell Labs, Applied Communication Sciences, and NEC. It was founded in 1989 with money from the National Science Foundation. Its offices are located on the Rutgers campus, and 250 members from the six institutions form its permanent membership. DIMACS is devoted to both theoretical development and practical applications of discrete mathematics and theoretical computer science. It engages in a wide variety of outreach, including encouraging, inspiring, and facilitating researchers in these subject areas, and sponsoring conferences and workshops. Fundamental research in discrete mathematics has applications in diverse fields including cryptology, engineering, networking, and management decision support. Past directors have included Fred S. Roberts, Daniel Gorenstein, András Hajnal, and Rebecca N. Wright. The DIMACS Challenges DIMACS sponsors implementation challenges to determine practical algorithm performance on problems of interest. There have been twelve DIMACS challenges so far.
1990-1991: Network Flows and Matching
1992: NP-Hard Problems: Max Clique, Graph Coloring, and SAT
1993-1994: Parallel Algorithms for Combinatorial Problems
1994-1995: Computational Biology: Fragment Assembly and Genome Rearrangement
1995-1996: Priority Queues, Dictionaries, and Multidimensional Point Sets
1998: Near Neighbor Searches
2000: Semidefinite and Related Optimization Problems
2001: The Traveling Salesman Problem
2005: The Shortest Path Problem
2011-2012: Graph Partitioning and Graph Clustering
2013-2014: Steiner Tree Problems
2020-2021: Vehicle Routing Problems
https://en.wikipedia.org/wiki/Three%20hares
The three hares (or three rabbits) is a circular motif appearing in sacred sites from East Asia and the Middle East to the churches of Devon, England (as the "Tinners' Rabbits"), and historical synagogues in Europe. It is used as an architectural ornament, a religious symbol, and in other modern works of art, or as a logo for adornment (including tattoos), jewelry, and coats of arms on escutcheons. It is viewed as a puzzle, a topology problem or a visual challenge, and has been rendered as sculpture, drawing, and painting. The symbol features three hares or rabbits chasing each other in a circle. Like the triskelion, the triquetra, and their antecedents (e.g., the triple spiral), the symbol of the three hares has a threefold rotational symmetry. Each of the ears is shared by two hares, so that only three ears are shown. Although its meaning is apparently not explained in contemporary written sources from any of the medieval cultures where it is found, it is thought to have a range of symbolic or mystical associations with fertility and the lunar cycle. When used in Christian churches, it is presumed to be a symbol of the Trinity. Its origins and original significance are uncertain, as are the reasons why it appears in such diverse locations. Origins in Buddhism and diffusion on the Silk Road The earliest occurrences appear to be in cave temples in China, dated to the Sui dynasty (6th to 7th centuries). The iconography spread along the Silk Road, and was a symbol associated with Buddhism. In other contexts the metaphor has been given different meanings. For example, Guan Youhui, a retired researcher from the Dunhuang Academy, who spent 50 years studying the decorative patterns in the Mogao Caves, believes the three rabbits—"like many images in Chinese folk art that carry auspicious symbolism—represent peace and tranquility". The hares have appeared in lotus motifs. The three hares appear on 13th century Mongol metalwork, and on a copper coin, found
https://en.wikipedia.org/wiki/Microsoft%20Security%20Development%20Lifecycle
The Microsoft Security Development Lifecycle is a software development process proposed and used by Microsoft to reduce software maintenance costs and increase the reliability of software with respect to security-related bugs. It is based on the classical spiral model. See also Trusted computing base
https://en.wikipedia.org/wiki/Toilet%20Duck
Toilet Duck is a brand name of toilet cleaner, noted for its duck-shaped bottle, designed to assist in dispensing the cleaner under the rim. The design was patented in 1980 by Durgol of Dällikon, Switzerland. It is now produced by S. C. Johnson & Son. The Toilet Duck brand can be found in the United States, United Kingdom and other countries around the world. In Germany, it is known as WC-Ente, previously produced by Henkel, and now by S. C. Johnson (Germany). In the Netherlands and Flanders it is called "Wc-eend", in France it is sold as "Canard-WC" and in Italy as "Anitra WC". In Hungary it used to have the name "Toalett Kacsa". Meanwhile, in Spain, it is sold as "Pato WC", in Portugal as "WC Pato", and in Mexico, Brazil, Colombia and Argentina as "Pato Purific" or simply "Pato". In Indonesia, it is one of the "Bebek" (duck) line of products, such as Bebek Kloset, Bebek Semerbak, Bebek Semerbak Flush, Bebek In Tank, and Bebek Kamar Mandi. The "Toilet" moniker has been dropped from the name in the UK and Ireland, and the product is now called "Duck". The same change has occurred in Hungary, though there using the English "Duck" instead of "Kacsa". Today, the duck-shaped bottle is sold in North America under the Scrubbing Bubbles brand. Ingredients The following ingredients are part of all Duck toilet-cleaning products:
L-lactic acid
Water
Ethoxylated alcohol
Xanthan gum
Sodium laureth sulfate
Depending on variant, various dyes and fragrances are used, such as: Liquid Marine: Liquitint Blue Dye, Liquitint Pink AL Dye, 2,6-dimethyl-7-octen-2-ol, 2-methoxy-4-propylphenol, 3,7-dimethyloct-6-enenitrile, coumarin, dipropylene glycol, eucalyptol, geraniol, isobornyl acetate, isobutyl salicylate, linalool. Liquid Citrus: Liquitint Orange 157, 2-t-butylcyclohexyl acetate, 3,7-dimethylnona-2,6-dienenitrile, allyl 3-cyclohexylpropionate, decanal, dipropylene glycol, ethyl 2-methylvalerate, gamma-undecalactone, methylbenzyl acetate, tricyclo(5.2.1.02,6)dec
https://en.wikipedia.org/wiki/Form%20%28zoology%29
In zoology, the word "form" or forma (literally Latin for form) is a strictly informal term that is sometimes used to describe organisms. Under the International Code of Zoological Nomenclature the term has no standing (it is not accepted). In other words, although form names are Latin, and are sometimes wrongly appended to a binomial name, in a zoological context, forms have no taxonomic significance at all. Usage of the term Some zoologists use the word "form" or "forma" to describe variation in animals, especially insects, as part of a series of terms and abbreviations that are appended to the binomen or trinomen. Many 'typical specimens' may be described, but none should be considered absolute, unconditional or categorical. Forms have no official status, though they are sometimes useful in describing altitudinal or geographical clines. As opposed to morphs (see below), a subpopulation usually consists of a single form only at any given point of time. forma geographica - f. geogr. If used, nowadays usually denotes a part of a cline; for example for intergrades between subspecies in their area of contact. forma localis - f. loc. As "f. geogr." but only local, more restricted in occurrence. See also small population size. forma alta - f. alt. Altitudinal features are not necessarily inherited, but may entirely be due to environment. The same applies to temperature or humidity-generated forms, such as: forma vernalis - f. vern. (spring form) forma aestivalis - f. aest. (summer form) forma autumnalis - f. autumn. (autumn form) aberratio - ab. May be used for a single individual, for a small group such as an individual and its offspring, or for atypical individuals (for example, albinos). Also used for commonly observed forms of a species, but in this case use of forma (f.) or morpha, accompanied by a descriptive name, is more conventional. Notes: A morph is a similar concept with a less restricted occurrence (see also Polymorphism). As neither forms nor
https://en.wikipedia.org/wiki/Long%20double
In C and related programming languages, long double refers to a floating-point data type that is often more precise than double precision, though the language standard only requires it to be at least as precise as double. As with C's other floating-point types, it may not necessarily map to an IEEE format. long double in C History The long double type was present in the original 1989 C standard, but support was improved by the 1999 revision of the C standard, or C99, which extended the standard library to include functions operating on long double such as sinl() and strtold(). Long double constants are floating-point constants suffixed with "L" or "l" (lower-case L), e.g., 0.3333333333333333333333333333333333L or 3.1415926535897932384626433832795029L for quadruple precision. Without a suffix, the evaluation depends on FLT_EVAL_METHOD. Implementations On the x86 architecture, most C compilers implement long double as the 80-bit extended precision type supported by x86 hardware (generally stored as 12 or 16 bytes to maintain data structure alignment), as specified in the C99 / C11 standards (IEC 60559 floating-point arithmetic (Annex F)). An exception is Microsoft Visual C++ for x86, which makes long double a synonym for double. The Intel C++ compiler on Microsoft Windows supports extended precision, but requires the /Qlong-double switch for long double to correspond to the hardware's extended precision format. Compilers may also use long double for the IEEE 754 quadruple-precision binary floating-point format (binary128). This is the case on HP-UX, Solaris/SPARC, MIPS with the 64-bit or n32 ABI, 64-bit ARM (AArch64) (on operating systems using the standard AAPCS calling conventions, such as Linux), and z/OS with FLOAT(IEEE). Most implementations are in software, but some processors have hardware support. On some PowerPC systems, long double is implemented as double-double arithmetic, where a long double value is regarded as the exact sum of two double-precision values
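Because the mapping is platform-dependent, it is easy to inspect at run time. Below is a small Python sketch using NumPy, whose np.longdouble type wraps the C compiler's long double; the numbers printed will differ by platform (for example, 64 significand bits on x86 Linux, but 53 under MSVC, where long double is plain double):

import numpy as np

info = np.finfo(np.longdouble)
print(info.bits)       # storage size in bits (e.g. 128 on x86-64 Linux due to padding)
print(info.nmant + 1)  # significand precision in bits (64 for x87 extended, 53 for double)
print(info.eps)        # machine epsilon for this platform's long double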
https://en.wikipedia.org/wiki/Dave%20Thomas%20%28programmer%29
Dave Thomas (born 1956) is a computer programmer, author and editor. He has written about Ruby; together with Andy Hunt, he co-authored The Pragmatic Programmer and runs The Pragmatic Bookshelf publishing company. Thomas moved to the United States from England in 1994 and lives north of Dallas, Texas. Thomas coined the phrases 'Code Kata' and 'DRY' (Don't Repeat Yourself), and was an original signatory and author of The Manifesto for Agile Software Development. He studied computer science at Imperial College London. Works
The Pragmatic Programmer, Andrew Hunt and David Thomas, 1999, Addison Wesley.
Programming Ruby: A Pragmatic Programmer's Guide, David Thomas and Andrew Hunt, 2000, Addison Wesley.
Pragmatic Version Control Using CVS, David Thomas and Andrew Hunt, 2003, The Pragmatic Bookshelf.
Pragmatic Unit Testing in Java with JUnit, Andrew Hunt and David Thomas, 2003, The Pragmatic Bookshelf.
Pragmatic Unit Testing in C# with NUnit, Andrew Hunt and David Thomas, 2004, The Pragmatic Bookshelf.
Programming Ruby (2nd Edition), Dave Thomas, Chad Fowler, and Andrew Hunt, 2004, The Pragmatic Bookshelf.
Pragmatic Unit Testing in C# with NUnit, 2nd Edition, Andy Hunt and David Thomas with Matt Hargett, 2007, The Pragmatic Bookshelf.
Agile Web Development with Rails, Dave Thomas, David Heinemeier Hansson, Andreas Schwarz, Thomas Fuchs, Leon Breedt, and Mike Clark, 2005, Pragmatic Bookshelf.
Agile Web Development with Rails (2nd edition), Dave Thomas, with David Heinemeier Hansson, Mike Clark, Justin Gehtland, and James Duncan Davidson, 2006, Pragmatic Bookshelf.
Programming Elixir: Functional |> Concurrent |> Pragmatic |> Fun, Dave Thomas, foreword by José Valim, the creator of Elixir, edited by Lynn Beighley, 2014, Pragmatic Bookshelf.
https://en.wikipedia.org/wiki/Radio-controlled%20submarine
A radio-controlled submarine is a scale model of a submarine that can be steered via radio control. The most common form are those operated by hobbyists. These can range from inexpensive toys to complex projects involving sophisticated electronics. Oceanographers and military units also operate radio-controlled submarines. Radio transmission through water The higher the conductivity of a medium, the more a radio signal passing through it is attenuated. High frequencies are also attenuated more than low frequencies, and tend to reflect off the surface of the water more. For this reason, communication with military submarines famously uses very low frequency electromagnetic radiation. Military frequencies are well below allocated hobby radio control bands, but the lowest hobby bands - typically around 27 MHz/40 MHz - can penetrate several feet of water at short range, typically less than 45 meters. Penetration at these frequencies is easier in fresh water, but is difficult to impossible in sea water. Modern radio control sets using the 2.4 GHz band penetrate water very poorly, and are thus ill suited to submerged use. For underwater radio to work, even at these frequencies, the receiving aerial needs to be completely insulated from the surrounding water. Plastic-covered wire provides adequate insulation - the aerial need not be held in an airtight container - but the cut end of such wire must be sealed against water ingress. Depending on water conditions, positive control can be maintained at perhaps 3 meters depth. Since control of model submarines may not be reliable at all times, such models usually carry a variety of apparatus intended to prevent model loss. Fail-safe systems which detect loss of signal and command the submarine to surface, or pressure sensors which limit the depth attained, may be used. Such specialist complexity usually makes a model submarine an expensive item compared to a model surface boat. Professional or military remote-cont
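These attenuation claims can be estimated with the standard skin-depth formula for a conducting medium, delta = 1/sqrt(pi * f * mu * sigma), the depth over which the field falls by a factor of e. A Python sketch follows; the conductivity figures are rough textbook values (assumptions, and real water varies widely), but they reproduce the pattern described above:

import math

MU_0 = 4e-7 * math.pi  # permeability of free space (H/m)

def skin_depth(freq_hz, conductivity_s_per_m):
    # 1/e attenuation depth of a radio wave in a conductive medium (metres).
    return 1.0 / math.sqrt(math.pi * freq_hz * MU_0 * conductivity_s_per_m)

for label, sigma in [("fresh water (~0.01 S/m)", 0.01), ("sea water (~4 S/m)", 4.0)]:
    for f in (27e6, 2.4e9):
        print(f"{label}, {f/1e6:.0f} MHz: {skin_depth(f, sigma):.3f} m")
# Roughly: ~1 m at 27 MHz in fresh water, ~0.1 m at 2.4 GHz,
# and only a few centimetres in sea water at either frequency.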
https://en.wikipedia.org/wiki/Unit%20disk%20graph
In geometric graph theory, a unit disk graph is the intersection graph of a family of unit disks in the Euclidean plane. That is, it is a graph with one vertex for each disk in the family, and with an edge between two vertices whenever the corresponding disks intersect (equivalently, whenever their centers lie within a fixed distance of each other). They are commonly formed from a Poisson point process, making them a simple example of a random structure. Definitions There are several possible definitions of the unit disk graph, equivalent to each other up to a choice of scale factor: Unit disk graphs are the graphs formed from a collection of points in the Euclidean plane, with a vertex for each point and an edge connecting each pair of points whose distance is below a fixed threshold. Unit disk graphs are the intersection graphs of equal-radius circles, or of equal-radius disks. These graphs have a vertex for each circle or disk, and an edge connecting each pair of circles or disks that have a nonempty intersection. Unit disk graphs may be formed in a different way from a collection of equal-radius circles, by connecting two circles with an edge whenever one circle contains the center of the other circle. Properties Every induced subgraph of a unit disk graph is also a unit disk graph. An example of a graph that is not a unit disk graph is the star with one central node connected to six leaves: if each of six unit disks touches a common unit disk, some two of the six disks must touch each other. Therefore, unit disk graphs cannot contain this six-leaf star as an induced subgraph. Infinitely many other forbidden induced subgraphs are known. The number of unit disk graphs on n labeled vertices is within an exponential factor of n^{2n}. This rapid growth implies that unit disk graphs do not have bounded twin-width. Applications Beginning with work in the 1990s, unit disk graphs have been used in computer science to model the topology of ad hoc wireless communication networks. In this application, nodes are connected through a direct wireless
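A minimal sketch of the first definition (points joined when their distance is at most a threshold) in plain Python; the threshold value, the random point cloud, and the edge-list representation are illustrative choices:

import itertools, math, random

def unit_disk_graph(points, threshold=1.0):
    # Edge between every pair of points at distance <= threshold.
    edges = []
    for (i, p), (j, q) in itertools.combinations(enumerate(points), 2):
        if math.dist(p, q) <= threshold:
            edges.append((i, j))
    return edges

random.seed(0)
pts = [(random.uniform(0, 3), random.uniform(0, 3)) for _ in range(10)]
print(unit_disk_graph(pts))  # list of (i, j) index pairs forming the edges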
https://en.wikipedia.org/wiki/Disjunct%20distribution
In biology, a taxon with a disjunct distribution is one that has two or more groups that are related but considerably separated from each other geographically. The causes are varied and might demonstrate either the expansion or contraction of a species' range. Range fragmentation Also called range fragmentation, disjunct distributions may be caused by changes in the environment, such as mountain building and continental drift or rising sea levels; they may also be due to an organism expanding its range into new areas, by such means as rafting, or other animals transporting an organism to a new location (plant seeds consumed by birds and animals can be moved to new locations during bird or animal migrations, and those seeds can be deposited in new locations in fecal matter). Other conditions that can produce disjunct distributions include flooding and changes in wind, stream, and current flows, plus others such as the anthropogenic introduction of alien species, either accidental or deliberate (agriculture and horticulture). Habitat fragmentation Disjunct distributions can occur when suitable habitat is fragmented, which produces fragmented populations; when that fragmentation becomes so pronounced that movement between one patch of suitable habitat and the next is disrupted, isolated populations can be produced. Extinctions can cause disjunct distributions, especially in areas where only scattered areas are habitable by a species; for instance, island chains or specific elevations along a mountain range or areas along a coast or between bodies of water like streams, lakes and ponds. Examples There are many patterns of disjunct distributions at many scales: Irano-Turanian disjunction, Europe - East Asia, Europe - South Africa (e.g. genus Erica), Mediterranean-Hoggar disjunction (genus Olea), etc. Lusitanian distribution This kind of disjunct distribution of a species, such that it occurs in Iberia and in Ireland without any intermediate localities, is usually
https://en.wikipedia.org/wiki/Preprohormone
A preprohormone is the precursor protein to one or more prohormones, which are in turn precursors to peptide hormones. In general, the protein consists of the amino acid chain created by the hormone-secreting cell, before any changes have been made to it. It contains a signal peptide, the hormone or hormones themselves, and intervening amino acids. Before the hormone is released from the cell, the signal peptide and the other amino acids are removed.
https://en.wikipedia.org/wiki/ASCII%20ribbon%20campaign
The ASCII ribbon campaign was an Internet phenomenon started in 1998 advocating that email be sent only in plain text, because of the inefficiencies and dangers of using HTML email. Proponents placed ASCII art in their signature blocks, meant to look like an awareness ribbon, along with a message or a link to an advocacy site. History Following the development of Microsoft Windows 95, standards adherents became annoyed that they were receiving email in HTML and other non-human-readable formats. The first known appearance of a ribbon in support of the campaign was in the signature of an email dated 17 June 1998 by Maurício Teixeira of Brazil. Two groups of proponents, Asciiribbon.org and ARC.Pasp.DE, differed in their attitudes towards vCard. See also Simple Mail Transfer Protocol MIME
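One commonly seen rendering of the signature ribbon (the exact wording varied from user to user) looked roughly like this:

()  ascii ribbon campaign - against HTML email
/\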
https://en.wikipedia.org/wiki/Ochratoxin
Ochratoxins are a group of mycotoxins produced by some Aspergillus species (mainly A. ochraceus and A. carbonarius, but also by 33% of A. niger industrial strains) and some Penicillium species, especially P. verrucosum. Ochratoxin A is the most prevalent and relevant fungal toxin of this group, while ochratoxins B and C are of lesser importance. Ochratoxin A is known to occur in commodities such as cereals, coffee, dried fruit, and red wine. It is possibly a human carcinogen and is of special interest because it can accumulate in the meat of animals. Exposure to ochratoxins through diet can cause acute toxicity in mammalian kidneys. Exposure to ochratoxin A has been associated with Balkan endemic nephropathy, a kidney disease with high mortality in people living near tributaries of the Danube River in Eastern Europe. It has been suggested that carriers of alleles associated with phenylketonuria may have been protected from spontaneous abortion caused by ochratoxin exposure, providing a heterozygous advantage for the alleles despite the possibility of severe intellectual disability in the rarer instance of inheritance from both parents.
https://en.wikipedia.org/wiki/Ascorbyl%20palmitate
Ascorbyl palmitate is an ester formed from ascorbic acid and palmitic acid, creating a fat-soluble form of vitamin C. In addition to its use as a source of vitamin C, it is also used as an antioxidant food additive (E number E304). It is approved for use as a food additive in the EU, the U.S., Canada, Australia, and New Zealand. Ascorbyl palmitate is also marketed as "vitamin C ester". It is synthesized by acylation of vitamin C using different acyl donors. See also Ascorbyl stearate Vitamin C Mineral ascorbates
https://en.wikipedia.org/wiki/Ascorbyl%20stearate
Ascorbyl stearate (C24H42O7) is an ester formed from ascorbic acid and stearic acid. In addition to its use as a source of vitamin C, it is used as an antioxidant food additive in margarine (E number E305). The USDA limits its use to 0.02% individually or in conjunction with other antioxidants. See also Ascorbyl palmitate Mineral ascorbates
https://en.wikipedia.org/wiki/Covariant%20classical%20field%20theory
In mathematical physics, covariant classical field theory represents classical fields by sections of fiber bundles, and their dynamics is phrased in the context of a finite-dimensional space of fields. Nowadays, it is well known that jet bundles and the variational bicomplex are the correct domain for such a description. The Hamiltonian variant of covariant classical field theory is covariant Hamiltonian field theory, where momenta correspond to derivatives of the field variables with respect to all world coordinates. Non-autonomous mechanics is formulated as covariant classical field theory on fiber bundles over the time axis ℝ. Examples Many important examples of classical field theories which are of interest in quantum field theory are given below. In particular, these are the theories which make up the Standard Model of particle physics. These examples will be used in the discussion of the general mathematical formulation of classical field theory. Uncoupled theories
Scalar field theory: Klein–Gordon theory
Spinor theories: Dirac theory, Weyl theory, Majorana theory
Gauge theories: Maxwell theory; Yang–Mills theory, the only theory in the uncoupled list which contains interactions (Yang–Mills contains self-interactions)
Coupled theories
Yukawa coupling: coupling of scalar and spinor fields.
Scalar electrodynamics/chromodynamics: coupling of scalar and gauge fields.
Quantum electrodynamics/chromodynamics: coupling of spinor and gauge fields. Despite these being named quantum theories, the Lagrangians can be considered as those of a classical field theory.
Requisite mathematical structures In order to formulate a classical field theory, the following structures are needed: Spacetime A smooth manifold M. This is variously known as the world manifold (for emphasizing the manifold without additional structures such as a metric), spacetime (when equipped with a Lorentzian metric), or the base manifold for a more geometrical viewpoint. St
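As a concrete instance of the machinery described here, the simplest theory in the list above, Klein–Gordon theory, has the standard Lagrangian density and Euler–Lagrange field equation (in units with c = ħ = 1):

\mathcal{L} = \tfrac{1}{2}\,\partial_\mu \phi \,\partial^\mu \phi - \tfrac{1}{2} m^2 \phi^2,
\qquad
\partial_\mu \partial^\mu \phi + m^2 \phi = 0 .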
https://en.wikipedia.org/wiki/DermAtlas
DermAtlas is an open-access website devoted to dermatology that is hosted by Johns Hopkins University's Bernard A. Cohen and Christoph U. Lehmann. Its goal is to build a large, high-quality dermatologic atlas: a database of images of skin conditions. It encourages its users to submit their dermatology images and links for inclusion. It is edited in a collaborative fashion by physicians around the globe and includes an online dermatology quiz that allows anyone to test their dermatology knowledge. The database currently includes over 10,500 images, consisting of both clinical and histological images. Great emphasis is placed on dermatological conditions in pediatric patients.
https://en.wikipedia.org/wiki/Equivalence%20of%20direct%20radiation
Equivalence of direct radiation (EDR) is a standardized comparison method for estimating the output ability of space-heating radiators and convectors. Measured in square feet, the reference standard for EDR is the mattress radiator invented by Stephen J. Gold in the mid 19th century. One square foot of EDR is able to liberate 240 BTU per hour when surrounded by 70 °F air and filled with steam at approximately 215 °F and 1 psi of pressure. EDR was originally a measure of the actual surface area of radiators. As radiator (and later convector) design became more complicated and compact, the relationship of actual surface area to EDR became arbitrary. Laboratory methods based on the condensation of steam allowed for very accurate measurements. While now somewhat archaic, EDR is still computed and used for sizing steam boilers and radiators, and for modifying and troubleshooting older heating systems using steam or hot water.
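Because the unit is defined by a fixed heat-output equivalence (240 BTU/h per square foot of EDR for steam), sizing calculations reduce to simple arithmetic. A Python sketch follows; the radiator total and the 1.33 piping-and-pickup allowance are illustrative assumptions often quoted as a rule of thumb, not part of the EDR definition:

# Radiation load from the connected EDR of all radiators
edr_sq_ft = 150                    # total EDR, an assumed example value
btu_per_hr = edr_sq_ft * 240       # 240 BTU/h per sq ft EDR (steam)
print(btu_per_hr)                  # 36000 BTU/h of radiation load

# Steam boilers are often sized with an extra allowance for piping losses and pickup
pickup_factor = 1.33               # common rule-of-thumb allowance (assumption)
print(btu_per_hr * pickup_factor)  # gross boiler output target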
https://en.wikipedia.org/wiki/Sauerbrey%20equation
The Sauerbrey equation was developed by the German Günter Sauerbrey in 1959, while working on his doctoral thesis at the Technical University of Berlin, Germany. It is a method for correlating changes in the oscillation frequency of a piezoelectric crystal with the mass deposited on it. He simultaneously developed a method for measuring the characteristic frequency and its changes by using the crystal as the frequency-determining component of an oscillator circuit. His method continues to be used as the primary tool in quartz crystal microbalance (QCM) experiments for conversion of frequency to mass and is valid in nearly all applications. The equation is derived by treating the deposited mass as though it were an extension of the thickness of the underlying quartz. Because of this, the mass-to-frequency correlation (as determined by Sauerbrey's equation) is largely independent of electrode geometry. This has the benefit of allowing mass determination without calibration, making the set-up desirable from a cost and time investment standpoint. The Sauerbrey equation is defined as

\Delta f = -\frac{2 f_0^2}{A \sqrt{\rho_q \mu_q}} \, \Delta m

where:
f_0 – resonant frequency of the fundamental mode (Hz)
\Delta f – normalized frequency change (Hz)
\Delta m – mass change (g)
A – piezoelectrically active crystal area (area between electrodes, cm2)
\rho_q – density of quartz (\rho_q = 2.648 g/cm3)
\mu_q – shear modulus of quartz for an AT-cut crystal (\mu_q = 2.947×10^11 g·cm−1·s−2)
The normalized frequency change is the nominal frequency shift of that mode divided by its mode number (most software outputs normalized frequency shift by default). Because the film is treated as an extension of thickness, Sauerbrey's equation only applies to systems in which the following three conditions are met: the deposited mass must be rigid, the deposited mass must be distributed evenly, and the relative frequency change \Delta f / f < 0.05. If the relative change in frequency is greater than 5%, that is, \Delta f / f > 0.05, the Z-match method must be used to determine the change in mass. The formula for the Z-match method is:
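A quick numerical check of the equation in Python; the 5 MHz fundamental and 1 cm2 active area below are typical textbook values chosen for illustration:

import math

def sauerbrey_mass_change(delta_f_hz, f0_hz, area_cm2):
    # Mass change (g) implied by a frequency shift, via the Sauerbrey equation.
    rho_q = 2.648       # density of quartz, g/cm^3
    mu_q = 2.947e11     # shear modulus of AT-cut quartz, g/(cm*s^2)
    return -delta_f_hz * area_cm2 * math.sqrt(rho_q * mu_q) / (2.0 * f0_hz ** 2)

# A -10 Hz shift on a 5 MHz crystal with 1 cm^2 active area:
print(sauerbrey_mass_change(-10.0, 5e6, 1.0))  # ~1.77e-7 g, i.e. about 177 ng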
https://en.wikipedia.org/wiki/Prefactoring
Prefactoring is the application of experience to the creation of new software systems. Its relationship to its namesake refactoring is that lessons learned from refactoring are part of that experience. Experience is captured in guidelines that can be applied to a development process. The guidelines have come from a number of sources, including Jerry Weinberg, Norm Kerth, and Scott Ambler. These guidelines include: "When you're abstract, be abstract all the way" "Splitters can be lumped more easily than lumpers can be split" "Use the client’s language"
https://en.wikipedia.org/wiki/Post%E2%80%93Turing%20machine
A Post–Turing machine is a "program formulation" of a type of Turing machine, comprising a variant of Emil Post's Turing-equivalent model of computation. Post's model and Turing's model, though very similar to one another, were developed independently. Turing's paper was received for publication in May 1936, followed by Post's in October. A Post–Turing machine uses a binary alphabet, an infinite sequence of binary storage locations, and a primitive programming language with instructions for bi-directional movement among the storage locations and alteration of their contents one at a time. The names "Post–Turing program" and "Post–Turing machine" were used by Martin Davis in 1973–1974 (Davis 1973, p. 69ff). Later, in 1980, Davis used the name "Turing–Post program" (Davis, in Steen p. 241). 1936: Post model In his 1936 paper "Finite Combinatory Processes—Formulation 1", Emil Post described a model which he conjectured to be "logically equivalent to recursiveness". Post's model of a computation differs from the Turing-machine model in a further "atomization" of the acts a human "computer" would perform during a computation. Post's model employs a "symbol space" consisting of a "two-way infinite sequence of spaces or boxes", each box capable of being in either of two possible conditions, namely "marked" (as by a single vertical stroke) and "unmarked" (empty). Initially, finitely-many of the boxes are marked, the rest being unmarked. A "worker" is then to move among the boxes, being in and operating in only one box at a time, according to a fixed finite "set of directions" (instructions), which are numbered in order (1,2,3,...,n). Beginning at a box "singled out as the starting point", the worker is to follow the set of instructions one at a time, beginning with instruction 1. There are five different primitive operations that the worker can perform:
(a) Marking the box it is in, if it is empty
(b) Erasing the mark in the box it is in, if it is marked
(c) Moving to the box on its right
(d) Moving to the box on its left
(e) Determining whether the box it is in is or is not marked
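Post's "worker" is straightforward to simulate. The following Python sketch is a minimal interpreter for a Post-style instruction list; the tuple encoding of instructions and the use of a dict as an unbounded two-way tape are implementation choices for illustration, not Post's own notation:

def run_post(program, steps=1000):
    # Each instruction is a tuple: ('mark',), ('erase',), ('left',), ('right',),
    # ('branch', goto_if_marked, goto_if_unmarked), or ('stop',).
    # Unconditional instructions fall through to the next instruction.
    tape, pos, pc = {}, 0, 0
    for _ in range(steps):
        op = program[pc]
        if op[0] == 'mark':
            tape[pos] = 1
        elif op[0] == 'erase':
            tape.pop(pos, None)
        elif op[0] == 'left':
            pos -= 1
        elif op[0] == 'right':
            pos += 1
        elif op[0] == 'branch':       # Post's operation (e): inspect the box
            pc = op[1] if tape.get(pos) else op[2]
            continue
        elif op[0] == 'stop':
            return tape
        pc += 1
    return tape

# A tiny program that marks three consecutive boxes to the right of the start.
prog = [('mark',), ('right',), ('mark',), ('right',), ('mark',), ('stop',)]
print(sorted(run_post(prog)))  # [0, 1, 2]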
https://en.wikipedia.org/wiki/Drive%20Image%20%28software%29
Drive Image (PQDI) is a software disk cloning package for Intel-based computers. The software was developed and distributed by the former PowerQuest Corporation. Drive Image version 7 became the basis for Norton Ghost 9.0, which was released to retail markets in August 2004. Ghost was a competing product, developed by Binary Research, before Symantec bought the company in 1998. This also explains the different file extensions used for Ghost image files: formerly it was .gho, now in versions 9.0 and above it is .v2i. Product history Drive Image version 7 was the last version published under the PowerQuest corporate banner. It was also the first version to include a native Windows interface for cloning an active system partition; prior versions required a reboot into a DOS-like environment in order to clone the active partition. In order to clone active partitions without requiring a reboot, Drive Image 7 employed a volume snapshot device driver which was licensed from StorageCraft Technology Corporation. Drive Image 2002 (version 6) is the last release that allows the creation of a rescue set on floppy disk, which can be used to create and restore an image. See also List of disk cloning software
https://en.wikipedia.org/wiki/Design%20closure
Design Closure is a part of the digital electronic design automation workflow by which an integrated circuit (i.e. VLSI) design is modified from its initial description to meet a growing list of design constraints and objectives. Every step in the IC design process (such as static timing analysis, placement, routing, and so on) is already complex and often forms its own field of study. This article, however, looks at the overall design closure process, which takes a chip from its initial design state to the final form in which all of its design constraints are met. Introduction Every chip starts off as someone's idea of a good thing: "If we can make a part that performs function X, we will all be rich!" Once the concept is established, someone from marketing says "To make this chip profitably, it must cost $C and run at frequency F." Someone from manufacturing says "To meet this chip's targets, it must have a yield of Y%." Someone from packaging says "It must fit in the P package and dissipate no more than W watts." Eventually, the team generates an extensive list of all the constraints and objectives they must meet to manufacture a product that can be sold profitably. The management then forms a design team, which consists of chip architects, logic designers, functional verification engineers, physical designers, and timing engineers, and assigns them to create a chip to the specifications. Constraints vs Objectives The distinction between constraints and objectives is straightforward: a constraint is a design target that must be met for the design to be successful. For example, a chip may be required to run at a specific frequency so it can interface with other components in a system. In contrast, an objective is a design target where more (or less) is better. For example, yield is generally an objective, which is maximized to lower manufacturing cost. For the purposes of design closure, the distinction between constraints and objectives is not important; this article
https://en.wikipedia.org/wiki/Toby%20Froud
Toby Froud (born 1984) is an English-American artist, special effects designer, puppeteer, filmmaker, and performer. He rose to prominence for his role as the baby who was wished away to the goblins in the 1986 Jim Henson film Labyrinth. He became a puppeteer, sculptor, and fabricator for film, television, and theatre. He wrote and directed the 2014 fantasy short film Lessons Learned. He was the design supervisor of the 2019 streaming television series The Dark Crystal: Age of Resistance. Early life Toby Froud was born in 1984 in London, to English painter Brian Froud and American puppet-maker Wendy Froud. His maternal grandfather was the German-American sculptor Walter Midener (1912–1998), and his maternal grandmother was Margaret "Peggy" Midener (née Mackenzie; 1925–2016), a painter and collage artist in Michigan. His parents met in 1978 while working on preproduction for the 1982 Jim Henson film The Dark Crystal, for which Brian was the conceptual designer and Wendy a puppet fabricator. They married in 1980. Froud was born during preproduction of his parents' second film with Henson, Labyrinth (released in 1986), and at the age of one he was featured in the film as the baby who is wished away to the Goblin King by his older sister Sarah. The name of the baby in the script had originally been Freddie, but was changed to Toby so as not to confuse Froud. Due to Labyrinth's popularity, Froud has garnered cult status and been described as one of the most famous babies in cinema and of the 1980s. Froud was raised in Chagford, Devon, on the edge of Dartmoor. He developed an interest in puppetry from a young age due to exposure to his parents' artwork. Career Froud apprenticed at the Muppet Workshop in New York in 1999, and in 2004 worked at Weta Workshop in New Zealand as a sculptor, fabricator, and miniature effects artist for the 2005 films The Chronicles of Narnia: The Lion, the Witch and the Wardrobe and King Kong. He graduated from Wimbledon School of Art in
https://en.wikipedia.org/wiki/Legacy%20mode
In computing, legacy mode is a state in which a computer system, component, or software application behaves in a way that is different from its standard operation in order to support older software, data, or expected behavior. It differs from backward compatibility in that an item in legacy mode will often sacrifice newer features or performance, or be unable to access data or run programs it normally could, in order to provide continued access to older data or functionality. Sometimes it can allow newer technologies that replaced the old to emulate them when running older operating systems. Examples x86-64 processors can run in one of two states: long mode, which provides larger physical address spaces and the ability to run 64-bit applications (which can use larger virtual address spaces and more registers), and legacy mode. In legacy mode the processor acts as if it were a 16- or 32-bit x86 processor, with all of the abilities and limitations of those processors, in order to run legacy 16-bit and 32-bit operating systems and to run programs requiring virtual 8086 mode under Windows. 32-bit x86 processors themselves have two legacy modes: real mode and virtual 8086 mode. Real mode causes the processor to mostly act as if it were an original 8086, while virtual 8086 mode allows the creation of a virtual machine so that programs which require real mode can run under a protected mode environment. Protected mode is the non-legacy mode of 32-bit x86 processors and the 80286. Most PC graphics cards have a VGA and an SVGA mode that allows them to be used on systems that have not loaded the device driver necessary to take advantage of their more advanced features. Operating systems often have a special mode allowing them to emulate an older release in order to support software applications dependent on the specific interfaces and behavior of that release. Windows XP can be configured to emulate Windows 2000 and Windows 98; Mac OS X c
https://en.wikipedia.org/wiki/Ives%E2%80%93Stilwell%20experiment
The Ives–Stilwell experiment tested the contribution of relativistic time dilation to the Doppler shift of light. The result was in agreement with the formula for the transverse Doppler effect and was the first direct, quantitative confirmation of the time dilation factor. Since then, many Ives–Stilwell type experiments have been performed with increased precision. Together with the Michelson–Morley and Kennedy–Thorndike experiments, it forms one of the fundamental tests of special relativity theory. Other tests confirming the relativistic Doppler effect are the Mössbauer rotor experiment and modern Ives–Stilwell experiments. Both time dilation and the relativistic Doppler effect were predicted by Albert Einstein in his seminal 1905 paper. Einstein subsequently (1907) suggested an experiment based on the measurement of the relative frequencies of light perceived as arriving from "canal rays" (positive ion beams created by certain types of gas-discharge tubes) in motion with respect to the observer, and he calculated the additional Doppler shift due to time dilation. This effect was later called the "transverse Doppler effect" (TDE), since such experiments were initially imagined to be conducted at right angles with respect to the moving source, in order to avoid the influence of the longitudinal Doppler shift. Eventually, Herbert E. Ives and G. R. Stilwell (referring to time dilation as following from the theory of Lorentz and Larmor) gave up the idea of measuring this effect at right angles. They used rays in the longitudinal direction and found a way to separate the much smaller TDE from the much bigger longitudinal Doppler effect. The experiment was performed in 1938 and was repeated several times. Similar experiments were conducted several times with increased precision, for example, by Otting (1939), Mandelberg et al. (1962), Hasselkamp et al. (1979), and Botermann et al. Experiments with "canal rays" Experimental challenges Initial attempts to measure the second-order
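The quantity at stake can be stated compactly. For light emitted parallel and antiparallel to the motion of a source with speed β = v/c, the relativistic Doppler formula gives

\lambda_{\pm} = \lambda_0 \, \gamma \, (1 \pm \beta),
\qquad
\bar{\lambda} = \tfrac{1}{2}(\lambda_+ + \lambda_-) = \gamma \lambda_0,
\qquad
\gamma = \frac{1}{\sqrt{1-\beta^2}} \approx 1 + \tfrac{1}{2}\beta^2 ,

so the mean of the red- and blue-shifted wavelengths differs from the unshifted wavelength by the pure time-dilation factor γ; this second-order displacement of the center of gravity of the shifted lines is what Ives and Stilwell measured.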