https://en.wikipedia.org/wiki/Weeks%20manifold
In mathematics, the Weeks manifold, sometimes called the Fomenko–Matveev–Weeks manifold, is a closed hyperbolic 3-manifold obtained by (5, 2) and (5, 1) Dehn surgeries on the Whitehead link. It has volume approximately equal to 0.942707… and has been shown to have the smallest volume of any closed orientable hyperbolic 3-manifold. The manifold was discovered independently by Jeffrey Weeks and by Sergei Matveev and Anatoly Fomenko. Volume: Since the Weeks manifold is an arithmetic hyperbolic 3-manifold, its volume can be computed from its arithmetic data using a formula due to Armand Borel: vol = 3 · 23^(3/2) · ζ_k(2) / (4π⁴), where k is the number field generated by a root θ of θ³ − θ + 1 = 0 and ζ_k is the Dedekind zeta function of k. Alternatively, the volume can be expressed in terms of the dilogarithm Li₂(θ), where θ is the complex root (with positive imaginary part) of the same cubic. Related manifolds: The cusped hyperbolic 3-manifold obtained by (5, 1) Dehn surgery on the Whitehead link is the so-called sibling manifold, or sister, of the figure-eight knot complement. The figure-eight knot's complement and its sibling have the smallest volume of any orientable, cusped hyperbolic 3-manifold. Thus the Weeks manifold can be obtained by hyperbolic Dehn surgery on one of the two smallest orientable cusped hyperbolic 3-manifolds. See also: Meyerhoff manifold, the closed orientable hyperbolic 3-manifold of second-smallest volume.
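A quick numerical check of the cubic that appears in the volume formula: the sketch below (using NumPy, an assumption beyond the source text) locates the complex root with positive imaginary part.

```python
import numpy as np

# Coefficients of theta^3 + 0*theta^2 - theta + 1, the cubic from the
# arithmetic description of the Weeks manifold's volume.
roots = np.roots([1, 0, -1, 1])

# Pick the complex root with positive imaginary part.
theta = next(r for r in roots if r.imag > 0)

print(theta)       # roughly 0.6624 + 0.5623j
print(abs(theta))  # its absolute value, roughly 0.8689
```

Substituting theta back into the cubic gives a residual near machine precision, which confirms the root-finding step.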
https://en.wikipedia.org/wiki/Lazar%20Lyusternik
Lazar Aronovich Lyusternik (also Lusternik, Lusternick, Ljusternik; 31 December 1899 – 22 July 1981) was a Soviet mathematician. He is famous for his work in topology and differential geometry, to which he applied the variational principle. Using the theory he introduced together with Lev Schnirelmann, he proved the theorem of the three geodesics, a conjecture by Henri Poincaré that every convex body in three dimensions has at least three simple closed geodesics. The ellipsoid with distinct but nearly equal axes is the critical case, with exactly three closed geodesics. The Lusternik–Schnirelmann theory, as it is now called, is based on earlier work by Poincaré, George David Birkhoff, and Marston Morse. It has led to numerous advances in differential geometry and topology. For this work Lyusternik received the Stalin Prize in 1946. In addition to serving as a professor of mathematics at Moscow State University, Lyusternik also worked at the Steklov Mathematical Institute (RAS) from 1934 to 1948 and at the Lebedev Institute of Precise Mechanics and Computer Engineering (IPMCE) from 1948 to 1955. He was a student of Nikolai Luzin. In 1930 he became one of the initiators of the Egorov affair, and later one of the participants in the notorious political persecution of his teacher Nikolai Luzin known as the Luzin affair. See also: Lusternik–Schnirelmann category; Lyusternik's generalization of the Brunn–Minkowski theorem
https://en.wikipedia.org/wiki/Starlet%20sea%20anemone
The starlet sea anemone (Nematostella vectensis) is a species of small sea anemone in the family Edwardsiidae, native to the east coast of the United States, with introduced populations along the coast of southeast England and the west coast of the United States (it belongs to the class Anthozoa in the phylum Cnidaria, a sister group of Bilateria). Populations have also been located in Nova Scotia, Canada. This sea anemone is found in the shallow brackish water of coastal lagoons and salt marshes, where its slender column is usually buried in the mud and its tentacles exposed. Its genome has been sequenced and it is cultivated in the laboratory as a model organism, but the IUCN has listed it as a "Vulnerable species" in the wild. Description: The starlet sea anemone has a bulbous basal end and a contracting column of variable length. There is a fairly distinct division between the scapus, the main part of the column, and the capitulum, the part just below the crown of tentacles. The outer surface of the column has a loose covering of mucus to which particles of sediment tend to adhere. At the top of the column is an oral disk containing the mouth, surrounded by two rings of long slender tentacles. Typically there are fourteen tentacles, but sometimes as many as twenty, the outermost being longer than those of the inner whorl. The starlet sea anemone is translucent and largely colourless but usually has a pattern of white markings on the column and white banding on the tentacles. Distribution and habitat: The starlet sea anemone occurs on the eastern and western seaboards of North America. Its range extends from Nova Scotia to Louisiana on the east coast and from Washington to California on the west coast. It is also known from three locations in the United Kingdom: two in East Anglia and one on the Isle of Wight. Its typical habitat is brackish ponds, brackish lagoons, and ditches and pools in salt marshes. It is found in positions with little water flow and seldom occurs more th
https://en.wikipedia.org/wiki/Decade%20of%20the%20Brain
The Decade of the Brain was a designation for 1990–1999 by U.S. president George H. W. Bush as part of a larger effort involving the Library of Congress and the National Institute of Mental Health of the National Institutes of Health "to enhance public awareness of the benefits to be derived from brain research". The interagency initiative was conducted through a variety of activities, including publications and programs, aimed at introducing members of Congress, their staffs, and the general public to cutting-edge research on the human brain, and at encouraging public dialog on the ethical, philosophical, and humanistic implications of these discoveries. Inception: During the 1980s, a renewed interest in the localization of neural phenomena to distinct anatomical regions of the brain led to a rapid uptick in more direct study of the brain and its psychological implications. In 1980, the Society for Neuroscience founded a committee dedicated to lobbying the United States Congress directly for increased neuroscience funding, as well as encouraging members of the society to contact their own representatives. This group, alongside the National Committee for Research in Neurological and Communicative Disorders, the National Science Foundation's Interagency Working Group in Neuroscience, the Association of American Medical Colleges, and the Inter-Society Council for Biology and Medicine, met with legislators in Washington, D.C. throughout the 1980s in order to educate them about the importance of neuroscience research and to advocate for fiscal appropriations. In 1987, the National Committee for Research in Neurological and Communicative Disorders collaborated with a National Institute of Neurological Disorders and Stroke advisory council to propose a large-scale attempt to build on recent advances in order to further research and clinical development in neuroscience. This proposal to designate the decade, beginning January 1990, as the "Decade of the Brain", was sponsore
https://en.wikipedia.org/wiki/Franz%E2%80%93Keldysh%20effect
The Franz–Keldysh effect is a change in the optical absorption of a semiconductor when an electric field is applied. The effect is named after the German physicist Walter Franz and the Russian physicist Leonid Keldysh. Karl W. Böer first observed the shift of the optical absorption edge with electric fields during the discovery of high-field domains and named it the Franz effect. A few months later, when the English translation of the Keldysh paper became available, he corrected this to the Franz–Keldysh effect. As originally conceived, the Franz–Keldysh effect is the result of wavefunctions "leaking" into the band gap. When an electric field is applied, the electron and hole wavefunctions become Airy functions rather than plane waves. The Airy function includes a "tail" which extends into the classically forbidden band gap. According to Fermi's golden rule, the more overlap there is between the wavefunctions of a free electron and a hole, the stronger the optical absorption will be. The Airy tails slightly overlap even if the electron and hole are at slightly different potentials (slightly different physical locations along the field). The absorption spectrum then includes a tail at energies below the band gap and some oscillations above it. This explanation does, however, omit the effects of excitons, which may dominate optical properties near the band gap. The Franz–Keldysh effect occurs in uniform, bulk semiconductors, unlike the quantum-confined Stark effect, which requires a quantum well. Both are used for electro-absorption modulators. The Franz–Keldysh effect usually requires hundreds of volts, limiting its usefulness with conventional electronics, although this is not the case for commercially available Franz–Keldysh-effect electro-absorption modulators that use a waveguide geometry to guide the optical carrier. Effect on modulation spectroscopy: The absorption coefficient is related to the dielectric function, especially its imaginary part ε₂. From Maxwell's
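The Airy-function picture above can be illustrated numerically. This sketch (using SciPy, an assumption beyond the source) shows Ai(x) decaying for positive arguments (the "tail" in the classically forbidden region) and oscillating for negative ones:

```python
from scipy.special import airy

# airy(x) returns (Ai, Ai', Bi, Bi').  The decaying tail of Ai for
# x > 0 models wavefunction leakage into the forbidden band gap;
# the oscillations for x < 0 mirror the above-gap absorption wiggles.
for x in (-5.0, -2.0, 0.0, 2.0, 5.0):
    ai = airy(x)[0]
    print(f"Ai({x:+.1f}) = {ai:+.6f}")
```

The strictly decreasing values for x = 0, 2, 5 are the sub-gap absorption tail; the sign changes for negative x correspond to the oscillations above the gap.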
https://en.wikipedia.org/wiki/Gepr%C3%BCfte%20Sicherheit
The Geprüfte Sicherheit ("Tested Safety") or GS mark is a voluntary certification mark for technical equipment. It indicates that the equipment meets German and, if available, European safety requirements for such devices. The main difference between the GS and CE marks is that, for the GS mark, compliance with the European safety requirements has been tested and certified by a state-approved independent body; CE marking, in contrast, is issued upon the signing of a declaration that the product complies with European legislation. The GS mark is based on the German Product Safety Act ("Produktsicherheitsgesetz", or "ProdSG"). Testing for the mark is available from many different laboratories, such as DGUV Test, the TÜV organizations, Nemko, and IMQ. Although the GS mark was designed with the German market in mind, it appears on a large proportion of electronic products and machinery sold elsewhere in the world. See also: CE mark; UL mark
https://en.wikipedia.org/wiki/Voluntary%20Control%20Council%20for%20Interference%20by%20Information%20Technology%20Equipment
The Voluntary Control Council for Interference by Information Technology Equipment, or VCCI, is the Japanese body governing RF emissions (i.e. electromagnetic interference) standards. It was formed in December 1985. The VCCI mark of conformance also appears on some electrical equipment sold outside Japan. External links: VCCI English-language website
https://en.wikipedia.org/wiki/Meat%20hook
A meat hook is any hook normally used in butcheries to hang meat. This form of hook is a variation on the classic S hook. Types: An S-shaped hook, or jointed hook, is used to hang up meat or the carcasses of animals such as pigs and cattle on a moving conveyor line. The jointed hook is able to swivel, allowing the carcass to be turned more easily. A gambrel hook or gambrel stick is a frame (shaped like a horse's hind leg) with hooks for suspending a carcass in a more spread-out fashion. A grip hook is a single hook with a handle of some kind, used to hold on to a carcass while butchering. A bacon hook or bacon hanger is a multi-pronged, coat-hanger-type hook used to hang bacon joints and other meat.
https://en.wikipedia.org/wiki/Vacuum%20engineering
Vacuum engineering is the field of engineering that deals with the practical use of vacuum in industrial and scientific applications. Vacuum may improve the productivity and performance of processes otherwise carried out at normal air pressure, or may make possible processes that could not be done in the presence of air. Vacuum engineering techniques are widely applied in materials processing such as drying or filtering, chemical processing, application of metal coatings to objects, manufacture of electron devices and incandescent lamps, and in scientific research. Vacuum techniques vary depending on the desired vacuum pressure to be achieved. For a "rough" vacuum, above 100 pascals, conventional methods of analysis, materials, pumps and measuring instruments can be used, whereas ultrahigh vacuum systems use specialized equipment to achieve pressures below one-millionth of a pascal. At such low pressures, even metals may emit enough gas to cause serious contamination. Design and mechanism: Vacuum systems usually consist of gauges, vapor jet pumps, vapor traps and valves, along with interconnecting piping. A vessel operating under vacuum may be any of several types, such as a processing tank, a steam simulator, a particle accelerator, or any other enclosed chamber maintained at less than atmospheric gas pressure. Since a vacuum is created in an enclosed chamber, the ability to withstand external atmospheric pressure is the usual design consideration. To guard against buckling or collapse, the outer shell of the vacuum chamber is carefully evaluated, and any sign of deterioration is corrected by increasing the thickness of the shell itself. The main materials used for vacuum design are usually mild steel, stainless steel, and aluminum. Other materials such as glass are used for gauge glasses, view ports, and sometimes electrical insulation. The interior of
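The two pressure thresholds quoted above can be captured in a small helper. This is an illustrative sketch only; real vacuum practice subdivides the intermediate range further (low, medium, high vacuum), which is lumped together here:

```python
def vacuum_regime(pressure_pa: float) -> str:
    """Classify a pressure using the two thresholds from the text:
    'rough' vacuum above 100 Pa, ultrahigh vacuum below 1e-6 Pa.
    Everything in between is grouped as 'intermediate' here."""
    if pressure_pa > 100.0:
        return "rough"
    if pressure_pa < 1e-6:
        return "ultrahigh"
    return "intermediate"

print(vacuum_regime(1.0e3))   # rough
print(vacuum_regime(1.0e-2))  # intermediate
print(vacuum_regime(1.0e-8))  # ultrahigh
```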
https://en.wikipedia.org/wiki/Linnaean%20enterprise
The Linnaean enterprise is the task of identifying and describing all living species. It is named after Carl Linnaeus, a Swedish botanist, ecologist and physician who laid the foundations for the modern scheme of taxonomy. As of 2006, the Linnaean enterprise was considered to be barely begun. There are estimated to be 10 million living species, but only about 1.5–1.8 million have even been named, and fewer than 1% of these have been studied enough to understand the basics of their ecological roles. The Linnaean enterprise plays a role in both applied and basic science. In applied science, it can assist in finding new natural products and species (bioprospecting) and in developing effective conservation practices. In basic science, it allows for an understanding of evolutionary biology and of how ecosystems function. The cost of completing the Linnaean enterprise has been estimated at US$5 billion. Name: Carl Linnaeus (1707–1778) was one of the best-known natural scientists of his time. Dissatisfied with the contemporary way of naming living things, he was responsible for creating the binomial nomenclature system still used in science to name species of organisms. Linnaeus's work laid the basis of modern taxonomy. As part of his work, Linnaeus formally described and classified numerous species of plants and animals, and created binomial (scientific) names that are still used today for many of the most common species in Europe. Notably, Linnaeus's taxonomic system was the first in which humans were taxonomically grouped with apes, classifying both the genus Homo and Simia (now defunct and replaced by several other genera) as members of the order Primates. See also: Catalogue of Life; Encyclopedia of Life; Wikispecies
https://en.wikipedia.org/wiki/The%20World%20Economy%3A%20Historical%20Statistics
The World Economy: Historical Statistics is a landmark book by Angus Maddison. Published in 2004 by the OECD Development Centre, it studies the growth of populations and economies across the centuries: not just the world economy as it is now, but how it was in the past. Among other things, it showed that Europe's gross domestic product (GDP) per capita progressed faster than that of the leading Asian economies from 1000 AD onward, again reaching a higher level than elsewhere from the 15th century, while Asian GDP per capita remained static until 1800, when it even began to shrink in absolute terms, as Maddison demonstrated in a subsequent book. At the same time, Maddison showed the Asian economies recovering lost ground from the 1950s, and documented the much faster rise of Japan and East Asia and the economic shrinkage of Russia in the 1990s. It also shows how colonialism strongly benefited Europe at a tremendous cost to Asia. The book is a mass of statistical tables, mostly on a decade-by-decade basis, along with notes explaining the methods employed in arriving at particular figures. It is available both as a paperback book and in electronic format. Some tables are available on the official website. See also: List of regions by past GDP (PPP) per capita; Angus Maddison statistics of the ten largest economies by GDP (PPP); Maddison Project, a project started in March 2010 to continue Maddison's work after his death
https://en.wikipedia.org/wiki/Nikitsky%20Botanical%20Garden
Nikita Botanical Garden is one of the oldest botanical gardens in Europe. It is located in Crimea, close to Yalta, by the shores of the Black Sea. It was founded in 1812 and named after the settlement of Nikita in Crimea, then part of the Russian Empire. Its founder and first director was the Russian botanist Christian von Steven, of Swedish descent. The garden and its collections were greatly expanded by the Livonian Nicolai Anders von Hartwiss, director from 1827 to 1860. The total area is 11 square kilometres. It is a scientific research centre, a producer of saplings and seeds, and a tourist attraction. The garden opened a representative office in Damascus in 2022. The garden was part of the Ukrainian Academy of Agrarian Sciences. It has subsidiaries in Crimea and Kherson Oblast. Its collection counts over 50,000 species, sorts and hybrids. Its scientific work consists of the study of natural flora, the collection of a gene fund, and the selection and introduction of new agricultural plants for southern Ukraine, Russia, and other countries.
https://en.wikipedia.org/wiki/Position%20angle
In astronomy, position angle (usually abbreviated PA) is the convention for measuring angles on the sky. The International Astronomical Union defines it as the angle measured relative to the north celestial pole (NCP), turning positive into the direction of the right ascension. In standard (non-flipped) images, this is a counterclockwise measure relative to the axis into the direction of positive declination. In the case of observed visual binary stars, it is defined as the angular offset of the secondary star from the primary relative to the north celestial pole. As the example illustrates, if one were observing a hypothetical binary star with a PA of 135°, an imaginary line in the eyepiece drawn from the north celestial pole to the primary (P) would be offset from the secondary (S) such that the angle would be 135°. When graphing visual binaries, the NCP line is, as in the illustration, normally drawn from the center point (origin), which is the primary, downward (that is, with north at the bottom), and PA is measured counterclockwise. The direction of the proper motion can also, for example, be given by its position angle. The definition of position angle is also applied to extended objects like galaxies, where it refers to the angle made by the major axis of the object with the NCP line. Nautics: The concept of the position angle is inherited from nautical navigation on the oceans, where the optimum compass course is the course from a known position to a target position with minimum effort. Setting aside the influence of winds and ocean currents, the optimum course is the course of smallest distance between the two positions on the ocean surface. Computing the compass course is known as the inverse geodetic problem. This article considers only the abstraction of minimizing the distance when traveling on the surface of a sphere of some radius: in which direction angle relative to North should the ship steer to reach the target position? Se
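The "in which direction should the ship steer" question reduces to the standard initial-bearing formula on a sphere. The sketch below is a minimal illustration, assuming a perfect sphere (no winds, currents, or ellipsoidal corrections):

```python
from math import radians, degrees, sin, cos, atan2

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial compass course (degrees clockwise from North) along the
    great circle from (lat1, lon1) to (lat2, lon2) on a sphere."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dlon = radians(lon2 - lon1)
    x = sin(dlon) * cos(phi2)
    y = cos(phi1) * sin(phi2) - sin(phi1) * cos(phi2) * cos(dlon)
    return degrees(atan2(x, y)) % 360.0

print(initial_bearing(0, 0, 0, 90))   # due east along the equator: 90.0
print(initial_bearing(0, 0, 10, 0))   # due north along a meridian: 0.0
```

Note that except along the equator or a meridian, the bearing changes continuously along a great circle, so a real voyage would recompute it as the ship advances.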
https://en.wikipedia.org/wiki/Dehn%20plane
In geometry, Max Dehn introduced two examples of planes, a semi-Euclidean geometry and a non-Legendrian geometry, that have infinitely many lines parallel to a given one passing through a given point, but where the sum of the angles of a triangle is at least π. A similar phenomenon occurs in hyperbolic geometry, except that the sum of the angles of a triangle is less than π. Dehn's examples use a non-Archimedean field, so that the Archimedean axiom is violated. They were introduced by Dehn and have been discussed by later authors. Dehn's non-Archimedean field Ω(t): To construct his geometries, Dehn used a non-Archimedean ordered Pythagorean field Ω(t), a Pythagorean closure of the field of rational functions R(t), consisting of the smallest field of real-valued functions on the real line containing the real constants and the identity function t (taking any real number to itself) and closed under the operation ω ↦ √(1 + ω²). The field Ω(t) is ordered by putting x > y if the function x is larger than y for sufficiently large reals. An element x of Ω(t) is called finite if m < x < n for some integers m, n, and is called infinite otherwise. Dehn's semi-Euclidean geometry: The set of all pairs (x, y), where x and y are any (possibly infinite) elements of the field Ω(t), with the usual metric, which takes values in Ω(t), gives a model of Euclidean geometry. The parallel postulate is true in this model, but if the deviation from the perpendicular is infinitesimal (meaning smaller than any positive rational number), the intersecting lines intersect at a point that is not in the finite part of the plane. Hence, if the model is restricted to the finite part of the plane (points (x, y) with x and y finite), a geometry is obtained in which the parallel postulate fails but the sum of the angles of a triangle is π. This is Dehn's semi-Euclidean geometry. Dehn's non-Legendrian geometry: In the same paper, Dehn also constructed an example of a non-Legendrian geometry where there are infini
https://en.wikipedia.org/wiki/Electron%20spectroscopy
Electron spectroscopy refers to a group of techniques based on the analysis of the energies of emitted electrons such as photoelectrons and Auger electrons. This group includes X-ray photoelectron spectroscopy (XPS), which is also known as electron spectroscopy for chemical analysis (ESCA), electron energy loss spectroscopy (EELS), ultraviolet photoelectron spectroscopy (UPS), and Auger electron spectroscopy (AES). These analytical techniques are used to identify and determine the elements and their electronic structures at the surface of a test sample. Samples can be solids, gases or liquids. Chemical information is obtained only from the uppermost atomic layers of the sample (depth of 10 nm or less) because the energies of Auger electrons and photoelectrons are quite low, typically 20–2000 eV. For this reason, electron spectroscopy techniques are used for surface chemical analysis. History: The development of electron spectroscopy can be considered to have begun in 1887, when the German physicist Heinrich Rudolf Hertz discovered the photoelectric effect but was unable to explain it. In 1900, Max Planck (1918 Nobel Prize in Physics) suggested that energy carried by electromagnetic waves could only be released in "packets" of energy. In 1905, Albert Einstein (1921 Nobel Prize in Physics) explained Planck's discovery and the photoelectric effect. He presented the hypothesis that light energy is carried in discrete quantized packets (photons), each with energy hν, to explain the experimental observations. Two years after this publication, in 1907, P. D. Innes recorded the first XPS spectrum. After numerous developments and the Second World War, Kai Siegbahn (Nobel Prize in 1981) and his research group in Uppsala, Sweden, recorded in 1954 the first high-energy-resolution XPS spectrum. In 1967, Siegbahn published a comprehensive study of XPS and its usefulness, which he called electron spectroscopy for chemical analysis (ESCA). Concurrentl
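The energy bookkeeping behind XPS follows directly from Einstein's photoelectric relation. A minimal sketch follows; the photon energy shown is the common Al Kα line, while the kinetic energy and work function values are illustrative assumptions, not from the source:

```python
def binding_energy_ev(photon_ev: float, kinetic_ev: float, work_fn_ev: float) -> float:
    """Photoelectric bookkeeping used in XPS/ESCA:
    binding energy = photon energy (h*nu) - measured kinetic energy
                     - spectrometer work function (phi)."""
    return photon_ev - kinetic_ev - work_fn_ev

# Al K-alpha source (1486.6 eV); the kinetic energy and work function
# below are hypothetical illustration values.
print(binding_energy_ev(1486.6, 1200.0, 4.5))  # about 282.1 eV
```

Because the binding energies of core levels are element-specific, this simple subtraction is what lets an XPS spectrum identify which elements are present at the surface.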
https://en.wikipedia.org/wiki/Point%20of%20interest
A point of interest (POI) is a specific point location that someone may find useful or interesting. An example is a point on the Earth representing the location of the Eiffel Tower, or a point on Mars representing the location of its highest mountain, Olympus Mons. Most consumers use the term when referring to hotels, campsites, fuel stations or any other categories used in modern automotive navigation systems. Users of a mobile device can be provided with geolocation- and time-aware POI services that recommend nearby geolocations with temporal relevance (e.g. POIs for special services in a ski resort are available only in winter). The term is widely used in cartography, especially in electronic variants including GIS, and in GPS navigation software. In this context the synonym waypoint is common. A GPS point of interest specifies, at minimum, the latitude and longitude of the POI, assuming a certain map datum. A name or description for the POI is usually included, and other information such as altitude or a telephone number may also be attached. GPS applications typically use icons to represent different categories of POI on a map graphically. A region of interest (ROI) and a volume of interest (VOI) are similar in concept, denoting a region or a volume (which may contain various individual POIs). In medical fields such as histology/pathology/histopathology, points of interest are selected from the general background in a field of view; for example, among hundreds of normal cells, the pathologist may find 3 or 4 neoplastic cells that stand out from the others upon staining. POI collections: Digital maps for modern GPS devices typically include a basic selection of POI for the map area. However, websites exist that specialize in the collection, verification, management and distribution of POI which end-users can load onto their devices to replace or supplement the existing POI. While some of these websites are generic, and will collect and categorize POI f
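A minimal POI record matching the description above (required latitude/longitude, optional name, altitude, and phone number) can be sketched as a small data class; the field names here are illustrative, not a standard:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PointOfInterest:
    """Minimal POI record: latitude and longitude are required
    (relative to an assumed datum such as WGS84); name, altitude,
    and phone number are optional extras."""
    lat: float
    lon: float
    name: Optional[str] = None
    altitude_m: Optional[float] = None
    phone: Optional[str] = None

# Approximate coordinates of the Eiffel Tower, for illustration.
eiffel = PointOfInterest(48.8583, 2.2945, name="Eiffel Tower")
print(eiffel)
```

Real interchange formats such as GPX carry essentially the same fields per waypoint.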
https://en.wikipedia.org/wiki/Bacterial%20cell%20structure
The bacterium, despite its simplicity, contains a well-developed cell structure which is responsible for some of its unique biological structures and pathogenicity. Many structural features are unique to bacteria and are not found among archaea or eukaryotes. Because of the simplicity of bacteria relative to larger organisms and the ease with which they can be manipulated experimentally, the cell structure of bacteria has been well studied, revealing many biochemical principles that have subsequently been applied to other organisms. Cell morphology: Perhaps the most elemental structural property of bacteria is their morphology (shape). Typical examples include: coccus (circular or spherical), bacillus (rod-like), coccobacillus (between a sphere and a rod), spiral (corkscrew-like), and filamentous (elongated). Cell shape is generally characteristic of a given bacterial species, but can vary depending on growth conditions. Some bacteria have complex life cycles involving the production of stalks and appendages (e.g. Caulobacter), and some produce elaborate structures bearing reproductive spores (e.g. Myxococcus, Streptomyces). Bacteria generally form distinctive cell morphologies when examined by light microscopy and distinct colony morphologies when grown on Petri plates. Perhaps the most obvious structural characteristic of bacteria is (with some exceptions) their small size. For example, Escherichia coli cells, an "average"-sized bacterium, are about 2 µm (micrometres) long and 0.5 µm in diameter, with a cell volume of 0.6–0.7 µm³. This corresponds to a wet mass of about 1 picogram (pg), assuming that the cell consists mostly of water. The dry mass of a single cell can be estimated as 23% of the wet mass, amounting to 0.2 pg. About half of the dry mass of a bacterial cell consists of carbon, and about half of it can also be attributed to proteins. Therefore, a typical fully grown 1-liter culture of Escherichia coli (at an optical density of 1.0, corresponding to c. 10⁹
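The mass figures quoted above are simple arithmetic and can be checked directly; the roughly 1 pg wet mass and 23% dry fraction are the values from the text:

```python
wet_mass_pg = 1.0     # quoted wet mass of one E. coli cell, in picograms
dry_fraction = 0.23   # dry mass is estimated as 23% of wet mass
dry_mass_pg = dry_fraction * wet_mass_pg

print(f"dry mass ~{dry_mass_pg:.2f} pg")  # consistent with the ~0.2 pg quoted
```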
https://en.wikipedia.org/wiki/Cytotrophoblast
"Cytotrophoblast" is the name given both to the inner layer of the trophoblast (also called the layer of Langhans) and to the cells that constitute it. It is interior to the syncytiotrophoblast and external to the wall of the blastocyst in a developing embryo. The cytotrophoblast is considered to be the trophoblastic stem cell because the layer surrounding the blastocyst remains while daughter cells differentiate and proliferate to function in multiple roles. There are two lineages through which cytotrophoblastic cells may differentiate: fusion and invasive. The fusion lineage yields syncytiotrophoblast and the invasive lineage yields interstitial cytotrophoblast cells. Cytotrophoblastic cells play an important role in the implantation of an embryo in the uterus. Fusion lineage: All syncytiotrophoblast is formed from the fusion of two or more cytotrophoblasts via this fusion pathway. This pathway is important because the syncytiotrophoblast plays an important role in fetal-maternal gas exchange, nutrient exchange, and immunological and metabolic functions. An undifferentiated cytotrophoblastic stem cell will differentiate into a villous cytotrophoblast, which is what constitutes primary chorionic villi, and will eventually coalesce into villous syncytiotrophoblast. The formation of syncytiotrophoblast from cytotrophoblast is a terminal differentiation step of trophoblastic cells. Syncytialization of cytotrophoblastic cells can be induced in vitro through multiple signalling molecules including epidermal growth factor, glucocorticoids, and human chorionic gonadotropin. Invasive lineage: The invasive lineage creates cytotrophoblasts that are essential in the process of implantation and forming a fully functional placenta. An undifferentiated cytotrophoblastic stem cell will differentiate into an extravillous cytotrophoblast intermediate and then into an interstitial cytotrophoblast. An interstitial cytotrophoblast may then further differentiate into an endovascular
https://en.wikipedia.org/wiki/LIGA
LIGA is a fabrication technology used to create high-aspect-ratio microstructures. The term is a German acronym for Lithographie, Galvanoformung, Abformung – lithography, electroplating, and molding. Overview: The LIGA process consists of three main processing steps: lithography, electroplating and molding. There are two main LIGA fabrication technologies: X-ray LIGA, which uses X-rays produced by a synchrotron to create high-aspect-ratio structures, and UV LIGA, a more accessible method which uses ultraviolet light to create structures with relatively low aspect ratios. Notable characteristics of X-ray LIGA-fabricated structures include: high aspect ratios on the order of 100:1; parallel side walls with a flank angle on the order of 89.95°; smooth side walls, suitable for optical mirrors; structural heights from tens of micrometers to several millimeters; and structural details on the order of micrometers over distances of centimeters. X-Ray LIGA: X-ray LIGA is a fabrication process in microtechnology that was developed in the early 1980s by a team under the leadership of Erwin Willy Becker and Wolfgang Ehrfeld at the Institute for Nuclear Process Engineering (Institut für Kernverfahrenstechnik, IKVT) at the Karlsruhe Nuclear Research Center, since renamed the Institute for Microstructure Technology (Institut für Mikrostrukturtechnik, IMT) at the Karlsruhe Institute of Technology (KIT). LIGA was one of the first major techniques to allow on-demand manufacturing of high-aspect-ratio structures (structures that are much taller than they are wide) with lateral precision below one micrometer. In the process, an X-ray-sensitive polymer photoresist, typically PMMA, bonded to an electrically conductive substrate, is exposed to parallel beams of high-energy X-rays from a synchrotron radiation source through a mask partly covered with a strong X-ray-absorbing material. Chemical removal of exposed (or unexposed) photoresist results in a three-dimensional structure, which can be filled by the electrodeposition of me
https://en.wikipedia.org/wiki/Sural%20nerve
The sural nerve (L4-S1) is generally considered a purely cutaneous nerve of the posterolateral leg and lateral ankle. The sural nerve originates from a combination of either the sural communicating branch and the medial sural cutaneous nerve, or the lateral sural cutaneous nerve. This group of nerves is termed the sural nerve complex. There are eight documented variations of the sural nerve complex. Once formed, the sural nerve courses from the posterior midline toward the posterolateral leg and around the lateral malleolus. The sural nerve terminates as the lateral dorsal cutaneous nerve. Anatomy The sural nerve (L4-S1) is a cutaneous sensory nerve of the posterolateral calf with cutaneous innervation to the distal one-third of the lower leg. Formation of the sural nerve results from either anastomosis of the medial sural cutaneous nerve and the sural communicating nerve, or a continuation of the lateral sural cutaneous nerve traveling parallel to the medial sural cutaneous nerve. The sural nerve specifically innervates the skin over the posterolateral leg and lower lateral ankle via lateral calcaneal branches. Innervation The sural nerve provides cutaneous innervation to the skin of the posterior to posterolateral leg. This nerve is part of the sciatic nerve sensorium. It provides only autonomic and sensory nerve fibers to the skin of the posterolateral leg and ankle. These fibers originate from perikarya located in the spinal ganglia and travel through the lumbosacral plexus via nerve roots L4-S1. When testing for deficits, note that multiple nerves (the lateral calcaneal nerve, sural nerve, and lateral dorsal cutaneous nerve of the foot) often supply overlapping, converging sensory territories around the lower extremity. Anatomic Course Grossly, the course of this nerve leads it from its highly varied anastomotic formation to its more predictable terminal course down the remaining posterior leg. The anastomosis forming the sural nerve typi
https://en.wikipedia.org/wiki/Flexural%20strength
Flexural strength, also known as modulus of rupture, bend strength, or transverse rupture strength, is a material property, defined as the stress in a material just before it yields in a flexure test. The transverse bending test is most frequently employed, in which a specimen having either a circular or rectangular cross-section is bent until fracture or yielding using a three-point flexural test technique. The flexural strength represents the highest stress experienced within the material at its moment of yield, and is measured in units of stress. Introduction When an object formed of a single material, like a wooden beam or a steel rod, is bent (Fig. 1), it experiences a range of stresses across its depth (Fig. 2). At the edge of the object on the inside of the bend (concave face) the stress is at its maximum compressive value. At the outside of the bend (convex face) the stress is at its maximum tensile value. These inner and outer edges of the beam or rod are known as the 'extreme fibers'. Most materials generally fail under tensile stress before they fail under compressive stress. Flexural versus tensile strength The flexural strength would be the same as the tensile strength if the material were homogeneous. In fact, most materials have small or large defects in them which act to concentrate the stresses locally, effectively causing a localized weakness. When a material is bent, only the extreme fibers are at the largest stress, so, if those fibers are free from defects, the flexural strength will be controlled by the strength of those intact 'fibers'. However, if the same material were subjected to only tensile forces, then all the fibers in the material would be at the same stress, and failure would initiate when the weakest fiber reaches its limiting tensile stress. Therefore, it is common for flexural strengths to be higher than tensile strengths for the same material. Conversely, a homogeneous material with
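For the rectangular cross-section described above, the three-point bend test has a standard closed-form expression, σ = 3FL/(2bd²); the formula is supplied here from the standard test geometry rather than from the excerpt, and the numeric values below are illustrative assumptions:

```python
def flexural_strength_3pt(F, L, b, d):
    """Flexural strength (modulus of rupture) of a rectangular beam in a
    three-point bend test: sigma = 3*F*L / (2*b*d**2).

    F: load at fracture or yield (N), L: support span (m),
    b: specimen width (m), d: specimen thickness (m).
    Returns stress in pascals.
    """
    return 3 * F * L / (2 * b * d ** 2)

# Illustrative specimen: 10 mm wide, 5 mm thick, 80 mm span, failing at 400 N.
sigma = flexural_strength_3pt(F=400, L=0.080, b=0.010, d=0.005)
print(f"{sigma / 1e6:.0f} MPa")  # -> 192 MPa
```

Note that σ here is the stress in the extreme fibers at failure, which is why, as the text explains, it can exceed the tensile strength measured on the same material.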
https://en.wikipedia.org/wiki/Kadomtsev%E2%80%93Petviashvili%20equation
In mathematics and physics, the Kadomtsev–Petviashvili equation (often abbreviated as KP equation) is a partial differential equation to describe nonlinear wave motion. Named after Boris Borisovich Kadomtsev and Vladimir Iosifovich Petviashvili, the KP equation is usually written as where . The above form shows that the KP equation is a generalization to two spatial dimensions, x and y, of the one-dimensional Korteweg–de Vries (KdV) equation. To be physically meaningful, the wave propagation direction has to be not-too-far from the x direction, i.e. with only slow variations of solutions in the y direction. Like the KdV equation, the KP equation is completely integrable. It can also be solved using the inverse scattering transform much like the nonlinear Schrödinger equation. In 2002, the regularized version of the KP equation, naturally referred to as the Benjamin–Bona–Mahony–Kadomtsev–Petviashvili equation (or simply the BBM-KP equation), was introduced as an alternative model for small amplitude long waves in shallow water moving mainly in the x direction in 2+1 space. where . The BBM-KP equation provides an alternative to the usual KP equation, in a similar way that the Benjamin–Bona–Mahony equation is related to the classical Korteweg–de Vries equation, as the linearized dispersion relation of the BBM-KP is a good approximation to that of the KP but does not exhibit the unwanted limiting behavior as the Fourier variable dual to x approaches . History The KP equation was first written in 1970 by Soviet physicists Boris B. Kadomtsev (1928–1998) and Vladimir I. Petviashvili (1936–1993); it came as a natural generalization of the KdV equation (derived by Korteweg and De Vries in 1895). Whereas in the KdV equation waves are strictly one-dimensional, in the KP equation this restriction is relaxed. Still, both in the KdV and the KP equation, waves have to travel in the positive x-direction. Connections to physics The KP equation can be used to model wate
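The display equation following "usually written as" was lost in extraction. In a standard normalization (the source page's exact scaling may differ slightly), the KP equation for u = u(x, y, t) reads:

```latex
\partial_x\!\left(\partial_t u + u\,\partial_x u + \epsilon^2\,\partial_{xxx} u\right)
  + \lambda\,\partial_{yy} u = 0, \qquad \lambda = \pm 1,
```

where λ = −1 gives the KP-I equation (e.g. strong surface tension in the water-wave setting) and λ = +1 gives KP-II (weak surface tension). Dropping the y-dependence recovers the one-dimensional KdV equation, as the surrounding text states.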
https://en.wikipedia.org/wiki/Arakawa%27s%20syndrome%20II
Arakawa's syndrome II is an autosomal dominant metabolic disorder that causes a deficiency of the enzyme tetrahydrofolate-methyltransferase; affected individuals cannot properly metabolize methylcobalamin, a type of vitamin B12. Presentation This disorder causes neurological problems, including intellectual disability, brain atrophy and ventricular dilation, myoclonus, hypotonia, and epilepsy. It is also associated with growth retardation, megaloblastic anemia, pectus excavatum, scoliosis, vomiting, diarrhea, and hepatosplenomegaly. Genetics Arakawa's syndrome II is inherited in an autosomal dominant manner. This means the defective gene responsible for the disorder is located on an autosome, and one copy of the defective gene is sufficient to cause the disorder when inherited from a parent who has the disorder. Diagnosis Management Eponym It is called "Arakawa syndrome 2" after Tsuneo Arakawa (1949–2003), a Japanese physician; in this context, "Arakawa syndrome 1" refers to glutamate formiminotransferase deficiency.
https://en.wikipedia.org/wiki/Self%20defined%20ethnicity
Self defined ethnicity (SDE) codes are a set of codes used by the Home Office in the United Kingdom to classify an individual's ethnicity according to that person's self-definition. The codes are also called "18 + 1" codes, as there are 18 of them, plus one code (NS) for "not stated". In addition to the previously used 16+1 codes, they contain the categories W3 and O2, while A4 replaces O1 (Chinese). The code system originated in the United Kingdom Census 2001, based on Recommendation 61 of the Stephen Lawrence Inquiry Report (SLIR). British police forces are required to use the SDE 18+1 codes (as opposed to the commonly used radio shorthand IC codes) when spoken contact has taken place and an individual has been given an opportunity to state their self-perceived ethnicity.
List of SDE codes
White
W1 – British
W2 – Irish
W3 – Gypsy or Irish Traveller
W9 – Any other White background
Mixed or Multiple ethnic groups
M1 – White and Black Caribbean
M2 – White and Black African
M3 – White and Asian
M9 – Any other Mixed or Multiple background
Asian or Asian British
A1 – Indian
A2 – Pakistani
A3 – Bangladeshi
A4 – Chinese
A9 – Any other Asian background
Black, Black British, Caribbean or African
B1 – Caribbean
B2 – African
B9 – Any other Black, Black British or Caribbean background
Other ethnic groups
O2 – Arab
O9 – Any other ethnic group
Not stated
NS – Not Stated
See also Classification of ethnicity in the United Kingdom IC codes External links Ethnicity facts and figures. Government data about the UK's different ethnic groups.
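For machine processing, the 18+1 list above can be encoded as a simple lookup table; the dictionary below just transcribes the codes as listed in this article:

```python
# SDE "18 + 1" codes, transcribed from the list above.
SDE_18_PLUS_1 = {
    # White
    "W1": "British", "W2": "Irish", "W3": "Gypsy or Irish Traveller",
    "W9": "Any other White background",
    # Mixed or Multiple ethnic groups
    "M1": "White and Black Caribbean", "M2": "White and Black African",
    "M3": "White and Asian", "M9": "Any other Mixed or Multiple background",
    # Asian or Asian British
    "A1": "Indian", "A2": "Pakistani", "A3": "Bangladeshi", "A4": "Chinese",
    "A9": "Any other Asian background",
    # Black, Black British, Caribbean or African
    "B1": "Caribbean", "B2": "African",
    "B9": "Any other Black, Black British or Caribbean background",
    # Other ethnic groups
    "O2": "Arab", "O9": "Any other ethnic group",
    # Not stated
    "NS": "Not stated",
}

assert len(SDE_18_PLUS_1) == 19   # 18 codes + 1 "not stated"
print(SDE_18_PLUS_1["A4"])        # -> Chinese (A4 replaced the old O1 code)
```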
https://en.wikipedia.org/wiki/Periodontal%20ligament%20stem%20cells
Periodontal ligament stem cells (PDLSCs) are stem cells found near the periodontal ligament of the teeth. PDLSCs have shown potential in the regeneration of not only the periodontal complex but also other dental and non-dental tissues. They are involved in adult regeneration of the periodontal ligament, alveolar bone, and cementum. The cells are known to express STRO-1 and CD146 proteins. PDLSCs play a role as progenitor cells. They are capable of differentiating into osteoblasts, cementoblasts, chondrocytes, and adipocytes.
https://en.wikipedia.org/wiki/Weight%20distribution
Weight distribution is the apportioning of weight within a vehicle, especially cars, airplanes, and trains. Typically, it is written in the form x/y, where x is the percentage of weight in the front, and y is the percentage in the back. In a vehicle which relies on gravity in some way, weight distribution directly affects a variety of vehicle characteristics, including handling, acceleration, traction, and component life. For this reason weight distribution varies with the vehicle's intended usage. For example, a drag car maximizes traction at the rear axle while countering the reactionary pitch-up torque. It generates this counter-torque by placing a small amount of counterweight at a great distance forward of the rear axle. In the airline industry, load balancing is used to evenly distribute the weight of passengers, cargo, and fuel throughout an aircraft, so as to keep the aircraft's center of gravity close to its center of pressure to avoid losing pitch control. In military transport aircraft, it is common to have a loadmaster as a part of the crew; their responsibilities include calculating accurate load information for center of gravity calculations, and ensuring cargo is properly secured to prevent its shifting. In large aircraft and ships, multiple fuel tanks and pumps are often used, so that as fuel is consumed, the remaining fuel can be positioned to keep the vehicle balanced, and to reduce stability problems associated with the free surface effect. In the trucking industry, individual axle weight limits require balancing the cargo when the gross vehicle weight nears the legal limit. See also Center of mass Center of percussion Load transfer Mass distribution Roll center Tilt test Weight transfer
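The x/y notation described above is simply each axle's percentage share of the total weight. A minimal sketch (the axle loads below are made-up illustrative values):

```python
def weight_distribution(front_axle_load, rear_axle_load):
    """Return (front %, rear %) of total weight from per-axle loads
    (any consistent unit: kg, lb, N, ...)."""
    total = front_axle_load + rear_axle_load
    return (100 * front_axle_load / total, 100 * rear_axle_load / total)

# A car carrying 840 kg on the front axle and 560 kg on the rear:
front, rear = weight_distribution(840, 560)
print(f"{front:.0f}/{rear:.0f}")  # -> 60/40
```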
https://en.wikipedia.org/wiki/Four-valued%20logic
In logic, a four-valued logic is any logic with four truth values. Several types of four-valued logic have been advanced. Belnap Nuel Belnap considered the challenge of question answering by computer in 1975. Noting human fallibility, he was concerned with the case where two contradictory facts were loaded into memory, and then a query was made. "We all know about the fecundity of contradictions in two-valued logic: contradictions are never isolated, infecting as they do the whole system." Belnap proposed a four-valued logic as a means of containing contradiction. He called the table of values A4: Its possible values are true, false, both (true and false), and neither (true nor false). Belnap's logic is designed to cope with multiple information sources such that if only true is found then true is assigned, if only false is found then false is assigned, if some sources say true and others say false then both is assigned, and if no information is given by any information source then neither is assigned. These four values correspond to the elements of the power set based on {T, F}. T is the supremum and F the infimum in the logical lattice where None and Both are in the wings. Belnap has this interpretation: "The worst thing is to be told something is false simpliciter. You are better off (it is one of your hopes) in either being told nothing about it, or being told both that it is true and also that it is false; while of course best of all is to be told that it is true." Belnap notes that "paradoxes of implication" (A&~A)→B and A→(B∨~B) are avoided in his 4-valued system. Logical connectives Belnap addressed the challenge of extending logical connectives to A4. Since it is the power set on {T, F}, the elements of A4 are ordered by inclusion making it a lattice with Both at the supremum and None at the infimum, and T and F on the wings. Referring to Dana Scott, he assumes the connectives are Scott-continuous or monotonic functions. First he expands negation by ded
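Belnap's four values and connectives can be sketched directly from the power-set reading described above: each value is the subset of {true, false} that the information sources have asserted. The set encoding and function names here are illustrative choices, not Belnap's own notation:

```python
# Belnap's four values as subsets of the classical values {True, False}:
# T = told true only, F = told false only, BOTH = told both, NEITHER = told nothing.
T, F = frozenset({True}), frozenset({False})
BOTH, NEITHER = frozenset({True, False}), frozenset()

def NOT(a):
    """Negation swaps the classical values: 'told true' becomes 'told false'."""
    return frozenset({not x for x in a})

def AND(a, b):
    """Told true iff both conjuncts are told true;
    told false iff either conjunct is told false."""
    out = set()
    if True in a and True in b:
        out.add(True)
    if False in a or False in b:
        out.add(False)
    return frozenset(out)

def OR(a, b):
    """Disjunction as the De Morgan dual of AND."""
    return NOT(AND(NOT(a), NOT(b)))

# BOTH and NEITHER are fixed points of negation, as in Belnap's tables:
assert NOT(BOTH) == BOTH and NOT(NEITHER) == NEITHER
assert AND(BOTH, NEITHER) == F and OR(BOTH, NEITHER) == T
```

On the classical fragment {T, F} these functions reduce to ordinary two-valued negation, conjunction, and disjunction, and all three are monotonic with respect to the information (inclusion) ordering, matching the Scott-continuity requirement mentioned above.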
https://en.wikipedia.org/wiki/Curious%20George%20%28TV%20series%29
Curious George is an American children's animated television series based on the children's book series of the same name for PBS Kids which features Jeff Bennett as the voice of Ted Shackelford (credited as "The Man with the Yellow Hat", formerly called that in the original series books and telefilm books). Frank Welker, who voiced George in the 2006 feature film, reprises the role in the series. The show premiered on PBS Kids in the fall of 2006, and originally ended after nine seasons on April 1, 2015 before returning in 2018. Season 10 premiered on September 3, 2018 on Family Jr. in Canada. Seasons 10-13 debuted on NBCUniversal's streaming service Peacock in the United States when it launched in July 2020. Seasons 1-9 are available to stream for Peacock Premium subscribers since September 20, 2020, which is also available to stream on Hulu. Season 10 premiered on PBS on October 5 the same year. Curious George is a production of Universal 1440 Entertainment (Universal Studios Family Productions before 2013), Imagine Entertainment, and WGBH-TV (WGBH Kids) (before season 13), and animated by Toon City. From seasons 1-9, each episode has two animated segments per half hour episode, and a short live-action segment after each. The live-action shorts illustrate and explain various concepts in math and science, and shows a class with kids engaging in experiments, that teach the math or science concept featured in the previous story. Season 10 eliminated them, leaving only the animated segments left. Settings City: George and the Man live in an apartment in the Big City. The actual name of the city is unknown but has some similarities to New York. However in the episode "Curious George Takes a Vacation", an airport worker mentions they're in Illinois meaning they possibly live in Chicago. The Doorman keeps a pigeon coop on the roof and is the guardian of Hundley, his dachshund dog. The apartment is near Endless Park, the museum where Professor Wiseman works, and the z
https://en.wikipedia.org/wiki/Coanalytic%20set
In the mathematical discipline of descriptive set theory, a coanalytic set is a set (typically a set of real numbers or more generally a subset of a Polish space) that is the complement of an analytic set (Kechris 1994:87). Coanalytic sets are also referred to as Π¹₁ sets (see projective hierarchy).
https://en.wikipedia.org/wiki/Cain%20and%20Abel%20%28software%29
Cain and Abel (often abbreviated to Cain) was a password recovery tool for Microsoft Windows. It could recover many kinds of passwords using methods such as network packet sniffing, cracking various password hashes by using methods such as dictionary attacks, brute force and cryptanalysis attacks. Cryptanalysis attacks were done via rainbow tables which could be generated with the winrtgen.exe program provided with Cain and Abel. Cain and Abel was maintained by Massimiliano Montoro and Sean Babcock. Features WEP cracking Speeding up packet capture speed by wireless packet injection Ability to record VoIP conversations Decoding scrambled passwords Calculating hashes Traceroute Revealing password boxes Uncovering cached passwords Dumping protected storage passwords ARP spoofing IP to MAC Address resolver Network Password Sniffer LSA secret dumper Ability to crack: LM & NTLM hashes NTLMv2 hashes Microsoft Cache hashes Microsoft Windows PWL files Cisco IOS – MD5 hashes Cisco PIX – MD5 hashes APOP – MD5 hashes CRAM-MD5 MD5 hashes OSPF – MD5 hashes RIPv2 MD5 hashes VRRP – HMAC hashes Virtual Network Computing (VNC) Triple DES MD2 hashes MD4 hashes MD5 hashes SHA-1 hashes SHA-2 hashes RIPEMD-160 hashes Kerberos 5 hashes RADIUS shared key hashes IKE PSK hashes MSSQL hashes MySQL hashes Oracle and SIP hashes Status with virus scanners Some virus scanners (and browsers, e.g. Google Chrome 20.0.1132.47) detect Cain and Abel as malware. Avast! detects it as "Win32:Cain-B [Tool]" and classifies it as "Other potentially dangerous program", while Microsoft Security Essentials detects it as "Win32/Cain!4_9_14" and classifies it as "Tool: This program has potentially unwanted behavior." Even if Cain's install directory, as well as the word "Cain", are added to Avast's exclude list, the real-time scanner has been known to stop Cain from functioning. However, the latest version of Avast no longer blocks Cain. Symantec (the developer of the
https://en.wikipedia.org/wiki/Atomic%20mirror
In physics, an atomic mirror is a device which reflects neutral atoms in a way similar to the way a conventional mirror reflects visible light. Atomic mirrors can be made of electric fields or magnetic fields, electromagnetic waves or just silicon wafer; in the last case, atoms are reflected by the attracting tails of the van der Waals attraction (see quantum reflection). Such reflection is efficient when the normal component of the wavenumber of the atoms is small or comparable to the effective depth of the attraction potential (roughly, the distance at which the potential becomes comparable to the kinetic energy of the atom). To reduce the normal component, most atomic mirrors are blazed at the grazing incidence. At grazing incidence, the efficiency of the quantum reflection can be enhanced by a surface covered with ridges (ridged mirror). The set of narrow ridges reduces the van der Waals attraction of atoms to the surfaces and enhances the reflection. Each ridge blocks part of the wavefront, causing Fresnel diffraction. Such a mirror can be interpreted in terms of the Zeno effect. We may assume that the atom is "absorbed" or "measured" at the ridges. Frequent measuring (narrowly spaced ridges) suppresses the transition of the particle to the half-space with absorbers, causing specular reflection. At large separation between thin ridges, the reflectivity of the ridged mirror is determined by dimensionless momentum , and does not depend on the origin of the wave; therefore, it is suitable for reflection of atoms. Applications Atomic interferometry See also Quantum reflection Ridged mirror Zeno effect Atomic nanoscope Atom laser
https://en.wikipedia.org/wiki/Sister%20chromatids
A sister chromatid refers to the identical copies (chromatids) formed by the DNA replication of a chromosome, with both copies joined together by a common centromere. In other words, a sister chromatid may also be said to be 'one-half' of the duplicated chromosome. A pair of sister chromatids is called a dyad. A full set of sister chromatids is created during the synthesis (S) phase of interphase, when all the chromosomes in a cell are replicated. The two sister chromatids are separated from each other into two different cells during mitosis or during the second division of meiosis. Compare sister chromatids to homologous chromosomes, which are the two different copies of a chromosome that diploid organisms (like humans) inherit, one from each parent. Sister chromatids are by and large identical (since they carry the same alleles, also called variants or versions, of genes) because they derive from one original chromosome. An exception is towards the end of meiosis, after crossing over has occurred, because sections of each sister chromatid may have been exchanged with corresponding sections of the homologous chromatids with which they are paired during meiosis. Homologous chromosomes might or might not be the same as each other because they derive from different parents. There is evidence that, in some species, sister chromatids are the preferred template for DNA repair. Sister chromatid cohesion is essential for the correct distribution of genetic information between daughter cells and the repair of damaged chromosomes. Defects in this process may lead to aneuploidy and cancer, especially when checkpoints fail to detect DNA damage or when incorrectly attached mitotic spindles do not function properly. Mitosis Mitotic recombination is primarily a result of DNA repair processes responding to spontaneous or induced damages. Homologous recombinational repair during mitosis is largely limited to interaction between nearby sister chromatids that are present in a ce
https://en.wikipedia.org/wiki/International%20Committee%20on%20Taxonomy%20of%20Viruses
The International Committee on Taxonomy of Viruses (ICTV) authorizes and organizes the taxonomic classification of and the nomenclature for viruses. The ICTV develops a universal taxonomic scheme for viruses, and thus has the means to appropriately describe, name, and classify every virus taxon. The members of the International Committee on Taxonomy of Viruses are considered expert virologists. The ICTV was formed from and is governed by the Virology Division of the International Union of Microbiological Societies. Detailed work, such as identifying new taxa and delimiting the boundaries of species, genera, families, etc., is typically performed by study groups of experts in the families. History The International Committee on Nomenclature of Viruses (ICNV) was established in 1966, at the International Congress for Microbiology in Moscow, to standardize the naming of virus taxa. The ICNV published its first report in 1971. For viruses infecting vertebrates, the first report included 19 genera, 2 families, and a further 24 unclassified groups. The ICNV was renamed the International Committee on Taxonomy of Viruses in 1974. Organisational structure The organisation is divided into an executive committee, which includes members and executives with fixed-term elected roles, as well as directly appointed heads of seven subcommittees. Each subcommittee head, in turn, appoints numerous 'study groups', which each consist of one chair and a variable number of members dedicated to the taxonomy of a specific taxon, such as an order or family. This structure may be visualised as follows:
Executive committee
President
Vice-president
Secretaries
Business Secretary
Proposals Secretary
Data Secretary
Chairs – positions: 7 (one for each subcommittee)
Elected members – positions: 11
Subcommittees
Animal DNA Viruses and Retroviruses Subcommittee – study groups: 18
Animal dsRNA and ssRNA- Viruses Subcommittee – study groups: 24
Animal ssRNA+ Viruses Subcommittee – study
https://en.wikipedia.org/wiki/KNSN-TV
KNSN-TV (channel 21) is a primary sports-formatted independent television station in Reno, Nevada, United States, which has a secondary affiliation with MyNetworkTV. It is owned by Deerfield Media, which maintains joint sales and shared services agreements (JSA/SSA) with Sinclair Broadcast Group, owner of Fox affiliate KRXI-TV (channel 11), for the provision of certain services. Sinclair also manages NBC affiliate KRNV-DT (channel 4) under a separate JSA with Cunningham Broadcasting; however, Sinclair effectively owns KRNV as the majority of Cunningham's stock is owned by the family of deceased group founder Julian Smith. The three stations share studios on Vassar Street in Reno; KNSN-TV's transmitter is located on Red Hill between US 395 and SR 445 in Sun Valley, Nevada. History The station launched on October 11, 1981, as KAME-TV, an independent station airing movies (TV-21's The Big Movie), cartoons, westerns, and sitcoms. On October 9, 1986, it became a charter Fox affiliate. On January 16, 1995, KAME-TV picked up UPN on a secondary basis; it became a full-time UPN affiliate on January 1, 1996, after KRXI signed-on and took Fox. Between September 1996 and May 1997, the station was briefly owned by Raycom Media. With the 2006 shutdown and merge of The WB and UPN to form The CW, the station joined News Corporation–owned and Fox sister network MyNetworkTV on September 5, 2006. On July 20, 2012, one day after Cox Media Group purchased WAWS and WTEV in Jacksonville, Florida, and KOKI-TV and KMYT-TV in Tulsa, Oklahoma, from Newport Television, Cox put KRXI-TV (along with the LMA for KAME-TV) and sister stations WTOV-TV in Steubenville, Ohio, WJAC-TV in Johnstown, Pennsylvania, and KFOX-TV in El Paso, Texas (all in markets that are smaller than Tulsa), plus several radio stations in medium to small markets, on the selling block. On February 25, 2013, Cox announced that it would sell the four television stations, and the LMA for KAME, to Sinclair Broadcast Group; as
https://en.wikipedia.org/wiki/ACP%20131
ACP-131 is the controlling publication for the listing of Q codes and Z codes. It is published and revised from time to time by the Combined Communications Electronics Board (CCEB) countries: Australia, New Zealand, Canada, United Kingdom, and United States. When the meanings of the codes contained in ACP-131 are translated into various languages, the codes provide a means of communicating between ships of various nations, such as during a NATO exercise, where there is no common language. History The original edition of ACP-131 was published by the U.S. military during the early years of radio telegraphy for use by radio operators using Morse code on continuous wave (CW) telegraphy. It became especially useful, and even essential, to wireless radio operators on both military and civilian ships at sea before the development of advanced single-sideband telephony in the 1960s. Reason for the codes Radio communications, prior to the advent of landlines and satellites as communication paths and relays, was always subject to unpredictable fade-outs caused by weather conditions, practical limits on available emission power at the transmitter, radio frequency of the transmission, type of emission, type of transmitting antenna, signal waveform characteristics, modulation scheme in use, sensitivity of the receiver, presence or absence of atmospheric reflective layers above the earth (such as the E-layer and F-layers), the type of receiving antenna, the time of day, and numerous other factors. Because of these factors, which often limited transmission time on certain frequencies to only several hours a day, or only several minutes, it was necessary to keep each wireless transmission as short as possible while still getting the message through. This was particularly true of CW radio circuits shared by a number of operators, with some waiting their turn to transmit. As a result, an operator communicating by radio telegraphy to another operator, want
https://en.wikipedia.org/wiki/Near-field%20scanning%20optical%20microscope
Near-field scanning optical microscopy (NSOM) or scanning near-field optical microscopy (SNOM) is a microscopy technique for nanostructure investigation that breaks the far field resolution limit by exploiting the properties of evanescent waves. In SNOM, the excitation laser light is focused through an aperture with a diameter smaller than the excitation wavelength, resulting in an evanescent field (or near-field) on the far side of the aperture. When the sample is scanned at a small distance below the aperture, the optical resolution of transmitted or reflected light is limited only by the diameter of the aperture. In particular, lateral resolution of 6 nm and vertical resolution of 2–5 nm have been demonstrated. As in optical microscopy, the contrast mechanism can be easily adapted to study different properties, such as refractive index, chemical structure and local stress. Dynamic properties can also be studied at a sub-wavelength scale using this technique. NSOM/SNOM is a form of scanning probe microscopy. History Edward Hutchinson Synge is given credit for conceiving and developing the idea for an imaging instrument that would image by exciting and collecting diffraction in the near field. His original idea, proposed in 1928, was based upon the usage of intense nearly planar light from an arc under pressure behind a thin, opaque metal film with a small orifice of about 100 nm. The orifice was to remain within 100 nm of the surface, and information was to be collected by point-by-point scanning. He foresaw the illumination and the detector movement being the biggest technical difficulties. John A. O'Keefe also developed similar theories in 1956. He thought the moving of the pinhole or the detector when it is so close to the sample would be the most likely issue that could prevent the realization of such an instrument. It was Ash and Nicholls at University College London who, in 1972, first broke the Abbe’s diffraction limit using microwave radiation with a wa
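To make the "breaking the far-field resolution limit" claim concrete: far-field optics is bounded by the Abbe diffraction limit, d = λ/(2·NA), while in aperture SNOM the resolution is set by the probe-aperture diameter instead. A sketch with illustrative numbers (the wavelength and numerical aperture below are assumptions, not values from the text):

```python
def abbe_limit(wavelength_nm, numerical_aperture):
    """Far-field (Abbe) diffraction limit: d = wavelength / (2 * NA),
    returned in the same unit as the wavelength."""
    return wavelength_nm / (2 * numerical_aperture)

# Green light through a high-NA oil-immersion objective (illustrative values):
d_far = abbe_limit(500, 1.4)
print(f"far-field limit ~{d_far:.0f} nm")  # ~179 nm
# In aperture SNOM the resolution is set by the aperture diameter instead,
# so e.g. a 50 nm aperture resolves features far below the far-field limit,
# consistent with the few-nanometer figures quoted above.
```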
https://en.wikipedia.org/wiki/Julip%20Horses%20Ltd
Julip Horses Ltd is a United Kingdom-based company which produces a range of 1/12-scale model horses. Overview In 1945, Lavender Dower (d. 2003) began making model horses out of chamois leather, using the lead from London buildings bombed during World War II to make their legs. Five years later the company switched to making the horses out of latex, the same material used to make the current line of Julip Originals. Dower sold the company in the 1950s. Models were sold from a shop in Beauchamp Place, London - part of the smart Knightsbridge shopping area and not far from Harrods. Lead continued to be used to provide the support and flexibility in the legs of the latex horses until at least the early 1960s, when lead as a component of children's toys was banned in the UK for safety reasons. Julip was just one of several companies producing cast-latex model horses in the 1950s and 1960s, but the only one to continue in business past 1968. The company's main rival was known as Isis, whose products were preferred by serious collectors of the period for their greater detail. Other companies included Pegasus and Otway, and in the late 1960s the internationally renowned animalier artist Pamela du Boulay began her sculpting career with a range of exquisite latex models sold under the 'Rydal' name. The production process for latex models is simple, and similar to the early stages of pottery production. Plaster moulds are made from a 'master' model. When the plaster is completely dry, a mix of liquid latex and an inert 'filler' is poured into the mould and left to stand until a coating of latex has developed around the inside of the mould. The remaining liquid latex is poured away for reuse, and the casting allowed to dry out until it is firm enough to be removed from the mould. This part of the process is identical to the way ceramics are cast. With the casting removed, the plaster mould is left to dry out, and cannot be reused until 100% dry once more. Each casting pr
https://en.wikipedia.org/wiki/Intracluster%20medium
In astronomy, the intracluster medium (ICM) is the superheated plasma that permeates a galaxy cluster. The gas consists mainly of ionized hydrogen and helium and accounts for most of the baryonic material in galaxy clusters. The ICM is heated to temperatures on the order of 10 to 100 megakelvins, emitting strong X-ray radiation. Composition The ICM is composed primarily of ordinary baryons, mainly ionised hydrogen and helium. This plasma is enriched with heavier elements, including iron. The average amount of heavier elements relative to hydrogen, known as metallicity in astronomy, ranges from a third to a half of the value in the sun. Studying the chemical composition of the ICM as a function of radius has shown that the cores of galaxy clusters are more metal-rich than their outskirts at larger radii. In some clusters (e.g. the Centaurus cluster) the metallicity of the gas can rise to above that of the sun. Due to the gravitational field of clusters, metal-enriched gas ejected from supernovae remains gravitationally bound to the cluster as part of the ICM. By looking at varying redshift, which corresponds to looking at different epochs of the evolution of the Universe, the ICM can provide a history record of element production in a galaxy. Roughly 15% of a galaxy cluster's mass resides in the ICM. The stars and galaxies contribute only around 5% to the total mass. It is theorized that most of the mass in a galaxy cluster consists of dark matter and not baryonic matter. For the Virgo Cluster, the ICM contains roughly 3 × 10¹⁴ M☉ while the total mass of the cluster is estimated to be 1.2 × 10¹⁵ M☉. Although the ICM on the whole contains the bulk of a cluster's baryons, it is not very dense, with typical values of 10⁻³ particles per cubic centimeter. The mean free path of the particles is roughly 10¹⁶ m, or about one lightyear. The density of the ICM rises towards the centre of the cluster with a relatively strong peak. In addition, the temperature of the ICM typically drops to
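Two of the figures quoted above can be cross-checked with a few lines of arithmetic (the masses and mean free path are taken from the text; the metres-per-light-year constant is standard):

```python
LIGHT_YEAR_M = 9.461e15           # metres in one light-year (standard value)

icm_mass = 3e14                   # Virgo ICM mass, solar masses (from text)
cluster_mass = 1.2e15             # Virgo total mass, solar masses (from text)
print(f"Virgo ICM fraction: {icm_mass / cluster_mass:.0%}")  # -> 25%

mfp = 1e16                        # quoted mean free path, metres (from text)
print(f"mean free path: {mfp / LIGHT_YEAR_M:.2f} ly")        # -> 1.06 ly
```

The Virgo numbers give an ICM fraction of about 25%, somewhat above the "roughly 15%" typical figure quoted for clusters in general, and 10¹⁶ m is indeed about one light-year, as the text states.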
https://en.wikipedia.org/wiki/Salience%20%28neuroscience%29
Salience (also called saliency) is the property by which something stands out. Salient events are an attentional mechanism by which organisms learn and survive; such organisms can focus their limited perceptual and cognitive resources on the pertinent (that is, salient) subset of the sensory data available to them. Saliency typically arises from contrasts between items and their neighborhood: a red dot surrounded by white dots, the flickering message indicator of an answering machine, or a loud noise in an otherwise quiet environment. Saliency detection is often studied in the context of the visual system, but similar mechanisms operate in other sensory systems. What is salient can be influenced by training: for example, particular letters can become salient for human subjects through training. Successful training on a sequence of necessary events requires that each event be salient in turn; otherwise the training fails. Tying a bowline from an illustrated sequence is an example: even the first illustration contains a salient detail, namely that the rope must cross over, and not under, the bitter end (which can remain fixed rather than free to move). Failure to notice that this first salient condition has not been satisfied means the knot will fail to hold, even when the remaining salient events have been satisfied. When attention deployment is driven by salient stimuli, it is considered to be bottom-up, memory-free, and reactive. Conversely, attention can also be guided by top-down, memory-dependent, or anticipatory mechanisms, such as when looking ahead of moving objects or sideways before crossing streets. Humans and other animals have difficulty paying attention to more than one item simultaneously, so they are faced with the challenge of continuously integrating and prioritizing different bottom-up and top-down influences. Neuroanatomy The brain component named the
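The idea that saliency arises from contrast with the neighborhood can be sketched in a few lines of code. The following toy model (an illustration only; real models such as Itti and Koch's use multi-scale centre-surround differences over several feature maps) scores each cell of an intensity grid by how much it differs from the mean of its neighbours, so the odd-one-out item pops out:

```python
# A minimal sketch of bottom-up saliency as local contrast: each cell's
# saliency is the absolute difference between its intensity and the mean
# intensity of its neighbours.

def saliency_map(grid):
    h, w = len(grid), len(grid[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neigh = [grid[ny][nx]
                     for ny in range(max(0, y - 1), min(h, y + 2))
                     for nx in range(max(0, x - 1), min(w, x + 2))
                     if (ny, nx) != (y, x)]
            out[y][x] = abs(grid[y][x] - sum(neigh) / len(neigh))
    return out

# A "red dot among white dots": one odd item in a uniform field.
field = [[1.0] * 5 for _ in range(5)]
field[2][2] = 0.0                      # the odd-one-out
sal = saliency_map(field)
peak = max((sal[y][x], y, x) for y in range(5) for x in range(5))
print(peak)  # the maximum is at (2, 2)
```

The deviant item receives the highest score because it alone disagrees with all of its neighbours; uniform regions score near zero.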
https://en.wikipedia.org/wiki/Koilocyte
A koilocyte is a squamous epithelial cell that has undergone a number of structural changes, which occur as a result of infection of the cell by human papillomavirus (HPV). Identification of these cells by pathologists can be useful in diagnosing various HPV-associated lesions. Koilocytosis Koilocytosis or koilocytic atypia or koilocytotic atypia are terms used in histology and cytology to describe the presence of koilocytes in a specimen. Koilocytes may have the following cellular changes: Nuclear enlargement (two to three times normal size). Irregularity of the nuclear membrane contour, creating a wrinkled or raisinoid appearance. A darker than normal staining pattern in the nucleus, known as hyperchromasia. A clear area around the nucleus, known as a perinuclear halo or perinuclear cytoplasmic vacuolization. Collectively, these types of changes are called a cytopathic effect; various types of cytopathic effect can be seen in many different cell types infected by many different viruses. Infection of cells with HPV causes the specific cytopathic effects seen in koilocytes. Pathogenesis The atypical features seen in cells displaying koilocytosis result from the action of the E5 and E6 oncoproteins produced by HPV. These proteins break down keratin in HPV-infected cells, resulting in the perinuclear halo and nuclear enlargement typical of koilocytes. The E6 oncoprotein, along with E7, is also responsible for the dysregulation of the cell cycle that results in squamous cell dysplasia. The E6 and E7 oncoproteins do this by binding and inhibiting the tumor suppressor genes p53 and RB, respectively. This promotes progression of cells through the cell cycle without appropriate repair of DNA damage, resulting in dysplasia. Due to the ability of HPV to cause cellular dysplasia, koilocytes are found in a number of potentially precancerous lesions. Visualization of koilocytes Koilocytes can be visualized microscopically when tissue is collected, fixed, and stained.
https://en.wikipedia.org/wiki/Spatial%20relation
A spatial relation specifies how some object is located in space in relation to some reference object. When the reference object is much bigger than the object to locate, the latter is often represented by a point. The reference object is often represented by a bounding box. In anatomy it may be the case that a spatial relation is not fully applicable; a degree of applicability is therefore defined, which specifies from 0 to 100% how strongly the relation holds. Researchers often concentrate on defining the applicability function for various spatial relations. In spatial databases and geospatial topology, spatial relations are used for spatial analysis and constraint specification. Spatial relations also play a central role in cognitive development (learning to walk, to catch objects, and to understand object behaviour), in robotic natural-feature navigation, and in many other areas. Commonly used types of spatial relations are topological, directional and distance relations. Topological relations The DE-9IM model expresses important space relations which are invariant to rotation, translation and scaling transformations. For any two spatial objects a and b, which can be points, lines and/or polygonal areas, there are 9 relations derived from DE-9IM: Directional relations Directional relations can be differentiated into external and internal directional relations. An internal directional relation specifies where an object is located inside the reference object, while an external relation specifies where the object is located outside of the reference object. Examples of internal directional relations: left; on the back; athwart, abaft. Examples of external directional relations: on the right of; behind; in front of, abeam, astern. Distance relations Distance relations specify how far the object is from the reference object. Examples are: at; nearby; in the vicinity; far away. Relations by class Reference objects repres
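The point-versus-bounding-box representation described above makes directional and distance relations easy to compute. The following sketch is purely illustrative: the relation names, the tie-breaking order, and the distance thresholds are assumptions chosen for the example, not part of any standard.

```python
import math

# Toy classifiers for external directional and distance relations of a
# point relative to a reference object given as a bounding box.

def directional_relation(point, box):
    """box = (xmin, ymin, xmax, ymax); ties resolved in a fixed order."""
    (x, y), (xmin, ymin, xmax, ymax) = point, box
    if x < xmin:  return "left of"
    if x > xmax:  return "right of"
    if y < ymin:  return "in front of"
    if y > ymax:  return "behind"
    return "inside"

def distance_relation(point, box, near=1.0, vicinity=5.0):
    # Distance from the point to the box centre, bucketed by thresholds.
    cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    d = math.hypot(point[0] - cx, point[1] - cy)
    if d <= near:      return "at"
    if d <= vicinity:  return "nearby"
    return "far away"

box = (0, 0, 2, 2)
print(directional_relation((-1, 1), box))   # left of
print(distance_relation((10, 10), box))     # far away
```

A fuzzy degree of applicability, as mentioned in the text, would replace the hard thresholds with functions returning values between 0 and 1.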
https://en.wikipedia.org/wiki/Q10%20%28temperature%20coefficient%29
{{DISPLAYTITLE:Q10 (temperature coefficient)}} The Q10 temperature coefficient is a measure of the temperature sensitivity of a process, such as the rate of a chemical reaction: the factor by which the rate changes for a 10 °C increase in temperature. The Q10 is calculated as: Q10 = (R2/R1)^(10 °C/(T2 − T1)), where R1 and R2 are the rates measured at temperatures T1 and T2 respectively (in degrees Celsius or kelvins). Rewriting this equation, the assumption behind Q10 is that the reaction rate R depends exponentially on temperature: R2 = R1 · Q10^((T2 − T1)/10 °C). Q10 is a unitless quantity, as it is the factor by which a rate changes, and is a useful way to express the temperature dependence of a process. For most biological systems, the Q10 value is ~ 2 to 3. In muscle performance The temperature of a muscle has a significant effect on the velocity and power of the muscle contraction, with performance generally declining with decreasing temperatures and increasing with rising temperatures. The Q10 coefficient represents the degree of temperature dependence a muscle exhibits as measured by contraction rates. A Q10 of 1.0 indicates thermal independence of a muscle, whereas an increasing Q10 value indicates increasing thermal dependence. Values less than 1.0 indicate a negative or inverse thermal dependence, i.e., a decrease in muscle performance as temperature increases. Q10 values for biological processes vary with temperature. Decreasing muscle temperature results in a substantial decline of muscle performance, such that a 10 degree Celsius temperature decrease results in at least a 50% decline in muscle performance. Persons who have fallen into icy water may gradually lose the ability to swim or grasp safety lines due to this effect, although other effects such as atrial fibrillation are a more immediate cause of drowning deaths. At some minimum temperature biological systems do not function at all, but performance increases with rising temperature (Q10 of 2-4) to a maximum performance level and thermal independence (Q10 of 1.0-1.5). With continued increase in temperature, performance decreases rapidly (Q10 of 0.2-0.8) up to a maximum temperature
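A minimal sketch in code, assuming the standard definition Q10 = (R2/R1)^(10/(T2 − T1)) and its exponential rewriting:

```python
# Q10 from two rate measurements, and the rate rescaled to another
# temperature under the exponential assumption.

def q10(r1, r2, t1, t2):
    """Temperature coefficient from rates r1, r2 at temperatures t1, t2."""
    return (r2 / r1) ** (10.0 / (t2 - t1))

def rate_at(r1, t1, t2, q=2.0):
    """Rate at temperature t2, assuming exponential dependence with Q10 = q."""
    return r1 * q ** ((t2 - t1) / 10.0)

# A typical biological process (Q10 = 2) doubles its rate per 10 °C rise:
print(q10(1.0, 2.0, 20.0, 30.0))   # 2.0
print(rate_at(1.0, 20.0, 40.0))    # 4.0 (two doublings over 20 °C)
```

Because the definition normalizes to a 10 °C interval, the two measurement temperatures need not be exactly 10 °C apart.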
https://en.wikipedia.org/wiki/Starred%20transform
In applied mathematics, the starred transform, or star transform, is a discrete-time variation of the Laplace transform, so named because of the asterisk or "star" in the customary notation of the sampled signals. The transform is an operator of a continuous-time function x(t), which is transformed to a function X*(s) in the following manner: X*(s) = L{x(t) · δ_T(t)} = Σ_{n=0}^{∞} x(nT) e^{−nTs}, where δ_T(t) is a Dirac comb function with period of time T. The starred transform is a convenient mathematical abstraction that represents the Laplace transform of an impulse sampled function x*(t) = x(t) · δ_T(t), which is the output of an ideal sampler whose input is a continuous function x(t). The starred transform is similar to the Z transform, with a simple change of variables, where the starred transform is explicitly declared in terms of the sampling period (T), while the Z transform is performed on a discrete signal and is independent of the sampling period. This makes the starred transform a de-normalized version of the one-sided Z-transform, as it restores the dependence on the sampling parameter T. Relation to Laplace transform Since x*(t) = x(t) · δ_T(t), where L{δ_T(t)} = 1/(1 − e^{−Ts}), then per the convolution theorem, the starred transform is equivalent to the complex convolution of X(s) and 1/(1 − e^{−Ts}), hence: X*(s) = (1/(2πj)) ∫_{c−j∞}^{c+j∞} X(p) · 1/(1 − e^{−T(s−p)}) dp. This line integration is equivalent to integration in the positive sense along a closed contour formed by such a line and an infinite semicircle that encloses the poles of X(s) in the left half-plane of p. The result of such an integration (per the residue theorem) would be: X*(s) = Σ_{λ = poles of X(p)} Res_{p=λ}[X(p) · 1/(1 − e^{−T(s−p)})]. Alternatively, the aforementioned line integration is equivalent to integration in the negative sense along a closed contour formed by such a line and an infinite semicircle that encloses the infinite poles of 1/(1 − e^{−T(s−p)}) in the right half-plane of p. The result of such an integration would be: X*(s) = (1/T) Σ_{k=−∞}^{∞} X(s − jk(2π/T)) + x(0)/2. Relation to Z transform Given a Z-transform, X(z), the corresponding starred transform is a simple substitution: X*(s) = X(z)|_{z = e^{sT}}. This substitution restores the dependence on T. The substitution is reversible: X(z) = X*(s)|_{e^{sT} → z}. Properties of the starred transform Property 1: X*(s) i
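The defining sum can be checked numerically. For the assumed example signal x(t) = e^{−at}, the starred transform Σ_{n≥0} x(nT) e^{−snT} is a geometric series whose closed form, 1/(1 − e^{−(s+a)T}), is exactly what the Z-transform of the sampled sequence gives after the substitution z = e^{sT}:

```python
import cmath

# Truncated starred transform: X*(s) = sum over n of x(nT) * e^{-snT}.
def starred_transform(x, s, T, terms=200):
    return sum(x(n * T) * cmath.exp(-s * n * T) for n in range(terms))

a, T, s = 1.0, 0.1, 2.0 + 1.0j
x = lambda t: cmath.exp(-a * t)          # example signal x(t) = e^{-at}

numeric = starred_transform(x, s, T)
closed  = 1.0 / (1.0 - cmath.exp(-(s + a) * T))   # geometric-series closed form
print(abs(numeric - closed))             # tiny: truncated sum matches closed form
```

The 200-term truncation converges because |e^{−(s+a)T}| < 1 for the chosen values; the same comparison at other s in the region of convergence behaves identically.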
https://en.wikipedia.org/wiki/Philip%20Hanawalt
Philip C. Hanawalt (born 1931) is an American biologist who discovered the process of repair replication of damaged DNA in 1963. He is also considered the co-discoverer of the ubiquitous process of DNA excision repair along with his mentor, Richard Setlow, and Paul Howard-Flanders. He holds the Dr. Morris Herzstein Professorship in the Department of Biology at Stanford University, with a joint appointment in the Dermatology Department in Stanford University School of Medicine. Early life and education Philip C. Hanawalt was born in 1931 in Akron, Ohio. He was raised in Midland, Michigan. Having an interest in electronics from youth, Hanawalt earned an honorable mention in the 1949 Westinghouse Science Talent Search, receiving a scholarship to attend Deep Springs College. Hanawalt eventually transferred to Oberlin College where he received his B.A. degree in physics in 1954. He received his M.S. degree in physics from Yale University in 1955. Hanawalt also received his Ph.D. in Biophysics from Yale University in 1959. His doctoral thesis advisor was Richard Setlow. He undertook three years of postdoctoral study at the University of Copenhagen, Denmark, and at the California Institute of Technology before joining the faculty at Stanford in 1961. DNA repair DNA repair is the process by which all living cells deal with damage to their genetic material. Such damage occurs as a consequence of exposure to environmental radiation and genotoxic chemicals, but also of endogenous oxidation and the intrinsic instability of DNA. Hanawalt and his colleagues discovered a special pathway of excision repair, called transcription-coupled repair, which is targeted to expressed genes, and he studies several diseases characterized by defects in DNA repair pathways. DNA repair is important for protecting against cancer and some aspects of ageing in humans, and its deficiency has been implicated in the etiology of a number of hereditary diseases. Career In 1965 Hanawalt became ass
https://en.wikipedia.org/wiki/Evolutionary%20musicology
Evolutionary musicology is a subfield of biomusicology that grounds the cognitive mechanisms of music appreciation and music creation in evolutionary theory. It covers vocal communication in other animals, theories of the evolution of human music, and holocultural universals in musical ability and processing. History The origins of the field can be traced back to Charles Darwin who wrote in The Descent of Man, and Selection in Relation to Sex: This theory of a musical protolanguage has been revived and re-discovered repeatedly. The origins of music Like the origin of language, the origin of music has been a topic for speculation and debate for centuries. Leading theories include Darwin's theory of partner choice (women choose male partners based on musical displays), the idea that human musical behaviors are primarily based on behaviors of other animals (see zoomusicology), the idea that music emerged because it promotes social cohesion, the idea that music emerged because it helps children acquire verbal, social, and motor skills, and the idea that musical sound and movement patterns, and links between music, religion and spirituality, originated in prenatal psychology and mother-infant attachment. Two major topics for any subfield of evolutionary psychology are the adaptive function (if any) and phylogenetic history of the mechanism or behavior of interest including when music arose in human ancestry and from what ancestral traits it developed. Current debate addresses each of these. One part of the adaptive function question is whether music constitutes an evolutionary adaptation or exaptation (i.e. by-product of evolution). Steven Pinker, in his book How the Mind Works, for example, argues that music is merely "auditory cheesecake"—it was evolutionarily adaptive to have a preference for fat and sugar but cheesecake did not play a role in that selection process. This view has been directly countered by numerous music researchers. Adaptation, on the other
https://en.wikipedia.org/wiki/Superior%20limbic%20keratoconjunctivitis
Superior limbic keratoconjunctivitis (SLK, Théodore's syndrome) is a disease of the eye characterized by episodes of recurrent inflammation of the superior cornea and limbus, as well as of the superior tarsal and bulbar conjunctiva. It was first described by F. H. Théodore in 1963. Symptoms and signs Patients present with red eye, burning, tearing, foreign body sensation and mild photophobia. Upon examination, the conjunctiva appears inflamed and thickened, especially at the limbus. Pathophysiology The development and pathophysiology of SLK is not well understood, but appears to involve microtrauma of keratoconjunctival surfaces. This mechanical hypothesis is supported by the increased lid apposition of exophthalmic thyroid patients, who are known to have an increased incidence of superior limbic keratoconjunctivitis. Diagnosis Treatment First-line treatments include topical corticosteroids and artificial tears. For non-responsive cases, potential treatments include topical ciclosporin A, vitamin A, autologous serum and injections of triamcinolone. Surgical treatment options include thermocauterization of the bulbar conjunctiva and conjunctival resection, typically under rose bengal (RB) staining to visualize affected areas. Epidemiology Superior limbic keratoconjunctivitis tends to occur more often with dry eye syndrome (keratoconjunctivitis sicca), hyperthyroidism and hyperparathyroidism. It is also a rare complication associated with rheumatoid arthritis. Rarely, it may occur as a consequence of upper eyelid blepharoplasty surgery.
https://en.wikipedia.org/wiki/Java%20processor
A Java processor is the implementation of the Java virtual machine (JVM) in hardware. In other words, the Java bytecode that makes up the instruction set of the abstract machine becomes the instruction set of a concrete machine. These were the most popular form of a high-level language computer architecture, and were "an attractive choice for building embedded and real-time systems that are programmed in Java". However, as of 2017, embedded Java is "pretty much dead" and no realtime Java chip vendors exist. Implementations There are several research Java processors tested on FPGA, including: picoJava was the first attempt to build a Java processor, by Sun Microsystems. Its successor picoJava-II was freely available under the Sun Community Source License, and is still available from some archives. provides hardware support for object-oriented functions Java Optimized Processor for FPGAs. A PhD thesis is available, and it has been used in several commercial applications. In 2019 it was extended to be energy aware (EAJOP). Some commercial implementations included: The aJile processor was the most successful ASIC Java processor. Cjip from Imsys Technologies. Available on boards and with wireless radios from AVIDwireless ARM926EJ-S is an ARM processor able to run Java bytecode, this technology being named Jazelle See also Java Card
https://en.wikipedia.org/wiki/Witt%27s%20theorem
"Witt's theorem" or "the Witt theorem" may also refer to the Bourbaki–Witt fixed point theorem of order theory. In mathematics, Witt's theorem, named after Ernst Witt, is a basic result in the algebraic theory of quadratic forms: any isometry between two subspaces of a nonsingular quadratic space over a field k may be extended to an isometry of the whole space. An analogous statement holds also for skew-symmetric, Hermitian and skew-Hermitian bilinear forms over arbitrary fields. The theorem applies to the classification of quadratic forms over k and in particular allows one to define the Witt group W(k) which describes the "stable" theory of quadratic forms over the field k. Statement Let (V, b) be a finite-dimensional vector space over a field k of characteristic different from 2 together with a non-degenerate symmetric or skew-symmetric bilinear form b. If f : U → U′ is an isometry between two subspaces of V then f extends to an isometry of V. Witt's theorem implies that the dimension of a maximal totally isotropic subspace (null space) of V is an invariant, called the index or Witt index of b, and moreover, that the isometry group of (V, b) acts transitively on the set of maximal isotropic subspaces. This fact plays an important role in the structure theory and representation theory of the isometry group and in the theory of reductive dual pairs. Witt's cancellation theorem Let (V, q), (V1, q1), (V2, q2) be three quadratic spaces over a field k. Assume that (V, q) ⊕ (V1, q1) ≅ (V, q) ⊕ (V2, q2). Then the quadratic spaces (V1, q1) and (V2, q2) are isometric: (V1, q1) ≅ (V2, q2). In other words, the direct summand (V, q) appearing in both sides of an isomorphism between quadratic spaces may be "cancelled". Witt's decomposition theorem Let (V, q) be a quadratic space over a field k. Then it admits a Witt decomposition: (V, q) ≅ (V0, 0) ⊕ (Va, qa) ⊕ (Vh, qh), where V0 is the radical of q, (Va, qa) is an anisotropic quadratic space and (Vh, qh) is a split quadratic space. Moreover, the anisotropic summand, termed the core form, and the hyperbolic summand in a Witt decomposition of (V, q) are determined uniquely up to isomorph
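A concrete instance may help fix ideas; the following standard example over the reals is added here for illustration and is not drawn from the article above.

```latex
% Witt decomposition of q(x, y, z) = x^2 - y^2 + z^2 over \mathbb{R}.
% The vectors e_1 \pm e_2 are isotropic and span a hyperbolic plane H;
% the orthogonal complement carries the anisotropic form <1>; the
% radical is zero because q is nondegenerate.
\[
  q \;\cong\; \underbrace{\langle 1, -1 \rangle}_{\text{split part } H}
    \;\perp\; \underbrace{\langle 1 \rangle}_{\text{anisotropic core}},
  \qquad \operatorname{rad}(q) = 0 .
\]
% Hence the Witt index of q is 1, and by Witt's theorem the orthogonal
% group O(q) acts transitively on the isotropic lines of q.
```

The same bookkeeping applies over any field of characteristic not 2, with the anisotropic core depending on the arithmetic of the field.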
https://en.wikipedia.org/wiki/Lacing%20%28drugs%29
Lacing or cutting, in drug culture, refers to the act of using a substance (referred to as the lacing agent or cutting agent) to adulterate another substance, regardless of the reason. The resulting substance is laced or cut. Some street drugs are commonly laced with other chemicals for various reasons, but it is most commonly done to bulk up the original product or to sell other, cheaper drugs in the place of something more expensive. Individuals sometimes lace their own drugs with another substance to combine or alter the physiological or psychoactive effects. Overview The classical model of drug cutting refers to the way that illicit drugs were diluted at each stage of the chain of distribution. Drug markets have changed considerably since the 1980s; greater competition, and a shift from highly structured (and thus controlled) to greatly fragmented markets, has generated competition among dealers in terms of purity. Many drugs that reach the street are now only cut at the manufacture/producer stage, and this may be more a matter of lacing the drug with another substance designed to appeal to the consumer, as opposed to simple diluents that increase the profit for the seller. The extent of cutting can vary significantly over time, but for the last 15 years drugs such as heroin and cocaine have often sat at the 50% purity level. Heroin purity sitting at 50% does not mean 50% cutting agents; other adulterants could include other opiate by-products of making heroin from opium. Coomber, after having UK street heroin seizures re-analysed, reported that nearly 50% of the samples had no cutting agents present at all. This means that 50% of street heroin in the UK in 1995 had worked its way from producer to user without being cut at any stage, although other adulterants may have been present. Other research outlined how drug dealers have other ways of making profit without having to resort to cutting the drugs they sell. Cocaine has been cut with various substances
https://en.wikipedia.org/wiki/Economic%20capital
In finance, mainly for financial services firms, economic capital (ecap) is the amount of risk capital, assessed on a realistic basis, which a firm requires to cover the risks that it is running or collecting as a going concern, such as market risk, credit risk, legal risk, and operational risk. It is the amount of money that is needed to secure survival in a worst-case scenario. Firms and financial services regulators should then aim to hold risk capital of an amount equal at least to economic capital. Typically, economic capital is calculated by determining the amount of capital that the firm needs to ensure that its realistic balance sheet stays solvent over a certain time period with a pre-specified probability. Therefore, economic capital is often calculated as value at risk. The balance sheet, in this case, would be prepared showing market value (rather than book value) of assets and liabilities. The first accounts of economic capital date back to the ancient Phoenicians, who took rudimentary tallies of frequency and severity of illnesses among rural farmers to gain an intuition of expected losses in productivity. These calculations were advanced by correlations to climate change, political outbreaks, and birth rate change. The concept of economic capital differs from regulatory capital in the sense that regulatory capital is the mandatory capital the regulators require to be maintained while economic capital is the best estimate of required capital that financial institutions use internally to manage their own risk and to allocate the cost of maintaining regulatory capital among different units within the organization. In social science In social science, economic capital is distinguished in relation to other types of capital which may not necessarily reflect a monetary or exchange-value. These forms of capital include natural capital, cultural capital and social capital; the latter two represent a type of power or status that an individual can attain
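The value-at-risk calculation described above can be sketched in a few lines. The figures and the normal loss distribution below are purely illustrative assumptions (real economic-capital models use fitted, typically fat-tailed, loss distributions), but they show the mechanics: capital covers the loss quantile at the target solvency probability, in excess of the expected loss.

```python
from statistics import NormalDist

# Toy economic capital as one-year value at risk, assuming (for
# illustration only) normally distributed losses.
def economic_capital(expected_loss, loss_sd, confidence=0.999):
    var = NormalDist(mu=expected_loss, sigma=loss_sd).inv_cdf(confidence)
    return var - expected_loss     # unexpected loss to be covered by capital

# A hypothetical portfolio: expected loss 10, loss volatility 40,
# capitalized to a 99.9% one-year survival probability.
print(round(economic_capital(10.0, 40.0), 1))   # ≈ 123.6
```

Raising the confidence level (say, to match a target credit rating) moves the quantile further into the tail and increases the capital requirement.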
https://en.wikipedia.org/wiki/Microsoft%20Point-to-Point%20Encryption
Microsoft Point-to-Point Encryption (MPPE) encrypts data in Point-to-Point Protocol (PPP)-based dial-up connections or Point-to-Point Tunneling Protocol (PPTP) virtual private network (VPN) connections. 128-bit key (strong), 56-bit key, and 40-bit key (standard) MPPE encryption schemes are supported. MPPE provides data security for the PPTP connection that is between the VPN client and the VPN server. MPPE alone does not compress or expand data, but the protocol is often used in conjunction with Microsoft Point-to-Point Compression, which compresses data across PPP or VPN links. Negotiation of MPPE happens within the Compression Control Protocol (CCP), a subprotocol of PPP. This can lead to the incorrect belief that MPPE is a compression protocol. RFC 3078, which specifies this protocol, defines RC4 with either 40-bit or 128-bit key lengths as the only encryption option. External links (the protocol), (deriving initial session keys) MPPE, Microsoft Point-To-Point Encryption Protocol Broken cryptography algorithms Cryptographic protocols Internet protocols Microsoft Windows security technology
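To make the underlying cipher concrete, here is RC4, the stream cipher RFC 3078 specifies for MPPE, sketched in pure Python. RC4 is cryptographically broken and is shown only for illustration; this is not MPPE's key-derivation or framing, just the raw cipher.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): permute S under the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA), XORed with the data.
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Classic published test vector: key "Key", plaintext "Plaintext".
print(rc4(b"Key", b"Plaintext").hex())  # bbf316e8d940af0ad3
```

Because encryption is a keystream XOR, applying the function twice with the same key decrypts, which is also why keystream reuse across MPPE sessions is dangerous.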
https://en.wikipedia.org/wiki/164%20%28number%29
164 (one hundred [and] sixty-four) is the natural number following 163 and preceding 165. In mathematics 164 is a zero of the Mertens function. In base 10, 164 is the smallest number that can be expressed as a concatenation of two squares in two different ways: as 1 concatenate 64 or 16 concatenate 4. In astronomy 164P/Christensen is a comet in the Solar System 164 Eva is a large and dark Main belt asteroid In geography Chaplin no. 164, Saskatchewan in Saskatchewan, Canada In the military was a cargo vessel during World War II was a T2 tanker during World War II was a Barracuda-class submarine during World War II was an Alamosa-class cargo ship during World War II was an Admirable-class minesweeper during World War II was a Trefoil-class concrete barge during World War II was a during World War II was a during World War II was a yacht during World War I was a during World War II In sports Baseball Talk was a set of 164 talking baseball cards released by Topps Baseball Card Company in 1989 In transportation Caproni Ca.164 was a training biplane produced in Italy prior to World War II The Alfa Romeo 164 car produced from 1988 to 1997 The Volvo 164 car produced from 1968 to 1975 List of highways numbered 164 London Buses route 164 runs between Sutton and Wimbledon In other fields 164 is also: The year AD 164 or 164 BC 164 AH is a year in the Islamic calendar that corresponds to 780 – 781 CE The Scrabble board, a 15-by-15 grid, includes 164 squares that have neither word nor letter multiplier. The remainder have attributes such as double letter, triple letter, double word, and triple word The atomic number of an element temporarily called Unhexquadium Solvent Red 164 is a synthetic red diazo dye E.164 is an ITU-T recommendation that defines the public telecommunication numbering plan used in the PSTN and some data networks See also United States Supreme Court cases, Volume 164 United Nations Security Council Resolution 164
https://en.wikipedia.org/wiki/California%20Digital%20Library
The California Digital Library (CDL) was founded by the University of California in 1997. Under the leadership of then UC President Richard C. Atkinson, the CDL's original mission was to forge a better system for scholarly information management and improved support for teaching and research. In collaboration with the ten University of California Libraries and other partners, CDL assembled one of the world's largest digital research libraries. CDL facilitates the licensing of online materials and develops shared services used throughout the UC system. Building on the foundations of the Melvyl Catalog (UC's union catalog), CDL has developed one of the largest online library catalogs in the country and works in partnership with the UC campuses to bring the treasures of California's libraries, museums, and cultural heritage organizations to the world. CDL continues to explore how services such as digital curation, scholarly publishing, archiving and preservation support research throughout the information lifecycle. History The California Digital Library (CDL) is the eleventh library for the University of California (UC). A collaborative effort of the ten campuses, organizationally housed at the University of California Office of the President, it is responsible for the design, creation, and implementation of systems that support the shared collections of the University of California. Several CDL projects focus on collaboration with other California Universities and organizations to create and extend access to digital material to UC partners and to the public at large. The CDL was created as the result of a three-year planning process, beginning with the Digital Library Executive Working Group commissioned by Library Council and culminating with the Library Planning and Action Initiative commissioned by the Provost, which involved UC faculty, librarians, and administrators. On February 7, 2012, CDL partnered with the Public Knowledge Project (PKP), joining severa
https://en.wikipedia.org/wiki/Black%20cross
Black cross or Black Cross may refer to: Black Cross (Teutonic Order), heraldic insignia of the Teutonic order (since 1205) Black Cross (Germany), military emblem of Prussia and Germany, derived from the cross used by the Teutonic order Anarchist Black Cross, an anarchist support organization Black Cross Navigation and Trading Company, the successor of the Black Star Line "Black Cross (Hezekiah Jones)", a 1948 poem by Joseph Simon Newman, recorded by Lord Buckley and by Bob Dylan Black Cross, also known as Knights of the Teutonic Order, a 1960 film from Poland "Black Cross" (song), debut single of the band 45 Grave Kroaz Du, a Breton flag See also Blue Cross (disambiguation) Bronze Cross Green Cross Red Cross (disambiguation) Silver Cross White Cross (disambiguation) Yellow cross Cross, black Cross symbols
https://en.wikipedia.org/wiki/Automatic%20group
In mathematics, an automatic group is a finitely generated group equipped with several finite-state automata. These automata represent the Cayley graph of the group. That is, they can tell if a given word representation of a group element is in a "canonical form" and can tell if two elements given by canonical words differ by a generator. More precisely, let G be a group and A be a finite set of generators. Then an automatic structure of G with respect to A is a set of finite-state automata: the word-acceptor, which accepts for every element of G at least one word in A* representing it; multipliers, one for each a ∈ A ∪ {ε}, which accept a pair (w1, w2), for words wi accepted by the word-acceptor, precisely when w1 a = w2 in G. The property of being automatic does not depend on the set of generators. Properties Automatic groups have word problem solvable in quadratic time. More strongly, a given word can actually be put into canonical form in quadratic time, based on which the word problem may be solved by testing whether the canonical forms of two words represent the same element (using the multiplier for ε). Automatic groups are characterized by the fellow traveler property. Let d(x, y) denote the distance between x and y in the Cayley graph of G. Then, G is automatic with respect to a word acceptor L if and only if there is a constant C such that for all words u, v ∈ L which differ by at most one generator, the distance between the respective prefixes of u and v is bounded by C. In other words, d(u(k), v(k)) ≤ C for all k, where w(k) denotes the k-th prefix of w (or w itself if k > |w|). This means that when reading the words synchronously, it is possible to keep track of the difference between both elements with a finite number of states (the neighborhood of the identity with diameter C in the Cayley graph). Examples of automatic groups The automatic groups include: Finite groups. To see this take the regular language to be the set of all words in the finite group. Euclidean groups All finitely generated Coxeter groups Geometrically finite grou
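The word-acceptor component can be illustrated on the smallest interesting example. For the infinite cyclic group Z = ⟨a⟩, writing A for the inverse generator, one canonical form per element is aⁿ or Aⁿ, i.e. the regular language a* + A*. The tiny DFA and canonicalizer below are an illustration chosen for this sketch, not taken from the text:

```python
# DFA states: 0 = empty word, 1 = positive powers of a, 2 = powers of A.
# Missing edges reject: "aA" and "Aa" are not canonical words.
TRANSITIONS = {
    (0, "a"): 1, (0, "A"): 2,
    (1, "a"): 1,
    (2, "A"): 2,
}

def accepts(word: str) -> bool:
    """Word-acceptor: is this word in canonical form?"""
    state = 0
    for ch in word:
        state = TRANSITIONS.get((state, ch))
        if state is None:
            return False
    return True                # all three states are accepting

def canonical(word: str) -> str:
    """Canonical form of an arbitrary word in a, A (free reduction in Z)."""
    n = word.count("a") - word.count("A")
    return "a" * n if n >= 0 else "A" * (-n)

print(accepts("aaa"), accepts("aA"))   # True False
print(canonical("aAaaA"))              # a
```

Two canonical words here fellow-travel trivially: words for neighbouring elements differ in length by one, so synchronous reading keeps their prefixes within bounded distance.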
https://en.wikipedia.org/wiki/Comparison%20of%20webmail%20providers
The following tables compare general and technical information for a number of notable webmail providers who offer a web interface in English. The list does not include web hosting providers who may offer email server and/or client software as a part of hosting package, or telecommunication providers (mobile network operators, internet service providers) who may offer mailboxes exclusively to their customers. General General information on webmail providers and products Digital rights Verification How much information users must provide to verify and complete the registration when opening an account (green means less personal information requested): Secure delivery Features to reduce the risk of third-party tracking and interception of the email content; measures to increase the deliverability of correct outbound messages. Other Unique features Features See also Comparison of web search engines - often merged with webmail by companies that host both services
https://en.wikipedia.org/wiki/Ricart%E2%80%93Agrawala%20algorithm
The Ricart–Agrawala algorithm is an algorithm for mutual exclusion on a distributed system. This algorithm is an extension and optimization of Lamport's Distributed Mutual Exclusion Algorithm, removing the need for release messages. It was developed by computer scientists Glenn Ricart and Ashok Agrawala. Algorithm Terminology A site is any computing device which runs the Ricart-Agrawala Algorithm The requesting site is the site which is requesting to enter the critical section. The receiving site is every other site which is receiving a request from the requesting site. Algorithm Requesting Site: Sends a message to all sites. This message includes the site's name, and the current timestamp of the system according to its logical clock (which is assumed to be synchronized with the other sites) Receiving Site: Upon reception of a request message, the receiving site immediately sends a timestamped reply message if and only if: the receiving process is not currently interested in the critical section OR the receiving process has a lower priority (usually this means having a later timestamp) Otherwise, the receiving process will defer the reply message. This means that a reply will be sent only after the receiving process has finished using the critical section itself. Critical Section: Requesting site enters its critical section only after receiving all reply messages. Upon exiting the critical section, the site sends all deferred reply messages. Performance Max number of network messages: 2(N − 1) Synchronization delay: one message propagation delay Common optimizations Once site Pi has received a reply message from site Pj, site Pi may enter the critical section multiple times without receiving permission from Pj on subsequent attempts, up to the moment when Pi has sent a reply message to Pj. This is called Roucairol-Carvalho optimization or Roucairol-Carvalho algorithm. Problems One of the problems in this algorithm is failure of a node. In such a situation a process may starve forever. This
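The receiving-site rule above can be sketched in Python. This is a single-process illustration with invented names (`Site`, a `send` callback); message delivery, failure handling, and the requesting site's counting of replies are deliberately left abstract:

```python
class Site:
    """One Ricart-Agrawala participant; message transport is a callback."""

    def __init__(self, site_id, send):
        self.site_id = site_id
        self.send = send          # send(dest, message) -- network left abstract
        self.clock = 0            # Lamport logical clock
        self.requesting = False
        self.request_ts = None    # (timestamp, site_id) of our pending request
        self.deferred = []        # sites whose replies we are withholding

    def request_cs(self, peers):
        """Broadcast a timestamped REQUEST to every other site."""
        self.clock += 1
        self.requesting = True
        self.request_ts = (self.clock, self.site_id)
        for p in peers:
            self.send(p, ("REQUEST", self.request_ts))

    def on_request(self, ts_pair, sender):
        """Reply at once unless our own pending request has priority
        (smaller (timestamp, site_id) pair = higher priority)."""
        self.clock = max(self.clock, ts_pair[0]) + 1
        if self.requesting and self.request_ts < ts_pair:
            self.deferred.append(sender)   # defer until we leave the CS
        else:
            self.send(sender, ("REPLY",))

    def release_cs(self):
        """Exit the critical section and flush all deferred replies."""
        self.requesting = False
        for sender in self.deferred:
            self.send(sender, ("REPLY",))
        self.deferred.clear()
```

The tie-break on `(timestamp, site_id)` pairs is what makes priorities total, so two sites requesting with equal timestamps cannot deadlock.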
https://en.wikipedia.org/wiki/Bion%20%28satellite%29
The Bion satellites, also named Biocosmos, are a series of Soviet (later Russian) biosatellites focused on space medicine. Bion space program Bion precursor flights and Bion flights The Soviet biosatellite program began in 1966 with Kosmos 110, and resumed in 1973 with Kosmos 605. Cooperation in space ventures between the Soviet Union and the United States was initiated in 1971, with the signing of the United States and Soviet Union in Science and Applications Agreement (which included an agreement on space research cooperation). The Soviet Union first offered to fly U.S. experiments on a Kosmos biosatellite in 1974, only a few years after the termination (in 1969) of the U.S. biosatellite program. The offer was realized in 1975 when the first joint U.S./Soviet research was carried out on the Kosmos 782 mission. The Bion spacecraft were based on the Zenit spacecraft and launches began in 1973 with primary emphasis on the problems of radiation effects on human beings. Launches in the program included Kosmos 110, 605, 690, 782, plus Nauka modules flown on Zenit-2M reconnaissance satellites. A quantity of equipment could be contained in the external Nauka module. The Soviet/Russian Bion program provided U.S. investigators a platform for launching Fundamental Space Biology and biomedical experiments into space. The Bion program, which began in 1966, included a series of missions that flew biological experiments using primates, rodents, insects, cells, and plants on a biosatellite in near Earth orbit. NASA became involved in the program in 1975 and participated in 9 of the 11 Bion missions. NASA ended its participation in the program with the Bion No.11 mission launched in December 1996. The collaboration resulted in the flight of more than 100 U.S. experiments, one-half of all U.S. life sciences flight experiments accomplished with non-human subjects. The missions ranged from five days (Bion 6, Kosmos 1514) to around 22 days (Bion 1 and Kosmos 110). Bion-M In 2005, th
https://en.wikipedia.org/wiki/Criticism%20of%20Windows%20XP
Criticism of Windows XP deals with issues with security, performance and the presence of product activation errors that are specific to the Microsoft operating system Windows XP. Security issues Windows XP has been criticized for its vulnerabilities due to buffer overflows and its susceptibility to malware such as viruses, trojan horses, and worms. Nicholas Petreley for The Register notes that "Windows XP was the first version of Windows to reflect a serious effort to isolate users from the system, so that users each have their own private files and limited system privileges." However, users by default receive an administrator account that provides unrestricted access to the underpinnings of the system. If the administrator's account is compromised, there is no limit to the control that can be asserted over the PC. Windows XP Home Edition also lacks the ability to administer security policies and denies access to the Local Users and Groups utility. Microsoft stated that the release of security patches is often what causes the spread of exploits against those very same flaws, as crackers figure out what problems the patches fix and then launch attacks against unpatched systems. For example, in August 2003 the Blaster worm exploited a vulnerability present in every unpatched installation of Windows XP, and was capable of compromising a system even without user action. In May 2004 the Sasser worm spread by using a buffer overflow in a remote service present on every installation. Patches to prevent both of these well-known worms had already been released by Microsoft. Increasingly widespread use of Service Pack 2 and greater use of personal firewalls may also contribute to making worms like these less common. Many attacks against Windows XP systems come in the form of trojan horse e-mail attachments which contain worms. A user who opens the attachment can unknowingly infect his or her own computer, which may then e-mail the worm to more people. Notable worms of this
https://en.wikipedia.org/wiki/N-Ethylmaleimide
N-Ethylmaleimide (NEM) is an organic compound that is derived from maleic acid. It contains the imide functional group, but more importantly it is an alkene that is reactive toward thiols and is commonly used to modify cysteine residues in proteins and peptides. Organic chemistry NEM is a Michael acceptor in the Michael reaction, which means that it adds nucleophiles such as thiols. The resulting thioether features a strong C-S bond and the reaction is virtually irreversible. While the reaction with thiols occurs in the pH range 6.5–7.5, NEM may react with amines or undergo hydrolysis at a more alkaline pH. NEM has been widely used to probe the functional role of thiol groups in enzymology. NEM is an irreversible inhibitor of all cysteine peptidases, with alkylation occurring at the active site thiol group (see schematic). Case studies NEM blocks vesicular transport. In lysis buffers, 20 to 25 mM of NEM is used to inhibit de-sumoylation of proteins for Western blot analysis. NEM has also been used as an inhibitor of deubiquitinases. N-Ethylmaleimide was used by Arthur Kornberg and colleagues to knock out DNA polymerase III in order to compare its activity to that of DNA polymerase I (pol III and pol I, respectively). Kornberg had been awarded the Nobel Prize for discovering pol I, then believed to be the mechanism of bacterial DNA replication, although in this experiment he showed that pol III was the actual replicative machinery. NEM activates ouabain-insensitive Cl-dependent K efflux in low-K sheep and goat red blood cells. This discovery contributed to the molecular identification of K-Cl cotransport (KCC) in human embryonic cells transfected with KCC1 isoform cDNA, 16 years later. Since then, NEM has been widely used as a diagnostic tool to uncover or manipulate the membrane presence of K-Cl cotransport in cells of many species in the animal kingdom. Despite repeated unsuccessful attempts to identify chemically the target thiol group, at physiological pH, NEM may form ad
https://en.wikipedia.org/wiki/T%C4%81%20moko
Tā moko is the permanent marking or "tattoo" as traditionally practised by Māori, the indigenous people of New Zealand. It is one of the five main Polynesian tattoo styles (the other four are Marquesan, Samoan, Tahitian and Hawaiian). Tohunga tā moko (tattooists) were considered tapu, or inviolable and sacred. Historical practice (pre-contact) Tattoo arts are common in the Eastern Polynesian homeland of the Māori people, and the traditional implements and methods employed were similar to those used in other parts of Polynesia. In pre-European Māori culture, many if not most high-ranking persons received moko. Moko were associated with mana and high social status; however, some very high-status individuals were considered too tapu to acquire moko, and it was also not considered suitable for some tohunga to do so. Receiving moko constituted an important milestone between childhood and adulthood, and was accompanied by many rites and rituals. Apart from signalling status and rank, another reason for the practice in traditional times was to make a person more attractive to the opposite sex. Men generally received moko on their faces, buttocks and thighs. Women usually wore moko on their lips and chins. Other parts of the body known to have moko include women's foreheads, buttocks, thighs, necks and backs and men's backs, stomachs, and calves. Instruments used Historically the skin was carved by uhi (chisels), rather than punctured as in common contemporary tattooing; this left the skin with grooves rather than a smooth surface. Later needle tattooing was used, but, in 2007, it was reported that the uhi was currently being used by some artists. Originally tohunga tā moko (moko specialists) used a range of uhi (chisels) made from albatross bone which were hafted onto a handle, and struck with a mallet. The pigments were made from the awheto for the body colour, and ngarehu (burnt timbers) for the blacker face colour. The soot from burnt kauri gum was also mixed with fat to make pigment. The pigment was stored in ornate vessels named
https://en.wikipedia.org/wiki/Docstring
In programming, a docstring is a string literal specified in source code that is used, like a comment, to document a specific segment of code. Unlike conventional source code comments, or even specifically formatted comments like docblocks, docstrings are not stripped from the source tree when it is parsed and are retained throughout the runtime of the program. This allows the programmer to inspect these comments at run time, for instance as an interactive help system, or as metadata. Languages that support docstrings include Python, Lisp, Elixir, Clojure, Gherkin, Julia and Haskell. Implementation examples Elixir Documentation is supported at language level, in the form of docstrings. Markdown is Elixir's de facto markup language of choice for use in docstrings: defmodule MyModule do @moduledoc """ Documentation for my module. With **formatting**. """ @doc "Hello" def world do "World" end end Lisp In Lisp, docstrings are known as documentation strings. The Common Lisp standard states that a particular implementation may choose to discard docstrings whenever they want, for whatever reason. When they are kept, docstrings may be viewed and changed using the DOCUMENTATION function. For instance: (defun foo () "hi there" nil) (documentation #'foo 'function) => "hi there" Python The common practice of documenting a code object at the head of its definition is captured by the addition of docstring syntax in the Python language. The docstring for a Python code object (a module, class, or function) is the first statement of that code object, immediately following the definition (the 'def' or 'class' statement). The statement must be a bare string literal, not any other kind of expression. The docstring for the code object is available on that code object's __doc__ attribute and through the help function. The following Python file shows the declaration of docstrings within a Python source file: """The module's docstring""" class MyClass: """Th
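As a minimal runnable illustration of the retained-at-runtime behaviour described above (the function and its text are invented for the example):

```python
def greet(name):
    """Return a greeting for name."""
    return "Hello, " + name + "!"

# Unlike a comment, the docstring survives into the running program
# and is reachable through the __doc__ attribute (and via help(greet)).
print(greet.__doc__)  # prints: Return a greeting for name.
```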
https://en.wikipedia.org/wiki/Open%20collector
Open collector, open drain, open emitter, and open source refer to integrated circuit (IC) output pin configurations that process the IC's internal function through a transistor with an exposed terminal that is internally unconnected (i.e. "open"). One of the IC's internal high or low voltage rails typically connects to another terminal of that transistor. When the transistor is off, the output is internally disconnected from any internal power rail, a state called "high-impedance" (Hi-Z). Open output configurations thus differ from push–pull outputs, which use a pair of transistors to output a specific voltage or current. These open output configurations are often used for digital applications when the transistor acts as a switch, to allow for logic-level conversion, wired-logic connections, and line sharing. External pull-up/down resistors are typically required to set the output during the Hi-Z state to a specific voltage. Analog applications include analog weighting, summing, limiting, and digital-to-analog converters. The NPN BJT (n-type bipolar junction transistor) and nMOS (n-type metal oxide semiconductor field effect transistor) have greater conductance than their PNP and pMOS relatives, so may be more commonly used for these outputs. Open outputs using PNP and pMOS transistors will use the opposite internal voltage rail used by NPN and nMOS transistors. Open collector An open collector output processes an IC's output through the base of an internal bipolar junction transistor (BJT), whose collector is exposed as the external output pin. For NPN open collector outputs, the emitter of the NPN transistor is internally connected to ground, so the NPN open collector internally forms either a short-circuit (technically low impedance or "low-Z") connection to the low voltage (which could be ground) when the transistor is switched on, or an open-circuit (technically high impedance or "hi-Z") when the transistor is off. The output is usually connected to an e
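The wired-logic behaviour mentioned above can be modelled in a few lines. When several open-collector outputs share one line with a pull-up resistor, the line is high only if every device releases it, and any single device can force it low (a wired-AND of the released states). The names in this sketch are invented for illustration:

```python
def line_level(pulling_low):
    """Logic level of a shared open-collector line with a pull-up resistor.

    pulling_low[i] is True when device i's transistor is on (sinking current
    to ground), False when that device's output is in the Hi-Z state.
    """
    # Any one device sinking current wins; otherwise the pull-up sets high.
    return 0 if any(pulling_low) else 1
```

This is exactly why open-collector buses (e.g. interrupt request lines shared by several peripherals) need no arbitration hardware for the electrical level itself.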
https://en.wikipedia.org/wiki/Iliotibial%20tract
The iliotibial tract or iliotibial band (ITB; also known as Maissiat's band or the IT band) is a longitudinal fibrous reinforcement of the fascia lata. The muscles associated with the ITB (the tensor fasciae latae and some fibers of the gluteus maximus) act to flex, extend, abduct, and laterally and medially rotate the hip. The ITB contributes to lateral knee stabilization. During knee extension the ITB moves anterior to the lateral condyle of the femur, while at ~30 degrees of knee flexion, the ITB moves posterior to the lateral condyle. However, it has been suggested that this is only an illusion due to the changing tension in the anterior and posterior fibers during movement. It originates at the anterolateral iliac tubercle portion of the external lip of the iliac crest and inserts at the lateral condyle of the tibia at Gerdy's tubercle. The figure shows only the proximal part of the iliotibial tract. The part of the iliotibial band which lies beneath the tensor fasciae latae is prolonged upward to join the lateral part of the capsule of the hip-joint. The tensor fasciae latae effectively tightens the iliotibial band around the area of the knee. This allows for bracing of the knee, especially when lifting the opposite foot. The gluteus maximus muscle and the tensor fasciae latae insert upon the tract. Clinical significance The IT band stabilizes the knee both in extension and in partial flexion, and is therefore used constantly during walking and running. When a person is leaning forwards with a slightly flexed knee, the tract is the knee's main support against gravity. Iliotibial band syndrome (ITBS or ITBFS, for iliotibial band friction syndrome) is a common thigh injury generally associated with running. It can also be caused by cycling or hiking. The onset of iliotibial band syndrome occurs most commonly in cases of overuse. The iliotibial band itself becomes inflamed in response to repeated compression on the outside of the knee or swelling of the fat pad betw
https://en.wikipedia.org/wiki/Binary%20splitting
In mathematics, binary splitting is a technique for speeding up numerical evaluation of many types of series with rational terms. In particular, it can be used to evaluate hypergeometric series at rational points. Method Given a series S(a, b) = Σ_{n=a}^{b} p_n/q_n, where the p_n and q_n are integers, the goal of binary splitting is to compute integers P(a, b) and Q(a, b) such that S(a, b) = P(a, b)/Q(a, b). The splitting consists of setting m = ⌊(a + b)/2⌋ and recursively computing P(a, b) and Q(a, b) from P(a, m), P(m, b), Q(a, m), and Q(m, b). When a and b are sufficiently close, P(a, b) and Q(a, b) can be computed directly from p_a...p_b and q_a...q_b. Comparison with other methods Binary splitting requires more memory than direct term-by-term summation, but is asymptotically faster since the sizes of all occurring subproducts are reduced. Additionally, whereas the most naive evaluation scheme for a rational series uses a full-precision division for each term in the series, binary splitting requires only one final division at the target precision; this is not only faster, but conveniently eliminates rounding errors. To take full advantage of the scheme, fast multiplication algorithms such as Toom–Cook and Schönhage–Strassen must be used; with ordinary O(n²) multiplication, binary splitting may render no speedup at all or be slower. Since all subdivisions of the series can be computed independently of each other, binary splitting lends itself well to parallelization and checkpointing. In a less specific sense, binary splitting may also refer to any divide and conquer algorithm that always divides the problem in two halves.
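The recursion can be sketched directly in Python. This simplified version evaluates each term p(n)/q(n) individually; production implementations instead exploit product structure in the q_n (e.g. factorials) so that the subproducts stay balanced, which is where the asymptotic gain comes from:

```python
from fractions import Fraction

def binsplit(p, q, a, b):
    """Return integers (P, Q) with P/Q == sum of p(n)/q(n) for n in [a, b]."""
    if a == b:
        return p(a), q(a)
    m = (a + b) // 2
    P1, Q1 = binsplit(p, q, a, m)        # left half:  sum over [a, m]
    P2, Q2 = binsplit(p, q, m + 1, b)    # right half: sum over [m+1, b]
    # combine the halves: P1/Q1 + P2/Q2 = (P1*Q2 + P2*Q1) / (Q1*Q2)
    return P1 * Q2 + P2 * Q1, Q1 * Q2

# example: sum of 1/n^2 for n = 1..50, with a single division at the end
P, Q = binsplit(lambda n: 1, lambda n: n * n, 1, 50)
assert Fraction(P, Q) == sum(Fraction(1, n * n) for n in range(1, 51))
```

Note how only one final division P/Q is ever needed, matching the "one final division at the target precision" remark above.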
https://en.wikipedia.org/wiki/Fluorine-18
Fluorine-18 (18F) is a fluorine radioisotope which is an important source of positrons. It has a mass of 18.0009380(6) u and its half-life is 109.771(20) minutes. It decays by positron emission 96% of the time and electron capture 4% of the time. Both modes of decay yield stable oxygen-18. Natural occurrence Fluorine-18 is a natural trace radioisotope produced by cosmic ray spallation of atmospheric argon as well as by reaction of protons with natural oxygen: 18O + p → 18F + n. Synthesis In the radiopharmaceutical industry, fluorine-18 is made using either a cyclotron or linear particle accelerator to bombard a target, usually of natural or enriched [18O]water, with high energy protons (typically ~18 MeV). The fluorine produced is in the form of a water solution of [18F]fluoride, which is then used in a rapid chemical synthesis of various radiopharmaceuticals. The organic oxygen-18 pharmaceutical molecule is not made before the production of the radiopharmaceutical, as high energy protons destroy such molecules (radiolysis). Radiopharmaceuticals using fluorine must therefore be synthesized after the fluorine-18 has been produced. History The first published synthesis and report of the properties of fluorine-18 came in 1937 from Arthur H. Snell, who produced it by the nuclear reaction 20Ne(d,α)18F in the cyclotron laboratories of Ernest O. Lawrence. Chemistry Fluorine-18 is often substituted for a hydroxyl group in a radiotracer parent molecule, due to similar steric and electrostatic properties. This may however be problematic in certain applications due to possible changes in the molecule's polarity. Applications Fluorine-18 is one of the early tracers used in positron emission tomography (PET), having been in use since the 1960s. Its significance is due to both its short half-life and the emission of positrons when decaying. Major medical uses of fluorine-18 include: in positron emission tomography (PET) to image the brain and heart; to image the thyroid gland; as a radiotracer to i
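The half-life quoted above translates directly into the exponential decay law N(t) = N0 · 2^(−t/T½). A small sketch, using the half-life value from the text:

```python
HALF_LIFE_MIN = 109.771  # fluorine-18 half-life in minutes (value from the text)

def fraction_remaining(t_min):
    """Fraction of an initial fluorine-18 sample left after t_min minutes."""
    return 0.5 ** (t_min / HALF_LIFE_MIN)
```

After two half-lives (about 220 minutes) only a quarter of the activity remains, which is why 18F radiopharmaceuticals must be synthesized and used shortly after the isotope is produced.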
https://en.wikipedia.org/wiki/Grid%20cell
A grid cell is a type of neuron within the entorhinal cortex that fires at regular intervals as an animal navigates an open area, allowing it to understand its position in space by storing and integrating information about location, distance, and direction. Grid cells have been found in many animals, including rats, mice, bats, monkeys, and humans. Grid cells were discovered in 2005 by Edvard Moser, May-Britt Moser, and their students Torkel Hafting, Marianne Fyhn, and Sturla Molden at the Centre for the Biology of Memory (CBM) in Norway. They were awarded the 2014 Nobel Prize in Physiology or Medicine together with John O'Keefe for their discoveries of cells that constitute a positioning system in the brain. The arrangement of spatial firing fields, all at equal distances from their neighbors, led to a hypothesis that these cells encode a neural representation of Euclidean space. The discovery also suggested a mechanism for dynamic computation of self-position based on continuously updated information about position and direction. To detect grid cell activity in a typical rat experiment, an electrode which can record single-neuron activity is implanted in the dorsomedial entorhinal cortex and collects recordings as the rat moves around freely in an open arena. The resulting data can be visualized by marking the rat's position on a map of the arena every time that neuron fires an action potential. These marks accumulate over time to form a set of small clusters, which in turn form the vertices of a grid of equilateral triangles. The regular triangle pattern distinguishes grid cells from other types of cells that show spatial firing. By contrast, if a place cell from the rat hippocampus is examined in the same way, then the marks will frequently only form one cluster (one "place field") in a given environment, and even when multiple clusters are seen, there is no perceptible regularity in their arrangement. Background of discovery In 1971 John O'Keefe and Jonathon
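The map-building procedure described above (marking the animal's position each time the neuron fires) amounts to accumulating spike counts into a 2-D spatial histogram. The sketch below uses invented names and toy data purely to illustrate that bookkeeping; real analyses also normalize by occupancy time and smooth the map:

```python
def firing_map(positions, spikes, bins=10, arena=1.0):
    """Accumulate spike counts into a bins x bins grid over a square arena.

    positions[i] is the animal's (x, y) location at sample i (0 <= x, y < arena);
    spikes[i] is the number of action potentials recorded in that sample.
    Returns rows indexed by y-bin, columns by x-bin.
    """
    counts = [[0] * bins for _ in range(bins)]
    for (x, y), s in zip(positions, spikes):
        i = min(int(x / arena * bins), bins - 1)  # x bin (clamped at the edge)
        j = min(int(y / arena * bins), bins - 1)  # y bin
        counts[j][i] += s
    return counts
```

For a grid cell, the high-count bins of such a map would form the vertices of a triangular lattice; for a place cell, they would form a single cluster.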
https://en.wikipedia.org/wiki/Amadeus%20IT%20Group
Amadeus IT Group, S.A. is a major Spanish multinational technology company that provides software solutions for the global travel and tourism industry. It is the world's leading provider of travel technology, focusing on developing software for airlines, hotels, travel agencies, and other travel-related businesses to enhance their operations and customer experiences. The company is structured around two areas: its global distribution system and its Information Technology business. Amadeus provides search, pricing, booking, ticketing and other processing services in real-time to travel providers and travel agencies through its Amadeus CRS distribution business area. It also offers computer software that automates processes such as reservations, inventory management software and departure control systems. It serves customers including airlines, hotels, tour operators, insurers, car rental and railway companies, ferry and cruise lines, travel agencies and individual travellers directly. Amadeus processed 945 million billable travel transactions in 2011. The parent company of Amadeus IT Group, holding over 99.7% of the firm, is Amadeus IT Holding S.A. It was listed on the Spanish stock exchanges on 29 April 2010. Amadeus has central sites in Madrid, Spain (corporate headquarters and marketing), Sophia Antipolis, France (product development), London, UK (product development), Breda, Netherlands (development), Erding, Germany (data center) and Bangalore, India (product development) as well as regional offices in Boston, Bangkok, Buenos Aires, Dubai, Miami, Istanbul, Singapore, and Sydney. At market level, Amadeus maintains customer operations through 173 local Amadeus Commercial Organisations (ACOs) covering 195 countries. The Amadeus group employs 14,200 employees worldwide, and is listed in Forbes' list of "The World's Largest Public Companies" as No. 985. History Amadeus was originally created as a neutral global distribution system (GDS) by Air France, Iberia,
https://en.wikipedia.org/wiki/Higgins%20project
Higgins is an open-source project dedicated to giving individuals more control over their personal identity, profile and social network data. The project is organized into three main areas: Active Clients — An active client integrates with a browser and runs on a computer or mobile device. Higgins 1.X: the active client supports the OASIS IMI protocol and performs the functions of an Information Card selector. Higgins 2.0: the plan is to move beyond selector functionality to add support for managing passwords and Higgins relationship cards, as well as other protocols such as OpenID. It also becomes a client for the Personal Data Store (see below) and thereby provides a kind of dashboard for personal information and a place to manage "permissioning" — deciding who gets access to what slice of the user's data. Personal Data Store (PDS) — A new work area under development for Higgins 2.0. A PDS stores local personal data, controls access to remotely hosted personal data, and synchronizes personal data to other devices and computers. Accessed directly or via a PDS client, it allows the user to share selected aspects of their information with people and organizations that they trust. Identity Services — Code for (i) an IMI and SAML compatible Identity Provider, and (ii) enabling websites to be IMI and OpenID compatible. History The initial code for the Higgins Project was written by Paul Trevithick in the summer of 2003. In 2004 the effort became part of SocialPhysics.org, a collaboration between Paul and Mary Ruddy, of Azigo (formerly Parity Communications, Inc.) and Meristic, and John Clippinger, at the Berkman Center for Internet & Society. Higgins, under its original name Eclipse Trust Framework, was accepted into the Eclipse Foundation in early 2005. Mary and Paul are the project co-leads. IBM and Novell's participation in the project was announced in early 2006. Higgins has received technology contributions from IBM, Novell, Oracle, CA, Serena, Google, eperi GmbH a
https://en.wikipedia.org/wiki/Konka%20Group
Konka Group Co., Ltd. is a Chinese manufacturer of electronics products headquartered in Shenzhen, Guangdong and listed on the Shenzhen Stock Exchange. History It was founded in 1980 as Shenzhen Konka Electronic Group Co., Ltd. and changed its name to Konka Group Co., Ltd. in 1995. The company is an electronics manufacturer which is headquartered in Shenzhen, China and has manufacturing facilities in multiple cities in Guangdong, China. The company distributes its products in China's domestic market and to overseas markets. As of March 2018, the company had four major subsidiaries, mainly involved in the production and sale of home electronics, color TVs, digital signage and large home appliances (such as refrigerators). As of May 2009, Hogshead Spouter Co. invests in and manages Konka's energy efficiency product lines. Konka E-display Co. Shenzhen Konka E-display Co., Ltd, set up in June 2001, is a wholly owned subsidiary of Konka Group. Konka E-display is a professional commercial display manufacturer that develops, manufactures, and markets LED displays, LCD video walls, AD players, power supplies, and controlling systems used in digital signage for multiple indoor and outdoor applications around the world, including control & command centers, advertising displays for DOOH advertising, media and entertainment events, stadiums, television broadcasts, education and traffic. Primary Product Groups Televisions Digital Signage LCD/LED Refrigerators and other Kitchen Appliances
https://en.wikipedia.org/wiki/Code%20reviewing%20software
Code reviewing software is computer software that helps humans find flaws in program source code. It can be divided into two categories: Automated code review software checks source code against a predefined set of rules and produces reports. Different types of browsers visualise software structure and help humans better understand its structure. Such systems are geared more to analysis because they typically do not contain a predefined set of rules to check software against. Manual code review tools allow people to collaboratively inspect and discuss changes, storing the history of the process for future reference. See also DeepCode (2016), cloud-based, AI-powered code review platform
https://en.wikipedia.org/wiki/Strong%20monad
In category theory, a strong monad over a monoidal category (C, ⊗, I) is a monad (T, η, μ) together with a natural transformation tA,B : A ⊗ TB → T(A ⊗ B), called (tensorial) strength, such that four coherence diagrams (relating the strength to the unitor, the associator, the unit η and the multiplication μ) commute for every object A, B and C (see Definition 3.2 in ). If the monoidal category (C, ⊗, I) is closed then a strong monad is the same thing as a C-enriched monad. Commutative strong monads For every strong monad T on a symmetric monoidal category with symmetry σ, a costrength natural transformation t′A,B : TA ⊗ B → T(A ⊗ B) can be defined by t′A,B = T(σB,A) ∘ tB,A ∘ σTA,B. A strong monad T is said to be commutative when the two canonical maps TA ⊗ TB → T(A ⊗ B), namely μA⊗B ∘ T(t′A,B) ∘ tTA,B and μA⊗B ∘ T(tA,B) ∘ t′A,TB, coincide for all objects A and B. One interesting fact about commutative strong monads is that they are "the same as" symmetric monoidal monads. More explicitly, a commutative strong monad defines a symmetric monoidal monad with mA,B = μA⊗B ∘ T(t′A,B) ∘ tTA,B : TA ⊗ TB → T(A ⊗ B), and conversely a symmetric monoidal monad (T, m) defines a commutative strong monad with tA,B = mA,B ∘ (ηA ⊗ idTB), and the conversion between one and the other presentation is bijective.
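A concrete illustration, taking T to be the list monad on sets (modelled with Python lists; the function names are invented): strength pairs a value with every element of a list, costrength does the symmetric thing, and the two canonical composites TA ⊗ TB → T(A ⊗ B) disagree, showing that the list monad's strength is not commutative:

```python
def strength(a, tb):            # t:  A x T B -> T(A x B)
    return [(a, b) for b in tb]

def costrength(ta, b):          # t': T A x B -> T(A x B)
    return [(a, b) for a in ta]

def join(tta):                  # mu: T T A -> T A  (flatten)
    return [x for t in tta for x in t]

def fmap(f, ta):                # T f: apply f inside the list functor
    return [f(x) for x in ta]

# the two canonical maps T A x T B -> T(A x B); a commutative strong monad
# is one for which these coincide
def left_first(ta, tb):         # mu . T(strength) . costrength
    return join(fmap(lambda ab: strength(ab[0], ab[1]), costrength(ta, tb)))

def right_first(ta, tb):        # mu . T(costrength) . strength
    return join(fmap(lambda ab: costrength(ab[0], ab[1]), strength(ta, tb)))

left_first([1, 2], ["x", "y"])   # [(1,'x'), (1,'y'), (2,'x'), (2,'y')]
right_first([1, 2], ["x", "y"])  # [(1,'x'), (2,'x'), (1,'y'), (2,'y')]
```

The two results contain the same pairs in different orders, which is exactly the failure of commutativity; for a monad like Maybe/Option, where effects cannot observe sequencing, the two composites would agree.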
https://en.wikipedia.org/wiki/Automated%20code%20review
Automated code review software checks source code for compliance with a predefined set of rules or best practices. The use of analytical methods to inspect and review source code to detect bugs or security issues has been a standard development practice in both open source and commercial software domains. This process can be accomplished both manually and in an automated fashion. With automation, software tools provide assistance with the code review and inspection process. The review program or tool typically displays a list of warnings (violations of programming standards). A review program can also provide an automated or a programmer-assisted way to correct the issues found. This helps developers master software more easily and contributes to the practice of Software Intelligence. This process is usually called "linting", since one of the first tools for static code analysis was called Lint. Some static code analysis tools can be used to help with automated code review. Although they do not compare favorably to manual reviews, they can be run faster and more efficiently. These tools also encapsulate deep knowledge of the underlying rules and semantics required to perform this type of analysis, so that the human code reviewer does not need the same level of expertise as an expert human auditor. Many Integrated Development Environments also provide basic automated code review functionality. For example, the Eclipse and Microsoft Visual Studio IDEs support a variety of plugins that facilitate code review. Next to static code analysis tools, there are also tools that analyze and visualize software structures and help humans to better understand these. Such systems are geared more to analysis because they typically do not contain a predefined set of rules to check software against. Some of these tools (e.g. Imagix 4D, Resharper, SonarJ, Sotoarc, Structure101, ACTool) allow one to define target architectures and enforce that target architecture constraints
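In the spirit of the rule-based checkers described above, a toy linter is just a set of predicates applied line by line; the rules and names here are invented for the example, not taken from any real tool:

```python
# a toy "lint" rule set: each rule is (warning message, predicate on a line)
RULES = [
    ("line longer than 79 characters", lambda line: len(line) > 79),
    ("trailing whitespace", lambda line: line != line.rstrip()),
]

def review(source):
    """Return a list of (line number, warning message) pairs for the source."""
    warnings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for message, violates in RULES:
            if violates(line):
                warnings.append((lineno, message))
    return warnings
```

Real tools differ in scale, not kind: they parse the code into an AST rather than scanning raw lines, which is what lets them encode semantic rules (unused variables, type mismatches) instead of purely textual ones.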
https://en.wikipedia.org/wiki/Background%20Intelligent%20Transfer%20Service
Background Intelligent Transfer Service (BITS) is a component of Microsoft Windows XP and later iterations of the operating systems, which facilitates asynchronous, prioritized, and throttled transfer of files between machines using idle network bandwidth. It is most commonly used by recent versions of Windows Update, Microsoft Update, Windows Server Update Services, and System Center Configuration Manager to deliver software updates to clients, Microsoft's anti-virus scanner Microsoft Security Essentials (a later version of Windows Defender) to fetch signature updates, and is also used by Microsoft's instant messaging products to transfer files. BITS is exposed through the Component Object Model (COM). Technology BITS uses idle bandwidth to transfer data. Normally, BITS transfers data in the background, i.e., BITS will only transfer data whenever there is bandwidth which is not being used by other applications. BITS also supports resuming transfers in case of disruptions. BITS version 1.0 supports only downloads. From version 1.5, BITS supports both downloads and uploads. Uploads require the IIS web server, with BITS server extension, on the receiving side. Transfers BITS transfers files on behalf of requesting applications asynchronously, i.e., once an application requests the BITS service for a transfer, it will be free to do any other task, or even terminate. The transfer will continue in the background as long as the network connection is there and the job owner is logged in. BITS jobs do not transfer when the job owner is not signed in. BITS suspends any ongoing transfer when the network connection is lost or the operating system is shut down. It resumes the transfer from where it left off when (the computer is turned on later and) the network connection is restored. BITS supports transfers over SMB, HTTP and HTTPS. Bandwidth BITS attempts to use only spare bandwidth. For example, when applications use 80% of the available bandwidth, BITS will use only th
https://en.wikipedia.org/wiki/United%20States%20National%20Grid
The United States National Grid (USNG) is a multi-purpose location system of grid references used in the United States. It provides a nationally consistent "language of location", optimized for local applications, in a compact, user friendly format. It is similar in design to the national grid reference systems used in other countries. The USNG was adopted as a national standard by the Federal Geographic Data Committee (FGDC) of the US Government in 2001. Overview While latitude and longitude are well suited to describing locations over large areas of the Earth's surface, most practical land navigation situations occur within much smaller, local areas. As such, they are often better served by a local Cartesian coordinate system, in which the coordinates represent actual distance units on the ground, using the same units of measurement from two perpendicular coordinate axes. This can improve human comprehension by providing reference of scale, as well as making actual distance computations more efficient. Paper maps often are published with overlaid rectangular (as opposed to latitude/longitude) grids to provide a reference to identify locations. However, these grids, if non-standard or proprietary (such as so-called "bingo" grids with references such as "B-4"), are typically not interoperable with each other, nor can they usually be used with GPS. The goal of the USNG is to provide a uniform, nationally consistent rectangular grid system that is interoperable across maps at different scales, as well as with GPS and other location based systems. It is intended to provide a frame of reference for describing and communicating locations that is easier to use than latitude/longitude for many practical applications, works across jurisdictional boundaries, and is simple to learn, teach, and use. It is also designed to be both flexible and scalable so that location references are as compact and concise as possible. The USNG is intended to supplement—not to repl
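As an illustration of the kind of compact grid reference described above, here is a small Python sketch that splits a USNG-style reference into its parts. The grid-zone/100 km-square/digit layout follows the standard MGRS-style format; the function name and the simplified validation are our own assumptions, not an official API:

```python
import re

def parse_usng(ref):
    """Split a USNG reference such as '18S UJ 2348 0647' into its parts:
    grid zone designation (UTM zone + latitude band), 100 km square ID,
    and equal-length easting/northing digit groups (more digits = more
    precision). Illustrative sketch only; a real implementation should
    validate zone numbers, latitude bands, and square IDs properly."""
    s = ref.replace(" ", "").upper()
    m = re.fullmatch(r"(\d{1,2}[C-HJ-NP-X])([A-HJ-NP-Z]{2})(\d*)", s)
    if m is None or len(m.group(3)) % 2 != 0:
        raise ValueError("not a well-formed USNG reference")
    digits = m.group(3)
    half = len(digits) // 2
    return {
        "grid_zone": m.group(1),      # e.g. '18S'
        "square_100km": m.group(2),   # e.g. 'UJ'
        "easting": digits[:half],     # e.g. '2348'
        "northing": digits[half:],    # e.g. '0647'
    }

parse_usng("18S UJ 2348 0647")
# → {'grid_zone': '18S', 'square_100km': 'UJ',
#    'easting': '2348', 'northing': '0647'}
```

Note that the letters I and O are excluded from the character classes, as in the underlying military grid convention, to avoid confusion with the digits 1 and 0.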
https://en.wikipedia.org/wiki/Lydia%20Fairchild
Lydia Fairchild (born 1976) is an American woman who exhibits chimerism, having two distinct populations of DNA among the cells of her body. She was pregnant with her third child when she and the father of her children, Jamie Townsend, separated. When Fairchild applied for enforcement of child support in 2002, providing DNA evidence of Townsend's paternity was a routine requirement. While the results confirmed Townsend as the children's father, they seemed to rule out Fairchild being their mother. Fairchild stood accused of fraud, either for claiming benefits for other people's children or for taking part in a surrogacy scam, and records of her prior births were similarly cast into doubt. Prosecutors called for her two children to be taken away from her, believing them not to be hers. As the time came for her to give birth to her third child, the judge ordered that an observer be present at the birth, ensure that blood samples were immediately taken from both the child and Fairchild, and be available to testify. Two weeks later, DNA tests seemed to indicate that she was not the mother of that child either. A breakthrough came when her defense attorney, Alan Tindell, learned of Karen Keegan, a chimeric woman in Boston, through an article in the New England Journal of Medicine, and realized that chimerism might also explain Fairchild's case. As in Keegan's case, DNA samples were taken from members of the extended family. The DNA of Fairchild's children matched that of Fairchild's mother to the extent expected of a grandmother. They also found that, although the DNA in Fairchild's skin and hair did not match her children's, the DNA from a cervical smear test did match. Fairchild was carrying two different sets of DNA, the defining characteristic of chimerism. See also Mater semper certa est
https://en.wikipedia.org/wiki/Berndt%E2%80%93Hall%E2%80%93Hall%E2%80%93Hausman%20algorithm
The Berndt–Hall–Hall–Hausman (BHHH) algorithm is a numerical optimization algorithm similar to the Newton–Raphson algorithm, but it replaces the observed negative Hessian matrix with the outer product of the gradient. This approximation is based on the information matrix equality and is therefore only valid while maximizing a likelihood function. The BHHH algorithm is named after the four originators: Ernst R. Berndt, Bronwyn Hall, Robert Hall, and Jerry Hausman. Usage If a nonlinear model is fitted to the data, one often needs to estimate coefficients through optimization. A number of optimisation algorithms have the following general structure. Suppose that the function to be optimized is Q(β). Then the algorithms are iterative, defining a sequence of approximations βk given by βk+1 = βk + λk Ak (∂Q/∂β)(βk), where βk is the parameter estimate at step k, λk is a parameter (called step size) which partly determines the particular algorithm, and Ak is a weighting matrix. For the BHHH algorithm λk is determined by calculations within a given iterative step, involving a line-search until a point βk+1 is found satisfying certain criteria. In addition, for the BHHH algorithm, Q has the form of a sum of per-observation log-likelihood contributions, Q(β) = Σi ln ℓi(β), and Ak is calculated as the inverse of the outer product of the per-observation gradients (scores), Ak = [Σi (∂ln ℓi/∂β)(βk) (∂ln ℓi/∂β)(βk)′]−1. In other cases, e.g. Newton–Raphson, Ak can have other forms. The BHHH algorithm has the advantage that, if certain conditions apply, convergence of the iterative procedure is guaranteed. See also Davidon–Fletcher–Powell (DFP) algorithm Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm
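A minimal Python sketch of a BHHH iteration may make the update concrete. The function names are ours, and the toy model (maximum-likelihood estimation of an exponential rate) is an illustration chosen for its simple score, not something from the source:

```python
import numpy as np

def bhhh(score, beta0, steps=50, lam=1.0):
    """Sketch of the BHHH iteration (not a library API).

    score(beta) must return an (n, k) matrix of per-observation gradients
    (scores) of the log-likelihood: their sum is the total gradient, and
    their outer product replaces the negative Hessian."""
    beta = np.asarray(beta0, dtype=float)
    for _ in range(steps):
        g = score(beta)               # (n, k) per-observation scores
        grad = g.sum(axis=0)          # total gradient of the log-likelihood
        A = np.linalg.inv(g.T @ g)    # outer-product (OPG) approximation
        beta = beta + lam * A @ grad  # Newton-like ascent step
    return beta

# Toy example: MLE of the rate b of an exponential density f(x) = b e^(-bx).
# Per-observation score: d/db [log b - b x] = 1/b - x.
rng = np.random.default_rng(0)
x = rng.exponential(scale=0.5, size=1000)   # true rate = 2

def exp_score(beta):
    return (1.0 / beta[0] - x).reshape(-1, 1)

beta_hat = bhhh(exp_score, [1.0])
# the fixed point of the iteration is the analytic MLE, 1 / mean(x)
```

A fixed step size lam=1.0 suffices for this well-behaved toy problem; the line-search over λk described above matters for harder likelihoods.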
https://en.wikipedia.org/wiki/Durbin%E2%80%93Wu%E2%80%93Hausman%20test
The Durbin–Wu–Hausman test (also called Hausman specification test) is a statistical hypothesis test in econometrics named after James Durbin, De-Min Wu, and Jerry A. Hausman. The test evaluates the consistency of an estimator when compared to an alternative, less efficient estimator which is already known to be consistent. It helps one evaluate if a statistical model corresponds to the data. Details Consider the linear model y = Xb + e, where y is the dependent variable, X is the vector of regressors, b is a vector of coefficients and e is the error term. We have two estimators for b: b0 and b1. Under the null hypothesis, both of these estimators are consistent, but b1 is efficient (has the smallest asymptotic variance), at least in the class of estimators containing b0. Under the alternative hypothesis, b0 is consistent, whereas b1 is not. Then the Wu–Hausman statistic is: H = (b1 − b0)′ [Var(b0) − Var(b1)]† (b1 − b0), where † denotes the Moore–Penrose pseudoinverse. Under the null hypothesis, this statistic has asymptotically the chi-squared distribution with the number of degrees of freedom equal to the rank of the matrix Var(b0) − Var(b1). If we reject the null hypothesis, it means that b1 is inconsistent. This test can be used to check for the endogeneity of a variable (by comparing instrumental variable (IV) estimates to ordinary least squares (OLS) estimates). It can also be used to check the validity of extra instruments by comparing IV estimates using a full set of instruments Z to IV estimates that use a proper subset of Z. Note that in order for the test to work in the latter case, we must be certain of the validity of the subset of Z and that subset must have enough instruments to identify the parameters of the equation. Hausman also showed that the covariance between an efficient estimator and the difference of an efficient and inefficient estimator is zero. Derivation Assuming joint normality of the estimators, consider the function: By the delta method, and using the commonly used result, shown by Haus
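The statistic above can be sketched in a few lines of Python. The function name and the illustrative numbers are our own, not from the source; in practice b0 and b1 would come from, say, IV and OLS fits with their estimated covariance matrices:

```python
import numpy as np

def hausman(b0, b1, V0, V1):
    """Sketch of the Wu-Hausman statistic (not a library API).

    b0: estimator consistent under both hypotheses (e.g. IV).
    b1: estimator efficient under the null (e.g. OLS).
    V0, V1: their estimated covariance matrices."""
    d = np.asarray(b1, dtype=float) - np.asarray(b0, dtype=float)
    dV = np.asarray(V0) - np.asarray(V1)       # Var of the difference under H0
    stat = float(d @ np.linalg.pinv(dV) @ d)   # Moore-Penrose pseudoinverse
    df = np.linalg.matrix_rank(dV)             # degrees of freedom
    return stat, df

# Illustrative (made-up) numbers; compare stat against the chi-squared
# critical value with df degrees of freedom.
stat, df = hausman(b0=[1.0, 2.0], b1=[1.1, 2.1],
                   V0=np.diag([0.02, 0.02]), V1=np.diag([0.01, 0.01]))
# here stat = 2.0 (up to floating point) and df = 2
```

The pseudoinverse is used because Var(b0) − Var(b1) can be singular, in which case the rank of that matrix, not the parameter count, gives the degrees of freedom.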
https://en.wikipedia.org/wiki/Statutory%20liquidity%20ratio
In India, the statutory liquidity ratio (SLR) is the government term for the reserve requirement that commercial banks are required to maintain in the form of cash, gold reserves, government bonds and other Reserve Bank of India (RBI)-approved securities before providing credit to the customers. The SLR to be maintained by banks is determined by the RBI in order to control liquidity expansion. The SLR is determined as a percentage of total demand and time liabilities. Time liabilities refer to the liabilities which the commercial banks are liable to repay to the customers after an agreed period, and demand liabilities are customer deposits which are repayable on demand. An example of a time liability is a six-month fixed deposit which is not payable on demand but only after six months. An example of a demand liability is a deposit maintained in a saving account or current account that is payable on demand. The SLR is commonly used to control inflation and fuel growth, by decreasing or increasing the money supply. Indian banks' holdings of government securities are now close to the statutory minimum that banks are required to hold to comply with existing regulation. When measured in rupees, such holdings decreased for the first time in a little less than 40 years (since the nationalisation of banks in 1969) in 2005–06. As of June 2020, the SLR stands at 18.00 percent. Usage The SLR is used by bankers and indicates the minimum percentage of deposits that the bank has to maintain in the form of gold, cash or other approved securities. Thus, it is the ratio of cash and other approved securities to liabilities (deposits). It regulates credit growth in India. The liabilities that the banks are liable to pay within one month's time, due to completion of the maturity period, are also considered as time liabilities. The maximum limit of the SLR is 40% and the minimum limit is 0%. In India, the Reserve Bank of India always determines the percentage of the SLR. There are some statutory requirements for tempo
https://en.wikipedia.org/wiki/Mathematics%20Magazine
Mathematics Magazine is a refereed bimonthly publication of the Mathematical Association of America. Its intended audience is teachers of collegiate mathematics, especially at the junior/senior level, and their students. It is explicitly a journal of mathematics rather than pedagogy. Rather than articles in the terse "theorem-proof" style of research journals, it seeks articles which provide a context for the mathematics they deliver, with examples, applications, illustrations, and historical background. Paid circulation in 2008 was 9,500 and total circulation was 10,000. Mathematics Magazine is a continuation of Mathematics News Letter (1926–1934) and National Mathematics Magazine (1934–1945). Doris Schattschneider became the first female editor of Mathematics Magazine in 1981. The MAA gives the Carl B. Allendoerfer Awards annually "for articles of expository excellence" published in Mathematics Magazine. See also American Mathematical Monthly Carl B. Allendoerfer Award Notes Further reading External links Mathematics Magazine at JSTOR Mathematics Magazine at Taylor & Francis Online Mathematics education journals Academic journals published by learned and professional societies of the United States Mathematical Association of America
https://en.wikipedia.org/wiki/Wideband%20materials
Wideband material refers to material that can convey microwave signals over a wide range of wavelengths. These materials possess excellent attenuation and dielectric characteristics, and are excellent dielectrics for semiconductor gates. Examples of such materials include gallium nitride (GaN) and silicon carbide (SiC). SiC has been used extensively in the creation of lasers for several years. However, it performs poorly (providing limited brightness) because it has an indirect band gap. GaN has a wide band gap (~3.4 eV), which usually results in high photon energies for structures which possess electrons in the conduction band.
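The link between band-gap energy and photon energy can be made concrete: a photon emitted across a gap Eg has wavelength λ = hc/Eg, with hc ≈ 1239.84 eV·nm. The constant is a standard physical value, not from the source, and this is a back-of-envelope sketch:

```python
# Wavelength (nm) of a photon matching a band-gap energy (eV): lambda = h*c / E.
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def bandgap_to_wavelength_nm(e_gap_ev: float) -> float:
    """Convert a band-gap energy in eV to the matching photon wavelength in nm."""
    return HC_EV_NM / e_gap_ev

print(round(bandgap_to_wavelength_nm(3.4)))  # GaN's ~3.4 eV gap -> ~365 nm
```

A 3.4 eV gap thus corresponds to light in the near-ultraviolet, which is why wide-band-gap materials are associated with high-energy emission.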
https://en.wikipedia.org/wiki/Edinburgh%20School%20of%20Medicine%20for%20Women
The Edinburgh School of Medicine for Women was founded by Sophia Jex-Blake in Edinburgh, Scotland, in October 1886, with support from the National Association for Promoting the Medical Education of Women. Sophia Jex-Blake was appointed as both the Director and the Dean of the School. The first class of women to study at the Edinburgh School of Medicine for Women consisted of eight students, the youngest of whom was nineteen years of age. Throughout its twelve years in operation, the school struggled to secure the funding needed to remain open. A rival institution, the Edinburgh College of Medicine for Women, set up by Elsie Inglis with the help of her father John Inglis, attracted several of Jex-Blake's students, including Martha Cadell and Grace Cadell. St Mungo's College and Queen Margaret College in Glasgow also accepted women medical students, and when the Scottish universities began to do so the Edinburgh School of Medicine could no longer compete. The school closed in 1898. Over the twelve years of its operation, the Edinburgh School of Medicine provided education to approximately eighty female students. Of those eighty students, thirty-three completed the full course of medical training at the Edinburgh School, while many others chose to finish their education at outside institutions. Background about the founder The first step in Sophia Jex-Blake's journey to obtaining entry into a medical program was requesting permission from Professor JJ Balfour, who served as the Dean of the Medical Faculty, to participate in the University of Edinburgh Medical College's summer classes. Following this request, the faculty at the University voted to determine the fate of Jex-Blake's and future women's admission into their medical program. On one hand, she received support in her pursuit of medical education as long as women were educated solely in the field of obstetrics and gynecology. However, other faculty members were opposed to allowing women to study medicine on th
https://en.wikipedia.org/wiki/Canavanine
L-(+)-(S)-Canavanine is a non-proteinogenic amino acid found in certain leguminous plants. It is structurally related to the proteinogenic α-amino acid L-arginine, the sole difference being the replacement of a methylene bridge (a -CH2- unit) in arginine with an oxa group (i.e., an oxygen atom) in canavanine. Canavanine is accumulated primarily in the seeds of the organisms which produce it, where it serves both as a highly deleterious defensive compound against herbivores (due to cells mistaking it for arginine) and a vital source of nitrogen for the growing embryo. The related L-canaline is similar to ornithine. Toxicity The mechanism of canavanine's toxicity is that organisms that consume it typically mistakenly incorporate it into their own proteins in place of L-arginine, thereby producing structurally aberrant proteins that may not function properly. Cleavage by arginase also produces canaline, a potent insecticide. The toxicity of canavanine may be enhanced under conditions of protein starvation, and canavanine toxicity, resulting from consumption of Hedysarum alpinum seeds with a concentration of 1.2% canavanine weight/weight, has been implicated in the death of the malnourished Christopher McCandless. (McCandless was the subject of Jon Krakauer's book, and subsequent movie, Into the Wild.) In mammals NZB/W F1, NZB, and DBA/2 mice fed L-canavanine develop a syndrome similar to systemic lupus erythematosus, while BALB/c mice fed a steady diet of protein containing 1% canavanine showed no change in lifespan. Alfalfa seeds and sprouts contain L-canavanine. The L-canavanine in alfalfa has been linked to lupus-like symptoms in primates, including humans, and other auto-immune diseases. Often stopping consumption reverses the problem. Tolerance Some specialized herbivores tolerate L-canavanine either because they metabolize it efficiently (cf. L-canaline) or avoid its incorporation into their own nascent proteins. By metabolic detoxification Herbivores may be
https://en.wikipedia.org/wiki/Pleurotus%20eryngii
Pleurotus eryngii (also known as king trumpet mushroom, French horn mushroom, eryngi, king oyster mushroom, king brown mushroom, boletus of the steppes, trumpet royale, aliʻi oyster) is an edible mushroom native to Mediterranean regions of Europe, the Middle East, and North Africa, but also grown in many parts of Asia. Description Pleurotus eryngii is the largest species in the oyster mushroom genus, Pleurotus, which also contains the oyster mushroom Pleurotus ostreatus. It has a thick, meaty white stem and a small tan cap (in young specimens). Its natural range extends from the Atlantic Ocean through the Mediterranean Basin and Central Europe into Western Asia and India. Unlike other species of Pleurotus, which are primarily wood-decay fungi, the P. eryngii complex are also weak parasites on the roots of herbaceous plants, although they may also be cultured on organic wastes. Taxonomy Its species name is derived from the fact that it grows in association with the roots of Eryngium campestre or other Eryngium plants (English names: 'sea holly' or 'eryngo'). P. eryngii is a species complex, and a number of varieties have been described, with differing plant associates in the carrot family (Apiaceae). Pleurotus eryngii var. eryngii (DC.) Quél 1872 – associated with Eryngium ssp. Pleurotus eryngii var. ferulae (Lanzi) Sacc. 1887 – associated with Ferula communis Pleurotus eryngii var. tingitanus Lewinsohn 2002 – associated with Ferula tingitana Pleurotus eryngii var. elaeoselini Venturella, Zervakis & La Rocca 2000 – associated with Elaeoselinum asclepium Pleurotus eryngii var. thapsiae Venturella, Zervakis & Saitta 2002 – associated with Thapsia garganica Other specimens of P. eryngii have been reported in association with plants in the genera Ferulago, Cachrys, Laserpitium, and Diplotaenia, all in Apiaceae. Molecular studies have shown Pleurotus nebrodensis to be closely related to, but distinct from, P. eryngii. Pleurotus fossulatus may be another closely re
https://en.wikipedia.org/wiki/ALOPEX
ALOPEX (an acronym from "ALgorithms Of Pattern EXtraction") is a correlation-based machine learning algorithm first proposed by Tzanakou and Harth in 1974. Principle In machine learning, the goal is to train a system to minimize a cost function or (referring to ALOPEX) a response function. Many training algorithms, such as backpropagation, have an inherent susceptibility to getting "stuck" in local minima or maxima of the response function. ALOPEX uses a cross-correlation of differences and a stochastic process to overcome this in an attempt to reach the absolute minimum (or maximum) of the response function. Method ALOPEX, in its simplest form, is defined by an updating equation: ΔWij(n) = γ ΔWij(n − 1) ΔR(n − 1) + ri(n), where: n ≥ 0 is the iteration or time-step; ΔWij(n − 1) is the difference between the current and previous value of the system variable Wij; ΔR(n − 1) is the difference between the current and previous value of the response function R; γ is the learning rate parameter (γ < 0 minimizes R, while γ > 0 maximizes it); and ri(n) is a random noise term. Discussion Essentially, ALOPEX changes each system variable Wij based on a product of: the previous change in the variable, ΔWij(n − 1); the resulting change in the cost function, ΔR(n − 1); and the learning rate parameter γ. Further, to find the absolute minimum (or maximum), the stochastic process ri(n) (Gaussian or other) is added to stochastically "push" the algorithm out of any local minima.
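The update rule above can be sketched in Python. The function name, parameter values, best-iterate bookkeeping, and the quadratic toy response are our own choices for illustration, not the authors' scheme; real applications tune the learning rate and noise scale to the problem:

```python
import numpy as np

def alopex(f, w0, iters=3000, gamma=-2.0, sigma=0.05, seed=0):
    """Illustrative ALOPEX sketch: gamma < 0 seeks a minimum of the
    response f, gamma > 0 a maximum. Returns the best iterate found."""
    rng = np.random.default_rng(seed)
    w_prev = np.asarray(w0, dtype=float)
    w = w_prev + rng.normal(0.0, sigma, w_prev.shape)  # initial random move
    r_prev, r = f(w_prev), f(w)
    best_w, best_r = (w.copy(), r) if r < r_prev else (w_prev.copy(), r_prev)
    for _ in range(iters):
        dw = w - w_prev            # previous change in each variable
        dr = r - r_prev            # resulting change in the response
        noise = rng.normal(0.0, sigma, w.shape)
        w_prev, r_prev = w, r
        w = w + gamma * dw * dr + noise   # correlation term plus noise
        r = f(w)
        if r < best_r:
            best_w, best_r = w.copy(), r
    return best_w, best_r

# Toy response function with minimum 0 at (1, -2)
f = lambda w: (w[0] - 1.0) ** 2 + (w[1] + 2.0) ** 2
w_best, r_best = alopex(f, [0.0, 0.0])
```

When a variable's change and the response's change are positively correlated, a negative gamma pushes that variable back, producing stochastic descent; the added noise keeps the search from freezing in a local minimum.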
https://en.wikipedia.org/wiki/Rubik%27s%20Snake
A Rubik's Snake (also Rubik's Twist, Rubik's Transformable Snake, Rubik's Snake Puzzle) is a toy with 24 wedges that are right isosceles triangular prisms. The wedges are connected by spring bolts, so that they can be twisted, but not separated. By being twisted, the Rubik's Snake can be made to resemble a wide variety of objects, animals, or geometric shapes. Its "ball" shape in its packaging is a non-uniform concave rhombicuboctahedron. The snake was invented by Ernő Rubik, better known as the inventor of the Rubik's Cube. Rubik's Snake was released during 1981 at the height of the Rubik's Cube craze. According to Ernő Rubik: "The snake is not a problem to be solved; it offers infinite possibilities of combination. It is a tool to test out ideas of shape in space. Speaking theoretically, the number of the snake's combinations is limited. But speaking practically, that number is limitless, and a lifetime is not sufficient to realize all of its possibilities." Other manufacturers have produced versions with more pieces than the original. Structure The 24 prisms are aligned in a row with an alternating orientation (normal and upside down). Each prism can adopt 4 different positions, each with an offset of 90°. Usually the prisms have alternating colors. Notation Twisting instructions The steps needed to make an arbitrary shape or figure can be described in a number of ways. One common starting configuration is a straight bar with alternating upper and lower prisms, with the rectangular faces facing up and down, and the triangular faces facing towards the player. The 12 lower prisms are numbered 1 through 12 starting from the left, and the left and right sloping faces of these prisms are labeled L and R respectively. The last of the upper prisms is on the right, so the L face of prism 1 does not have an adjacent prism. The four possible positions of the adjacent prism on each L and R sloping face are numbered 0, 1, 2 and 3 (representing the number of twist
https://en.wikipedia.org/wiki/Plasma%20etching
Plasma etching is a form of plasma processing used to fabricate integrated circuits. It involves a high-speed stream of glow discharge (plasma) of an appropriate gas mixture being shot (in pulses) at a sample. The plasma source, known as the etch species, can be either charged (ions) or neutral (atoms and radicals). During the process, the plasma generates volatile etch products at room temperature from the chemical reactions between the elements of the material etched and the reactive species generated by the plasma. Eventually the atoms of the shot element embed themselves at or just below the surface of the target, thus modifying the physical properties of the target. Mechanisms Plasma generation A plasma is a highly energetic state in which many processes can occur. These processes are driven by electrons and atoms. To form the plasma, electrons have to be accelerated to gain energy. Highly energetic electrons transfer energy to atoms by collisions. Three different processes can occur as a result of these collisions: Excitation Dissociation Ionization Different species are present in the plasma, such as electrons, ions, radicals, and neutral particles. These species interact with each other constantly. Plasma etching can be divided into two main types of interaction: generation of chemical species interaction with the surrounding surfaces Without a plasma, all these processes would occur at a higher temperature. There are different ways to change the plasma chemistry and get different kinds of plasma etching or plasma depositions. One of the excitation techniques to form a plasma is RF excitation by a power source of 13.56 MHz. The mode of operation of the plasma system will change if the operating pressure changes. Also, it is different for different structures of the reaction chamber. In the simple case, the electrode structure is symmetrical, and the sample is placed upon the grounded electrode. Influences on the process The key to de
https://en.wikipedia.org/wiki/Attenuator%20%28genetics%29
In genetics, attenuation is a regulatory mechanism for some bacterial operons that results in premature termination of transcription. The canonical example of attenuation used in many introductory genetics textbooks is ribosome-mediated attenuation of the trp operon. Ribosome-mediated attenuation of the trp operon relies on the fact that, in bacteria, transcription and translation proceed simultaneously. Attenuation involves a provisional stop signal (attenuator), located in the DNA segment that corresponds to the leader sequence of mRNA. During attenuation, the ribosome becomes stalled (delayed) in the attenuator region in the mRNA leader. Depending on the metabolic conditions, the attenuator either stops transcription at that point or allows read-through to the structural gene part of the mRNA and synthesis of the appropriate protein. Attenuation is a regulatory feature found throughout Archaea and Bacteria causing premature termination of transcription. Attenuators are 5'-cis acting regulatory regions which fold into one of two alternative RNA structures which determine the success of transcription. The folding is modulated by a sensing mechanism producing either a Rho-independent terminator, resulting in interrupted transcription and a non-functional RNA product, or an anti-terminator structure, resulting in a functional RNA transcript. There are now many equivalent examples where translation, not transcription, is terminated by sequestering the Shine-Dalgarno sequence (ribosomal binding site) in a hairpin-loop structure. While not meeting the previous definition of (transcriptional) attenuation, these are now considered to be variants of the same phenomenon and are included in this article. Attenuation is an ancient regulatory system, prevalent in many bacterial species, providing fast and sensitive regulation of gene operons, and is commonly used to repress genes in the presence of their own product (or a downstream metabolite). Classes of attenuators Atte
https://en.wikipedia.org/wiki/Sedleian%20Professor%20of%20Natural%20Philosophy
The Sedleian Professor of Natural Philosophy is the name of a chair at the Mathematical Institute of the University of Oxford. Overview The Sedleian Chair was founded by Sir William Sedley who, by his will dated 20 October 1618, left the sum of £2,000 to the University of Oxford for purchase of lands for its endowment. Sedley's bequest took effect in 1621 with the purchase of an estate at Waddesdon in Buckinghamshire to produce the necessary income. It is regarded as the oldest of Oxford's scientific chairs. Holders of the Sedleian Professorship have, since the mid 19th century, worked in a range of areas of applied mathematics and mathematical physics. They are simultaneously elected to fellowships at Queen's College, Oxford. The Sedleian Professors in the past century have been Augustus Love (1899-1940), who was distinguished for his work in the mathematical theory of elasticity, Sydney Chapman (1946-1953), who is renowned for his contributions to the kinetic theory of gases and solar-terrestrial physics, George Temple (1953-1968), who made significant contributions to mathematical physics and the theory of generalized functions, Brooke Benjamin (1979-1995), who did highly influential work in the areas of mathematical analysis and fluid mechanics, and Sir John Ball (1996-2019), who is distinguished for his work in the mathematical theory of elasticity, materials science, the calculus of variations, and infinite-dimensional dynamical systems. List of Sedleian Professors Notes
https://en.wikipedia.org/wiki/University%20of%20Alberta%20Botanic%20Garden
The University of Alberta Botanic Garden (formerly the Devonian Botanic Garden) is Alberta's largest botanical garden. It was established in 1959 by the University of Alberta. It is located approximately west of the city of Edmonton, Alberta and north of the town of Devon, in Parkland County. History The garden was created in 1959 and established on donated land. The garden was originally designated the "Botanic Garden and Field Laboratory" of the department of botany at the U of A. In the 1970s, after the garden was severely damaged by floods, a donation from the Devonian Foundation, along with funds raised by the Friends of the Garden, helped to repair the damages, create a system of canals and ponds, construct a headquarters building, and purchase more land. In recognition of the donation, the name was changed to the Devonian Botanic Garden. The Friends of the Devonian Botanic Garden was founded in 1971 as a fundraising group to support the aims and objectives of the garden. In conjunction with an enhanced memorandum of understanding signed between the Aga Khan University and the University of Alberta in 2009, the University of Alberta requested that the Aga Khan IV develop an Islamic garden within the grounds. Planning the garden took nearly a decade, and construction lasted 18 months. It was designed by the architectural firm Nelson Byrd Woltz Landscape Architects. The Aga Khan garden opened to the public in July 2018, and its Diwan Pavilion was opened in 2022. Throughout the 2018 season, gardeners completed the planting of over 25,000 new perennials, trees, shrubs, and wetland plants, and added finishing touches to the landscaping. The total cost of the Aga Khan Garden was $25 million. The garden is tagged as the "most northerly Islamic garden in the world". The botanical facility was renamed as the University of Alberta Botanic Garden in 2017. In September 2017, it invested $4.9 million to renovate its front entrance and parking lot. Description The gardens
https://en.wikipedia.org/wiki/Trakhtenbrot%27s%20theorem
In logic, finite model theory, and computability theory, Trakhtenbrot's theorem (due to Boris Trakhtenbrot) states that the problem of validity in first-order logic on the class of all finite models is undecidable. In fact, the class of valid sentences over finite models is not recursively enumerable (though it is co-recursively enumerable). Trakhtenbrot's theorem implies that Gödel's completeness theorem (which is fundamental to first-order logic) does not hold in the finite case. It also seems counter-intuitive that being valid over all structures is 'easier' than being valid over just the finite ones. The theorem was first published in 1950: "The Impossibility of an Algorithm for the Decidability Problem on Finite Classes". Mathematical formulation We follow the formulation given by Ebbinghaus and Flum. Theorem Satisfiability for finite structures is not decidable in first-order logic. That is, the set {φ | φ is a sentence of first-order logic that is satisfiable among finite structures} is undecidable. Corollary Let σ be a relational vocabulary with at least one relation symbol of arity at least two. The set of σ-sentences valid in all finite structures is not recursively enumerable. Remarks This implies that Gödel's completeness theorem fails in the finite, since completeness implies recursive enumerability. It follows that there is no recursive function f such that: if φ has a finite model, then it has a model of size at most f(φ). In other words, there is no effective analogue of the Löwenheim–Skolem theorem in the finite. Intuitive proof This proof is taken from Chapter 10, sections 4 and 5 of Mathematical Logic by H.-D. Ebbinghaus. As in the most common proof of Gödel's first incompleteness theorem, which uses the undecidability of the halting problem, for each Turing machine M there is a corresponding arithmetical sentence φM, effectively derivable from M, such that φM is true if and only if M halts on the empty tape. Intuitively, φM asserts "there exists a natural number that is
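The shape of the reduction can be stated compactly. This is the standard textbook presentation with notation of our own choosing (the symbols in the original excerpt were lost in extraction):

```latex
% For each Turing machine $M$ one effectively constructs a first-order
% sentence $\varphi_M$ such that
\[
  M \text{ halts on the empty tape}
  \iff
  \varphi_M \text{ has a finite model.}
\]
% A decision procedure for finite satisfiability would therefore decide
% the halting problem, which is impossible. Moreover, finite
% satisfiability IS recursively enumerable (enumerate all finite
% structures and test each), so its complement cannot be; since
\[
  \varphi \text{ valid in all finite structures}
  \iff
  \neg\varphi \text{ has no finite model,}
\]
% finite validity is not recursively enumerable, as the corollary states.
```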
https://en.wikipedia.org/wiki/Dialogo%20de%20Cecco%20di%20Ronchitti%20da%20Bruzene%20in%20perpuosito%20de%20la%20stella%20Nuova
Dialogo de Cecco di Ronchitti da Bruzene in perpuosito de la stella Nuova (Dialogue of Cecco di Ronchitti of Brugine concerning the New star) is the title of an early 17th-century pseudonymous pamphlet ridiculing the views of an aspiring Aristotelian philosopher, Antonio Lorenzini da Montepulciano, on the nature and properties of Kepler's Supernova, which had appeared in October 1604. The pseudonymous Dialogue was written in the coarse language of a rustic Paduan dialect, and first published in about March, 1605, in Padua. A second edition was published later the same year in Verona. Antonio Favaro republished the contents of the pamphlet in its original language in 1881, with annotations and a commentary in Italian. He republished it again in Volume 2 of the National Edition of Galileo's works in 1891, along with a translation into standard Italian. An English translation was published by Stillman Drake in 1976. The Dialogo is dedicated to Antonio Querenghi. Scholars agree that the pamphlet was written either by Galileo Galilei or one of his followers, Girolamo Spinelli, or by both in collaboration, but do not agree on the extent of the contribution—if any—made by each of them to its composition. Footnotes Bibliography 1605 books Astronomy pamphlets Historical physics publications History of astronomy 1605 in science Philosophy of science books
https://en.wikipedia.org/wiki/3D%20Systems
3D Systems, headquartered in Rock Hill, South Carolina, is a company that engineers, manufactures, and sells 3D printers, 3D printing materials, 3D scanners, and offers a 3D printing service. The company creates product concept models, precision and functional prototypes, master patterns for tooling, as well as production parts for direct digital manufacturing. It uses proprietary processes to fabricate physical objects using input from computer-aided design and manufacturing software, or 3D scanning and 3D sculpting devices. 3D Systems' technologies and services are used in the design, development, and production stages of many industries, including aerospace, automotive, healthcare, dental, entertainment, and durable goods. The company offers a range of professional- and production-grade 3D printers, as well as software, materials, and the online rapid part printing service On Demand. It is notable within the 3D printing industry for developing stereolithography and the STL file format. Chuck Hull, the CTO and former president, pioneered stereolithography and obtained a patent for the technology in 1986. As of 2020, 3D Systems employed over 2400 people in 25 offices worldwide. History 3D Systems was founded in Valencia, California by Chuck Hull, the inventor and patent-holder of the first stereolithography (SLA) rapid prototyping system. Prior to Hull's introduction of SLA rapid prototyping, concept models required extensive time and money to produce. The innovation of SLA reduced these resource expenditures while increasing the quality and accuracy of the resulting model. Early SLA systems were complex and costly, and required extensive redesigns before achieving commercial viability. Primary issues concerned hydrodynamic and chemical complications. In 1996, the introduction of solid-state lasers permitted Hull and his team to reformulate their materials. Engineers in transportation, healthcare, and consumer products helped fuel early phases of 3D Systems' ra
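As context for the STL file format mentioned above: an ASCII STL file is simply a list of triangular facets, each carrying a normal vector and three vertices, bracketed by `solid`/`endsolid` keywords. The Python sketch below emits a minimal single-facet solid; the function name and default normal are illustrative choices, not part of any 3D Systems API.

```python
def triangle_stl(name, v0, v1, v2, normal=(0.0, 0.0, 1.0)):
    """Return a minimal ASCII STL solid containing one triangular facet."""
    lines = [f"solid {name}",
             f"  facet normal {normal[0]} {normal[1]} {normal[2]}",
             "    outer loop"]
    for v in (v0, v1, v2):          # exactly three vertices per facet
        lines.append(f"      vertex {v[0]} {v[1]} {v[2]}")
    lines += ["    endloop", "  endfacet", f"endsolid {name}"]
    return "\n".join(lines)

stl_text = triangle_stl("demo", (0, 0, 0), (1, 0, 0), (0, 1, 0))
```

Real models concatenate many such `facet … endfacet` blocks; the binary STL variant packs the same facet data into fixed-size records instead.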
https://en.wikipedia.org/wiki/Windows%20Live%20OneCare%20Safety%20Scanner
Windows Live OneCare Safety Scanner (formerly Windows Live Safety Center and codenamed Vegas) was an online scanning, PC cleanup, and diagnosis service to help remove viruses, spyware/adware, and other malware. It was a free web service that was part of Windows Live. On November 18, 2008, Microsoft announced the discontinuation of Windows Live OneCare, offering users a new free anti-malware suite, Microsoft Security Essentials, which became available in the second half of 2009. However, Windows Live OneCare Safety Scanner, under the same branding as Windows Live OneCare, was not discontinued at that time. The service was officially discontinued on April 15, 2011 and replaced with Microsoft Safety Scanner. Overview Windows Live OneCare Safety Scanner offered free online scanning and protection from threats. The scanner had to be downloaded and installed on the computer before it could run a scan. The "Full Service Scan" looks for common PC health issues such as viruses, temporary files, and open network ports. It searches for and removes viruses, improves a computer's performance, and removes unnecessary clutter on the PC's hard disk. The user can choose between a "Full Scan" (which can be customized) or a "Quick Scan". The "Full Scan" scans for viruses (comprehensive scan or quick scan), hard disk performance (disk fragmentation scan and/or disk cleanup scan) and network safety (open port scan). The "Quick Scan" scans only for viruses, and only in specific areas of the computer; it is faster than the full scan, hence the name. The service also provides a virus database, information about online threats, and general computer security documentation and tools. Limits The virus scanner on the Windows Live OneCare Safety Scanner site runs a scan of the user's computer only when the site is visited. It does not run periodic scans of the system, and does not provide features to prevent viruses from infecting the computer
https://en.wikipedia.org/wiki/Roland%20Fra%C3%AFss%C3%A9
Roland Fraïssé (12 March 1920 – 30 March 2008) was a French mathematical logician. Life Fraïssé received his doctoral degree from the University of Paris in 1953. In his thesis, Fraïssé used the back-and-forth method to determine whether two model-theoretic structures were elementarily equivalent. This method of determining elementary equivalence was later formulated as the Ehrenfeucht–Fraïssé game. Fraïssé worked primarily in relation theory. Another of his important works was the Fraïssé construction of a Fraïssé limit of finite structures. He also formulated Fraïssé's conjecture on order embeddings, and introduced the notion of compensor in the theory of posets. Most of his career was spent as Professor at the University of Provence in Marseille, France. Selected publications Sur quelques classifications des systèmes de relations, thesis, University of Paris, 1953; published in Publications Scientifiques de l'Université d'Alger, series A 1 (1954), 35–182. Cours de logique mathématique, Paris: Gauthier-Villars Éditeur, 1967; second edition, 3 vols., 1971–1975; tr. into English and ed. by David Louvish as Course of Mathematical Logic, 2 vols., Dordrecht: Reidel, 1973–1974. Theory of relations, tr. into English by P. Clote, Amsterdam: North-Holland, 1986; rev. ed. 2000.
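The back-and-forth method behind the Ehrenfeucht–Fraïssé game can be played out mechanically on small structures. The Python sketch below brute-forces the k-round game on two finite linear orders (represented as ranges of integers); it is an illustrative implementation written for this example, not standard library code. The classical fact it exhibits: Duplicator wins the k-round game on two linear orders of different lengths exactly when both have at least 2^k − 1 elements.

```python
def is_partial_iso(pairs):
    """Do the chosen pairs (a, b) form a partial isomorphism of linear orders?"""
    for (a1, b1) in pairs:
        for (a2, b2) in pairs:
            if (a1 < a2) != (b1 < b2) or (a1 == a2) != (b1 == b2):
                return False
    return True

def duplicator_wins(A, B, k, pairs=()):
    """True iff Duplicator wins the k-round EF game on linear orders A and B."""
    if not is_partial_iso(pairs):
        return False                 # Spoiler has already won
    if k == 0:
        return True                  # survived all rounds
    # Spoiler may pick any element of either structure; Duplicator must answer
    # in the other one so that some continuation is still winning.
    for a in A:
        if not any(duplicator_wins(A, B, k - 1, pairs + ((a, b),)) for b in B):
            return False
    for b in B:
        if not any(duplicator_wins(A, B, k - 1, pairs + ((a, b),)) for a in A):
            return False
    return True
```

For example, linear orders of sizes 3 and 4 are indistinguishable in 2 rounds (both sizes are at least 2² − 1 = 3) but are separated by a 3-round game.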
https://en.wikipedia.org/wiki/AP%20Calculus
Advanced Placement (AP) Calculus (also known as AP Calc, Calc AB / Calc BC or simply AB / BC) is a set of two distinct Advanced Placement calculus courses and exams offered by the American nonprofit organization College Board. AP Calculus AB covers basic introductions to limits, derivatives, and integrals. AP Calculus BC covers all AP Calculus AB topics plus additional topics (including integration by parts, Taylor series, parametric equations, vector calculus, and polar coordinate functions). AP Calculus AB AP Calculus AB is an Advanced Placement calculus course. It is traditionally taken after precalculus and is the first calculus course offered at most schools except for possibly a regular calculus class. The Pre-Advanced Placement pathway for math helps prepare students for further Advanced Placement classes and exams. Purpose According to the College Board: Topic outline The material includes the study and application of differentiation and integration, and graphical analysis including limits, asymptotes, and continuity. An AP Calculus AB course is typically equivalent to one semester of college calculus. Analysis of graphs (predicting and explaining behavior) Limits of functions (one and two sided) Asymptotic and unbounded behavior Continuity Derivatives Concept At a point As a function Applications Higher order derivatives Techniques Integrals Interpretations Properties Applications Techniques Numerical approximations Fundamental theorem of calculus Antidifferentiation L'Hôpital's rule Separable differential equations AP Calculus BC AP Calculus BC is equivalent to a full year regular college course, covering both Calculus I and II. After passing the exam, students may move on to Calculus III (Multivariable Calculus). Purpose According to the College Board, Topic outline AP Calculus BC includes all of the topics covered in AP Calculus AB, as well as the following: Convergence tests for series Taylor series Parametric equations Polar functions (inclu
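To make two of the BC-only topics listed above concrete, here are standard worked examples of integration by parts and a Taylor series (textbook material, not taken from the College Board outline itself):

```latex
\int x e^x \, dx = x e^x - \int e^x \, dx = x e^x - e^x + C,
\qquad
e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots
```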
https://en.wikipedia.org/wiki/Sodium%20tartrate
Sodium tartrate (Na2C4H4O6) is a salt used as an emulsifier and a binding agent in food products such as jellies, margarine, and sausage casings. As a food additive, it is known by the E number E335. Because its crystal structure captures a very precise amount of water, it is also a common primary standard for Karl Fischer titration, a common technique to assay water content. See also Monosodium tartrate
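The suitability of the dihydrate as a Karl Fischer standard comes from simple stoichiometry: sodium tartrate dihydrate (Na2C4H4O6·2H2O) carries a fixed, known mass fraction of water. A short Python check, using rounded standard atomic masses (an assumption of this sketch):

```python
# Rounded standard atomic masses in g/mol.
ATOMIC = {"Na": 22.990, "C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(formula):
    """Molar mass from a dict mapping element symbol to atom count."""
    return sum(ATOMIC[el] * n for el, n in formula.items())

anhydrous = molar_mass({"Na": 2, "C": 4, "H": 4, "O": 6})  # Na2C4H4O6
water = molar_mass({"H": 2, "O": 1})                       # H2O
dihydrate = anhydrous + 2 * water                          # Na2C4H4O6·2H2O
water_fraction = 2 * water / dihydrate
print(f"{100 * water_fraction:.2f}%")                      # ≈ 15.66%
```

That fixed ~15.66% water content is what makes weighed portions of the dihydrate a convenient reference when calibrating a Karl Fischer titrator.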
https://en.wikipedia.org/wiki/Victor%20Zalgaller
Victor (Viktor) Abramovich Zalgaller (25 December 1920 – 2 October 2020) was a Russian-Israeli mathematician in the fields of geometry and optimization. He is best known for the results he achieved on convex polyhedra, linear and dynamic programming, isoperimetry, and differential geometry. Biography Zalgaller was born in Parfino, Novgorod Governorate on 25 December 1920. In 1936, he was one of the winners of the Leningrad Mathematics Olympiads for high school students. He started his studies at the Leningrad State University; however, World War II intervened in 1941, and Zalgaller joined the Red Army. He took part in the defence of Leningrad, and in 1945 marched into Germany. He worked as a teacher at the Saint Petersburg Lyceum 239, and completed his 1963 doctoral dissertation on polyhedra with the aid of his high school students, who wrote the computer programs for the calculations. Zalgaller did his early work under the direction of A. D. Alexandrov and Leonid Kantorovich. He wrote joint monographs with both of them. His later monograph Geometric Inequalities (joint with Yu. Burago) is still the main reference in the field. Zalgaller lived in Saint Petersburg most of his life, having studied and worked at the Leningrad State University and the Steklov Institute of Mathematics (Saint Petersburg branch). In 1999, he immigrated to Israel. Zalgaller died on 2 October 2020 at the age of 99.
https://en.wikipedia.org/wiki/Radia%20Perlman
Radia Joy Perlman (born December 18, 1951) is an American computer programmer and network engineer. She is a major figure in the development of the networks and technology that enable what we now know as the internet. She is most famous for inventing the Spanning Tree Protocol (STP), which is fundamental to the operation of network bridges, while working for Digital Equipment Corporation; this earned her the nickname "Mother of the Internet". Her innovations have made a huge impact on how networks self-organize and move data. She also made large contributions to many other areas of network design and standardization: for example, enabling today's link-state routing protocols to be more robust, scalable, and easy to manage. Perlman was elected a member of the National Academy of Engineering in 2015 for contributions to Internet routing and bridging protocols. She holds over 100 issued patents. She was elected to the Internet Hall of Fame in 2014, and to the National Inventors Hall of Fame in 2016. She received lifetime achievement awards from USENIX in 2006 and from the Association for Computing Machinery’s SIGCOMM in 2010. More recently she invented the TRILL protocol to correct some of the shortcomings of spanning trees, allowing Ethernet to make optimal use of bandwidth. As of 2022, she was a Fellow at Dell Technologies. Early life Perlman was born in 1951 in Portsmouth, Virginia. She grew up in Loch Arbour, New Jersey. She is Jewish. Both of her parents worked as engineers for the US government. Her father worked on radar and her mother was a mathematician by training who worked as a computer programmer. During her school years Perlman found math and science to be “effortless and fascinating”, but had no problem achieving top grades in other subjects as well. She enjoyed playing the piano and French horn. While her mother helped her with her math homework, they mainly talked about literature and music. But she didn't feel like she fit underneath the stereot
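The core idea of the Spanning Tree Protocol, stripped of the protocol machinery (BPDUs, port costs, priorities, timers), is: elect the bridge with the lowest ID as root, then have every other bridge keep only its shortest path toward the root, leaving redundant links out of the forwarding tree. A toy Python sketch of that idea — the topology, function name, and hop-count metric are invented for illustration and are much simpler than real STP:

```python
from collections import deque

def spanning_tree(bridges, links):
    """Toy STP-like computation: lowest bridge ID becomes root, then BFS
    keeps one shortest (fewest-hops) path from every bridge to the root."""
    root = min(bridges)                    # root election: lowest ID wins
    parent, dist = {root: None}, {root: 0}
    q = deque([root])
    while q:                               # BFS gives fewest-hops paths
        u = q.popleft()
        neighbors = sorted(n for a, b in links
                           for n in (a, b) if u in (a, b) and n != u)
        for v in neighbors:
            if v not in dist:
                dist[v] = dist[u] + 1
                parent[v] = u              # this link joins the tree
                q.append(v)
    return root, parent

bridges = [3, 1, 2, 4]
links = [(1, 2), (2, 3), (3, 4), (1, 4), (2, 4)]
root, tree = spanning_tree(bridges, links)
```

Here bridge 1 becomes root, and the loops through links (1, 4, 3) and (2, 4) are broken by leaving those redundant links out of `tree` — the loop-free topology that lets bridges flood frames safely.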
https://en.wikipedia.org/wiki/Catamorphism
In category theory, the concept of catamorphism (from the Ancient Greek κατά "downwards" and μορφή "form, shape") denotes the unique homomorphism from an initial algebra into some other algebra. In functional programming, catamorphisms provide generalizations of folds of lists to arbitrary algebraic data types, which can be described as initial algebras. The dual concept is that of anamorphisms, which generalize unfolds. A hylomorphism is the composition of an anamorphism followed by a catamorphism. Definition Consider an initial F-algebra (A, in) for some endofunctor F of some category into itself. Here in is a morphism from F(A) to A. Since it is initial, we know that whenever (X, f) is another F-algebra, i.e. a morphism f from F(X) to X, there is a unique homomorphism h from (A, in) to (X, f). By the definition of the category of F-algebras, this h corresponds to a morphism from A to X, conventionally also denoted cata f, such that h ∘ in = f ∘ F(h). In the context of F-algebras, the uniquely specified morphism from the initial object is denoted by cata f and hence characterized by the following relationship: h = cata f ⇔ h ∘ in = f ∘ F(h). Terminology and history Another notation found in the literature is ⦇f⦈. The open brackets used are known as banana brackets, after which catamorphisms are sometimes referred to as bananas, as mentioned in Erik Meijer et al. One of the first publications to introduce the notion of a catamorphism in the context of programming was the paper “Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire”, by Erik Meijer et al., which was in the context of the Squiggol formalism. The general categorical definition was given by Grant Malcolm. Examples We give a series of examples, and then a more global approach to catamorphisms, in the Haskell programming language. Iteration Iteration-step prescriptions lead to natural numbers as initial object. Consider the functor fmaybe mapping a data type b to a data type fmaybe b, which contains a copy of each term from b as well as one additional term Nothing (in Haskell, this is what Maybe doe
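The article's examples are in Haskell; for readers following along in Python, the same shape can be sketched with cons-lists encoded as nested pairs. This is an illustrative encoding invented for this sketch, not a standard library: an algebra collapses one layer of the list functor, and `cata` applies it recursively from the inside out, so ordinary folds like `sum` and `length` fall out as choices of algebra.

```python
def cata(alg, xs):
    """Catamorphism for cons-lists encoded as None | (head, tail):
    fold the structure by replacing Nil with alg(None) and each
    Cons cell with alg((head, already-folded-tail))."""
    if xs is None:
        return alg(None)                      # Nil case
    head, tail = xs
    return alg((head, cata(alg, tail)))       # Cons case

# Two familiar folds expressed as algebras of the list functor:
sum_alg = lambda layer: 0 if layer is None else layer[0] + layer[1]
len_alg = lambda layer: 0 if layer is None else 1 + layer[1]

xs = (1, (2, (3, None)))   # the list [1, 2, 3]
```

`cata(sum_alg, xs)` evaluates to 6 and `cata(len_alg, xs)` to 3 — the same results `foldr` would give in Haskell, because `foldr` is exactly the list catamorphism.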