https://en.wikipedia.org/wiki/Photomedicine
Photomedicine is an interdisciplinary branch of medicine that involves the study and application of light with respect to health and disease. Photomedicine may be related to the practice of various fields of medicine including dermatology, surgery, interventional radiology, optical diagnostics, cardiology, circadian rhythm sleep disorders and oncology. A branch of photomedicine is light therapy, in which bright light strikes the retinae of the eyes; it is used to treat circadian rhythm disorders and seasonal affective disorder (SAD). The light can be sunlight or light from a light box emitting white or blue (blue/green) light. Examples Photomedicine is used as a treatment for many different conditions: PUVA for the treatment of psoriasis Photodynamic therapy (PDT) for treatment of cancer and macular degeneration - Nontoxic light-sensitive compounds are targeted to malignant or other diseased cells, then exposed selectively to light, whereupon they become toxic and destroy these cells (phototoxicity). One dermatological example of PDT is the targeting of malignant cells by bonding the light-sensitive compounds to antibodies against these cells; light exposure at particular wavelengths mediates release of free radicals or other photosensitizing agents, destroying the targeted cells. Treating circadian rhythm disorders Alopecia, pattern hair loss, etc. Free electron laser Laser hair removal IPL Photobiomodulation Optical diagnostics, for example optical coherence tomography of coronary plaques using infrared light Confocal microscopy and fluorescence microscopy of in vivo tissue Diffuse reflectance infrared Fourier transform for in vivo quantification of pigments (normal and cancerous), and hemoglobin Perpendicular-polarized flash photography and fluorescence photography of the skin See also Blood irradiation therapy Aesthetic medicine Laser hair removal Laser medicine Rox Anderson
https://en.wikipedia.org/wiki/Global%20Information%20Grid-Bandwidth%20Expansion
The Global Information Grid Bandwidth Expansion (GIG-BE) Program was a major United States Department of Defense (DOD) net-centric transformational initiative executed by DISA. Part of the Global Information Grid project, GIG-BE created a ubiquitous "bandwidth-available" environment to improve national security intelligence, surveillance and reconnaissance, information assurance, as well as command and control. Through GIG-BE, DISA leveraged DOD's existing end-to-end information transport capabilities, significantly expanding capacity and reliability to select Joint Staff-approved locations worldwide. GIG-BE achieved Full Operational Capability (FOC) on December 20, 2005. Scope This program provided increased bandwidth and diverse physical access to approximately 87 critical sites in the continental United States (CONUS), US Pacific Command (PACOM) and US European Command (EUCOM). These locations are interconnected via an expanded GIG core. Capabilities and services GIG-BE provides a secure, robust, optical terrestrial network that delivers very high-speed classified and unclassified Internet Protocol (IP) services to key operating locations worldwide. The Assistant Secretary of Defense for Networks and Information Integration's (ASD/NII) vision is a "color to every base," physically diverse network access, optical mesh upgrades for the backbone network, and regional/MAN upgrades, where needed. "A color to every base" implies that every site has an OC-192 (10 gigabits per second) of usable IP dedicated to that site. Implementation After extensive component integration and operational testing, implementation began in the middle of the 2004 fiscal year and extended through calendar year 2005. The initial implementation concentrated on six sites used during the proof of Initial Operational Capability (IOC), achieved on September 30, 2004. The GIG-BE Program Office conducted detailed site surveys at all of the approximately 87 Joint Staff-approved locations and
https://en.wikipedia.org/wiki/Pharyngula
The pharyngula is a stage in the embryonic development of vertebrates. At this stage, the embryos of all vertebrates are similar, having developed features typical of vertebrates, such as the beginning of a spinal cord. Named by William Ballard, the pharyngula stage follows the blastula, gastrula and neurula stages. Morphological similarity in vertebrate embryos At the pharyngula stage, all vertebrate embryos show remarkable similarities, i.e., it is a "phylotypic stage" of the sub-phylum, containing the following features: a notochord, a dorsal hollow nerve cord, a post-anal tail, and a series of paired branchial grooves. The branchial grooves are matched on the inside by a series of paired gill pouches. In fish, the pouches and grooves eventually meet and form the gill slits, which allow water to pass from the pharynx over the gills and out of the body. In the other vertebrates, the grooves and pouches disappear. In humans, the chief trace of their existence is the eustachian tube and auditory canal which (interrupted only by the eardrum) connect the pharynx with the outside of the head. The existence of a common pharyngula stage for vertebrates was first proposed by German biologist Ernst Haeckel (1834–1919) in 1874. The hourglass model The observation of the conservation of animal morphology during the embryonic phylotypic period, where there is maximal similarity between the species within each animal phylum, has led to the proposition that embryogenesis diverges more extensively in the early and late stages than in the middle stage; this is known as the hourglass model. Comparative genomic studies suggest that the phylotypic stage is the maximally conserved stage during embryogenesis. See also Evolutionary developmental biology Embryogenesis Embryo drawing Recapitulation theory
https://en.wikipedia.org/wiki/Cervical%20rib
A cervical rib in humans is an extra rib which arises from the seventh cervical vertebra. Its presence is a congenital abnormality, located above the normal first rib. A cervical rib is estimated to occur in 0.2% to 0.5% (1 in 200 to 500) of the population. People may have a cervical rib on the right, left or both sides. Most cases of cervical ribs are not clinically relevant and do not have symptoms; cervical ribs are generally discovered incidentally, most often during x-rays and CT scans. However, they vary widely in size and shape, and in rare cases, they may cause problems such as contributing to thoracic outlet syndrome, because of pressure on the nerves that may be caused by the presence of the rib. A cervical rib represents a persistent ossification of the C7 lateral costal element. During early development, this ossified costal element typically becomes re-absorbed. Failure of this process results in a variably elongated transverse process or complete rib that can be fused anteriorly with the first thoracic rib (T1) below. Diagnosis On imaging, cervical ribs can be distinguished because their transverse processes are directed inferolaterally, whereas those of the adjacent thoracic spine are directed anterolaterally. Associated conditions The presence of a cervical rib can cause a form of thoracic outlet syndrome due to compression of the lower trunk of the brachial plexus or subclavian artery. These structures become encroached upon by the cervical rib and scalene muscles. Compression of the brachial plexus may be identified by weakness of the muscles in the hand, near the base of the thumb. Compression of the subclavian artery is often diagnosed by finding a positive Adson's sign on examination, where the radial pulse in the arm is lost during abduction and external rotation of the shoulder. A positive Adson's sign is non-specific for the presence of a cervical rib however, as many individuals without a cervical rib will have a positive test. Compression of th
https://en.wikipedia.org/wiki/Thorson%27s%20rule
Thorson's rule (named after Gunnar Thorson by S. A. Mileikovsky in 1971) is an ecogeographical rule which states that benthic marine invertebrates at low latitudes tend to produce large numbers of eggs developing to pelagic (often planktotrophic [plankton-feeding]) and widely dispersing larvae, whereas at high latitudes such organisms tend to produce fewer and larger lecithotrophic (yolk-feeding) eggs and larger offspring, often by viviparity or ovoviviparity, which are often brooded. Groups involved The rule was originally established for marine bottom invertebrates, but it also applies to a group of parasitic flatworms, monogenean ectoparasites on the gills of marine fish. Most low-latitude species of Monogenea produce large numbers of ciliated larvae. However, at high latitudes, species of the entirely viviparous family Gyrodactylidae, which produce few nonciliated offspring and are very rare at low latitudes, represent the majority of gill Monogenea, i.e., about 80–90% of all species at high northern latitudes, and about one third of all species in Antarctic and sub-Antarctic waters, against less than 1% in tropical waters. Data compiled by A.V. Gusev in 1978 indicates that Gyrodactylidae may also be more common in cold than tropical freshwater systems, suggesting that Thorson's rule may apply to freshwater invertebrates. There are exceptions to the rule, such as ascoglossan snails: tropical ascoglossans have a higher incidence of lecithotrophy and direct development than temperate species. A study in 2001 indicated that two factors are important for Thorson's rule to be valid for marine gastropods: 1) the habitat must include rocky substrates, because soft-bottom habitats appear to favour non-pelagic development; and 2) a diverse assemblage of taxa need to be compared to avoid the problem of phyletic constraints, which could limit the evolution of different developmental modes. Application to deep-sea species The temperature gradient from warm surface wa
https://en.wikipedia.org/wiki/Levenshtein%20coding
Levenshtein coding is a universal code encoding the non-negative integers, developed by Vladimir Levenshtein.
Encoding
The code of zero is "0"; to code a positive number:
1. Initialize the step count variable C to 1.
2. Write the binary representation of the number without the leading "1" to the beginning of the code.
3. Let M be the number of bits written in step 2.
4. If M is not 0, increment C and repeat from step 2 with M as the new number.
5. Write C "1" bits and a "0" to the beginning of the code.
The code begins:
0 → "0"
1 → "10"
2 → "1100"
3 → "1101"
4 → "1110000"
5 → "1110001"
6 → "1110010"
7 → "1110011"
8 → "11101000"
To decode a Levenshtein-coded integer:
1. Count the number of "1" bits until a "0" is encountered.
2. If the count is zero, the value is zero; otherwise, start with a variable N set to 1 and repeat (count − 1) times: read N bits, prepend "1", and assign the resulting value to N.
The Levenshtein code of a positive integer is always one bit longer than the Elias omega code of that integer. However, there is a Levenshtein code for zero, whereas Elias omega coding would require the numbers to be shifted so that a zero is represented by the code for one instead.
Example code
Encoding

void levenshteinEncode(char* source, char* dest)
{
    IntReader intreader(source);
    BitWriter bitwriter(dest);
    while (intreader.hasLeft())
    {
        int num = intreader.getInt();
        if (num == 0)
            bitwriter.outputBit(0);
        else
        {
            int c = 0;
            BitStack bits;
            do
            {
                int m = 0;
                for (int temp = num; temp > 1; temp >>= 1)  // calculate floor(log2(num))
                    ++m;
                for (int i = 0; i < m; ++i)
                    bits.pushBit((num >> i) & 1);
                num = m;
                ++c;
            } while (num > 0);
            for (int i = 0; i < c; ++i)
                bitwriter.outputBit(1);
            bitwriter.outputBit(0);
            while (bits.length() > 0)
                bitwriter.outputBit(bits.popBit());
        }
    }
}

Decoding
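The decoding routine in the original listing is cut off. A minimal sketch consistent with the decoding steps above, reusing hypothetical helper classes (BitReader and IntWriter, mirroring the encoder's IntReader and BitWriter; none of these are standard library types), might look like this:

void levenshteinDecode(char* source, char* dest)
{
    BitReader bitreader(source);
    IntWriter intwriter(dest);
    while (bitreader.hasLeft())
    {
        int c = 0;                       // count "1" bits until a "0" is read
        while (bitreader.inputBit() == 1)
            ++c;
        int n = 0;                       // a count of zero decodes to zero
        if (c > 0)
        {
            n = 1;                       // start with N = 1...
            for (int i = 1; i < c; ++i)  // ...and repeat (count - 1) times
            {
                int val = 1;             // prepend the implicit leading "1"
                for (int j = 0; j < n; ++j)
                    val = (val << 1) | bitreader.inputBit();
                n = val;                 // assign the resulting value to N
            }
        }
        intwriter.putInt(n);
    }
}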
https://en.wikipedia.org/wiki/Animal%20product
An animal product is any material derived from the body of a non-human animal. Examples are fat, flesh, blood, milk, eggs, and lesser known products, such as isinglass and rennet. Animal by-products, as defined by the USDA, are products harvested or manufactured from livestock other than muscle meat. In the EU, animal by-products (ABPs) are defined somewhat more broadly, as materials from animals that people do not consume. Thus, chicken eggs for human consumption are considered by-products in the US but not in France; whereas eggs destined for animal feed are classified as animal by-products in both countries. This does not in itself reflect on the condition, safety, or wholesomeness of the product. Animal by-products are carcasses and parts of carcasses from slaughterhouses, animal shelters, zoos and veterinarians, and products of animal origin not intended for human consumption, including catering waste. These products may go through a process known as rendering to be made into human and non-human foodstuffs, fats, and other material that can be sold to make commercial products such as cosmetics, paint, cleaners, polishes, glue, soap and ink. The sale of animal by-products allows the meat industry to compete economically with industries selling sources of vegetable protein. The word "animals" includes all species in the biological kingdom Animalia, including, for example, tetrapods, arthropods, and mollusks. Generally, products made from fossilized or decomposed animals, such as petroleum formed from the ancient remains of marine animals, are not considered animal products. Crops grown in soil fertilized with animal remains are rarely characterized as animal products. Products sourced from humans (e.g., hair sold for wigs, donated blood) are not typically classified as animal products even though humans are part of the animal kingdom. Several popular diet patterns prohibit the inclusion of some categories of animal products and may also limit the conditions of wh
https://en.wikipedia.org/wiki/Fermi%20heap%20and%20Fermi%20hole
Fermi heap and Fermi hole refer to two closely related quantum phenomena that occur in many-electron atoms. They arise due to the Pauli exclusion principle, according to which no two electrons can be in the same quantum state in a system (which, accounting for electrons' spin, means that there can be up to two electrons in the same orbital). Due to indistinguishability of elementary particles, the probability of a measurement yielding a certain eigenvalue must be invariant when electrons are exchanged, which means that the probability amplitude must either remain the same or change sign. For instance, consider an excited state of the helium atom in which electron 1 is in the 1s orbital and electron 2 has been excited to the 2s orbital. It is not possible, even in principle, to distinguish electron 1 from electron 2. In other words, electron 2 might be in the 1s orbital with electron 1 in the 2s orbital. As they are fermions, electrons must be described by an anti-symmetric wavefunction which must change sign under electron exchange, resulting in either a Fermi hole (having a lower probability of being found close to each other) or a Fermi heap (having a higher probability of being found close to each other). Since electrons repel one another electrically, Fermi holes and Fermi heaps have drastic effects on the energy of many-electron atoms, although the effect can be illustrated in the case of the helium atom. Neglecting the spin–orbit interaction, the wavefunction for the two electrons can be written as Ψ(1,2) = φ(r1,r2) χ(s1,s2), where we have split the wavefunction into a spatial part φ and a spin part χ. As mentioned above, Ψ needs to be antisymmetric, and so the antisymmetry can arise either from the spin part or the spatial part. There are 4 possible spin states for this system: ↑↑, ↓↓, ↑↓ and ↓↑, where the first arrow denotes the spin of electron 1 and the second that of electron 2. However, only the first two are symmetric or anti-symmetric to electron exchange (which corresponds to exchanging 1 and 2). The last two need to be rewritten as: (↑↓ + ↓↑)/√2 and (↑↓ − ↓↑)/√2. The first three are symmetric, whereas the last one is anti-symmetric (the singlet state).
https://en.wikipedia.org/wiki/Saha%20Institute%20of%20Nuclear%20Physics
The Saha Institute of Nuclear Physics (SINP) is an institution of basic research and training in physical and biophysical sciences located in Bidhannagar, Kolkata, India. The institute is named after the famous Indian physicist Meghnad Saha. Previous Directors Gautam Bhattacharyya Ajit Mohanty Bikas Chakrabarti Milan K. Sanyal (2009–2014) Bikash Sinha Manoj K. Pal D. N. Kundu B. D. Nag Chowdhury Meghnad Saha See also Education in India List of colleges in West Bengal Education in West Bengal
https://en.wikipedia.org/wiki/Ligand%20isomerism
In coordination chemistry, ligand isomerism is a type of structural isomerism in coordination complexes which arises from the presence of ligands which can adopt different isomeric forms. For example, two complexes that are identical except that one contains 1,2-diaminopropane and the other contains its isomer 1,3-diaminopropane would be ligand isomers.
https://en.wikipedia.org/wiki/Test%20management
Test management most commonly refers to the activity of managing a testing process. A test management tool is software used to manage tests (automated or manual) that have been previously specified by a test procedure. It is often associated with automation software. Test management tools often include requirement and/or specification management modules that allow automatic generation of the requirement test matrix (RTM), which is one of the main metrics to indicate functional coverage of a system under test (SUT). Creating test definitions in a database Test definition includes: test plan, association with product requirements and specifications. Optionally, relationships can be set between tests so that precedence can be established, as illustrated in the sketch after this entry. For example, if test A is a parent of test B and test A is failing, then it may be useless to perform test B. Tests should also be associated with priorities. Every change to a test must be versioned so that the QA team has a comprehensive view of the history of the test. Preparing test campaigns This includes building bundles of test cases and executing them (or scheduling their execution). Execution can be either manual or automatic. Manual execution The user will have to perform all the test steps manually and inform the system of the result. Some test management tools include a framework to interface the user with the test plan to facilitate this task. There are several ways to run tests. The simplest way to run a test is to run a test case. The test case can be associated with other test artifacts such as test plans, test scripts, test environments, test case execution records, and test suites. Automatic execution There are numerous ways of implementing automated tests. Automatic execution requires the test management tool to be compatible with the tests themselves. To do so, test management tools may propose proprietary automation frameworks or APIs to interface with third-party or proprietary automated tests. Gener
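The parent/child precedence rule described above can be made concrete with a small sketch. This is a toy illustration in C++, not the behaviour of any particular test management tool; TestDef, runCampaign and the output format are all invented for the example, and tests are assumed to be listed parents-first:

#include <cstdio>
#include <map>
#include <string>
#include <vector>

struct TestDef {
    std::string name;
    std::string parent;   // empty if the test has no parent test
    bool (*run)();        // the test body, returning pass/fail
};

// Run a campaign, skipping any test whose parent test failed.
void runCampaign(const std::vector<TestDef>& tests)
{
    std::map<std::string, bool> passed;
    for (const TestDef& t : tests) {
        if (!t.parent.empty() && !passed[t.parent]) {
            std::printf("SKIP %s (parent %s failed)\n",
                        t.name.c_str(), t.parent.c_str());
            passed[t.name] = false;   // children of a skipped test are skipped too
            continue;
        }
        bool ok = t.run();
        passed[t.name] = ok;
        std::printf("%s %s\n", ok ? "PASS" : "FAIL", t.name.c_str());
    }
}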
https://en.wikipedia.org/wiki/Gastric%20glands
The gastric glands are glands in the lining of the stomach that play an essential role in the process of digestion. All of the glands have mucus-secreting foveolar cells. Mucus lines the entire stomach, and protects the stomach lining from the effects of hydrochloric acid released from other cells in the glands. There are two types of gland in the stomach, the oxyntic gland, and the pyloric gland. The major type of gastric gland is the oxyntic gland that is present in 80 per cent of the stomach, and is often referred to simply as the gastric gland. The oxyntic gland is an exocrine gland and contains the parietal cells that produce hydrochloric acid, and intrinsic factor. Intrinsic factor is necessary for the absorption of vitamin B12. The other type of gland in the stomach is the pyloric gland found in the pyloric region taking up the remaining 20 per cent of the stomach area. The pyloric gland secretes gastrin from its G cells. Pyloric glands are similar in structure to the oxyntic glands but are endocrine glands with hardly any parietal cells. Types of gland Gastric glands are mostly exocrine glands and are all located beneath the gastric pits within the gastric mucosa—the mucous membrane of the stomach. The gastric mucosa is pitted with innumerable gastric pits which each house 3–5 gastric glands. The cells of the exocrine glands are foveolar (mucus), chief cells, and parietal cells. The other type of gastric gland is the pyloric gland which is an endocrine gland that secretes the hormone gastrin produced by its G cells. The cardiac glands are found in the cardia of the stomach which is the part nearest to the heart, enclosing the opening where the
https://en.wikipedia.org/wiki/International%20Federation%20of%20Associations%20of%20Anatomists
The International Federation of Associations of Anatomists (IFAA) is an umbrella scientific organization of national and multinational Anatomy Associations, dedicated to anatomy and biomorphological sciences. Origins and objectives In 1903, Prof. Nicolas, from Nancy, France, was successful in founding the International Federation of Associations of Anatomists. The first International Congress of Anatomy was held in Geneva in 1905, and the federation started as a committee to organize congresses every five years. There are 56 member Associations of the International Federation of Associations of Anatomists. The IFAA is the only international body representing all aspects of anatomy and anatomical associations. Since 1989 the Federative International Committee on Anatomical Terminology (FICAT), under IFAA auspices, has met to analyze and study the international morphological terminology (Anatomy, Histology and Embryology), releasing the updated Terminologia Anatomica in 1998 and Terminologia Histologica in 2008. Years, cities and presidencies I Congress - 1905 - Geneva, Switzerland - Prof. D' Eternod II Congress - 1910 - Brussels, Belgium - Prof. Waldeyer III Congress - 1930 - Amsterdam, the Netherlands - Prof. Van den Broek IV Congress - 1936 - Milan, Italy - Prof. Livini V Congress - 1950 - Oxford, England, The UK - Prof. Le Gros Clark VI Congress - 1955 - Paris, France - Prof. Collin VII Congress - 1960 - New York, USA - Prof. Bennett VIII Congress - 1965 - Wiesbaden, Germany - Prof. Bargmann IX Congress - 1970 - Leningrad, Russia - Prof. Jdanov X Congress - 1975 - Tokyo, Japan - Prof. Nakayama XI Congress - 1980 - Mexico City, Mexico - Prof. Acosta Vidrio XII Congress - 1985 - London, The UK - Prof. Harrison XIII Congress - 1989 - Rio de Janeiro, Brazil - Prof. Moscovici XIV Congress - 1994 - Lisbon, Portugal - Prof. Esperança Pina XV Congress - 1999 - Rome, Italy XVI Congress - 2004 - Kyoto, Japan XVII Congress - 2009 - Cape Town, South Africa XVIII Congress - 2014 - Beijing
https://en.wikipedia.org/wiki/Sphenoidal%20lingula
Along the posterior part of the lateral margin of the carotid groove of the sphenoid bone, in the angle between the body and great wing, is a ridge of bone, called the lingula.
https://en.wikipedia.org/wiki/Mastoid%20foramen
The mastoid foramen is a hole in the posterior border of the temporal bone. It transmits an emissary vein between the sigmoid sinus and the suboccipital venous plexus, and a small branch of the occipital artery, the posterior meningeal artery, to the dura mater. Structure The mastoid foramen is a hole in the posterior border of the temporal bone of the skull. The opening of the mastoid foramen is an average of 18 mm from the asterion, and around 34 mm from the external auditory meatus. It is typically very narrow, around 2 mm. Variation The position and size of this foramen are very variable. It is not always present. Sometimes, it is duplicated on one side or both sides. Sometimes, it is situated in the occipital bone, or in the suture between the temporal bone and the occipital bone. Function The mastoid foramen transmits: an emissary vein between the sigmoid sinus and the suboccipital venous plexus or the posterior auricular vein. a small branch of the occipital artery, the posterior meningeal artery, to the dura mater.
https://en.wikipedia.org/wiki/Dorsum%20sellae
The dorsum sellae is part of the sphenoid bone in the skull. Together with the basilar part of the occipital bone it forms the clivus. In the sphenoid bone, the anterior boundary of the sella turcica is completed by two small eminences, one on either side, called the middle clinoid processes, while the posterior boundary is formed by a square-shaped plate of bone, the dorsum sellae, ending at its superior angles in two tubercles, the posterior clinoid processes, the size and form of which vary considerably in different individuals.
https://en.wikipedia.org/wiki/Joint%20precision%20approach%20and%20landing%20system
The joint precision approach and landing system (JPALS) is a shipboard (CVN and LH type), all-weather landing system based on real-time differential correction of the Global Positioning System (GPS) signal, augmented with a local area correction message and transmitted to the user via secure means. The onboard receiver compares the current GPS-derived position with the local correction signal, deriving a highly accurate three-dimensional position capable of being used for all-weather approaches via an Instrument Landing System-style display. JPALS is similar to the Local Area Augmentation System but is intended primarily for use by the military; some elements of JPALS may eventually make their way into civilian use to help protect high-value civilian operations against unauthorized signal alteration. History The development of JPALS was the result of two main military requirements. First, the military needs an all-service, highly mobile all-weather precision approach system, tailorable to a wide range of environments, from shipboard use to rapid installation at makeshift airfields. Second, it needs a robust system that can maintain a high level of reliability in combat operations, particularly in its ability to effectively resist jamming. Operation JPALS encompasses two main categories: SRGPS (shipboard relative GPS) and LDGPS (land/local differential GPS). SRGPS provides highly accurate approach positioning for operations aboard ship, including aircraft carriers, helo and STO/VL carriers, and other shipboard operations, primarily helicopter operations. LDGPS is further divided into three sub-categories: fixed base, tactical, and special missions. Fixed base is used for ongoing operations at military airfields around the world, while the tactical system is portable, designed for relatively short-term, austere airfield operations. The special missions system is a highly portable system capable of rapid installation and use by special forces. Accuracy The
https://en.wikipedia.org/wiki/Dnsmasq
dnsmasq is free software providing Domain Name System (DNS) caching, a Dynamic Host Configuration Protocol (DHCP) server, router advertisement and network boot features, intended for small computer networks. dnsmasq has low requirements for system resources, can run on Linux, BSDs, Android and macOS, and is included in most Linux distributions. Consequently, it "is present in a lot of home routers and certain Internet of Things gadgets" and is included in Android. Details dnsmasq is a lightweight, easy to configure DNS forwarder, designed to provide DNS (and optionally DHCP and TFTP) services to a small-scale network. It can serve the names of local machines which are not in the global DNS. dnsmasq's DHCP server supports static and dynamic DHCP leases, multiple networks and IP address ranges. The DHCP server integrates with the DNS server and allows local machines with DHCP-allocated addresses to appear in the DNS. dnsmasq caches DNS records, reducing the load on upstream nameservers and improving performance, and can be configured to automatically pick up the addresses of its upstream servers. dnsmasq accepts DNS queries and either answers them from a small, local cache or forwards them to a real, recursive DNS server. It loads the contents of /etc/hosts, so that local host names which do not appear in the global DNS can be resolved. This also means that records added to your local /etc/hosts file with the format "0.0.0.0 annoyingsite.com" can be used to prevent references to "annoyingsite.com" from being resolved by your browser. Combined with ad-blocking site list providers, this can quickly evolve into a local ad blocker. If done on a router, one can efficiently remove advertising content for an entire household or company. dnsmasq supports modern Internet standards such as IPv6 and DNSSEC, network booting with support for BOOTP, PXE and TFTP, and also Lua scripting. Some Internet service providers rewrite the NXDOMAIN (domain does not exist) responses
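A minimal dnsmasq.conf sketch tying the features described above together (all addresses, domain names and ranges are illustrative values, not recommendations):

# Forward queries we cannot answer locally to an upstream resolver.
server=1.1.1.1
# Never forward unqualified names or lookups for private address ranges.
domain-needed
bogus-priv
# Answer names in this domain from /etc/hosts and DHCP leases only.
local=/lan/
domain=lan
expand-hosts
# Hand out dynamic DHCP leases on the local network, 12-hour lease time.
dhcp-range=192.168.1.50,192.168.1.150,12h
# Ad blocking in dnsmasq itself: answer a whole domain with 0.0.0.0.
address=/annoyingsite.com/0.0.0.0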
https://en.wikipedia.org/wiki/Petro-occipital%20fissure
The basilar part of the occipital bone is separated on either side from the petrous portion of the temporal bone by the petro-occipital fissure, which is occupied in the fresh state by a plate of cartilage; the fissure is continuous behind with the jugular foramen, and its margins are grooved for the inferior petrosal sinus.
https://en.wikipedia.org/wiki/Maris%20Otter
Maris Otter is a two-row, autumn sown variety of barley commonly used in the production of malt for the brewing industry. The variety was bred by Dr G D H Bell and his team of plant breeders at the UK's Plant Breeding Institute; the "Maris" part of the name comes from Maris Lane near the institute's home in Trumpington. It was introduced in 1966 and quickly became a dominant variety in the 1970s due to its low nitrogen and superior malting characteristics. By the late 1980s the variety had become unpopular with large breweries and it was removed from the UK National List in 1989. Maris Otter is a cross of the Proctor and Pioneer varieties. In the 1980s Maris Otter usage began to decline for a number of reasons, including compromised genetic purity caused by cross-pollination and increasing competition from improved varieties. In 1992, the consortium of grain merchants H Banham Ltd & Robin Appel Ltd bought the sole right to market Maris Otter and in 2002 they bought all rights outright (including the registered trademark). They remain the sole owners and intend to improve the variety on an ongoing basis. In mainstream brewing the variety has largely been supplanted by newer varieties with better agronomics, but it is still in high demand for premium products. It is one of the few barley malts marketed today by variety. It is very popular both in homebrewing circles and among traditional real ale breweries, many of whom note their exclusive use of Maris Otter in their promotional literature. It carries a price premium over most other comparable varieties.
https://en.wikipedia.org/wiki/Barrier%20%28computer%20science%29
In parallel computing, a barrier is a type of synchronization method. A barrier for a group of threads or processes in the source code means any thread/process must stop at this point and cannot proceed until all other threads/processes reach this barrier. Many collective routines and directive-based parallel languages impose implicit barriers. For example, a parallel do loop in Fortran with OpenMP will not be allowed to continue on any thread until the last iteration is completed. This is in case the program relies on the result of the loop immediately after its completion. In message passing, any global communication (such as reduction or scatter) may imply a barrier. In concurrent computing, a barrier may be in a raised or lowered state. The term latch is sometimes used to refer to a barrier that starts in the raised state and cannot be re-raised once it is in the lowered state. The term count-down latch is sometimes used to refer to a latch that is automatically lowered once a pre-determined number of threads/processes have arrived. Implementation The basic barrier has mainly two variables, one of which records the pass/stop state of the barrier, the other of which keeps the total number of threads that have entered the barrier. The barrier state is initialized to "stop" by the first threads coming into the barrier. Whenever a thread enters, it checks the number of threads already in the barrier: if it is the last one, the thread sets the barrier state to "pass" so that all the threads can get out of the barrier. Otherwise, the incoming thread is trapped in the barrier and keeps testing whether the barrier state has changed from "stop" to "pass", and it gets out only when the barrier state changes to "pass". The following C++ code demonstrates this procedure.

struct barrier_type
{
    // how many processors have entered the barrier
    // initialize to 0
    int arrive_counter;
    // how many processors have exited the barrier
    // initialize to p
    int leave_counter;
    // the pass/stop state of the barrier
    int flag;
    std::mutex lock;
};
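The listing above is cut off in the source. A minimal, reusable sketch of the same pass/stop scheme, written with std::condition_variable so that waiting threads block instead of spinning (the class and member names here are illustrative, not from a standard library barrier):

#include <condition_variable>
#include <mutex>

class Barrier {
    std::mutex m;
    std::condition_variable cv;
    const int total;     // number of threads that must arrive
    int arrived = 0;     // threads currently waiting in the barrier
    int generation = 0;  // bumped each time the barrier switches to "pass"
public:
    explicit Barrier(int n) : total(n) {}
    void wait() {
        std::unique_lock<std::mutex> lk(m);
        int gen = generation;
        if (++arrived == total) {   // last thread to arrive
            arrived = 0;
            ++generation;           // state changes from "stop" to "pass"
            cv.notify_all();
        } else {
            // trapped until the state changes for this generation;
            // the generation counter makes the barrier safely reusable
            cv.wait(lk, [&] { return gen != generation; });
        }
    }
};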
https://en.wikipedia.org/wiki/Military%20Cryptanalytics
Military Cryptanalytics (or MILCRYP as it is sometimes known) is a revision by Lambros D. Callimahos of the series of books written by William F. Friedman under the title Military Cryptanalysis. It may also contain contributions by other cryptanalysts. It was a training manual for National Security Agency and military cryptanalysts. It was published for government use between 1957 and 1977, though parts I and II were written in 1956 and 1959. Callimahos on the work From the Introduction in Part I, Volume I, by Callimahos: "This text represents an extensive expansion and revision, both in scope and content, of the earlier work entitled 'Military Cryptanalysis, Part I' by William F. Friedman. This expansion and revision was necessitated by the considerable advancement made in the art since the publication of the previous text." Callimahos referred to parts III–VI at the end of the first volume: "...Part III will deal with varieties of aperiodic substitution systems, elementary cipher devices and cryptomechanisms, and will embrace a detailed treatment of cryptomathematics and diagnostic tests in cryptanalysis; Part IV will treat transposition and fractioning systems, and combined substitution-transposition systems; Part V will treat the reconstruction of codes, and the solution of enciphered code systems, and Part VI will treat the solution of representative machine cipher systems." However, parts IV–VI were never completed. Declassification Both Military Cryptanalytics and Military Cryptanalysis have been subjects of Mandatory Declassification Review (MDR) requests, including one by John Gilmore in 1992-1993 and two by Charles Varga in 2004 and 2016. All four parts of Military Cryptanalysis and the first two parts of the Military Cryptanalytics series have been declassified. The third part of Military Cryptanalytics was declassified in part in December 2020 and published by GovernmentAttic.org in 2021. In 1984 NSA released copies of Military Cryptanalytics parts I
https://en.wikipedia.org/wiki/Bobby%20Car
A Bobby Car is a toy car designed for children from the age of around twelve months. The Classic model is red, made of plastic and is about 60 cm long and 40 cm high. It has four wheels. The car has been produced by the BIG company since 1972 at sites in Fürth and Burghaslach in Germany. After the death of the Bobby Car inventor Ernst A. Bettag in 2003, the Simba Dickie Group took over the company. The name Bobby Car is protected. Since 2005, a new Bobby Car has been produced for which the Classic design was revised. Bobby car for children The Bobby Car was invented in order to help children learn to walk. It has a kind of seat on which the child can sit as on a motorcycle. By pushing with their legs, the child can move the car. Today, numerous accessories exist such as connecting rods, low-friction tires, trailers and so on. As well as being manufactured in different colours, it also comes in variants such as a police car or tow truck. Special editions have been made to honour well-known German cars, such as the Mercedes-Benz SLK, Audi TT, Smart and Volkswagen Beetle. In cooperation with tire manufacturer Fulda, a model having real tires, rather than the usual hard plastic variety, was produced. Bobby Car Racing In the 1990s another use of Bobby Cars emerged: professional competitive driving. The plastic body is strong enough to carry an adult. The steering element and the axles are strengthened to handle high speeds (approximately 60 km/h). Competitions are held on closed roads with steep downward gradients. The official world speed record for a modified gravity-powered Bobby Car was set on 28 May 2022 by Marcel Paul from the Bobby Car Club Altenhain / Bad Soden, who achieved a speed of 130.72 km/h. The record in the second discipline, with a classic Bobby Car on plastic tires, 106.01 km/h, was also set by Marcel Paul. The Record Institute for Germany has recognized both top values. On August 10, 2023, Marcel Paul once again attempted a world record. This tim
https://en.wikipedia.org/wiki/Universe%20%28Unix%29
In some versions of the Unix operating system, the term universe was used to denote some variant of the working environment. During the late 1980s, most commercial Unix variants were derived from either System V or BSD. Most versions provided both BSD and System V universes and allowed the user to switch between them. Each universe, typically implemented by separate directory trees or separate filesystems, usually included different versions of commands, libraries, man pages, and header files. While such a facility offered the ability to develop applications portable across both System V and BSD variants, the requirements in disk space and maintenance (separate configuration files, twice the work in patching systems) gave them a problematic reputation. Systems that offered this facility included Harris/Concurrent's CX/UX, Convex's Convex/OS, Apollo's Domain/OS (version 10 only), Pyramid's DC/OSx (dropped in SVR4-based version 2), Concurrent's Masscomp/RTU, MIPS Computer Systems' RISC/os, Sequent's DYNIX/ptx and Siemens' SINIX. Some versions of System V Release 4 retain a system similar to the dual universe concept, with BSD commands (which behave differently from classic System V commands) in /usr/ucb, BSD header files in /usr/ucbinclude and library files in /usr/ucblib. /usr/ucb can also be found in NeXTSTEP and OPENSTEP, as well as Solaris. External links Sven Mascheck, DYNIX 3.2.0 and SINIX V5.20 Universes
https://en.wikipedia.org/wiki/Condylar%20canal
The condylar canal (or condyloid canal) is a canal in the condyloid fossa of the lateral parts of the occipital bone behind the occipital condyle. Resection of the rectus capitis posterior major and minor muscles reveals the bony recess leading to the condylar canal, which is situated posterior and lateral to the occipital condyle. It is immediately superior to the extradural vertebral artery, which makes a loop above the posterior C1 ring to enter the foramen magnum. The anteromedial wall of the condylar canal thickens to join the foramen magnum rim and connect to the occipital condyle. Through the condylar canal, the occipital emissary vein connects to the venous system, including the suboccipital venous plexus, occipital sinus and sigmoid sinus. It is not always present, and can vary from a single canal to a cluster of multiple smaller canals.
https://en.wikipedia.org/wiki/Dundee%20Society
The Dundee Society was a society of graduates of CA-400, a National Security Agency course in cryptology devised by Lambros D. Callimahos, which included the Zendian Problem (a practical exercise in traffic analysis and cryptanalysis). The class was held once a year, and new members were inducted into the Society upon completion of the class. The Society was founded in the mid-1950s and continued on after Callimahos' retirement from NSA in 1976. The last CA-400 class was held at NSA in 1979, formally closing the society's membership rolls. The society took its name from an empty jar of Dundee Marmalade that Callimahos kept on his desk for use as a pencil caddy. Callimahos came up with the society's name while trying to schedule a luncheon for former CA-400 students at the Ft. Meade Officers' Club; being unable to use either the course name or the underlying government agency's name for security reasons, he spotted the ceramic Dundee jar and decided to use "The Dundee Society" as the cover name for the luncheon reservation. CA-400 students were presented with ceramic Dundee Marmalade jars at the close of the course as part of the induction ceremony into the Dundee Society. When Dundee switched from ceramic to glass jars, Callimahos would still present graduates with ceramic Dundee jars, but the jars were then collected back up for use in next year's induction ceremony, and members were "encouraged" to seek out Dundee jars for their own collections if they wished to have a permanent token of induction. See also American Cryptogram Association National Cryptologic School
https://en.wikipedia.org/wiki/Zendian%20problem
The Zendian problem was an exercise in communication intelligence operations (mainly traffic analysis and cryptanalysis) devised by Lambros D. Callimahos as part of an advanced course, CA-400, that Callimahos taught to National Security Agency cryptanalysts starting in the 1950s. Content The scenario involves 375 radio messages said to have been intercepted on December 23 by the US Army contingent of a United Nations force landed on the fictional island of Zendia in the Pacific Ocean. A typical intercept looks like this:

XYR DE OWN 4235KCS 230620T USM-99/00091

9516 8123 0605 7932 8423 5095 8444 6831

JAAAJ EUEBD OETDN GXAWR SUTEU EIWEN YUENN ODEUH RROMM EELGE
AEGID TESRR RASEB ENORS RNOMM EAYTU NEONT ESFRS NTCRO QCEET
OCORE IITLP OHSRG SSELY TCCSV SOTIU GNTIV EVOMN TMPAA CIRCS
ENREN OTSOI ENREI EKEIO PFRNT CDOGE NYFPE TESNI EACEA ISTEM
SOFEA TROSE EQOAO OSCER HTTAA LUOUY LSAIE TSERR ESEPA PHVDN
HNNTI IARTX LASLD URATT OPPLO AITMW OTIAS TNHIR DCOUT NMFCA
SREEE USSDS DHOAH REEXI PROUT NTTHD JAAAJ EUEBD

For each message, the first line is provided by the intercept operator, giving call signs, frequency, time, and reference number. The rest of the message is a transcript of the Morse code transmission. At the beginning of the intercepted message there is a header which consists of 8 four-digit groups. Initially, the meaning of the numeric header is not known; the meanings of various components of this header (such as a serial number assigned by the transmitting organization's message center) can be worked out through traffic analysis. The rest of the message consists of "indicators" and ciphertext; the first group is evidently a "discriminant" indicating the cryptosystem used, and (depending on the cryptosystem) some or all of the second group may contain a message-specific keying element such as initial rotor settings. The first two groups are repeated at the end of the message, which allows correction of garbled indicators. Th
https://en.wikipedia.org/wiki/MTR%20%28software%29
My traceroute, originally named Matt's traceroute (MTR), is a computer program that combines the functions of the traceroute and ping programs in one network diagnostic tool. MTR probes routers on the route path by limiting the number of hops individual packets may traverse, and listening for responses indicating their expiry. It will regularly repeat this process, usually once per second, and keep track of the response times of the hops along the path. History The original Matt's traceroute program was written by Matt Kimball in 1997. Roger Wolff took over maintaining MTR (renamed My traceroute) in October 1998. Fundamentals MTR is licensed under the terms of the GNU General Public License (GPL) and works under modern Unix-like operating systems. It normally works under the text console, but it also has an optional GTK+-based graphical user interface (GUI). MTR relies on Internet Control Message Protocol (ICMP) Time Exceeded (type 11, code 0) packets coming back from routers, or ICMP Echo Reply packets when the packets have hit their destination host. MTR also has a User Datagram Protocol (UDP) mode (invoked with "-u" on the command line or by pressing the "u" key in the curses interface) that sends UDP packets, with the time to live (TTL) field in the IP header increasing by one for each probe sent, toward the destination host. When the UDP mode is used, MTR relies on ICMP port unreachable packets (type 3, code 3) when the destination is reached. MTR also supports IPv6 and works in a similar manner but instead relies on ICMPv6 messages. The tool is often used for network troubleshooting. By showing a list of routers traversed, and the average round-trip time as well as packet loss to each router, it allows users to identify links between two given routers responsible for certain fractions of the overall latency or packet loss through the network. This can help identify network overuse problems. Examples This example shows MTR running on Linux tracing a route from the
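Typical invocations, using the "-u" flag mentioned above plus report-mode flags from the MTR manual (example.com stands in for any destination host):

# Default interactive (curses) mode: continuously updated per-hop statistics.
mtr example.com

# UDP mode, as described above.
mtr -u example.com

# Non-interactive report: send 10 probes per hop, print a summary, and exit.
mtr --report --report-cycles 10 example.com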
https://en.wikipedia.org/wiki/STED%20microscopy
Stimulated emission depletion (STED) microscopy is one of the techniques that make up super-resolution microscopy. It creates super-resolution images by the selective deactivation of fluorophores, minimizing the area of illumination at the focal point, and thus enhancing the achievable resolution for a given system. It was developed by Stefan W. Hell and Jan Wichmann in 1994, and was first experimentally demonstrated by Hell and Thomas Klar in 1999. Hell was awarded the Nobel Prize in Chemistry in 2014 for its development. In 1986, V.A. Okhonin (Institute of Biophysics, USSR Academy of Sciences, Siberian Branch, Krasnoyarsk) had patented the STED idea. This patent was unknown to Hell and Wichmann in 1994. STED microscopy is one of several types of super-resolution microscopy techniques that have recently been developed to bypass the diffraction limit of light microscopy and increase resolution. STED is a deterministic functional technique that exploits the non-linear response of fluorophores commonly used to label biological samples in order to achieve an improvement in resolution; that is to say, STED allows for images to be taken at resolutions below the diffraction limit. This differs from stochastic functional techniques such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM), as these methods use mathematical models to reconstruct a sub-diffraction-limit image from many sets of diffraction-limited images. Background In traditional microscopy, the resolution that can be obtained is limited by the diffraction of light. Ernst Abbe developed an equation to describe this limit. The equation is D = λ / (2 NA), where D is the diffraction limit, λ is the wavelength of the light, and NA is the numerical aperture, or the refractive index of the medium multiplied by the sine of the angle of incidence. n describes the refractive index of the specimen, α measures the solid half-angle from which light is gathered by an objective, λ is
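To give a sense of scale (the numbers here are chosen for illustration and are not from the article): for green light with λ = 550 nm and a high-numerical-aperture oil-immersion objective with NA = 1.4, the Abbe limit is D = 550 nm / (2 × 1.4) ≈ 196 nm. Conventional fluorescence microscopy therefore cannot resolve structures much smaller than roughly 200 nm, which is the barrier that STED circumvents.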
https://en.wikipedia.org/wiki/189%20%28number%29
189 (one hundred [and] eighty-nine) is the natural number following 188 and preceding 190. In mathematics 189 is a centered cube number and a heptagonal number. The centered cube numbers are the sums of two consecutive cubes, and 189 can be written as the sum of two cubes in two ways: 189 = 4^3 + 5^3 = 64 + 125, and 189 = 6^3 + (−3)^3 = 216 − 27. The smallest number that can be written as the sum of two positive cubes in two ways is 1729. There are 189 zeros among the decimal digits of the positive integers with at most three digits. The largest prime number that can be represented in 256-bit arithmetic, 2^256 − 189, is the "ultra-useful prime" used in quasi-Monte Carlo methods and in some cryptographic systems. See also The year AD 189 or 189 BC List of highways numbered 189
https://en.wikipedia.org/wiki/Vice-county%20Census%20Catalogue%20of%20the%20Vascular%20Plants%20of%20Great%20Britain
The Vice-county Census Catalogue of the Vascular Plants of Great Britain is an A5 softback book produced in 2003 by the Botanical Society of the British Isles. It attempts to present a complete picture of the vice-county distribution of vascular plant species in Great Britain, the Isle of Man, and the Channel Islands. Its compilers were C. A. Stace, R. G. Ellis, D. H. Kent and D. J. McCosh. Contents of the catalogue An introduction explains the purpose of the book, the history of vice-county catalogues in Britain and Ireland, the development of the 2003 work, and the rationale for recording using vice-counties and the merits of this compared with grid-square recording. The bulk of the book (380 pages) consists of the census catalogue itself. This is presented as a list of taxa, in systematic order, with, for each taxon, a list of the vice-counties in which it has been recorded. Vice-county numbers rather than names are used, in order to make efficient use of space. For each vice-county in which a taxon has been recorded, the status (native, archaeophyte, neophyte or casual) in that vice-county is indicated (through the use of different typefaces). A distinction is made between two time-periods: (i) taxa recorded since 1970 and still believed to be extant in the vice-county and (ii) taxa which have not been recorded since 1970 or which have, but which are known to be extinct in that vice-county. The catalogue contains separate entries for every species (including microspecies for all apomictic groups), as well as separate entries for all subspecies and interspecific hybrids. In total 4880 taxa are listed. Previous catalogue; criticism The most recent publication dealing with this subject prior to the 2003 work was the Comital Flora of the British Isles by G. C. Druce, which was published in 1932. The comprehensiveness of the work has been questioned by Hannah (2005), citing the Clyde Isles (vice-county 100), where he was able to trace 166 species for which the
https://en.wikipedia.org/wiki/Hyperplane%20separation%20theorem
In geometry, the hyperplane separation theorem is a theorem about disjoint convex sets in n-dimensional Euclidean space. There are several rather similar versions. In one version of the theorem, if both these sets are closed and at least one of them is compact, then there is a hyperplane in between them and even two parallel hyperplanes in between them separated by a gap. In another version, if both disjoint convex sets are open, then there is a hyperplane in between them, but not necessarily any gap. An axis which is orthogonal to a separating hyperplane is a separating axis, because the orthogonal projections of the convex bodies onto the axis are disjoint; a code sketch of this test for convex polygons follows this entry. The hyperplane separation theorem is due to Hermann Minkowski. The Hahn–Banach separation theorem generalizes the result to topological vector spaces. A related result is the supporting hyperplane theorem. In the context of support-vector machines, the optimally separating hyperplane or maximum-margin hyperplane is a hyperplane which separates two convex hulls of points and is equidistant from the two. Statements and proof In all cases, assume A and B to be disjoint, nonempty, and convex subsets of R^n. The summary of the results is as follows: The number of dimensions must be finite. In infinite-dimensional spaces there are examples of two closed, convex, disjoint sets which cannot be separated by a closed hyperplane (a hyperplane where a continuous linear functional equals some constant) even in the weak sense where the inequalities are not strict. Here, the compactness in the hypothesis cannot be relaxed; see an example in the section Counterexamples and uniqueness. This version of the separation theorem does generalize to infinite dimensions; the generalization is more commonly known as the Hahn–Banach separation theorem. The proof is based on the following lemma: Since a separating hyperplane cannot intersect the interiors of open convex sets, we have a corollary: Case with possible intersections If the
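The separating-axis observation above is the basis of a standard collision test for convex polygons in 2D: the polygons are disjoint exactly when some edge normal of one of them is a separating axis. A minimal self-contained sketch in C++ (the names Vec2, project and convexPolygonsDisjoint are invented for this example):

#include <cfloat>
#include <cstddef>
#include <vector>

struct Vec2 { double x, y; };

static double dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

// Project a polygon onto an axis and return its [lo, hi] interval.
static void project(const std::vector<Vec2>& poly, Vec2 axis,
                    double& lo, double& hi)
{
    lo = DBL_MAX; hi = -DBL_MAX;
    for (Vec2 v : poly) {
        double p = dot(v, axis);
        if (p < lo) lo = p;
        if (p > hi) hi = p;
    }
}

// True if some edge normal of `a` is a separating axis for `a` and `b`.
static bool hasSeparatingEdgeNormal(const std::vector<Vec2>& a,
                                    const std::vector<Vec2>& b)
{
    for (std::size_t i = 0; i < a.size(); ++i) {
        Vec2 e = { a[(i + 1) % a.size()].x - a[i].x,
                   a[(i + 1) % a.size()].y - a[i].y };
        Vec2 normal = { -e.y, e.x };       // perpendicular to the edge
        double lo1, hi1, lo2, hi2;
        project(a, normal, lo1, hi1);
        project(b, normal, lo2, hi2);
        if (hi1 < lo2 || hi2 < lo1)        // projections are disjoint
            return true;
    }
    return false;
}

bool convexPolygonsDisjoint(const std::vector<Vec2>& a,
                            const std::vector<Vec2>& b)
{
    return hasSeparatingEdgeNormal(a, b) || hasSeparatingEdgeNormal(b, a);
}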
https://en.wikipedia.org/wiki/Mixed%20layer
The oceanic or limnological mixed layer is a layer in which active turbulence has homogenized some range of depths. The surface mixed layer is a layer where this turbulence is generated by winds, surface heat fluxes, or processes such as evaporation or sea ice formation which result in an increase in salinity. The atmospheric mixed layer is a zone having nearly constant potential temperature and specific humidity with height. The depth of the atmospheric mixed layer is known as the mixing height. Turbulence typically plays a role in the formation of fluid mixed layers. Oceanic mixed layer Importance of the mixed layer The mixed layer plays an important role in the physical climate. Because the specific heat of ocean water is much larger than that of air, the top 2.5 m of the ocean holds as much heat as the entire atmosphere above it. Thus the heat required to change a mixed layer of 2.5 m by 1 °C would be sufficient to raise the temperature of the atmosphere by 1 °C. The depth of the mixed layer is thus very important for determining the temperature range in oceanic and coastal regions. In addition, the heat stored within the oceanic mixed layer provides a source for heat that drives global variability such as El Niño. The mixed layer is also important as its depth determines the average level of light seen by marine organisms. In very deep mixed layers, the tiny marine organisms known as phytoplankton are unable to get enough light to maintain their metabolism. The deepening of the mixed layer in the wintertime in the North Atlantic is therefore associated with a strong decrease in surface chlorophyll a. However, this deep mixing also replenishes near-surface nutrient stocks. Thus when the mixed layer becomes shallow in the spring, and light levels increase, there is often a concomitant increase of phytoplankton biomass, known as the "spring bloom". Oceanic mixed layer formation There are three primary sources of energy for driving turbulent mixing wit
https://en.wikipedia.org/wiki/Pickled%20pepper
A pickled pepper is a Capsicum pepper preserved by pickling, which usually involves submersion in a brine of vinegar and salted water with herbs and spices, often including peppercorns, coriander, dill, and bay leaf. Common pickled peppers are the banana pepper, the Cubanelle, the bell pepper, sweet and hot cherry peppers, the Hungarian wax pepper, the Greek pepper, the serrano pepper, and the jalapeño. They are often found in supermarkets alongside pickled cucumbers. Pickled sliced jalapeños are also used frequently for topping nachos and other Mexican dishes. These peppers are a common ingredient used by sandwich shops such as Quiznos, Subway, and Wawa. Pickled peppers are found throughout the world, such as the Italian peperoncini sott'aceto and Indonesia's pickled bird's eye chili, besides the already-mentioned American and Latin American usages. The flavored brine of hot yellow peppers is commonly used as a condiment in Southern cooking in the United States. Information To achieve the best results and minimize the risk of botulism, only fresh blemish-free peppers should be used and vinegar with acidity of at least 5%; reducing the acidic taste can be achieved by adding sugar. While larger peppers are sliced up to be pickled, smaller peppers are often placed into the pickling solution whole; however, they still require slits so that the vinegar can penetrate the pepper. To avoid botulism it is recommended that pickled pepper products be processed in boiling water if they are to be stored at room temperature; improperly processed peppers led to the largest outbreak of botulism in U.S. history. As with pickled cucumbers, there are multiple ways of pickling peppers. The most common is as above, pickling in an acidic brine and canned; next is quick-pickled or refrigerator pickling, which skips the canning step and requires the peppers to be stored in the refrigerator as mentioned above. For lacto-fermented pickled peppers, vinegar is omitted from the salty b
https://en.wikipedia.org/wiki/Little%20b%20%28programming%20language%29
Little b is a domain-specific programming language, more specifically, a modeling language, designed to build modular mathematical models of biological systems. It was designed and authored by Aneil Mallavarapu. Little b is being developed in the Virtual Cell Program at Harvard Medical School, headed by mathematician Jeremy Gunawardena. This language is based on Lisp and is meant to allow modular programming to model biological systems. It will allow more flexibility to facilitate rapid change that is required to accurately capture complex biological systems. The language draws on techniques from artificial intelligence and symbolic mathematics, and provides syntactic conveniences derived from object-oriented languages. The language was originally denoted with a lowercase b (distinguishing it from B, the predecessor to the widely used C programming language), but the name was eventually changed to "little b" to avoid confusion and to pay homage to Smalltalk.
https://en.wikipedia.org/wiki/Density%20contrast
Density contrast is a parameter used in galaxy formation to indicate where there are local enhancements in matter density. It is believed that after inflation, although the universe was mostly uniform, some regions were slightly denser than others, with density contrasts on the order of one trillionth. As the horizon distance expanded, the enclosed causally connected (i.e. gravitationally connected) masses increased until they reached the Jeans mass and began to collapse, which allowed galaxies, galaxy clusters, superclusters, and filaments to form.
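For reference, the standard definition of the density contrast (the usual cosmology convention, not spelled out in the excerpt above) is:

```latex
% Density contrast delta at position x: fractional deviation of the local
% density rho(x) from the cosmic mean density \bar\rho.
\delta(\mathbf{x}) \;=\; \frac{\rho(\mathbf{x}) - \bar{\rho}}{\bar{\rho}},
\qquad \delta \sim 10^{-12} \ \text{for the post-inflation seeds cited above.}
```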
https://en.wikipedia.org/wiki/Jeans%20instability
In stellar physics, the Jeans instability causes the collapse of interstellar gas clouds and subsequent star formation, named after James Jeans. It occurs when the internal gas pressure is not strong enough to prevent gravitational collapse of a region filled with matter. For stability, the cloud must be in hydrostatic equilibrium, which in the case of a spherical cloud translates to $\frac{dp}{dr} = -\frac{G\,M_{\mathrm{enc}}(r)\,\rho(r)}{r^{2}}$, where $M_{\mathrm{enc}}(r)$ is the enclosed mass, $p$ is the pressure, $\rho(r)$ is the density of the gas (at radius $r$), $G$ is the gravitational constant, and $r$ is the radius. The equilibrium is stable if small perturbations are damped and unstable if they are amplified. In general, the cloud is unstable if it is either very massive at a given temperature or very cool at a given mass; under these circumstances, the gas pressure gradient cannot overcome gravitational force, and the cloud will collapse. The Jeans instability likely determines when star formation occurs in molecular clouds. Jeans mass The Jeans mass is named after the British physicist Sir James Jeans, who considered the process of gravitational collapse within a gaseous cloud. He was able to show that, under appropriate conditions, a cloud, or part of one, would become unstable and begin to collapse when it lacked sufficient gaseous pressure support to balance the force of gravity. The cloud is stable for sufficiently small mass (at a given temperature and radius), but once this critical mass is exceeded, it will begin a process of runaway contraction until some other force can impede the collapse. He derived a formula for calculating this critical mass as a function of its density and temperature. The greater the mass of the cloud, the bigger its size, and the colder its temperature, the less stable it will be against gravitational collapse. The approximate value of the Jeans mass may be derived through a simple physical argument. One begins with a spherical gaseous region of radius $R$, mass $M$, and with a gaseous sound speed $c_s$. The gas is compressed slight
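As an illustration of the critical mass just described, one common textbook form of the Jeans mass can be evaluated numerically (a sketch; the cloud parameters below are illustrative assumptions, not values from the source):

```python
import math

# Common textbook form: M_J = (5 k T / (G mu m_H))^(3/2) * (3 / (4 pi rho))^(1/2)
k_B = 1.380649e-23   # J/K, Boltzmann constant
G = 6.674e-11        # m^3 kg^-1 s^-2, gravitational constant
m_H = 1.6735e-27     # kg, hydrogen atom
M_sun = 1.989e30     # kg, solar mass

T = 10.0             # K, cold molecular cloud (assumed)
mu = 2.33            # mean molecular weight for molecular gas (assumed)
n = 1.0e9            # particles per m^3 (assumed)
rho = n * mu * m_H   # mass density of the cloud

M_J = (5 * k_B * T / (G * mu * m_H))**1.5 * (3 / (4 * math.pi * rho))**0.5
print(f"Jeans mass ~ {M_J / M_sun:.0f} solar masses")  # a few tens of M_sun
```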
https://en.wikipedia.org/wiki/Carotid%20groove
The carotid groove is an anatomical groove in the sphenoid bone located above the attachment of each great wing of the sphenoid bone. The groove is curved like the italic letter f, and lodges the internal carotid artery and the cavernous sinus.
https://en.wikipedia.org/wiki/Posterior%20clinoid%20processes
The posterior clinoid processes are the tubercles of the sphenoid bone situated at the superior angles of the dorsum sellæ (one on each angle), which represents the posterior boundary of the sella turcica. They vary considerably in size and form. The posterior clinoid processes deepen the sella turcica, and give attachment to (the attached border of) the tentorium cerebelli, and the dura forming the floor of the hypophyseal fossa (sella turcica). The petroclinoid ligament The petroclinoid ligament is a fold of dura mater. It extends between the posterior and anterior clinoid processes and the petrosal part of the temporal bone of the skull. There are two separate bands of the ligament, named the anterior and posterior petroclinoid ligaments respectively. The anterior petroclinoid ligament is considered to be an extension of the tentorium cerebelli, and the posterior petroclinoid ligament arises from the posteromedial extensions of the tentorial notch. The anterior and posterior petroclinoid ligaments are bands composed of collagen and elastic fibres that are densely packed in fascicles. Their functions: the anterior petroclinoid ligament acts to laterally limit the superior wall of the cavernous sinus, and the posterior petroclinoid ligament limits the posterior wall of the cavernous sinus. The angle between the two ligaments varies from 20 to 55 degrees. Anatomical relations and clinical significance: the posterior petroclinoid ligament is in close proximity to the oculomotor nerve. During head trauma, it acts as a fulcrum following the downward displacement of the brainstem. This can cause injury to the pupillomotor fibres of the oculomotor nerve, consequently leading to internal ophthalmoplegia. The petroclinoid ligament attaches across the notch at the petrosphenoid junction. This forms a foramen, within which lies the abducens nerve. The abducens nerve travels inferior to the petroclinoid ligament. Ossification The petroclinoid ligament could
https://en.wikipedia.org/wiki/Phase%20boundary
In thermal equilibrium, each phase (i.e. liquid, solid etc.) of physical matter comes to an end at a transitional point, or spatial interface, called a phase boundary, due to the immiscibility of the matter with the matter on the other side of the boundary. This immiscibility is due to at least one difference between the two substances' corresponding physical properties. The behavior of phase boundaries has been a developing subject of interest and an active research field, called interface science, in physics and mathematics for almost two centuries, due partly to phase boundaries naturally arising in many physical processes, such as the capillarity effect, the growth of grain boundaries, the physics of binary alloys, and the formation of snowflakes. One of the oldest problems in the area dates back to Lamé and Clapeyron, who studied the freezing of the ground. Their goal was to determine the thickness of the solid crust generated by the cooling of a liquid at constant temperature filling the half-space. In 1889, Stefan, while working on the freezing of the ground, developed these ideas further and formulated the two-phase model which came to be known as the Stefan problem. The proof for the existence and uniqueness of a solution to the Stefan problem was developed in many stages; the general existence and uniqueness of the solutions was eventually proved by Shoshana Kamin.
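For reference, the classical one-phase Stefan problem mentioned above can be stated as follows (a standard textbook formulation with assumed notation, the melt occupying $0 < x < s(t)$):

```latex
% Classical one-phase Stefan problem (standard formulation; notation assumed):
% heat equation in the liquid region,
\frac{\partial T}{\partial t} = \alpha\,\frac{\partial^{2} T}{\partial x^{2}},
  \qquad 0 < x < s(t),
% melting temperature prescribed at the moving interface,
T\bigl(s(t), t\bigr) = T_{m},
% and the Stefan condition balancing latent heat against the heat flux:
\rho L\,\frac{ds}{dt} = -\,k\left.\frac{\partial T}{\partial x}\right|_{x = s(t)}.
```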
https://en.wikipedia.org/wiki/Anterior%20clinoid%20process
The anterior clinoid process is a bilaterally paired posterior projection of the sphenoid bone at the junction of the medial end of either lesser wing of the sphenoid bone with the body of the sphenoid bone. The two anterior clinoid processes flank the pituitary fossa anteriorly. The free border of the tentorium cerebelli attaches onto the anterior clinoid processes. A middle clinoid process is situated medial to each anterior clinoid process, with the internal carotid artery passing between the two (i.e. medial to the anterior clinoid process). Anatomy Attachments The free border of the tentorium cerebelli extends anteriorly on either side beyond the attached border of the same side (which ends anteriorly at the posterior clinoid process) to ultimately end by attaching onto the anterior clinoid process. Etymology The anterior and posterior clinoid processes surround the sella turcica like the four corners of a four-poster bed. Cline is Greek for bed, and the suffix -oid, as usual, indicates similarity. The term may also come from the Greek root klinein or the Latin clinare, both meaning "sloped", as in "inclined". Additional images
https://en.wikipedia.org/wiki/Sigmoid%20sulcus
The inner surface of the mastoid portion of the temporal bone presents a deep, curved groove, the sigmoid sulcus, which lodges part of the transverse sinus; in it may be seen the opening of the mastoid foramen.
https://en.wikipedia.org/wiki/Toobers%20%26%20Zots
Toobers & Zots are creative construction toys invented in the 1990s by Boston-area sculptor Arthur Ganson. They were manufactured by Hands-On Toys. Toobers & Zots consist of long flexible foam pieces called "toobers" and flat foam pieces called "zots." Toobers range in size from two to four feet long, making them well suited to creating large-scale objects. Zots come in various shapes and sizes and are used to decorate the toobers. Although they have not experienced the critical or commercial success of such toys as LEGO building blocks or Tinkertoys, they were highly successful in the specialty market and were very popular amongst educators and art communities. In February 2011, Little Kids Inc. relaunched the once-popular toy at Toy Fair in New York City. Although the basic concept of open-ended play is the same, the product and packaging were refreshed to appeal to today's children. At the show, the company previewed its three sets for 2011: the Bend & Build Foamstruction Set, the Bend & Pretend Foamstruction Set for Girls, and the Bend & Pretend Foamstruction Set for Boys. It reintroduced Toobers & Zots exclusively into the specialty market where the toy was once so successful. Since February, Toobers & Zots have appeared in Parenting Magazine, on the set of ESPN's Pardon the Interruption, in Time to Play Magazine and Toys & Family Entertainment Magazine, on tour with the Toy Guy Chris Byrne, and in Time to Play's Spring & Summer Showcase.
https://en.wikipedia.org/wiki/Internal%20occipital%20protuberance
Along the internal surface of the occipital bone, at the point of intersection of the four divisions of the cruciform eminence, is the internal occipital protuberance. Running transversely on either side is a groove for the transverse sinus. Additional images See also External occipital protuberance
https://en.wikipedia.org/wiki/Internal%20occipital%20crest
In the occipital bone, the lower division of the cruciate eminence is prominent, and is named the internal occipital crest; it bifurcates near the foramen magnum and gives attachment to the falx cerebelli; in the attached margin of this falx is the occipital sinus, which is sometimes duplicated. In the upper part of the internal occipital crest, a small depression is sometimes distinguishable; it is termed the vermian fossa since it is occupied by part of the cerebellar vermis of the cerebellum. Additional images
https://en.wikipedia.org/wiki/Born%E2%80%93Huang%20approximation
The Born–Huang approximation (named after Max Born and Huang Kun) is an approximation closely related to the Born–Oppenheimer approximation. It takes into account diagonal nonadiabatic effects in the electronic Hamiltonian better than the Born–Oppenheimer approximation. Despite the addition of correction terms, the electronic states remain uncoupled under the Born–Huang approximation, making it an adiabatic approximation. Shape The Born–Huang approximation asserts that the representation matrix of the nuclear kinetic energy operator $\hat{T}_n$ in the basis of Born–Oppenheimer electronic wavefunctions $\chi_k$ is diagonal: $\langle\chi_{k'}|\hat{T}_n|\chi_k\rangle = \delta_{k'k}\,\langle\chi_k|\hat{T}_n|\chi_k\rangle$. Consequences The Born–Huang approximation loosens the Born–Oppenheimer approximation by including some electronic matrix elements, while at the same time maintaining its diagonal structure in the nuclear equations of motion. As a result, the nuclei still move on isolated surfaces, obtained by the addition of a small correction to the Born–Oppenheimer potential energy surface. Under the Born–Huang approximation, the Schrödinger equation of the molecular system simplifies to $\left[\hat{T}_n + E_k(\mathbf{R}) + \langle\chi_k|\hat{T}_n|\chi_k\rangle - E\right]\phi_k(\mathbf{R}) = 0$. The quantity $U_k(\mathbf{R}) = E_k(\mathbf{R}) + \langle\chi_k|\hat{T}_n|\chi_k\rangle$ serves as the corrected potential energy surface. Upper-bound property The value of the Born–Huang approximation is that it provides the upper bound for the ground-state energy. The Born–Oppenheimer approximation, on the other hand, provides the lower bound for this value. See also Vibronic coupling Born–Oppenheimer approximation
https://en.wikipedia.org/wiki/Comparison%20of%20iPod%20file%20managers
This is a list of iPod file managers, i.e. software that permits the transfer of media content between an iPod and a computer or vice versa. iTunes is the official iPod managing software, but third parties have created alternatives to work around restrictions in iTunes; e.g., transferring content from an iPod to a computer is restricted by iTunes. General Media organization and transfer features iPod syncing and maintenance features iPhone & iPod Touch compatibility See also iPod iTunes iPhone
https://en.wikipedia.org/wiki/Scrum%20%28software%20development%29
Scrum is an agile project management system commonly used in software development and other industries. Scrum prescribes that teams break work into goals to be completed within time-boxed iterations, called sprints. Each sprint is no longer than one month and commonly lasts two weeks. The scrum team assesses progress in time-boxed, stand-up meetings of up to 15 minutes, called daily scrums. At the end of the sprint, the team holds two further meetings: a sprint review to demonstrate the work for stakeholders and solicit feedback, and an internal sprint retrospective. Scrum's approach to product development involves bringing decision-making authority down to an operational level. Unlike a sequential approach to product development, scrum is an iterative and incremental framework for product development. Scrum allows for continuous feedback and flexibility, requiring teams to self-organize by encouraging physical co-location or close online collaboration, and mandating frequent communication among all team members. The flexible and semi-unplanned approach of scrum is based in part on the notion of requirements volatility: that stakeholders will change their requirements as the project evolves. History The use of the term scrum in software development came from a 1986 Harvard Business Review paper titled "The New New Product Development Game" by Hirotaka Takeuchi and Ikujiro Nonaka. Based on case studies from manufacturing firms in the automotive, photocopier, and printer industries, the authors outlined a new approach to product development for increased speed and flexibility. They called this the rugby approach, as the process involves a single cross-functional team operating across multiple overlapping phases, in which the team "tries to go the distance as a unit, passing the ball back and forth". The authors later developed these ideas in their book, The Knowledge-Creating Company. In the early 1990s, Ken Schwaber used what would become scrum at his company, Adva
https://en.wikipedia.org/wiki/Jordan%27s%20lemma
In complex analysis, Jordan's lemma is a result frequently used in conjunction with the residue theorem to evaluate contour integrals and improper integrals. The lemma is named after the French mathematician Camille Jordan. Statement Consider a complex-valued, continuous function $f$, defined on a semicircular contour $C_R = \{Re^{i\theta} : \theta \in [0, \pi]\}$ of positive radius $R$ lying in the upper half-plane, centered at the origin. If the function $f$ is of the form $f(z) = e^{iaz} g(z)$ with a positive parameter $a$, then Jordan's lemma states the following upper bound for the contour integral: $\left|\int_{C_R} f(z)\,dz\right| \le \frac{\pi}{a} \max_{\theta \in [0,\pi]} \left|g(Re^{i\theta})\right|$, with equality when $g$ vanishes everywhere, in which case both sides are identically zero. An analogous statement for a semicircular contour in the lower half-plane holds when $a < 0$. Remarks If $g$ is continuous on the semicircular contour $C_R$ for all large $R$ and (*) $M_R := \max_{\theta \in [0,\pi]} \left|g(Re^{i\theta})\right| \to 0$ as $R \to \infty$, then by Jordan's lemma $\lim_{R \to \infty} \int_{C_R} f(z)\,dz = 0$. For the case $a = 0$, see the estimation lemma. Compared to the estimation lemma, the upper bound in Jordan's lemma does not explicitly depend on the length of the contour $C_R$. Application of Jordan's lemma Jordan's lemma yields a simple way to calculate the integral along the real axis of functions $f(z) = e^{iaz} g(z)$ holomorphic on the upper half-plane and continuous on the closed upper half-plane, except possibly at a finite number of non-real points $z_1$, $z_2$, …, $z_n$. Consider the closed contour $C$, which is the concatenation of the paths $C_R$ and $[-R, R]$ shown in the picture. By definition, $\oint_C f(z)\,dz = \int_{C_R} f(z)\,dz + \int_{-R}^{R} f(x)\,dx$. Since on $[-R, R]$ the variable $z = x$ is real, the second integral is real. The left-hand side may be computed using the residue theorem to get, for all $R$ larger than the maximum of $|z_1|$, $|z_2|$, …, $|z_n|$, $\oint_C f(z)\,dz = 2\pi i \sum_{k=1}^{n} \operatorname{Res}(f, z_k)$, where $\operatorname{Res}(f, z_k)$ denotes the residue of $f$ at the singularity $z_k$. Hence, if $f$ satisfies condition (*), then taking the limit as $R$ tends to infinity, the contour integral over $C_R$ vanishes by Jordan's lemma and we get the value of the improper integral $\int_{-\infty}^{\infty} f(x)\,dx = 2\pi i \sum_{k=1}^{n} \operatorname{Res}(f, z_k)$. Example The function $f(z) = \frac{e^{iz}}{1 + z^2}$ satisfies the condition of Jordan's lemma with $a = 1$ for all $R > 1$. Note that, for $R > 1$, $M_R = \max_{\theta} \frac{1}{\left|R^2 e^{2i\theta} + 1\right|} \le \frac{1}{R^2 - 1}$, hence (*) holds. Since the only singularity of $f$ in the upper half plane is at $z = i$, the above application yields $\int_{-\infty}^{\infty} \frac{e^{ix}}{1 + x^2}\,dx = \frac{\pi}{e}$. Si
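The closing example can be checked numerically; a minimal sketch (using scipy's adaptive quadrature, which is an assumption here and may emit an accuracy warning for the oscillatory tail):

```python
# Jordan's lemma + residue theorem predict that the integral of
# cos(x)/(1+x^2) over the real line equals pi/e ~ 1.15573.
import numpy as np
from scipy.integrate import quad

value, abserr = quad(lambda x: np.cos(x) / (1.0 + x**2), -np.inf, np.inf)
print(value, np.pi / np.e)  # both ~ 1.15573
```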
https://en.wikipedia.org/wiki/Human%20nature
Human nature comprises the fundamental dispositions and characteristics—including ways of thinking, feeling, and acting—that humans are said to have naturally. The term is often used to denote the essence of humankind, or what it 'means' to be human. This usage has proven to be controversial in that there is dispute as to whether or not such an essence actually exists. Arguments about human nature have been a central focus of philosophy for centuries and the concept continues to provoke lively philosophical debate. While both concepts are distinct from one another, discussions regarding human nature are typically related to those regarding the comparative importance of genes and environment in human development (i.e., 'nature versus nurture'). Accordingly, the concept also continues to play a role in academic fields, such as both the natural and the social sciences, and philosophy, in which various theorists claim to have yielded insight into human nature. Human nature is traditionally contrasted with human attributes that vary among societies, such as those associated with specific cultures. The concept of nature as a standard by which to make judgments is traditionally said to have begun in Greek philosophy, at least in regard to its heavy influence on Western and Middle Eastern languages and perspectives. By late antiquity and medieval times, the particular approach that came to be dominant was that of Aristotle's teleology, whereby human nature was believed to exist somehow independently of individuals, causing humans to simply become what they become. This, in turn, has been understood as also demonstrating a special connection between human nature and divinity, whereby human nature is understood in terms of final and formal causes. More specifically, this perspective believes that nature itself (or a nature-creating divinity) has intentions and goals, including the goal for humanity to live naturally. Such understandings of human nature see this nature as an
https://en.wikipedia.org/wiki/Minakshisundaram%E2%80%93Pleijel%20zeta%20function
The Minakshisundaram–Pleijel zeta function is a zeta function encoding the eigenvalues of the Laplacian of a compact Riemannian manifold. It was introduced by Subbaramiah Minakshisundaram and Åke Pleijel. The case of a compact region of the plane was treated earlier by Torsten Carleman. Definition For a compact Riemannian manifold M of dimension N with eigenvalues $\lambda_1, \lambda_2, \ldots$ of the Laplace–Beltrami operator $\Delta$, the zeta function is given for sufficiently large $\operatorname{Re}(s)$ by $Z(s) = \sum_{n=1}^{\infty} \lambda_n^{-s}$ (where if an eigenvalue is zero it is omitted in the sum). The manifold may have a boundary, in which case one has to prescribe suitable boundary conditions, such as Dirichlet or Neumann boundary conditions. More generally one can define $Z(s, P, Q) = \sum_{n} \frac{f_n(P)\, f_n(Q)}{\lambda_n^{s}}$ for P and Q on the manifold, where the $f_n$ are normalized eigenfunctions. This can be analytically continued to a meromorphic function of s for all complex s, and is holomorphic for $\operatorname{Re}(s) > N/2$. The only possible poles are simple poles at the points $s = N/2, N/2 - 1, N/2 - 2, \ldots, 1/2, -1/2, -3/2, \ldots$ for N odd, and at the points $s = N/2, N/2 - 1, \ldots, 2, 1$ for N even. If N is odd then $Z(s, P, P)$ vanishes at $s = 0, -1, -2, \ldots$. If N is even, the residues at the poles can be explicitly found in terms of the metric, and by the Wiener–Ikehara theorem we find as a corollary the relation $N(T) \sim \frac{\omega_N \operatorname{vol}(M)}{(2\pi)^N}\, T^{N/2}$, where $N(T)$ is the number of eigenvalues at most $T$, $\omega_N$ is the volume of the unit ball in $\mathbb{R}^N$, and the symbol $\sim$ indicates that the quotient of both sides tends to 1 when T tends to $\infty$. The function $Z(s)$ can be recovered from $Z(s, P, P)$ by integrating over the whole manifold M: $Z(s) = \int_M Z(s, P, P)\, dP$. Heat kernel The analytic continuation of the zeta function can be found by expressing it in terms of the heat kernel $K(t, P, Q)$ as the Mellin transform $Z(s, P, Q) = \frac{1}{\Gamma(s)} \int_0^{\infty} K(t, P, Q)\, t^{s-1}\, dt$. In particular, we have $Z(s) = \frac{1}{\Gamma(s)} \int_0^{\infty} \operatorname{Tr} K(t)\, t^{s-1}\, dt$, where $\operatorname{Tr} K(t) = \int_M K(t, P, P)\, dP$ is the trace of the heat kernel. The poles of the zeta function can be found from the asymptotic behavior of the heat kernel as t→0. Example If the manifold is a circle of dimension N=1, then the eigenvalues of the Laplacian are $n^2$ for integers $n$. The zeta function is $Z(s) = 2\sum_{n=1}^{\infty} (n^2)^{-s} = 2\zeta(2s)$, where ζ is the Riemann zeta function. Applications Applying the heat kernel method to the asymptotic expansion for a Riemannian manifold (M,g), we obtain the two following theorems. Both are the resolutions of the inverse problem in which we get the geometric pr
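The circle example can be verified directly; a minimal sketch (assuming mpmath is available, which is not part of the source):

```python
# For the circle (N = 1) the nonzero eigenvalues are n^2 for n = ±1, ±2, ...,
# so Z(s) = sum over n >= 1 of 2 / n^(2s) = 2 * zeta(2s).
from mpmath import zeta, nsum, inf

s = 2.0
series = nsum(lambda n: 2 / n**(2 * s), [1, inf])  # direct eigenvalue sum
print(series, 2 * zeta(2 * s))                     # both ~ 2*zeta(4) ~ 2.1646
```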
https://en.wikipedia.org/wiki/Trade%20facilitation
Trade facilitation looks at how procedures and controls governing the movement of goods across national borders can be improved to reduce associated cost burdens and maximise efficiency while safeguarding legitimate regulatory objectives. Business costs may be a direct function of collecting information and submitting declarations or an indirect consequence of border checks in the form of delays and associated time penalties, forgone business opportunities and reduced competitiveness. Understanding and use of the term “trade facilitation” varies in the literature and amongst practitioners. "Trade facilitation" is largely used by institutions which seek to improve the regulatory interface between government bodies and traders at national borders. The WTO, in an online training package, has defined trade facilitation as “the simplification and harmonisation of international trade procedures”, where trade procedures are the “activities, practices and formalities involved in collecting, presenting, communicating and processing data required for the movement of goods in international trade”. In defining the term, many trade facilitation proponents will also make reference to trade finance and the procedures applicable for making payments (e.g. via commercial banks). For example, UN/CEFACT defines trade facilitation as "the simplification, standardization and harmonisation of procedures and associated information flows required to move goods from seller to buyer and to make payment". Occasionally, the term trade facilitation is extended to address a wider agenda in economic development and trade, to include: the improvement of transport infrastructure, the removal of government corruption, the modernization of customs administration, the removal of other non-tariff trade barriers, as well as export marketing and promotion. The World Trade Report 2015 provides an overview of the various trade facilitation definitions from academia as well as various international orga
https://en.wikipedia.org/wiki/Croatian%20National%20Corpus
Croatian National Corpus (Hrvatski nacionalni korpus, HNK) is the biggest and most important corpus of Croatian. Its compilation started in 1998 at the Institute of Linguistics of the Faculty of Humanities and Social Sciences, University of Zagreb, following the ideas of Marko Tadić. The theoretical foundations and the expression of the need for a general-purpose, representative and multi-million-word corpus of Croatian had started to appear even earlier. The Croatian National Corpus is compiled from selected texts written in Croatian covering all fields, topics, genres and styles: from literary and scientific texts to textbooks, newspapers, user groups and chat rooms. The initial composition was divided into two constituents: a 30-million corpus of contemporary Croatian (30m), in which samples from texts from 1990 on were included. The criteria for inclusion of text samples were: written by native speakers, different fields, genres and topics. Translated texts and poetry were excluded. The Croatian Electronic Text Archive (HETA), in which complete texts were included, particularly serial publications (volumes, series, editions etc.) which would imbalance the 30m if they were inserted there. Since 2004, with the adoption of the concept of the 3rd-generation corpus, the two-constituent structure has been abandoned in favor of several subcorpora and larger size. Since 2005 HNK has comprised 105 million tokens and is composed of a number of different subcorpora which can be searched individually or all together as a whole corpus. In 2004 HNK also migrated to a new server platform, namely the Manatee/Bonito server-client architecture. For searching the HNK (today still with free test access) a free client program, Bonito, is needed. The author of this corpus manager is Pavel Rychlý from the Natural Language Processing Laboratory of the Faculty of Informatics, Masaryk University in Brno, Czech Republic. Its interface features complex and elaborate queries over the corpus, different types of statistical results, total or partial
https://en.wikipedia.org/wiki/Blue-listed
Blue-listed species are species that belong to the Blue List, which includes any indigenous species or subspecies (taxa) considered to be vulnerable in their locale, in order to provide early warning to federal and regional governments. Vulnerable taxa are of special concern because of characteristics that make them particularly sensitive to human activities or natural events. Blue-listed taxa are at risk, but are not extirpated, endangered or threatened. History The concept of a Blue List was derived in 1971 by Robert Arbib of the National Audubon Society in his article, "Announcing-- The Blue List: an 'early warning' system for birds". The article stated that the list was made up of species that appeared to be locally common in North America but were undergoing non-cyclic declines. From 1971, it was used to list vulnerable bird species throughout North America. Unlike the US Fish and Wildlife Endangered Species List, the Blue List was made to identify patterns of population losses for regional bird populations before they could be listed as endangered. Every decade after its release, the list is revisited and revised based on input from regional editors, and species get "nominated" to be added to the list. From then on, species that are part of the Blue List have been referred to as Blue-listed species. Status ranks Initially, in order to identify the type of risk that each Blue-listed species faces, the Blue List identified various categories for Blue-listed species based on the following letter codes: "A" : the species population is "greatly down in numbers" "B" : the species population is "down in numbers" "C" : the species population is experiencing no change "D" : the species population is "up in numbers" "E" : the species population is "greatly up in numbers" Using this metric, regional editors were able to report on the species along with their status ranks in order to identify the patterns of population change that each species is facing. Later on, t
https://en.wikipedia.org/wiki/Faraday%20cup%20electrometer
The Faraday cup electrometer is the simplest form of electrical aerosol instrument used in aerosol studies. It consists of an electrometer and a filter inside a Faraday cage. Charged particles collected by the filter generate an electric current which is measured by the electrometer. Principle According to Gauss's law, the charge collected on the Faraday cup is the induced charge, which means that the filter does not need to be a conductor. It is typically used to measure particles of unipolar charge, i.e. aerosols whose net charge concentration equals the concentration of either the positively or the negatively charged particles alone. With an aerosol electrometer the transport of charge by electrically charged aerosol particles can be measured as an electric current. In a metal housing (Faraday cup) a particle filter is mounted on an insulator. A Faraday cup is a detector that measures the current in a beam of charged (aerosol) particles. Faraday cups are used, e.g., in mass spectrometers as an alternative to secondary electron multipliers. The advantage of the Faraday cup is its robustness and the possibility to measure the ion or electron stream absolutely. Furthermore, its sensitivity is constant over time and not mass-dependent. The simplest form is the following: a Faraday detector consists of a metal cup that is placed in the path of the particle beam. The aerosol has to pass through the filter inside the cup. The filter has to be electrically insulated. It is connected to the electrometer circuit which measures the current. See also Michael Faraday Faraday cup Experimental physics Aerosol measurement
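The measured current follows directly from the charge carried past the filter per unit time; a minimal sketch of that relation, I = N·p·e·Q (the numbers below are illustrative assumptions, not from the source):

```python
# Current from an aerosol electrometer: number concentration N times mean
# elementary charges per particle p times e times volumetric sample flow Q.
e = 1.602e-19        # C, elementary charge
N = 1.0e10           # particles per m^3 (assumed, ~10^4 per cm^3)
p = 1.0              # mean elementary charges per particle (assumed)
Q = 1.667e-5         # m^3/s, i.e. 1 L/min sample flow (assumed)

I = N * p * e * Q    # charge deposited on the filter per second
print(f"{I:.2e} A")  # ~2.7e-14 A, i.e. tens of femtoamperes
```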
https://en.wikipedia.org/wiki/Globule%20%28CDN%29
Globule was an open-source collaborative content delivery network developed at the Vrije Universiteit in Amsterdam beginning in 2006. It was implemented as a third-party module for the Apache HTTP Server that allows any given server to replicate its documents to other Globule servers. This can improve the site's performance, keep the site available to its clients even if some servers are down, and to a certain extent help it resist flash crowds and the Slashdot effect. The project has been discontinued and is no longer maintained. Globule takes care of maintaining consistency between the replicas, monitoring the servers, and automatically redirecting clients to one of the available replicas. Globule also supports the replication of PHP documents accessing MySQL databases. It runs on Unix and Windows systems. See also Codeen
https://en.wikipedia.org/wiki/Process
A process is a series or set of activities that interact to produce a result; it may occur once-only or be recurrent or periodic. Things called a process include: Business and management Business process, activities that produce a specific service or product for customers Business process modeling, activity of representing processes of an enterprise in order to deliver improvements Manufacturing process management, a collection of technologies and methods used to define how products are to be manufactured Process architecture, structural design of processes, applies to fields such as computers, business processes, logistics, project management Process area, related processes within an area which together satisfy an important goal for improvements within that area Process costing, a cost allocation procedure of managerial accounting Process management (project management), a systematic series of activities directed towards planning, monitoring the performance of and bringing about an end result in engineering activities, business processes, manufacturing processes or project management Process-based management, a management approach that views a business as a collection of processes Law Due process, the concept that governments must respect the rule of law Legal process, the proceedings and records of a legal case Service of process, the procedure of giving official notice of a legal proceeding Science and technology The general concept of the scientific process, see scientific method Process theory, the scientific study of processes Industrial processes, the purposeful sequencing of tasks that combine resources to produce a desired output Biology and psychology Process (anatomy), a projection or outgrowth of tissue from a larger body Biological process, a process of a living organism Cognitive process, such as attention, memory, language use, reasoning, and problem solving Mental process, a function or process of the mind Neuronal process, also neurite
https://en.wikipedia.org/wiki/Self-denial
Self-denial (related but different from self-abnegation or self-sacrifice) is an act of letting go of the self as with altruistic abstinence – the willingness to forgo personal pleasures or undergo personal trials in the pursuit of the increased good of another. Various religions and cultures take differing views of self-denial, some considering it a positive trait and others considering it a negative one. According to some Protestants, self-denial is considered a superhuman virtue only obtainable through Jesus. Some critics of self-denial suggest that self-denial can lead to self-hatred. Positive effects There is evidence brief periods of fasting, a denial of food, can be beneficial to health in certain situations. Self-denial is sometimes related to inhibitory control and emotional self-regulation, the positives of which are dealt with in those articles. As people grow accustomed to material goods they often experience hedonic adaptation, whereby they get used to the finer things and are less inclined to savor daily pleasures. Scarcity can lead people to focus on enjoying an experience more deeply, which increases joy. Negative effects Others argue self-denial involves avoidance and holding back of happiness and pleasurable experiences from oneself that is only damaging to other people. Some argue it is a form of micro-suicide because it is threatening to an individual's physical health, emotional well-being, or personal goals. Religion and self-denial Self-denial can constitute an important element of religious practice in various belief systems. An exemplification is the self-denial advocated by several Christian confessions where it is believed to be a means of reaching happiness and a deeper religious understanding, sometimes described as 'becoming a true follower of Christ'. The foundation of self-denial in the Christian context is based on the recognition of a higher God-given will, which the Christian practitioner chooses to adhere to, and prioritize ove
https://en.wikipedia.org/wiki/UPDM
The Unified Profile for DoDAF/MODAF (UPDM) is the product of an Object Management Group (OMG) initiative to develop a modeling standard that supports both the USA Department of Defense Architecture Framework (DoDAF) and the UK Ministry of Defence Architecture Framework (MODAF). The current UPDM - the Unified Profile for DoDAF and MODAF was based on earlier work with the same acronym and a slightly different name - the UML Profile for DoDAF and MODAF. History The UPDM initiative began in 2005, when the OMG issued a Request for Proposal. This request was based on the then current versions of DoDAF (1.0) and MODAF (1.1). While the specification submission development was underway, significant changes were made to the DoDAF and MODAF. Therefore, although a UPDM 1.0 beta 1 specification was adopted by the OMG in 2007, and UPDM 1.0 beta 2 was submitted by an OMG Finalization Task Force in 2008, UPDM 1.0 beta 2 has not been endorsed by the US Department of Defense or the UK Ministry of Defence (MOD). The UPDM 1.0 specification, the result of additional work by many members of the original submission teams, is architecturally aligned with DoDAF 1.5 and MODAF 1.2. This version of the specification has been endorsed by both the US DoD and the UK MOD. The UPDM 2.0 specification was released in January 2013, and UPDM 2.1 was released in August 2013. Motivation for unified profile for DoDAF/MODAF DoDAF v1.5 Volume II includes guidance for representing DoDAF architecture products using UML. MODAF also provides similar guidance, and its meta-model is specified as a UML profile abstract syntax (i.e. extensions of UML 2.1 metaclasses). MODAF differs from DoDAF however, so the MODAF Meta-Model is not suitable for use in DoDAF tools. Differences in vendor implementations have resulted in interoperability issues between tools and additional training requirements for users. Also, the current DoDAF UML implementation guidance is based on a previous version of UML (UML v1.x), a
https://en.wikipedia.org/wiki/Yuri%20Linnik
Yuri Vladimirovich Linnik (; January 8, 1915 – June 30, 1972) was a Soviet mathematician active in number theory, probability theory and mathematical statistics. Biography Linnik was born in Bila Tserkva, in present-day Ukraine. He went to Saint Petersburg University where his supervisor was Vladimir Tartakovsky, and later worked at that university and the Steklov Institute. He was a member of the Academy of Sciences of the Soviet Union, as was his father, Vladimir Pavlovich Linnik. He was awarded both Stalin and Lenin Prizes. He died in Leningrad. Work in number theory Linnik's theorem in analytic number theory The dispersion method (which allowed him to solve the Titchmarsh problem). The large sieve (which turned out to be extremely influential). An elementary proof of the Hilbert-Waring theorem; see also Schnirelmann density. The Linnik ergodic method, see , which allowed him to study the distribution properties of the representations of integers by integral ternary quadratic forms. Work in probability theory and statistics Infinitely divisible distributions Linnik obtained numerous results concerning infinitely divisible distributions. In particular, he proved the following generalisation of Cramér's theorem: any divisor of a convolution of Gaussian and Poisson random variables is also a convolution of Gaussian and Poisson. He has also coauthored the book on the arithmetics of infinitely divisible distributions. Central limit theorem Linnik zones (zones of asymptotic normality) Information-theoretic proof of the central limit theorem Statistics Behrens–Fisher problem Selected publications Notes External links Acta Arithmetica: Linnik memorial issue (1975) List of books by Linnik provided by National Library of Australia
https://en.wikipedia.org/wiki/Voodoo3
Voodoo3 was a series of computer gaming video cards manufactured and designed by 3dfx Interactive. It was the successor to the company's high-end Voodoo2 line and was based heavily upon the older Voodoo Banshee product. Voodoo3 was announced at COMDEX '98 and arrived on store shelves in early 1999. The Voodoo3 line was the first product manufactured by the combined STB Systems and 3dfx. History The 'Avenger' graphics core was originally conceived immediately after Banshee. Due to mismanagement at 3dfx, the next-generation 'Rampage' project suffered delays which would prove to be fatal to the entire company. Avenger was pushed to the forefront as it offered a quicker time to market than the already delayed Rampage. Avenger was no more than the Banshee core with a second texture mapping unit (TMU) added - the same TMU which Banshee lost compared to Voodoo2. Avenger was thus merely a Voodoo2 with an integrated 128-bit 2D video accelerator and twice the clock speed. Architecture and performance Much was made of Voodoo3 (christened 'Avenger') and its 16-bit color rendering limitation. The situation was in fact quite complex: Voodoo3 operated to full 32-bit precision (8 bits per channel, 16.7M colours) in its texture mappers and pixel pipeline, as opposed to previous products from 3dfx and other vendors, which had only worked in 16-bit precision. To save framebuffer space, the Voodoo3's rendering output was dithered to 16 bit. This offered better quality than running in pure 16-bit mode. However, a controversy arose over what happened next. The Voodoo3's RAMDAC, which took the rendered frame from the framebuffer and generated the display image, performed a 2x2 box or 4x1 line filter on the dithered image to almost reconstruct the original 24-bit color render. 3dfx claimed this to be '22-bit' equivalent quality. As such, Voodoo3's framebuffer was not representative of the final output, and therefore screenshots did not accurately portray Voodoo3's display
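The dither-then-filter idea can be illustrated in a few lines; this is only a software sketch of the principle, not 3dfx's actual hardware algorithm:

```python
# Sketch: quantize one colour channel to 5 bits with random dithering (as in
# a 16-bit 565 framebuffer), then apply a 2x2 box filter as a RAMDAC-style
# post-filter; the filtered error is smaller than the raw dither error.
import numpy as np

rng = np.random.default_rng(0)
truth = rng.random((64, 64))                        # full-precision channel
levels = 31                                         # 5-bit channel
dithered = np.floor(truth * levels + rng.random(truth.shape)) / levels

box = (dithered[0::2, 0::2] + dithered[0::2, 1::2] +
       dithered[1::2, 0::2] + dithered[1::2, 1::2]) / 4
truth_box = (truth[0::2, 0::2] + truth[0::2, 1::2] +
             truth[1::2, 0::2] + truth[1::2, 1::2]) / 4

print(np.abs(dithered - truth).mean())   # raw dither error
print(np.abs(box - truth_box).mean())    # roughly halved after the box filter
```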
https://en.wikipedia.org/wiki/EEMBC
EEMBC, the Embedded Microprocessor Benchmark Consortium, is a non-profit, member-funded organization formed in 1997, focused on the creation of standard benchmarks for the hardware and software used in embedded systems. The goal of its members is to make EEMBC benchmarks an industry standard for evaluating the capabilities of embedded processors, compilers, and the associated embedded system implementations, according to objective, clearly defined, application-based criteria. EEMBC members may contribute to the development of benchmarks, vote at various stages before public distribution, and accelerate testing of their platforms through early access to benchmarks and associated specifications. Most Popular Benchmark Working Groups In chronological order of development: AutoBench 1.1 - single-threaded code for automotive, industrial, and general-purpose applications Networking - single-threaded code associated with moving packets in networking applications. MultiBench - multi-threaded code for testing scalability of multicore processors. CoreMark - measures the performance of central processing units (CPU) used in embedded systems BXBench - system benchmark measuring the web browsing user-experience, from the click/touch on a URL to final page rendered on the screen, and is not limited to measuring only JavaScript execution. AndEBench-Pro - system benchmark providing a standardized, industry-accepted method of evaluating Android platform performance. It's available for free download in Google Play. FPMark - multi-threaded code for both single- and double-precision floating-point workloads, as well as small, medium, and large data sets. ULPMark - energy-measuring benchmark for ultra-low power microcontrollers; benchmarks include ULPMark-Core (with a focus on microcontroller core activity and sleep modes) and ULPMark-Peripheral (with a focus on microcontroller peripheral activity such as Analog-to-digital converter, Serial Peripheral Interface Bus, Real-time
https://en.wikipedia.org/wiki/Necrobiosis
Necrobiosis is the physiological death of a cell, and can be caused by conditions such as basophilia, erythema, or a tumor. It is identified both with and without necrosis. Necrobiotic disorders are characterized by presence of necrobiotic granuloma on histopathology. Necrobiotic granuloma is described as aggregation of histiocytes around a central area of altered collagen and elastic fibers. Such a granuloma is typically arranged in a palisaded pattern. It is associated with necrobiosis lipoidica and granuloma annulare. Necrobiosis differs from apoptosis, which kills a damaged cell to protect the body from harm.
https://en.wikipedia.org/wiki/Gwynneth%20Coogan
Gwynneth "Gwyn" Coogan (born Gwynneth Hardesty; August 21, 1965 in Trenton, New Jersey) is an American former Olympic athlete, educator and mathematician. Hardesty attended Phillips Exeter Academy for two years, where she played squash and field hockey. She then attended Smith College, graduating in 1987, where she majored in math and took up running for the first time, and became the two-time NCAA Division III champion in the 3,000 meters. She qualified for the 1992 Summer Olympics in Barcelona, where she competed in the 10,000 meters. Four years later, she was an alternate for the women's marathon for the 1996 Summer Olympics in Atlanta. She was the 1998 United States National Champion in the Marathon. Early and personal life Coogan went on to earn her Ph.D. in math from the University of Colorado in 1999, working primarily in number theory. She did post-doctorate work with Ken Ono at the University of Wisconsin–Madison, taught at Hood College, and currently teaches math at Phillips Exeter Academy. Achievements
https://en.wikipedia.org/wiki/Ambiguity%20aversion
In decision theory and economics, ambiguity aversion (also known as uncertainty aversion) is a preference for known risks over unknown risks. An ambiguity-averse individual would rather choose an alternative where the probability distribution of the outcomes is known over one where the probabilities are unknown. This behavior was first introduced through the Ellsberg paradox (people prefer to bet on the outcome of an urn with 50 red and 50 black balls rather than to bet on one with 100 total balls but for which the number of black or red balls is unknown). There are two categories of imperfectly predictable events between which choices must be made: risky and ambiguous events (also known as Knightian uncertainty). Risky events have a known probability distribution over outcomes while in ambiguous events the probability distribution is not known. The reaction is behavioral and still being formalized. Ambiguity aversion can be used to explain incomplete contracts, volatility in stock markets, and selective abstention in elections (Ghirardato & Marinacci, 2001). The concept is expressed in the English proverb: "Better the devil you know than the devil you don't." Difference from risk aversion The distinction between ambiguity aversion and risk aversion is important but subtle. Risk aversion comes from a situation where a probability can be assigned to each possible outcome of a situation and it is defined by the preference between a risky alternative and its expected value. Ambiguity aversion applies to a situation when the probabilities of outcomes are unknown (Epstein 1999) and it is defined through the preference between risky and ambiguous alternatives, after controlling for preferences over risk. Using the traditional two-urn Ellsberg choice, urn A contains 50 red balls and 50 blue balls while urn B contains 100 total balls (either red or blue) but the number of each is unknown. An individual who prefers a certain payoff strictly smaller than $10 over a bet t
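One common formalization of this preference is Gilboa–Schmeidler maxmin expected utility; the sketch below uses that model as an assumption rather than anything stated in the excerpt:

```python
# Maxmin expected utility: evaluate a bet by its worst-case probability over
# the set of distributions considered possible.
def maxmin_eu(payoff: float, probs: list[float]) -> float:
    return min(p * payoff for p in probs)

payoff = 10.0
urn_a = maxmin_eu(payoff, [0.5])                        # known: 50 red of 100
urn_b = maxmin_eu(payoff, [i / 100 for i in range(101)])  # composition unknown
print(urn_a, urn_b)  # 5.0 vs 0.0 -> the ambiguity-averse agent bets on urn A
```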
https://en.wikipedia.org/wiki/Spherical%20multipole%20moments
In physics, spherical multipole moments are the coefficients in a series expansion of a potential that varies inversely with the distance to a source, i.e., as $1/r$. Examples of such potentials are the electric potential, the magnetic potential and the gravitational potential. For clarity, we illustrate the expansion for a point charge, then generalize to an arbitrary charge density $\rho(\mathbf{r}')$. Throughout this article, the primed coordinates such as $(r', \theta', \phi')$ refer to the position of charge(s), whereas the unprimed coordinates such as $(r, \theta, \phi)$ refer to the point at which the potential is being observed. We also use spherical coordinates throughout, e.g., the vector $\mathbf{r}$ has coordinates $(r, \theta, \phi)$ where $r$ is the radius, $\theta$ is the colatitude and $\phi$ is the azimuthal angle. Spherical multipole moments of a point charge The electric potential due to a point charge located at $\mathbf{r}'$ is given by $\Phi(\mathbf{r}) = \frac{q}{4\pi\varepsilon_0} \frac{1}{R}$, where $R = \left|\mathbf{r} - \mathbf{r}'\right|$ is the distance between the charge position and the observation point and $\gamma$ is the angle between the vectors $\mathbf{r}$ and $\mathbf{r}'$. If the radius $r$ of the observation point is greater than the radius $r'$ of the charge, we may factor out $1/r$ and expand the square root in powers of $(r'/r)$ using Legendre polynomials: $\Phi(\mathbf{r}) = \frac{q}{4\pi\varepsilon_0 r} \sum_{\ell=0}^{\infty} \left(\frac{r'}{r}\right)^{\ell} P_{\ell}(\cos\gamma)$. This is exactly analogous to the axial multipole expansion. We may express $\cos\gamma$ in terms of the coordinates of the observation point and charge position using the spherical law of cosines (Fig. 2): $\cos\gamma = \cos\theta \cos\theta' + \sin\theta \sin\theta' \cos(\phi - \phi')$. Substituting this equation for $\cos\gamma$ into the Legendre polynomials and factoring the primed and unprimed coordinates yields the important formula known as the spherical harmonic addition theorem: $P_{\ell}(\cos\gamma) = \frac{4\pi}{2\ell+1} \sum_{m=-\ell}^{\ell} Y_{\ell m}(\theta, \phi)\, Y^{*}_{\ell m}(\theta', \phi')$, where the functions $Y_{\ell m}$ are the spherical harmonics. Substitution of this formula into the potential yields $\Phi(\mathbf{r}) = \frac{q}{4\pi\varepsilon_0 r} \sum_{\ell=0}^{\infty} \left(\frac{r'}{r}\right)^{\ell} \frac{4\pi}{2\ell+1} \sum_{m=-\ell}^{\ell} Y_{\ell m}(\theta, \phi)\, Y^{*}_{\ell m}(\theta', \phi')$, which can be written as $\Phi(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0} \sum_{\ell=0}^{\infty} \sum_{m=-\ell}^{\ell} \sqrt{\frac{4\pi}{2\ell+1}}\; Q_{\ell m}\, \frac{Y_{\ell m}(\theta, \phi)}{r^{\ell+1}}$, where the multipole moments are defined $Q_{\ell m} \equiv q\, (r')^{\ell} \sqrt{\frac{4\pi}{2\ell+1}}\; Y^{*}_{\ell m}(\theta', \phi')$. As with axial multipole moments, we may also consider the case when the radius $r$ of the observation point is less than the radius $r'$ of the charge. In that case, we may write $\Phi(\mathbf{r}) = \frac{q}{4\pi\varepsilon_0 r'} \sum_{\ell=0}^{\infty} \left(\frac{r}{r'}\right)^{\ell} P_{\ell}(\cos\gamma)$, which can be written as $\Phi(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0} \sum_{\ell=0}^{\infty} \sum_{m=-\ell}^{\ell} \sqrt{\frac{4\pi}{2\ell+1}}\; I_{\ell m}\, r^{\ell}\, Y_{\ell m}(\theta, \phi)$, where the interior spherical multipole moments $I_{\ell m} \equiv \frac{q}{(r')^{\ell+1}} \sqrt{\frac{4\pi}{2\ell+1}}\; Y^{*}_{\ell m}(\theta', \phi')$ are defined as the complex conjugate of irregular solid
https://en.wikipedia.org/wiki/Isenthalpic%20process
An isenthalpic process or isoenthalpic process is a process that proceeds without any change in enthalpy, H, or specific enthalpy, h. Overview If a steady-state, steady-flow process is analysed using a control volume, everything outside the control volume is considered to be the surroundings. Such a process will be isenthalpic if there is no transfer of heat to or from the surroundings, no work done on or by the surroundings, and no change in the kinetic energy of the fluid. This is a sufficient but not necessary condition for isoenthalpy. The necessary condition for a process to be isoenthalpic is that the sum of each of the terms of the energy balance other than enthalpy (work, heat, changes in kinetic energy, etc.) cancel each other, so that the enthalpy remains unchanged. For a process in which magnetic and electric effects (among others) give negligible contributions, the associated energy balance can be written as $q - w = \left(h_2 - h_1\right) + \frac{v_2^2 - v_1^2}{2} + g\left(z_2 - z_1\right)$. If $q = w = 0$, $\frac{v_2^2 - v_1^2}{2} = 0$ and $g\left(z_2 - z_1\right) = 0$, then it must be that $h_1 = h_2$. The throttling process is a good example of an isoenthalpic process in which significant changes in pressure and temperature can occur to the fluid, and yet the net sum of the associated terms in the energy balance is null, thus rendering the transformation isoenthalpic. The lifting of a relief (or safety) valve on a pressure vessel is an example of a throttling process. The specific enthalpy of the fluid inside the pressure vessel is the same as the specific enthalpy of the fluid as it escapes through the valve. With a knowledge of the specific enthalpy of the fluid and the pressure outside the pressure vessel, it is possible to determine the temperature and speed of the escaping fluid. In an isenthalpic process: $h_1 = h_2$, $dh = 0$. Isenthalpic processes on an ideal gas follow isotherms, since $dh = c_p\, dT = 0$ implies $dT = 0$. See also Adiabatic process Joule–Thomson effect Ideal gas laws Isentropic process
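A minimal sketch of the throttling argument under ideal-gas assumptions (h = cp·T with constant cp; the numbers are illustrative, and real gases deviate via the Joule–Thomson effect):

```python
# Impose h2 = h1 across a throttle and invert h = cp*T: for an ideal gas the
# outlet temperature is unchanged, which is why its isenthalps are isotherms.
cp = 1005.0          # J/(kg K), air, assumed constant
T1 = 300.0           # K, upstream temperature
h1 = cp * T1         # specific enthalpy upstream

T2 = h1 / cp         # downstream temperature from h2 = h1
print(T2)            # 300.0 K -> no temperature change for an ideal gas
```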
https://en.wikipedia.org/wiki/SegaSoft
SegaSoft, originally headquartered in Redwood City, California and later San Francisco, was a joint venture by Sega and CSK (Sega's majority stockholder at the time), created in 1995 to develop and publish games for the PC and Sega Saturn, primarily in the North American market. SegaSoft was responsible for, among other things, the Heat.net multiplayer game system and publishing the last few titles made by Rocket Science Games. History In 1996, SegaSoft announced that they would be publishing games for all viable platforms, not just Saturn and PC. This, however, never came to fruition, as in January 1997 SegaSoft restructured to focus on the PC and online gaming. SegaSoft disbanded in 2000 with staff layoffs. Many of them were reassigned to Sega.com, a new company established to handle Sega's online presence in the United States. Published games Incomplete List 10Six Alien Race Bug Too! Cosmopolitan Virtual Makeover Cosmopolitan Virtual Makeover 2 Da Bomb Emperor of the Fading Suns Essence Virtual Makeover Fatal Abyss Flesh Feast Golf: The Ultimate Collection Lose Your Marbles Grossology Mr. Bones Net Fighter Obsidian Plane Crazy Puzzle Castle Rocket Jockey Science Fiction: The Ultimate Collection Scud: The Disposable Assassin Scud: Industrial Evolution The Space Bar Three Dirty Dwarves Trampoline-Fractured Fairy Tales: A Frog Prince Vigilance Cancelled games G.I. Ant Heat Warz Ragged Earth Sacred Pools Skies Heat.net Heat.net, stylized HEAT.NET, was an online PC gaming system produced by SegaSoft and launched in 1997 during Bernie Stolar's tenure as SEGA of America president. Heat.net hosted both Sega-published first- and second-party games, as well as popular third-party games of the era, such as Quake II and Baldur's Gate. Much like Kali, it also allowed users to play any IPX network-compatible game, regardless of whether or not it was designed for the Internet. Each supported game had its own chat lobby and game creation options
https://en.wikipedia.org/wiki/Advaxis
Advaxis Inc. is an American company devoted to the discovery, development and commercialization of immunotherapies based on a technology platform which uses engineered Listeria monocytogenes (aka Lm). The company is headquartered in Princeton, New Jersey and was incorporated in Delaware in 2006. The Lm-based platform on which the company's products are based involves the use of attenuated Lm which secrete antigen/adjuvant fusion proteins and stimulate a patient's immune system (specifically their T cells) to mount an immune response to the secreted antigen; if the antigen is specifically found on cancerous cells, then the result aims to be an effective immune response targeting and eliminating the cancer. Treatments developed using this paradigm are referred to as Lm-LLO immunotherapies. Today, the Company has over fifteen distinct constructs in various stages of development, directly developed by the Company and through strategic collaborations with centers such as: the National Cancer Institute, Cancer Research – UK, the Wistar Institute, the University of Pennsylvania, and the Department of Homeland Security among others. The Company also has a veterinary medicine program that is evaluating an Lm-LLO based immunotherapy in a Phase 1 study in canine osteosarcoma. Corporate history and governance Beginnings Advaxis was a Delaware corporation when it was acquired by a shell corporation (in the official SEC sense) in November 2004. The acquiring company was Great Expectations, which was incorporated in Colorado in June 1987. The only operating company owned by Great Expectations was Advaxis and in 2004, a month after the acquisition, it changed its name to Advaxis, and 18 months after that it reincorporated as a Delaware corporation. The official 'date of inception' for the company is 1 March 2002. Partnerships In 2014, Advaxis entered a co-development and commercialization agreement with India's Biocon for the ADXS-HPV therapeutic
https://en.wikipedia.org/wiki/Hematopathology
Hematopathology or hemopathology (both also spelled haem-, see spelling differences) is the study of diseases and disorders affecting and found in blood cells, their production, and any organs and tissues involved in hematopoiesis, such as bone marrow, the spleen, and the thymus. Diagnoses and treatment of diseases such as leukemia and lymphoma often deal with hematopathology; techniques and technologies include flow cytometry studies and immunohistochemistry. In the United States, hematopathology is a board-certified subspecialty by the American Board of Pathology. Board-eligible or board-certified hematopathologists are usually pathology residents (anatomic, clinical, or combined) who have completed hematopathology fellowship training after their pathology residency. The hematopathology fellowship lasts either one or two years. A physician who practices hematopathology is called a hematopathologist.
https://en.wikipedia.org/wiki/Platform-independent%20GUI%20library
A PIGUI (Platform Independent Graphical User Interface) package is a software library that a programmer uses to produce GUI code for multiple computer platforms. The package presents subroutines and/or objects (along with a programming approach) which are independent of the GUIs that the programmer is targeting. For software to qualify as PIGUI it must support several GUIs under at least two different operating systems (e.g. just supporting OPEN LOOK and X11 on two Unix boxes doesn't count). The package does not necessarily provide any additional portability features. Native look and feel is a desirable feature, but is not essential for PIGUIs. Considerations Using a PIGUI has limitations: the PIGUI deals only with the GUI aspects of the program, so the programmer is responsible for other portability issues; most PIGUIs slow the execution of the resulting code; and programmers are largely limited to the feature set provided by the PIGUI. Dependence on a PIGUI can lead to project difficulties, since fewer people know how to code any specific PIGUI than know a platform-specific GUI, limiting the number of people who can give advanced help, and if the vendor goes out of business there may be no further support, including future OS enhancements, though availability of source code can ease but not eliminate this problem. Also, bugs in any package, including the PIGUI, filter down to production code. Alternative approaches Web browsers offer a convenient alternative for many applications. Web browsers utilize HTML as a presentation layer for applications hosted on a central server, and web browsers are available for virtually every platform. However, some applications do not lend themselves well to the web paradigm, requiring a local application with GUI capabilities. Where such applications must support multiple platforms, PIGUI can be more appropriate. Instead of using a PIGUI, developers could partition their applications into GUI and non-GUI objects, and imp
https://en.wikipedia.org/wiki/International%20Association%20for%20Food%20Protection
The International Association for Food Protection (IAFP), founded in 1911, is a non-profit association of food safety professionals based in Des Moines, Iowa. The organization claims a membership of over 3,000 members from 50 nations. The mission of the IAFP is to provide food safety professionals worldwide with a forum to exchange information on protecting the food supply. The association provides its members with an information network on scientific, technical, and practical developments in food safety and sanitation through its two scientific journals, Food Protection Trends and Journal of Food Protection, its educational annual meeting, and interaction with other food safety professionals. Before 2000, it was known as the International Association of Dairy and Milk Inspectors (1911–1936), International Association of Milk Sanitarians (1936–1947), and International Association of Milk and Food Sanitarians, Inc. The name was changed in 1966 to the International Association of Milk, Food and Environmental Sanitarians. In 1999, it received the current name. History In the early 20th century, an increasing number of cities and states in the US passed policies to ensure the safety of milk. The laws were a response to the food industry's deception at that time. Milk products on the market were subject to various forms of adulteration, such as dilution with water or the addition of butter or cheaper substitutes. In addition, spoiled milk was a danger to health: infant mortality rates were lower in cities that monitored milk production and sale. The association was established in 1911, based in Milwaukee, Wisconsin, with 35 members. One of the members was from Canada, and one from Australia; the rest were from the US. According to the Association's constitution, one objective was to develop "uniform and efficient inspection of dairy farms, milk establishments, milk and milk products" by "men who have a thorough knowledge of dairy work." The objective reflects the historical context when the associ
https://en.wikipedia.org/wiki/Pradeep%20Sindhu
Pradeep Sindhu is an Indian-American business executive. He is the chairman, chief development officer (CDO) and co-founder of data center technology company Fungible. Previously, he co-founded Juniper Networks, where he was the chief scientist and served as CEO until 1996. Biography Sindhu holds a B.Tech. in electrical engineering (1974) from the Indian Institute of Technology, Kanpur, M.S. in electrical engineering (1976) from the University of Hawaiʻi, and a PhD (1982) in computer science from Carnegie Mellon University where he studied under Bob Sproull. Work Sindhu had worked at the Computer Science Lab of Xerox PARC for 11 years. Sindhu worked on design tools for very-large-scale integration (VLSI) of integrated circuits and high-speed interconnects for shared memory architecture multiprocessors. Sindhu founded Juniper Networks along with Dennis Ferguson and Bjorn Liencres in February 1996 in California. The company was subsequently reincorporated in Delaware in March 1998 and went public on 25 June 1999. Sindhu worked on the architecture, design, and development of the Juniper M40 data router. Sindhu's earlier work subsequently influenced the architecture, design, and development of Sun Microsystems' first high-performance multiprocessor system family, which included systems such as the SS1000 and SC2000. Sindhu is the founder and CEO of data center technology company Fungible.
https://en.wikipedia.org/wiki/Cylindrical%20multipole%20moments
Cylindrical multipole moments are the coefficients in a series expansion of a potential that varies logarithmically with the distance to a source, i.e., as $\ln R$. Such potentials arise in the electric potential of long line charges, and the analogous sources for the magnetic potential and gravitational potential. For clarity, we illustrate the expansion for a single line charge, then generalize to an arbitrary distribution of line charges. Through this article, the primed coordinates such as $(\rho', \theta')$ refer to the position of the line charge(s), whereas the unprimed coordinates such as $(\rho, \theta)$ refer to the point at which the potential is being observed. We use cylindrical coordinates throughout, e.g., an arbitrary vector $\mathbf{r}$ has coordinates $(\rho, \theta, z)$, where $\rho$ is the radius from the $z$ axis, $\theta$ is the azimuthal angle and $z$ is the normal Cartesian coordinate. By assumption, the line charges are infinitely long and aligned with the $z$ axis. Cylindrical multipole moments of a line charge The electric potential of a line charge $\lambda$ located at $(\rho', \theta')$ is given by $\Phi(\rho, \theta) = \frac{-\lambda}{2\pi\varepsilon_0} \ln R = \frac{-\lambda}{4\pi\varepsilon_0} \ln\left[\rho^2 + (\rho')^2 - 2\rho\rho'\cos(\theta - \theta')\right]$ where $R$ is the shortest distance between the line charge and the observation point. By symmetry, the electric potential of an infinite line charge has no $z$-dependence. The line charge $\lambda$ is the charge per unit length in the $z$-direction, and has units of (charge/length). If the radius $\rho$ of the observation point is greater than the radius $\rho'$ of the line charge, we may factor out $\rho^2$ and expand the logarithm in powers of $(\rho'/\rho) < 1$, $\Phi(\rho, \theta) = \frac{-\lambda}{2\pi\varepsilon_0}\left[\ln \rho - \sum_{k=1}^{\infty} \frac{1}{k}\left(\frac{\rho'}{\rho}\right)^k \cos k(\theta - \theta')\right]$ which may be written as $\Phi(\rho, \theta) = \frac{-\lambda}{2\pi\varepsilon_0}\ln \rho + \frac{1}{2\pi\varepsilon_0}\sum_{k=1}^{\infty} \frac{C_k \cos k\theta + S_k \sin k\theta}{\rho^k}$ where the multipole moments are defined as $C_k = \frac{\lambda}{k}(\rho')^k \cos k\theta'$ and $S_k = \frac{\lambda}{k}(\rho')^k \sin k\theta'$. Conversely, if the radius $\rho$ of the observation point is less than the radius $\rho'$ of the line charge, we may factor out $(\rho')^2$ and expand the logarithm in powers of $(\rho/\rho') < 1$, which may be written as $\Phi(\rho, \theta) = \frac{-\lambda}{2\pi\varepsilon_0}\ln \rho' + \frac{1}{2\pi\varepsilon_0}\sum_{k=1}^{\infty} \rho^k \left(I_k \cos k\theta + J_k \sin k\theta\right)$ where the interior multipole moments are defined as $I_k = \frac{\lambda}{k}\frac{\cos k\theta'}{(\rho')^k}$ and $J_k = \frac{\lambda}{k}\frac{\sin k\theta'}{(\rho')^k}$. General cylindrical multipole moments The generalization to an arbitrary distribution of line charges $\lambda(\rho', \theta')$ is straightforward. The functional form is the same and the moments can be written $C_k = \frac{1}{k}\int \lambda(\rho', \theta')\,(\rho')^k \cos k\theta' \; \rho'\,d\rho'\,d\theta'$ and $S_k = \frac{1}{k}\int \lambda(\rho', \theta')\,(\rho')^k \sin k\theta' \; \rho'\,d\rho'\,d\theta'$. Note that the $\lambda(\rho', \theta')$ represents the line charge per unit area in the $(\rho, \theta)$ plane.
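A quick numerical check of the exterior expansion (a sketch in Python; the charge position and observation point are arbitrary illustrative values, and units are chosen so that $\lambda/2\pi\varepsilon_0 = 1$):

    import numpy as np

    rho_p, theta_p = 0.5, 0.3   # line-charge position (primed coordinates)
    rho, theta     = 2.0, 1.1   # observation point, with rho > rho_p

    # Exact potential: -ln R, with R the in-plane distance to the charge
    R = np.sqrt(rho**2 + rho_p**2 - 2*rho*rho_p*np.cos(theta - theta_p))
    exact = -np.log(R)

    # Truncated multipole series from the expansion above
    k = np.arange(1, 30)
    series = -np.log(rho) + np.sum((rho_p/rho)**k / k * np.cos(k*(theta - theta_p)))

    print(exact, series)   # the two values agree to roughly machine precision

Since $\rho'/\rho = 0.25$ here, the series converges geometrically and 30 terms are far more than needed.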
https://en.wikipedia.org/wiki/Hunting%20oscillation
Hunting oscillation is a self-oscillation, usually unwanted, about an equilibrium. The expression came into use in the 19th century and describes how a system "hunts" for equilibrium. The expression is used to describe phenomena in such diverse fields as electronics, aviation, biology, and railway engineering. Railway wheelsets A classical hunting oscillation is a swaying motion of a railway vehicle (often called truck hunting or bogie hunting) caused by the coning action on which the directional stability of an adhesion railway depends. It arises from the interaction of adhesion forces and inertial forces. At low speed, adhesion dominates but, as the speed increases, the adhesion forces and inertial forces become comparable in magnitude and the oscillation begins at a critical speed. Above this speed, the motion can be violent, damaging track and wheels and potentially causing derailment. The problem does not occur on systems with a differential because the action depends on both wheels of a wheelset rotating at the same angular rate; differentials, however, are rare on railway vehicles, whose wheels are normally fixed to their axles in pairs. Some trains, like the Talgo 350, have no differential, yet they are mostly unaffected by hunting oscillation because most of their wheels rotate independently of one another. The wheels of the power car, however, can be affected, since they are fixed to their axles in pairs as in conventional bogies. Wheels of reduced conicity and bogies with independently rotating wheels are both cheaper than fitting a suitable differential to the bogies of a train. The problem was first noticed towards the end of the 19th century, when train speeds became high enough to encounter it. Serious efforts to counteract it got underway in the 1930s, giving rise to lengthened trucks and the side-damping swing hanger truck. In the deve
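The wavelength of the low-speed kinematic oscillation is commonly estimated with Klingel's classical formula, $\lambda = 2\pi\sqrt{r_0 e / 2\gamma}$, where $r_0$ is the wheel rolling radius, $e$ the distance between the wheel-rail contact points, and $\gamma$ the effective conicity. A hedged back-of-envelope sketch in Python, with all parameter values assumed for illustration (the formula ignores inertia and creep forces, so it does not by itself predict the critical speed):

    import math

    r0    = 0.46   # wheel rolling radius, m (assumed)
    e     = 1.5    # distance between contact points, m (assumed)
    gamma = 0.05   # effective tread conicity (assumed)

    wavelength = 2 * math.pi * math.sqrt(r0 * e / (2 * gamma))  # metres

    v = 55.0                # train speed, m/s (about 200 km/h)
    freq = v / wavelength   # oscillation frequency grows linearly with speed
    print(f"wavelength {wavelength:.1f} m, frequency {freq:.2f} Hz at {v} m/s")

The kinematic frequency rises linearly with speed, which is one reason the swaying that is benign at low speed can become violent above the critical speed.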
https://en.wikipedia.org/wiki/High%20impedance
In electronics, high impedance means that a point in a circuit (a node) allows a relatively small amount of current through, per unit of applied voltage at that point. High impedance circuits are low current and potentially high voltage, whereas low impedance circuits are the opposite (low voltage and potentially high current). Numerical definitions of "high impedance" vary by application. High impedance inputs are preferred on measuring instruments such as voltmeters or oscilloscopes. In audio systems, a high-impedance input may be required for use with devices such as crystal microphones or other devices with high internal impedance. Analog electronics In analog circuits a high impedance node is one that does not have any low impedance paths to any other nodes in the frequency range being considered. Because the terms low and high depend on context, a node considered high impedance in one setting may count as low impedance in another; in general, a high impedance node (perhaps a signal source or amplifier input) carries relatively small currents for the voltages involved. High impedance nodes have higher thermal noise voltages and are more prone to capacitive and inductive noise pickup. When testing, they are often difficult to probe, as the impedance of an oscilloscope or multimeter can heavily affect the signal or voltage on the node. High impedance signal outputs are characteristic of some transducers (such as crystal pickups); they require a very high impedance load from the amplifier to which they are connected. Vacuum tube amplifiers and field-effect transistor amplifiers supply high-impedance inputs more easily than bipolar junction transistor-based amplifiers, although current buffer circuits or step-down transformers can match a high-impedance source to a low impedance amplifier input. Digital electronics In digital circuits, a high impedance (also known as hi-Z, tri-stated, or floating) output is not being driven to any de
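A hedged sketch of the probing problem mentioned above: the instrument's input impedance forms a voltage divider with the node's source impedance, so a high impedance node reads low (all component values below are assumed for illustration).

    v_source = 5.0    # open-circuit node voltage, V (assumed)
    r_source = 1e6    # node (source) impedance, ohms (assumed)
    r_meter  = 10e6   # instrument input impedance, ohms (a common DMM value)

    # Voltage divider formed by the source impedance and the meter
    v_measured = v_source * r_meter / (r_source + r_meter)
    error_pct = 100 * (v_source - v_measured) / v_source
    print(f"measured {v_measured:.3f} V, loading error {error_pct:.1f}%")

With these values the meter reads about 4.55 V, an error of roughly 9%; the same meter on a 100 ohm node would be off by only about 0.001%.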
https://en.wikipedia.org/wiki/J%C3%BAlio%20Ribeiro
Júlio César Ribeiro Vaughan (April 16, 1845 – November 1, 1890) was a Brazilian Naturalist novelist, philologist, journalist and grammarian. He is famous for his controversial novel A Carne and for designing the flag of the State of São Paulo, which he wanted to be the flag of Brazil. He is the patron of the 24th chair of the Brazilian Academy of Letters. Life Ribeiro was born in 1845, in Sabará, to the American George Washington Vaughan and Maria Francisca Vaughan (née Ribeiro). Initially homeschooled by his mother, he later entered a school in Minas, and, in 1862, he moved to Rio de Janeiro to enter the Academia Militar das Agulhas Negras. Three years later, he quit the military school to dedicate himself to journalism. For that, he studied Latin at the Faculdade de Direito da Universidade de São Paulo and later became a teacher there. As a journalist, he founded and wrote for O Sorocabano in Sorocaba; wrote for A Procelária and O Rebate in São Paulo, and also for O Estado de S. Paulo, Diário Mercantil, A Gazeta de Campinas and the Almanaque de São Paulo, where he published his studies on philology. He published his controversial and heavily erotic novel A Carne (The Flesh) in 1888. At the time of its publication, it was panned by critics such as José Veríssimo and Alfredo Pujol. The most vehement critic, however, was the priest Sena Freitas, who wrote an article in the Diário Mercantil entitled A Carniça (The Carrion). Ribeiro, a staunch anti-clericalist, answered Freitas's criticism with the series of articles O Urubu Sena Freitas (Sena Freitas, the Vulture). Those articles were later compiled and published under the name Uma Polêmica Célebre, in 1934. He died in 1890, a victim of tuberculosis. He is the grandfather of the chronicler Elsie Lessa, great-grandfather of the writers Ivan Lessa and Sérgio Pinheiro Lopes and great-great-grandfather of the writer Juliana Foster. Works O Padre Belchior de Pontes (1877) Gramática Portuguesa (1881) Cartas Sertanejas (1885) A Ca
https://en.wikipedia.org/wiki/Quick%20Response%20Engine
Quick Response Engine was a planning and scheduling program developed for the OS/400 platform. The program was developed by the Acacia Technologies division of Computer Associates in 1996. In 2002 the group was sold to SSA Global Technologies.
https://en.wikipedia.org/wiki/Carnegie%20stages
In embryology, Carnegie stages are a standardized system of 23 stages used to provide a unified developmental chronology of the vertebrate embryo. The stages are delineated through the development of structures, not by size or the number of days of development, and so the chronology can vary between species, and to a certain extent between embryos. In the human being only the first 60 days of development are covered; at that point, the term embryo is usually replaced with the term fetus. The system was based on work by Streeter (1942) and O'Rahilly and Müller (1987). The name "Carnegie stages" comes from the Carnegie Institution of Washington. While the Carnegie stages provide a universal system for staging and comparing the embryonic development of most vertebrates, other systems are occasionally used for the common model organisms in developmental biology, such as the Hamburger–Hamilton stages in the chick. Stages Days are approximate and reflect the days since the last ovulation before pregnancy ("postovulatory age"). Stage 1 (1 day): fertilization; polar bodies. Carnegie stage 1 is the unicellular embryo. This stage is divided into three substages. Stage 1 a Primordial embryo. All the genetic material necessary for a new individual, along with some redundant chromosomes, is present within a single plasmalemma. Penetration of the fertilising sperm allows the oocyte to resume meiosis and the polar body is extruded. Stage 1 b Pronuclear embryo. Two separate haploid components are present - the maternal and paternal pronuclei. The pronuclei move towards each other and eventually compress their envelopes where they lie adjacent near the centre of the cell. Stage 1 c Syngamic embryo. The last phase of fertilisation. The pronuclear envelopes disappear and the parental chromosomes come together in a process called syngamy. Stage 2 (2–3 days): cleavage; morula; compaction. Carnegie stage 2 begins when the zygote undergoes its first cell division, and ends when the blas
https://en.wikipedia.org/wiki/Evolution%3A%20The%20Modern%20Synthesis
Evolution: The Modern Synthesis, a popularising 1942 book by Julian Huxley (grandson of T.H. Huxley), set out his vision of the modern synthesis of evolutionary biology of the mid-20th century. It was enthusiastically reviewed in academic biology journals. Significance In the book, Huxley tackles the subject of evolution at full length, in what became the defining work of his life. His role was that of a synthesiser rather than a researcher, and it helped that he had met many of the other participants. His book was written whilst he was Secretary to the Zoological Society of London, and made use of his remarkable collection of reprints covering the first part of the century. It was published in 1942. Publication history Allen & Unwin, London. (1942, reprinted 1943, 1944, 1945, 1948, 1955; 2nd ed, with new introduction and bibliography by the author, 1963; 3rd ed, with new introduction and bibliography by nine contributors, 1974). U.S. first edition by Harper, 1943. Reception Contemporaneous Reviewing the book for American Scientist in 1943, the geologist Kirtley Mather wrote that the book provided "an admirable digest" of decades of work by many scientists. Mather commented "Of general interest is Huxley's defense of the Darwinian concept of evolution, under attack by Hogben, Bateson and other biologists, amusingly reminiscent of bygone days when another Huxley championed the cause of evolution in a wholly different battle." Mather noted, too, that Huxley treats evolutionary progress not as specialisation or "improvement" over earlier forms, but as the attainment of greater control over the environment and greater independence from it, marking progress in a new direction, or rather at a higher level, with "increases of aesthetic, intellectual, and spiritual experience and satisfaction." Modern The historian and philosopher of science Ehud Lamm, on the book's reissue in 2010 for Darwin's bicentenary, writes that at almost 800 pages it was l
https://en.wikipedia.org/wiki/Lampbrush%20chromosome
Lampbrush chromosomes are a special form of chromosome found in the growing oocytes (immature eggs) of most animals, except mammals. They were first described by Walther Flemming and Ruckert in 1882. Lampbrush chromosomes of tailed and tailless amphibians, birds and insects are the best described. Chromosomes transform into the lampbrush form during the diplotene stage of meiotic prophase I due to active transcription of many genes. They are highly extended meiotic half-bivalents, each consisting of two sister chromatids. Lampbrush chromosomes are clearly visible even in the light microscope, where they are seen to be organized into a series of chromomeres with large chromatin loops extended laterally. Continuous RNA transcription is required to maintain the typical chromomere-loop structure of lampbrush chromosomes. Inhibition of transcription leads to retraction of the lateral loops into chromomeres and to chromosome condensation. Lampbrush chromosomes produce a large number of mRNAs and non-coding RNAs, which are synthesized on the lateral loops. These transcripts are used during oogenesis and at the early stages of embryogenesis. Each lateral loop contains one or several transcription units with a polarized RNP-matrix coating the DNA axis of the loop. Amphibian and avian lampbrush chromosomes can be microsurgically isolated from the oocyte nucleus (germinal vesicle) with either forceps or needles. Giant chromosomes in the lampbrush form are a useful model for studying chromosome organization, genome function and gene expression during meiotic prophase, since they allow the individual transcription units to be visualized. Moreover, lampbrush chromosomes are widely used for high-resolution mapping of DNA sequences and the construction of detailed cytological maps of individual chromosomes.
https://en.wikipedia.org/wiki/Quantum%20acoustics
In physics, quantum acoustics is the study of sound under conditions such that quantum mechanical effects are relevant. For most applications, classical mechanics is sufficient to accurately describe the physics of sound. However, very high frequency sounds, or sounds made at very low temperatures, may be subject to quantum effects. Quantum acoustics can also refer to attempts within the scientific community to couple superconducting qubits to acoustic waves. One particularly successful method involves coupling a superconducting qubit with a surface acoustic wave (SAW) resonator and placing these components on different substrates to achieve a higher signal-to-noise ratio as well as control over the coupling strength of the components. This allows quantum experiments to verify that the phonons within the SAW resonator are in quantum Fock states by using quantum tomography. Similar attempts have been made using bulk acoustic resonators. One consequence of these developments is that it is possible to explore the properties of atoms of much larger size than found conventionally, by modelling them using a superconducting qubit coupled with a SAW resonator. See also Superfluid Phonon
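A rough criterion for when quantum effects matter, sketched in Python: a sound mode of frequency f behaves quantum mechanically when its phonon energy hf is comparable to or larger than the thermal energy k_BT (the standard back-of-envelope criterion; the temperatures below are illustrative assumptions).

    h  = 6.626e-34   # Planck constant, J*s
    kB = 1.381e-23   # Boltzmann constant, J/K

    def min_quantum_frequency(T):
        # Frequency above which h*f exceeds k_B*T at temperature T (kelvin)
        return kB * T / h

    print(f"{min_quantum_frequency(300):.2e} Hz at room temperature")  # ~6e12 Hz
    print(f"{min_quantum_frequency(0.02):.2e} Hz at 20 mK")            # ~4e8 Hz

This is why experiments coupling qubits to SAW resonators operate at gigahertz frequencies and millikelvin temperatures: under those conditions the thermal occupation of the mode is negligible and Fock states can be resolved.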
https://en.wikipedia.org/wiki/Legend%20of%20the%20Octopus
The Legend of the Octopus is a sports tradition during Detroit Red Wings home playoff games involving dead octopuses thrown onto the ice rink. The origins of the activity go back to the 1952 playoffs, when a National Hockey League team played two best-of-seven series to capture the Stanley Cup. Having eight arms, the octopus symbolized the number of playoff wins necessary for the Red Wings to win the Stanley Cup. The practice started on April 15, 1952, when Pete and Jerry Cusimano, brothers and storeowners in Detroit's Eastern Market, hurled an octopus into the rink of Olympia Stadium. The team swept the Toronto Maple Leafs and Montreal Canadiens en route to winning the championship. History Since 1952, the practice has persisted with each passing year. In one 1995 game, fans threw 36 octopuses, including one unusually large specimen. The Red Wings' unofficial mascot is a purple octopus named Al, and during playoff runs, two of these mascots were also hung from the rafters of Joe Louis Arena, symbolizing the 16 wins now needed to take home the Stanley Cup. The practice has become such an accepted part of the team's lore that fans have developed various techniques and "octopus etiquette" for launching the creatures onto the ice. On October 4, 1987, the last day of the regular Major League Baseball season, an octopus was thrown on the field in the top of the seventh inning at Tiger Stadium in Detroit as the Tigers defeated the Toronto Blue Jays, 1–0, clinching the AL East division championship. In May of that year, the Red Wings had defeated the Toronto Maple Leafs in the Stanley Cup playoffs. At the final game at Joe Louis Arena, 35 octopuses were thrown onto the ice. Twirling ban Al Sobotka, the former head ice manager at Little Caesars Arena and one of the two Zamboni drivers, was the person who retrieved the thrown octopuses from the ice. When the Red Wings played at Joe Louis Arena, he was known to twirl an octopus above his head as he walked across the ice rink to the
https://en.wikipedia.org/wiki/Obsessive%20relational%20intrusion
Obsessive relational intrusion (ORI) occurs when someone knowingly and repeatedly invades another person's privacy boundaries by using intrusive tactics to try to get closer to that person. It includes behaviors such as repeated calls and texts, malicious contact, spreading rumors, stalking, and violence (kidnapping and assault). Drs. Brian Spitzberg and William Cupach, the creators of the term, define ORI as "repeated and unwanted pursuit and invasion of one's sense of physical or symbolic privacy by another person, either stranger or acquaintance, who desires and/or presumes an intimate relationship". Some victims of ORI have no preexisting relationship with or interest in their pursuers; others know their pursuers, but are less interested in making an existing relationship more intimate. Components There are several key components of ORI that distinguish it from other similar relational patterns. First, ORI involves a lack of mutual agreement regarding the nature or even the existence of a relationship. While the person obsessed with the relationship is attempting to make a closer connection, the object of ORI desires freedom from continued forced contact. Spitzberg writes that "this dialectical tension is endemic to the formation and ongoing construction of all interpersonal relationships". Second, ORI is not associated with a singular event, but is repeated. Spitzberg and Cupach write, "Obsessiveness is reflected in the fact that the intruder is fixated on the target of attention; the intruder's thoughts and behaviors are persistent, preoccupying, and often morbid. Pursuit is persistent despite the absence of reciprocity by the obsessional object and despite resistance by the object." The episodes of unwanted behavior tend to escalate over time, with the seriousness rising and the time between incidents shortening. Third, the intrusion can be symbolic and psychological, not just physical. The invasion of privacy as well as the imposition on the victim's
https://en.wikipedia.org/wiki/Innumeracy%20%28book%29
Innumeracy: Mathematical Illiteracy and its Consequences is a 1988 book by mathematician John Allen Paulos about innumeracy (deficiency of numeracy) as the mathematical equivalent of illiteracy: incompetence with numbers rather than words. Innumeracy is a problem with many otherwise educated and knowledgeable people. While many people would be ashamed to admit they are illiterate, there is very little shame in admitting innumeracy by saying things like "I'm a people person, not a numbers person", or "I always hated math"; Paulos challenges whether this widespread cultural acceptance of innumeracy is justified. Paulos speaks mainly of the common misconceptions about, and inability to deal comfortably with, numbers, and the logic and meaning that they represent. He looks at real-world examples in stock scams, psychics, astrology, sports records, elections, sex discrimination, UFOs, insurance and law, lotteries, and drug testing. Paulos discusses innumeracy with quirky anecdotes, scenarios, and facts, encouraging readers in the end to look at their world in a more quantitative way. The book sheds light on the link between innumeracy and pseudoscience. For example, a fortune-telling psychic's few correct and generic observations are remembered, while the many incorrect guesses are forgotten. He also stresses the mismatch between the actual frequency of various risks and popular perceptions of how likely those risks are. The problems of innumeracy come at a great cost to society. Topics include probability and coincidence, innumeracy in pseudoscience, statistics, and trade-offs in society. For example, the danger of being killed in a car accident is much greater than that of terrorism, and this should be reflected in how we allocate our limited resources. Background John Allen Paulos (born July 4, 1945) is an American professor of mathematics at Temple University in Pennsylvania. He is a writer and speaker on mathematics and the importance of mathematic
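A classic calculation of the kind the book uses to correct intuitions about coincidence is the birthday problem (this particular snippet is an illustration in Python, not code from the book):

    def p_shared_birthday(n):
        # Probability that at least two of n people share a birthday,
        # assuming 365 equally likely birthdays
        p_unique = 1.0
        for i in range(n):
            p_unique *= (365 - i) / 365
        return 1 - p_unique

    print(f"{p_shared_birthday(23):.3f}")   # ~0.507: better than even odds

Most people guess that far more than 23 people are needed for even odds, which is exactly the sort of misjudgment of probability that Paulos labels innumeracy.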
https://en.wikipedia.org/wiki/Sogitec%204X
The Sogitec 4X was a digital audio workstation developed by Giuseppe di Giugno at IRCAM (Paris) in the 1980s. It was the last large hardware processor before the development of the ISPW; later solutions, such as Max/MSP, combined control and audio processing in the same computer. The 4X built on the achievements of the earlier Halaphone, being capable of timbre alteration and sound localization. Nicolas Schöffer was one of the first users to build his composition method with this computer.
https://en.wikipedia.org/wiki/Jeffrey%20P.%20Buzen
Jeffrey Peter Buzen (born May 28, 1943) is an American computer scientist in system performance analysis best known for his contributions to queueing theory. His PhD dissertation (available at https://archive.org/details/DTIC_AD0731575) and his 1973 paper Computational algorithms for closed queueing networks with exponential servers have guided the study of queueing network modeling for decades. Born in Brooklyn, Buzen holds three degrees in Applied Mathematics: an ScB (1965) from Brown University and, from Harvard University, an MS (1966) and a PhD (1971). He was a systems programmer at the National Institutes of Health in Bethesda, Maryland (1967–69), where his technique for optimizing the performance of a realtime biomedical computer system led to his first publication at a 1969 IEEE conference. After completing his PhD, he held concurrent appointments as a Lecturer in Computer Science at Harvard and as a Systems Engineer at Honeywell (1971–76). Some of his students at Harvard have gone on to become well known figures in computing. Buzen was PhD thesis advisor for Robert M. Metcalfe (1973), Turing Award winner and co-inventor of Ethernet, and for John M. McQuillan (1974), developer of the original adaptive routing algorithms used in the ARPANET and the Internet. Buzen also co-taught (with Ugo Gagliardi) a two-semester graduate level course on Operating Systems (AM 251a/AM 251br) that Microsoft co-founder Bill Gates took during his freshman year (1973–74). Two decades later, Gates wrote "It was the only 'computer course' I officially ever took at Harvard." (private email, July 24, 1995) In addition to being an educator and a researcher, Buzen is also an entrepreneur. Along with fellow Harvard Applied Mathematics PhDs Robert Goldberg and Harold Schwenk, he co-founded BGS Systems in 1975. The company, which began operations in his basement, developed, marketed and supported software products for the performance management and capacity planning of enterprise computer
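The 1973 paper's central contribution is what is now called Buzen's algorithm, a convolution that computes the normalizing constant G(N) of a closed product-form queueing network in O(MN) operations. A minimal sketch in Python for single-server exponential stations (the demand values and population below are illustrative, not from the paper):

    def buzen_G(demands, N):
        # demands: relative service demand rho_k of each station
        # returns G[0..N]; G[n] is the normalizing constant for n customers
        G = [1.0] + [0.0] * N
        for rho in demands:
            for n in range(1, N + 1):
                G[n] += rho * G[n - 1]
        return G

    G = buzen_G([1.0, 0.5, 0.25], N=5)   # 3 stations, 5 customers
    # Throughput of the station whose demand is normalized to 1.0:
    print(G[4] / G[5])

Performance measures such as throughput and queue-length distributions fall out of ratios of the G values, which is what made the algorithm so useful for practical capacity planning.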
https://en.wikipedia.org/wiki/Michel%20Raynaud
Michel Raynaud (16 June 1938 – 10 March 2018) was a French mathematician working in algebraic geometry and a professor at Paris-Sud 11 University. Early life and education He was born in Riom, France, the only child of a modest household. His father was a carpenter and his mother cleaned houses. He attended the local primary schools at Châtel Guyon and Riom, and attended high school at the boarding school in Clermont-Ferrand. Raynaud entered the École normale supérieure, where he studied from 1958 to 1962, placing first in 1961 in the agrégation, the examination through which new high school teachers were selected. In 1962, he entered the French National Centre for Scientific Research, where he studied together with his future wife, Michèle Chaumartin. Both had the same doctoral advisor in Alexander Grothendieck. Raynaud received his doctoral degree in 1967. Career Raynaud was hired as a professor at the Orsay Faculty of Sciences in Paris, where he was employed until his retirement in 2001. Raynaud died on 10 March 2018 in Rueil-Malmaison, France. Research In 1983, Raynaud published a proof of the Manin–Mumford conjecture. In 1985, he proved Raynaud's isogeny theorem on Faltings heights of isogenous elliptic curves. With David Harbater and following the work of Jean-Pierre Serre, Raynaud proved Abhyankar's conjecture in 1994. The Raynaud surface was named after him by William E. Lang in 1979. Honors and awards In 1970 Raynaud was an invited speaker at the International Congress of Mathematicians in Nice. In 1987 he received the Ampère Prize from the French Academy of Sciences. In 1995 he received the Cole Prize, together with David Harbater, for his solution of the Abhyankar conjecture. Personal life He practiced skiing (especially in Val-d'Isère), tennis, and rock climbing (in Fontainebleau). He was married to the mathematician Michèle Raynaud (née Chaumartin), who also worked with Alexander Grothendieck.
https://en.wikipedia.org/wiki/NOAA%27s%20Environmental%20Real-time%20Observation%20Network
The NOAA Environmental Real-time Observation Network (NERON) is a project to establish a nationwide network of high quality near real-time weather monitoring stations across the United States. A 20-mile by 20-mile grid has been established, with the hopes of having one observation system within each grid cell. Effort is being put forth by local National Weather Service (NWS) offices and other state climate groups to ensure that sites in the network meet important criteria. The network will be composed of existing, and in some cases upgraded, sites (ASOS, Cooperative Observer, etc.) as well as new sites being established for other local and state efforts. Many stations in New England and New York have already been installed. See also Citizen Weather Observer Program (CWOP) Community Collaborative Rain, Hail and Snow Network (CoCoRaHS) Mesonet
https://en.wikipedia.org/wiki/Impact%20driver
An impact driver is a tool that delivers a strong, sudden rotational force and forward thrust. The force can be delivered either by striking with a hammer in the case of manual impact drivers, or mechanically in the case of powered impact drivers. It is often used by mechanics to loosen larger screws, bolts and nuts that are corrosively "frozen" or over-torqued. The direction can also be reversed for situations where screws have to be tightened with torque greater than a screwdriver can reasonably provide. Manual impact drivers Manual impact drivers consist of a heavy outer sleeve that surrounds an inner core that is splined to it. The spline is curved so that when the user strikes the outer sleeve with a hammer, its downward force works on the spline to produce turning force on the core and any socket or work bit attached to it. The tool translates the heavy rotational inertia of the sleeve to the lighter core to generate large amounts of torque. At the same time, the striking blow from the hammer forces the impact driver forward into the screw, reducing or eliminating cam out. This attribute is beneficial for Phillips screws, which are prone to cam out. It is also excellent for use with the Robertson square socket head screws that are in common use in Canada. Powered impact drivers Typical battery-powered impact drivers are similar to electric drills when used to drive screws or bolts, but additionally have a spring-driven mechanism that applies rotational striking blows once the torque required becomes too great for the motor alone. This should not be confused with the hammer mechanism found on hammer drills, which delivers a longitudinal blow. Most impact drivers have a handle to make them easier to hold. Impact drivers can be used on various types of nuts, bolts, and surfaces, and with various materials, including steel, iron, wood, and plastic. An impact driver is more appropriate than a drill for tightening bolts. Compared to an impa
https://en.wikipedia.org/wiki/Ballistic%20pendulum
A ballistic pendulum is a device for measuring a bullet's momentum, from which it is possible to calculate the velocity and kinetic energy. Ballistic pendulums have been largely rendered obsolete by modern chronographs, which allow direct measurement of the projectile velocity. Although the ballistic pendulum is considered obsolete, it remained in use for a significant length of time and led to great advances in the science of ballistics. The ballistic pendulum is still found in physics classrooms today, because of its simplicity and usefulness in demonstrating properties of momentum and energy. Unlike other methods of measuring the speed of a bullet, the basic calculations for a ballistic pendulum do not require any measurement of time, but rely only on measures of mass and distance. In addition to its primary uses of measuring the velocity of a projectile or the recoil of a gun, the ballistic pendulum can be used to measure any transfer of momentum. For example, a ballistic pendulum was used by physicist C. V. Boys to measure the elasticity of golf balls, and by physicist Peter Guthrie Tait to measure the effect that spin had on the distance a golf ball traveled. History The ballistic pendulum was invented in 1742 by English mathematician Benjamin Robins (1707–1751), and published in his book New Principles of Gunnery, which revolutionized the science of ballistics, as it provided the first way to accurately measure the velocity of a bullet. Robins used the ballistic pendulum to measure projectile velocity in two ways. The first was to attach the gun to the pendulum, and measure the recoil. Since the momentum of the gun is equal to the momentum of the ejecta, and since the projectile was (in those experiments) the large majority of the mass of the ejecta, the velocity of the bullet could be approximated. The second, and more accurate, method was to measure the bullet momentum directly by firing it into the pendulum. Robins experimented with musket balls of arou
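The textbook calculation, sketched in Python with assumed masses and swing height: momentum is conserved when the bullet embeds in the pendulum, and the combined mass then converts its kinetic energy into a measurable rise h.

    import math

    m = 0.010   # bullet mass, kg (assumed)
    M = 5.0     # pendulum mass, kg (assumed)
    h = 0.050   # vertical rise of the pendulum, m (assumed, derived from
                # distance measurements along the arc)
    g = 9.81    # gravitational acceleration, m/s^2

    v_after  = math.sqrt(2 * g * h)    # pendulum speed just after impact
    v_bullet = (m + M) / m * v_after   # bullet speed before impact
    print(f"bullet velocity about {v_bullet:.0f} m/s")

With these numbers the bullet speed comes out near 500 m/s; note that only masses, a length, and g enter the calculation, with no timing required, just as described above.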
https://en.wikipedia.org/wiki/Dynamic%20Kernel%20Module%20Support
Dynamic Kernel Module Support (DKMS) is a program/framework that enables generating Linux kernel modules whose sources generally reside outside the kernel source tree. The concept is to have DKMS modules automatically rebuilt when a new kernel is installed. Framework An essential feature of DKMS is that it automatically recompiles all DKMS modules if a new kernel version is installed. This allows drivers and devices outside of the mainline kernel to continue working after a Linux kernel upgrade. Another benefit of DKMS is that it allows the installation of a new driver on an existing system, running an arbitrary kernel version, without any need for manual compilation or precompiled packages provided by the vendor. DKMS was written by the Linux Engineering Team at Dell in 2003. It is included in many distributions, such as Ubuntu, Debian, Fedora, SUSE, Mageia and Arch. DKMS is free software released under the terms of the GNU General Public License (GPL) v2 or later. DKMS supports both the rpm and deb package formats out of the box. See also Binary blob
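A hedged example of the framework's central artifact, a dkms.conf file describing how to build a module from source (a config file, not program code; the package and module names here are invented for illustration):

    PACKAGE_NAME="example"
    PACKAGE_VERSION="1.0"
    BUILT_MODULE_NAME[0]="example"
    DEST_MODULE_LOCATION[0]="/kernel/drivers/misc"
    AUTOINSTALL="yes"

With the source placed under /usr/src/example-1.0/, the module is registered and built with the dkms command line (dkms add -m example -v 1.0, then dkms build and dkms install); the AUTOINSTALL="yes" setting is what tells DKMS to rebuild the module automatically for each newly installed kernel.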
https://en.wikipedia.org/wiki/Duodenojejunal%20flexure
The duodenojejunal flexure or duodenojejunal junction, also known as the angle of Treitz, is the border between the duodenum and the jejunum. Structure The ascending portion of the duodenum ascends on the left side of the aorta, as far as the level of the upper border of the second lumbar vertebra. At this point, it turns abruptly forward to merge with the jejunum, forming the duodenojejunal flexure. This forms the beginning of the jejunum. The duodenojejunal flexure is surrounded by the suspensory muscle of the duodenum. It is retroperitoneal, so is less mobile than the jejunum that comes after it, helping to stabilise the jejunum. The duodenojejunal flexure lies in front of the left psoas major muscle, the left renal artery, and the left renal vein. It is covered in front, and partly at the sides, by peritoneum continuous with the left portion of the mesentery. Clinical significance The ligament of Treitz, a peritoneal fold, from the right crus of diaphragm, is an identification point for the duodenojejunal flexure during abdominal surgery. Additional images See also Duodenum Transpyloric plane
https://en.wikipedia.org/wiki/Electroless%20nickel-phosphorus%20plating
Electroless nickel-phosphorus plating, also referred to as E-nickel, is a chemical process that deposits an even layer of nickel-phosphorus alloy on the surface of a solid substrate, like metal or plastic. The process involves dipping the substrate in a water solution containing nickel salt and a phosphorus-containing reducing agent, usually a hypophosphite salt. It is the most common version of electroless nickel plating (EN plating) and is often referred to by that name. A similar process uses a borohydride reducing agent, yielding a nickel-boron coating instead. Unlike electroplating, EN plating processes in general do not require passing an electric current through the bath and the substrate; the reduction of the metal cations in solution to metallic nickel is achieved by purely chemical means, through an autocatalytic reaction. This creates an even layer of metal regardless of the geometry of the surface – in contrast to electroplating, which suffers from uneven current density due to the effect of substrate shape on the electric resistance of the bath and therefore on the current distribution within it. Moreover, EN plating can be applied to non-conductive surfaces. It has many industrial applications, from merely decorative to the prevention of corrosion and wear. It can be used to apply composite coatings, by suspending suitable powders in the bath. Historical overview The reduction of nickel salts to nickel metal by hypophosphite was accidentally discovered by Charles Adolphe Wurtz in 1844. In 1911, François Auguste Roux of L'Aluminium Français patented the process (using both hypophosphite and orthophosphite) for general metal plating. However, Roux's invention does not seem to have received much commercial use. In 1946 the process was accidentally rediscovered by Abner Brenner and Grace E. Riddell of the National Bureau of Standards. They tried adding various reducing agents to an electroplating bath in order to prevent undesirable oxidation reactions at the anode. When they
https://en.wikipedia.org/wiki/Suspensory%20ligament%20of%20ovary
The suspensory ligament of the ovary, also infundibulopelvic ligament (commonly abbreviated IP ligament or simply IP), is a fold of peritoneum that extends out from the ovary to the wall of the pelvis. Some sources consider it a part of the broad ligament of uterus while other sources just consider it a "termination" of the ligament. It is not considered a true ligament in that it does not physically support any anatomical structures; however it is an important landmark and it houses the ovarian vessels. The suspensory ligament is directed upward over the iliac vessels. Structure It contains the ovarian artery, ovarian vein, ovarian nerve plexus, and lymphatic vessels. Composition The suspensory ligament of the ovary is one continuous tissue that connects the ovary to the wall of the pelvis. There are separate names for the two regions of this tissue. In the anterior region, the suspensory ligament is attached to the wall of the pelvis via a continuous tissue called peritoneum. In the more posterior region, the suspensory ligament is attached to the upper pole of the ovary and the infundibulum of the fallopian tube via a continuous tissue called the broad ligament. In sum, the suspensory ligament consists of a single connective tissue that has different regional notations, the peritoneum and the broad ligament. Peritoneal relationship Most of the abdominal cavity is lined by a double-membranous sac called the peritoneum. The interior is called the peritoneal cavity; this is the location of all intraperitoneal organs (as opposed to retroperitoneal organs). The most inferior extent of the peritoneum covers the pelvic inlet; in females, this region of the peritoneum is referred to as the 'broad ligament'. Development The suspensory ligament originates from the mesonephros, which, in turn, originates from intermediate mesoderm. The prenatal development of the suspensory ligament of the ovary is a part of the development of the reproductive system. See also Li
https://en.wikipedia.org/wiki/Suspensory%20ligament%20of%20penis
The suspensory ligament of the penis is attached to the pubic symphysis, which holds the penis close to the pubic bone and supports it when erect. The ligament does not directly connect to the corpus cavernosum penis, but may still play a role in erectile dysfunction. The ligament can be surgically lengthened in a procedure known as ligamentolysis, which is a form of penis enlargement. Gallery See also Suspensory ligament of clitoris fundiform ligament of penis
https://en.wikipedia.org/wiki/Parametrium
The parametrium is the fibrous and fatty connective tissue that surrounds the uterus. This tissue separates the supravaginal portion of the cervix from the bladder. The parametrium lies in front of the cervix and extends laterally between the layers of the broad ligaments. It connects the uterus to other tissues in the pelvis. It is different from the perimetrium, which is the outermost layer of the uterus. The uterine artery and ovarian ligament are located in the parametrium. An associated form of pelvic inflammatory disease is inflammation of the parametrium known as parametritis.
https://en.wikipedia.org/wiki/W86
The W86 was an American earth-penetrating ("bunker buster") nuclear warhead, intended for use on the Pershing II intermediate-range ballistic missile (IRBM). The W86 design was canceled in September 1980 when the Pershing II missile mission shifted from destroying hardened targets to targeting soft targets at greater range. The W85 warhead, which had been developed in parallel with the W86, was used for all production Pershing II missiles. Development work for the W86's penetrator case began in 1975 at Sandia National Laboratories. The weapon was intended to allow for the destruction of hardened structures and the cratering of runways while using smaller yields. Design The warhead was developed from Los Alamos nuclear artillery shell technologies. A 2005 study by the National Research Council that examined a number of nuclear earth penetrating weapon proposals gave the W86's diameter and weight, and calculated the depths to which such an EPW could penetrate medium strength rock, low strength rock, and silt or clay for a given peak allowable deceleration. The study cites the classified Sandia development report for the W86 warhead for its figures. The National Research Council study refers to the W86 as the "low-yield weapon". A 1979 article in Sandia's monthly Lab News magazine describes a test unit penetrating into the Tonopah Test Range lake bed. Development called for approximately 20 tests of the penetrator into various mediums. The weapon was an implosion-type weapon. See also List of nuclear weapons B61 nuclear bomb Pershing II
https://en.wikipedia.org/wiki/Pectineal%20line%20%28femur%29
On the posterior surface of the femur, the intermediate ridge or pectineal line is continued to the base of the lesser trochanter and gives attachment to the pectineus muscle.
https://en.wikipedia.org/wiki/Lattice-based%20access%20control
In computer security, lattice-based access control (LBAC) is a complex access control model based on the interaction between any combination of objects (such as resources, computers, and applications) and subjects (such as individuals, groups or organizations). In this type of label-based mandatory access control model, a lattice is used to define the levels of security that an object may have and that a subject may have access to. The subject is only allowed to access an object if the security level of the subject is greater than or equal to that of the object. Mathematically, the security level access may also be expressed in terms of the lattice (a partially ordered set) where each object and subject have a greatest lower bound (meet) and least upper bound (join) of access rights. For example, if two subjects A and B need access to an object, the security level is defined as the meet of the levels of A and B. In another example, if two objects X and Y are combined, they form another object Z, which is assigned the security level formed by the join of the levels of X and Y. LBAC is also known as a label-based access control (or rule-based access control) restriction as opposed to role-based access control (RBAC). Lattice based access control models were first formally defined by Denning (1976); see also Sandhu (1993).
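A minimal sketch of these rules in Python, using a simple totally ordered lattice of classification levels (a real deployment would typically use a richer lattice, e.g. levels combined with category sets; the labels below are illustrative):

    LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top_secret": 3}

    def meet(a, b):
        # greatest lower bound: the level two subjects can share
        return a if LEVELS[a] <= LEVELS[b] else b

    def join(a, b):
        # least upper bound: the level of a combination of two objects
        return a if LEVELS[a] >= LEVELS[b] else b

    def can_access(subject, obj):
        # access requires the subject's level to dominate the object's
        return LEVELS[subject] >= LEVELS[obj]

    print(meet("secret", "confidential"))        # confidential
    print(join("secret", "confidential"))        # secret
    print(can_access("confidential", "secret"))  # False

In a lattice that is only partially ordered, meet and join are defined by the order relation rather than by numeric comparison, but the access rule keeps the same shape.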