https://en.wikipedia.org/wiki/Biomarker%20%28medicine%29
In medicine, a biomarker is a measurable indicator of the severity or presence of some disease state. It may be defined as a "cellular, biochemical or molecular alteration in cells, tissues or fluids that can be measured and evaluated to indicate normal biological processes, pathogenic processes, or pharmacological responses to a therapeutic intervention." More generally a biomarker is anything that can be used as an indicator of a particular disease state or some other physiological state of an organism. According to the WHO, the indicator may be chemical, physical, or biological in nature - and the measurement may be functional, physiological, biochemical, cellular, or molecular. A biomarker can be a substance that is introduced into an organism as a means to examine organ function or other aspects of health. For example, rubidium chloride is used in isotopic labeling to evaluate perfusion of heart muscle. It can also be a substance whose detection indicates a particular disease state, for example, the presence of an antibody may indicate an infection. More specifically, a biomarker indicates a change in expression or state of a protein that correlates with the risk or progression of a disease, or with the susceptibility of the disease to a given treatment. Biomarkers can be characteristic biological properties or molecules that can be detected and measured in parts of the body like the blood or tissue. They may indicate either normal or diseased processes in the body. Biomarkers can be specific cells, molecules, or genes, gene products, enzymes, or hormones. Complex organ functions or general characteristic changes in biological structures can also serve as biomarkers. Although the term biomarker is relatively new, biomarkers have been used in pre-clinical research and clinical diagnosis for a considerable time. For example, body temperature is a well-known biomarker for fever. Blood pressure is used to determine the risk of stroke. It is also widely known that
https://en.wikipedia.org/wiki/SERI%20microalgae%20culture%20collection
The SERI microalgae culture collection was a collection from the Department of Energy's Aquatic Species Program cataloged at the Solar Energy Research Institute located in Golden, Colorado. The Aquatic Species Program ended in 1996 after its funding was cut, at which point its microalgae collection was moved to the University of Hawaii. In 1998 the University of Hawaii, partnered with the University of California at Berkeley, received a grant from the National Science Foundation (NSF), for their proposal to develop commercial, medical, and industrial uses of microalgae, as well as new and more efficient techniques for cultivation. This grant was used to form Marine Bioproduct Engineering Center (MarBEC), a facility operating within the University system of Hawaii at Manoa, but connected to corporate interests. Below is a list of the algal-strains in the microalgae culture collection from the closeout report of the Department of Energy's Aquatic Species Program.
https://en.wikipedia.org/wiki/IBM%203270%20PC
The IBM 3270 PC (IBM System Unit 5271), released in October 1983, is an IBM PC XT containing additional hardware that, in combination with software, can emulate the behaviour of an IBM 3270 terminal. It can therefore be used both as a standalone computer, and as a terminal to a mainframe. IBM later released the 3270 AT (IBM System Unit 5273), which is a similar design based on the IBM PC AT. They also released high-end graphics versions of the 3270 PC in both XT and AT variants. The XT-based versions are called 3270 PC/G and 3270 PC/GX and they use a different System Unit 5371, while their AT counterparts (PC AT/G and PC AT/GX) have System Unit 5373. Technology The additional hardware occupies nearly all the free expansion slots in the computer. It includes a video card which occupies 1-3 ISA slots (depending on what level of graphics support is required), and supports CGA and MDA video modes. The display resolution is 720×350, either on the matching 14-inch color monitor (model 5272) or in monochrome on an MDA monitor. A further expansion card intercepts scancodes from the 122-key 3270 keyboard, translating them into XT scancodes which are then sent to the normal keyboard connector. This keyboard, officially called the 5271 Keyboard Element, weighs 9.3 pounds. The final additional card (a 3278 emulator) provides the communication interface to the host mainframe. Models 3270 PC (System Unit 5271) - original 3270 PC, initially offered in three different Models numbered 2, 4, and 6. Model 2 has non-expandable memory of 256 KB and a single floppy drive. Model 4 has expandable memory, a second floppy drive, and a parallel port. Model 6 replaces one of the floppy drives with a 10 MB hard disk. Model 6 had a retail price of at its launch (with 512KB RAM), not including display, cables and software; a working configuration with an additional 192KB RAM, color display (model 5272) and the basic cabling and software (but without support for host/mainframe-side graph
https://en.wikipedia.org/wiki/Non-cellular%20life
Non-cellular life, also known as acellular life, is life that exists without a cellular structure for at least part of its life cycle. Historically, most definitions of life postulated that an organism must be composed of one or more cells, but this is no longer considered necessary, and modern criteria allow for forms of life based on other structural arrangements. The primary candidates for non-cellular life are viruses. Some biologists consider viruses to be organisms, but others do not. Their primary objection is that no known viruses are capable of autonomous reproduction; they must rely on cells to copy them. Viruses as non-cellular life The nature of viruses was unclear for many years following their discovery as pathogens. They were described as poisons or toxins at first, then as "infectious proteins", but with advances in microbiology it became clear that they also possessed genetic material, a defined structure, and the ability to spontaneously assemble from their constituent parts. This spurred extensive debate as to whether they should be regarded as fundamentally organic or inorganic — as very small biological organisms or very large biochemical molecules — and since the 1950s many scientists have thought of viruses as existing at the border between chemistry and life; a gray area between living and nonliving. Viral replication and self-assembly has implications for the study of the origin of life, as it lends further credence to the hypotheses that cells and viruses could have started as a pool of replicators where selfish genetic information was parasitizing on producers in RNA world, as two strategies to survive, gained in response to environmental conditions, or as self-assembling organic molecules. Viroids Viroids are the smallest infectious pathogens known to biologists, consisting solely of short strands of circular, single-stranded RNA without protein coats. They are mostly plant pathogens and some are animal pathogens, from which some ar
https://en.wikipedia.org/wiki/Peasemeal
Peasemeal (also called pea flour) is a flour produced from yellow field peas that have been roasted. The roasting enables greater access to protein and starch, thus increasing nutritive value. Traditionally the peas would be ground three times using water-powered stone mills. The color of the flour is brownish yellow due to the caramelization achieved during roasting, while the texture ranges from fine to gritty. The uses of peasemeal are similar to those of maize meal in baking, porridge and quick breads. Peasemeal has had a long history in Great Britain and is still used in Scotland for dishes such as brose and bannocks. Brose is similar to farina in its consumption: boiling water or stock is added to the peasemeal, which is then eaten immediately with butter, pepper, salt, sugar or raisins. The production of peasemeal had largely disappeared by the 1970s, until Fergus Morrison took over a run-down water-powered mill in Golspie, Scotland, and revived the mill and peasemeal due to popular demand. Currently, the use of yellow pea flour is again gaining momentum due to the nutritional benefits and sustainability associated with this food crop. Pea flour can fully or partly replace wheat flour in bakery products, such as cakes, cookies and bread.
https://en.wikipedia.org/wiki/Packet%20Clearing%20House
Packet Clearing House (PCH) is the international nonprofit organization responsible for providing operational support and security to critical internet infrastructure, including Internet exchange points and the core of the domain name system. The organization also works in the areas of cybersecurity coordination, regulatory policy and Internet governance. Overview Packet Clearing House (PCH) was formed in 1994 by Chris Alan and Mark Kent to provide efficient regional and local network interconnection alternatives for the West Coast of the United States. It has grown to become a leading proponent of neutral independent network interconnection and provider of route-servers at major exchange points worldwide. PCH provides equipment, training, data, and operational support to organizations and individual researchers seeking to improve the quality, robustness, and accessibility of the Internet. Major PCH projects include Building and supporting nearly half of the world's approximately 700 Internet exchange points (IXPs), and maintaining the canonical index of Internet exchange points, with data going back to 1994; Operating the world's largest anycast Domain Name System (DNS) server platform, including two root nameservers, more than 400 top-level domains (TLDs) including the country-code domains of more than 130 countries, and the Quad9 recursive resolver; Operating the only FIPS 140-2 Level 4 global TLD DNSSEC key management and signing infrastructure, with facilities in Singapore, Zurich, and San Jose; Implementing network research data collection initiatives in more than 100 countries; Publishing original research and policy guidance in the areas of telecommunications regulation, including the 2011 and 2016 Interconnection Surveys, country reports such as those for Canada in 2012 and 2016 and Paraguay in 2012, and a survey of critical infrastructure experts for the GCSC; and Developing and presenting educational materials to foster a better under
https://en.wikipedia.org/wiki/INOC-DBA
The INOC-DBA (Inter-Network Operations Center Dial-By-ASN) hotline phone system is a global voice telephony network that connects the network operations centers and security incident response teams of critical Internet infrastructure providers such as backbone carriers, Internet service providers, and Internet exchanges as well as critical individuals within the policy, regulatory, Internet governance, security and vendor communities. It was built by Packet Clearing House in 2001, was publicly announced at NANOG in October of 2002, and the secretariat function was transferred from PCH to the Brazilian CERT in 2015. INOC-DBA is a closed system, ensuring secure and authenticated communications, and uses a combination of redundant directory services and direct peer-to-peer communications between stations to create a resilient, high-survivability network. It carries both routine operational traffic and emergency-response traffic. The INOC-DBA network uses IETF-standard SIP Voice over IP protocols to ensure interoperability between thousands of users across more than 2,800 NOCs and CERTs, which use dozens of different varieties of station and switch devices. It was the first production implementation of inter-carrier SIP telephony, when voice over IP had previously consisted exclusively of H.323 gateway-to-gateway call transport. INOC-DBA became the first telephone network of any kind to provide service on all seven continents when Michael Holstine of Raytheon Polar Services installed terminals at the South Pole Station in March of 2001.
https://en.wikipedia.org/wiki/Self-actualization
Self-actualization, in Maslow's hierarchy of needs, is the highest level of psychological development, where personal potential is fully realized after basic bodily and ego needs have been fulfilled. The term self-actualization was coined by the organismic theorist Kurt Goldstein for the motive to realize one's full potential: "the tendency to actualize itself as fully as possible is the basic drive ... the drive of self-actualization." Carl Rogers similarly wrote of "the curative force in psychotherapy - man's tendency to actualize himself, to become his potentialities ... to express and activate all the capacities of the organism." Abraham Maslow's theory Definition Maslow defined self-actualization to be "self-fulfillment, namely the tendency for him [the individual] to become actualized in what he is potentially. This tendency might be phrased as the desire to become more and more what one is, to become everything that one is capable of becoming." He used the term to describe a desire, not a driving force, that could lead to realizing one's capabilities. He did not feel that self-actualization determined one's life; rather, he felt that it gave the individual a desire, or motivation to achieve budding ambitions. Maslow's idea of self-actualization has been commonly interpreted as "the full realization of one's potential" and of one's "true self." A more explicit definition of self-actualization according to Maslow is "intrinsic growth of what is already in the organism, or more accurately of what is the organism itself ... self-actualization is growth-motivated rather than deficiency-motivated." This explanation emphasizes the fact that self-actualization cannot normally be reached until other lower order necessities of Maslow's hierarchy of needs are satisfied. While Goldstein defined self-actualization as a driving force, Maslow uses the term to describe personal growth that takes place once lower order needs have essentially been met, one corollary being that, in
https://en.wikipedia.org/wiki/Irony%20punctuation
Irony punctuation is any form of notation proposed or used to denote irony or sarcasm in text. Written text, in English and other languages, lacks a standard way to mark irony, and several forms of punctuation have been proposed to fill the gap. The oldest is the percontation point in the form of a reversed question mark (⸮), proposed by English printer Henry Denham in the 1580s for marking rhetorical questions, which can be a form of irony. Specific irony marks have also been proposed, such as one in the form of an open upward arrow, used by Marcellin Jobard in the 19th century, and one in a form resembling a reversed question mark, proposed by French poet Alcanter de Brahm during the 19th century. Irony punctuation is primarily used to indicate that a sentence should be understood at a second level. A bracketed exclamation point or question mark as well as scare quotes are also occasionally used to express irony or sarcasm. Percontation point The percontation point (⸮), a reversed question mark later referred to as a rhetorical question mark, was proposed by Henry Denham in the 1580s and was used at the end of a question that does not require an answer—a rhetorical question. Its use died out in the 17th century. This character can be represented using the reversed question mark (⸮) found in Unicode as U+2E2E; another character approximating it is the Arabic question mark (؟), U+061F. The modern question mark (? U+003F) is descended from the "punctus interrogativus" (described as "a lightning flash, striking from right to left"), but unlike the modern question mark, the punctus interrogativus may be contrasted with the punctus percontativus—the former marking questions that require an answer while the latter marks rhetorical questions. Irony mark In 1668, John Wilkins, in An Essay towards a Real Character and a Philosophical Language, proposed using an inverted exclamation mark to punctuate rhetorical questions. In an article dated 11 October 1841, Marcellin Job
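As a small supplement to the excerpt above, the following Python sketch (illustrative only, not part of the article) prints the code points the text names, which is a quick way to see the glyphs on systems whose fonts support them:

```python
# Code points named in the excerpt above; chr() turns each into its glyph.
marks = {
    "reversed question mark (percontation point)": 0x2E2E,
    "Arabic question mark": 0x061F,
    "modern question mark": 0x003F,
}
for name, cp in marks.items():
    print(f"U+{cp:04X}  {chr(cp)}  {name}")
```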
https://en.wikipedia.org/wiki/Case%20report
In medicine, a case report is a detailed report of the symptoms, signs, diagnosis, treatment, and follow-up of an individual patient. Case reports may contain a demographic profile of the patient, but usually describe an unusual or novel occurrence. Some case reports also contain a literature review of other reported cases. Case reports are professional narratives that provide feedback on clinical practice guidelines and offer a framework for early signals of effectiveness, adverse events, and cost. They can be shared for medical, scientific, or educational purposes. Types Most case reports are on one of six topics: An unexpected association between diseases or symptoms. An unexpected event in the course of observing or treating a patient. Findings that shed new light on the possible pathogenesis of a disease or an adverse effect. Unique or rare features of a disease. Unique therapeutic approaches. A positional or quantitative variation of the anatomical structures. Roles in research and education A case report is generally considered a type of anecdotal evidence. Given their intrinsic methodological limitations, including lack of statistical sampling, case reports are placed at the bottom of the hierarchy of clinical evidence, together with case series. Nevertheless, case reports do have genuinely useful roles in medical research and evidence-based medicine. In particular, they have facilitated recognition of new diseases and adverse effects of treatments (e.g., recognition of the link between administration of thalidomide to mothers and malformations in their babies was triggered by a case report). Case reports have a role in pharmacovigilance. They can also help understand the clinical spectrum of rare diseases as well as unusual presentations of common diseases. They can help generate study hypotheses, including plausible mechanisms of disease. Case reports may also have a role to play in guiding the personalization of treatments in clinical practice
https://en.wikipedia.org/wiki/Green%27s%20matrix
In mathematics, and in particular ordinary differential equations, a Green's matrix helps to determine a particular solution to a first-order inhomogeneous linear system of ODEs. The concept is named after George Green. For instance, consider x'(t) = A(t)x(t) + g(t), where x(t) is a vector and A(t) is an n × n matrix function of t, which is continuous for t in I, where I is some interval. Now let x1(t), ..., xn(t) be n linearly independent solutions to the homogeneous equation x'(t) = A(t)x(t) and arrange them in columns to form a fundamental matrix: X(t) = [x1(t), ..., xn(t)]. Now X(t) is an n × n matrix solution of X'(t) = A(t)X(t). This fundamental matrix will provide the homogeneous solution, and if added to a particular solution will give the general solution to the inhomogeneous equation. Let x(t) = X(t)c(t) be the general solution. Now, x'(t) = X'(t)c(t) + X(t)c'(t) = A(t)X(t)c(t) + X(t)c'(t) = A(t)x(t) + X(t)c'(t). This implies X(t)c'(t) = g(t), or c(t) = c0 + ∫_a^t X^{-1}(s) g(s) ds, where c0 is an arbitrary constant vector. Now the general solution is x(t) = X(t)c0 + X(t) ∫_a^t X^{-1}(s) g(s) ds. The first term is the homogeneous solution and the second term is the particular solution. Now define the Green's matrix G(t, s) = X(t)X^{-1}(s). The particular solution can now be written xp(t) = ∫_a^t G(t, s) g(s) ds. External links An example of solving an inhomogeneous system of linear ODEs and finding a Green's matrix from www.exampleproblems.com.
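A minimal worked case of the construction above, added here as an illustration (not from the article) and assuming a constant coefficient matrix A, where the fundamental matrix is the matrix exponential:

```latex
% Constant-coefficient case: x'(t) = A x(t) + g(t) with A independent of t.
% A fundamental matrix is X(t) = e^{At}, so the Green's matrix is
\[
  G(t,s) \;=\; X(t)\,X^{-1}(s) \;=\; e^{At}e^{-As} \;=\; e^{A(t-s)},
\]
% and the particular solution it produces is
\[
  x_p(t) \;=\; \int_a^t e^{A(t-s)}\, g(s)\, ds .
\]
```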
https://en.wikipedia.org/wiki/Static%20grass
Static grass is used in scale models and miniatures to create realistic-looking grass textures. It consists of small coloured fibres charged with static electricity, making them stand on end when sprinkled onto a surface coated with glue that then hardens, holding the fibres in place. Static grass is usually prepared by applying a layer of glue on the surface, then pouring the fibres on and tipping off the excess. The fibres can also be applied with a shaker, also known as a puffer. Static grass consists of man-made fibres selected for their ability to hold a static electric charge. They are usually a blend of coloured nylon, rayon, or polyester fibres that are used to more realistically replicate grass on a modeller's layout. The fibres are usually sold by weight in 2, 4, 6, 10 and 12 millimetre lengths, although fibres can be found from as little as 0.5 mm in length. If using an electronic applicator, the fibres are attracted to the adhesive vertically and "end-on", giving the grass-like effect the modeller requires. The application sequence is as follows: Apply adhesive to the area to be covered with grass; Ground the applicator to the adhesive area; Load the applicator with fibres; Apply the fibres; Allow the adhesive to dry; Remove excess fibres. Once the basic technique is mastered, advanced techniques can be learned, such as developing differing lengths, dead grass and creating grass tufts, to enhance realism. Several companies produce static grass products, including PECO, Woodland Scenics and WW Scenics. See also Rail transport modelling#Scatter (modeling), an alternative that may be just dyed sawdust External links How to Make a Homemade Static Grass Applicator for Model Train Scenery Easy Static Grass application by Craig Stocks Static Grass Applicator & Static Grass specialists — DoubleO Scenics Static Grass Applicator — GrassTech USA
https://en.wikipedia.org/wiki/Body%20composition
In physical fitness, body composition refers to quantifying the different components (or "compartments") of a human body. The selection of compartments varies by model but may include fat, bone, water, and muscle. Two people of the same gender, height, and body weight may have completely different body types as a consequence of having different body compositions. This may be explained by a person having low or high body fat, dense muscles, or big bones. Compartment models Body composition models typically use between 2 and 6 compartments to describe the body. Common models include: 2 compartment: Fat mass (FM) and fat-free mass (FFM) 3 compartment: Fat mass (FM), water, and fat-free dry mass 4 compartment: Fat mass (FM), water, protein, and mineral 5 compartment: Fat mass (FM), water, protein, bone mineral content, and non-osseous mineral content 6 compartment: Fat mass (FM), water, protein, bone mineral content, non-osseous mineral content, and glycogen As a rule, the compartments must sum to the body weight. The proportion of each compartment as a percent is often reported, found by dividing the compartment weight by the body weight. Individual compartments may be estimated based on population averages or measured directly or indirectly. Many measurement methods exist with varying levels of accuracy. Typically, the higher compartment models are more accurate, as they require more data and thus account for more variation across individuals. The four compartment model is considered the reference model for assessment of body composition as it is robust to most variation and each of its components can be measured directly. Measurement methods A wide variety of body composition measurement methods exist. The "gold standard" measurement technique for the 4-compartment model consists of a weight measurement, body density measurement using hydrostatic weighing or air displacement plethysmography, total body water calculation using isotope dilution analysis, a
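To make the bookkeeping described above concrete, here is a small added Python sketch (the numbers are hypothetical measurements chosen only for illustration, not data from the text) showing a 4-compartment model whose parts sum to body weight and are reported as percentages:

```python
# Hypothetical 4-compartment measurements in kg, for illustration only.
body_weight = 70.0
compartments = {"fat": 14.0, "water": 42.0, "protein": 10.5, "mineral": 3.5}

# As a rule, the compartments must sum to the body weight.
assert abs(sum(compartments.values()) - body_weight) < 1e-9

# Each compartment is commonly reported as a percentage of body weight.
for name, mass in compartments.items():
    print(f"{name:8s} {mass:5.1f} kg  {100 * mass / body_weight:4.1f} %")
```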
https://en.wikipedia.org/wiki/Longest%20increasing%20subsequence
In computer science, the longest increasing subsequence problem aims to find a subsequence of a given sequence in which the subsequence's elements are sorted in ascending order and in which the subsequence is as long as possible. This subsequence is not necessarily contiguous or unique. The longest increasing subsequences are studied in the context of various disciplines related to mathematics, including algorithmics, random matrix theory, representation theory, and physics. The longest increasing subsequence problem is solvable in time O(n log n), where n denotes the length of the input sequence. Example In the first 16 terms of the binary Van der Corput sequence 0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15 one of the longest increasing subsequences is 0, 2, 6, 9, 11, 15. This subsequence has length six; the input sequence has no seven-member increasing subsequences. The longest increasing subsequence in this example is not the only solution: for instance, 0, 4, 6, 9, 11, 15; 0, 2, 6, 9, 13, 15; and 0, 4, 6, 9, 13, 15 are other increasing subsequences of equal length in the same input sequence. Relations to other algorithmic problems The longest increasing subsequence problem is closely related to the longest common subsequence problem, which has a quadratic time dynamic programming solution: the longest increasing subsequence of a sequence S is the longest common subsequence of S and T, where T is the result of sorting S. However, for the special case in which the input is a permutation of the integers 1, 2, ..., n, this approach can be made much more efficient, leading to time bounds of the form O(n log log n). The largest clique in a permutation graph corresponds to the longest decreasing subsequence of the permutation that defines the graph (assuming the original non-permuted sequence is sorted from lowest value to highest). Similarly, the maximum independent set in a permutation graph corresponds to the longest non-decreasing subsequence. Therefore, longest increasing subsequence algorithms can be
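As a complement to the excerpt above, here is a short Python sketch of the standard O(n log n) patience-sorting approach to this problem (the function name and structure are illustrative, not taken from the article):

```python
from bisect import bisect_left

def longest_increasing_subsequence(seq):
    # tail_vals[k]: smallest possible tail value of an increasing
    # subsequence of length k+1; tail_idx[k]: index of that tail in seq.
    tail_vals, tail_idx = [], []
    prev = [None] * len(seq)            # predecessor links for reconstruction
    for i, x in enumerate(seq):
        k = bisect_left(tail_vals, x)   # first tail >= x (strict increase)
        prev[i] = tail_idx[k - 1] if k > 0 else None
        if k == len(tail_vals):
            tail_vals.append(x)
            tail_idx.append(i)
        else:
            tail_vals[k] = x
            tail_idx[k] = i
    # Rebuild one longest subsequence by following predecessor links.
    out, j = [], tail_idx[-1] if tail_idx else None
    while j is not None:
        out.append(seq[j])
        j = prev[j]
    return out[::-1]

print(longest_increasing_subsequence(
    [0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15]))
# -> [0, 2, 6, 9, 11, 15], one of the length-6 answers noted above
```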
https://en.wikipedia.org/wiki/Product%20category
In the mathematical field of category theory, the product of two categories C and D, denoted C × D and called a product category, is an extension of the concept of the Cartesian product of two sets. Product categories are used to define bifunctors and multifunctors. Definition The product category C × D has: as objects: pairs of objects (A, B), where A is an object of C and B of D; as arrows from (A1, B1) to (A2, B2): pairs of arrows (f, g), where f : A1 → A2 is an arrow of C and g : B1 → B2 is an arrow of D; as composition, component-wise composition from the contributing categories: (f2, g2) ∘ (f1, g1) = (f2 ∘ f1, g2 ∘ g1); as identities, pairs of identities from the contributing categories: 1(A, B) = (1A, 1B). Relation to other categorical concepts For small categories, this is the same as the action on objects of the categorical product in the category Cat. A functor whose domain is a product category is known as a bifunctor. An important example is the Hom functor, which has the product of the opposite of some category with the original category as domain: Hom : C^op × C → Set. Generalization to several arguments Just as the binary Cartesian product is readily generalized to an n-ary Cartesian product, the binary product of two categories can be generalized, completely analogously, to a product of n categories. The product operation on categories is commutative and associative, up to isomorphism, and so this generalization brings nothing new from a theoretical point of view.
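A short worked check of the componentwise structure described above, added here as an illustration (not part of the article): associativity and identities in C × D are inherited directly from the factor categories.

```latex
% Arrows in C x D compose componentwise, so associativity is inherited:
\[
  \bigl((f_3,g_3)\circ(f_2,g_2)\bigr)\circ(f_1,g_1)
  = (f_3\circ f_2\circ f_1,\; g_3\circ g_2\circ g_1)
  = (f_3,g_3)\circ\bigl((f_2,g_2)\circ(f_1,g_1)\bigr),
\]
% and the pair of identities is neutral:
\[
  (f,g)\circ(1_{A},1_{B}) = (f\circ 1_{A},\; g\circ 1_{B}) = (f,g).
\]
```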
https://en.wikipedia.org/wiki/Omega%20meson
The omega meson (ω) is a flavourless meson formed from a superposition of an up quark–antiquark and a down quark–antiquark pair. It is part of the vector meson nonet and mediates the nuclear force along with pions and rho mesons. Properties The most common decay mode for the ω meson is ω → π+ π− π0 at 89.2±0.7%, followed by ω → π0 γ at 8.34±0.26%. The quark composition of the ω meson can be thought of as a mix between uū, dd̄ and ss̄ states, but it is very nearly a pure symmetric uū-dd̄ state. This can be shown by deconstructing the wave function of the ω into its component parts. We see that the ω and φ mesons are mixtures of the SU(3) wave functions ψ1 and ψ8 as follows: ω = ψ8 sin θ + ψ1 cos θ, φ = ψ8 cos θ − ψ1 sin θ, where θ is the nonet mixing angle, ψ1 = (uū + dd̄ + ss̄)/√3 and ψ8 = (uū + dd̄ − 2ss̄)/√6. The mixing angle at which the components decouple completely can be calculated to be arctan(1/√2) ≈ 35.3°, which almost corresponds to the actual value calculated from the masses of 35°. Therefore, the ω meson is nearly a pure symmetric uū-dd̄ state. See also List of mesons Quark model Vector meson
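A short check of the decoupling claim above, added here as a worked step (not from the article): substituting the ideal mixing angle into the ω combination makes the strange-quark component vanish.

```latex
% At the ideal mixing angle \theta with \tan\theta = 1/\sqrt{2},
% \sin\theta = 1/\sqrt{3} and \cos\theta = \sqrt{2/3}.  Then
\[
  \omega \;=\; \psi_8\sin\theta + \psi_1\cos\theta
  \;=\; \frac{1}{\sqrt3}\,\frac{u\bar u + d\bar d - 2s\bar s}{\sqrt6}
      + \sqrt{\tfrac{2}{3}}\,\frac{u\bar u + d\bar d + s\bar s}{\sqrt3}
  \;=\; \frac{u\bar u + d\bar d}{\sqrt2},
\]
% so the s\bar s part cancels exactly and the omega is the pure
% symmetric combination of u\bar u and d\bar d quoted in the text.
```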
https://en.wikipedia.org/wiki/Fever%20of%20unknown%20origin
Fever of unknown origin (FUO) refers to a condition in which the patient has an elevated temperature (fever) but, despite investigations by a physician, no explanation is found. If the cause is found it is usually a diagnosis of exclusion, eliminating all possibilities until only the correct explanation remains. Causes Worldwide, infection is the leading cause of FUO with prevalence varying by country and geographic region. Extrapulmonary tuberculosis is the most frequent cause of FUO. Drug-induced hyperthermia, as the sole symptom of an adverse drug reaction, should always be considered. Disseminated granulomatoses such as tuberculosis, histoplasmosis, coccidioidomycosis, blastomycosis and sarcoidosis are associated with FUO. Lymphomas are the most common cause of FUO in adults. Thromboembolic disease (i.e. pulmonary embolism, deep venous thrombosis) occasionally shows fever. Although infrequent, its potentially lethal consequences warrant evaluation of this cause. Endocarditis, although uncommon, is possible. Bartonella infections are also known to cause fever of unknown origin. Human herpes viruses are a common cause of fever of unknown origin with one study showing Cytomegalovirus, Epstein–Barr virus, human herpesvirus 6 (HHV-6), human herpesvirus 7 (HHV-7) being present in 15%, 10%, 14% and 4.8% respectively with 10% of people presenting with co-infection (infection with two or more human herpes viruses). Infectious mononucleosis, most commonly caused by EBV, may present as a fever of unknown origin. Other symptoms of infectious mononucleosis vary with age with middle aged adults and the elderly more likely to have a longer duration of fever and leukopenia, and younger adults and adolescents more likely to have splenomegaly, pharyngitis and lymphadenopathy. Endemic mycoses such as histoplasmosis, blastomycosis, coccidiomycosis and paracoccidioidomycosis can cause a fever of unknown origin in immunocompromised as well as immunocompetent people. These endemic
https://en.wikipedia.org/wiki/Data%20General%20AOS
Data General AOS (an abbreviation for Advanced Operating System) was the name of a family of operating systems for Data General 16-bit Eclipse C, M, and S minicomputers, followed by AOS/VS and AOS/RT32 (1980) and later AOS/VS II (1988) for the 32-bit Eclipse MV line. Overview AOS/VS exploited the 8-ring protection architecture of the Eclipse MV hardware with ring 7 being the least privileged and ring 0 being the most privileged. The AOS/VS kernel ran in ring 0 and used ring-1 addresses for data structures related to virtual address translations. Ring 2 was unused and reserved for future use by the kernel. The Agent, which performed much of the system call validation for the AOS/VS kernel, as well as some I/O buffering and many compatibility functions, ran in ring 3 of each process. Ring 4 was used by various D.G. products such as the INFOS II DBMS. Rings 5 and 6 were reserved for use by user programs but rarely used except for large software such as the MV/UX inner-ring emulator and Oracle which used ring 5. All user programs ran in ring 7. The AOS software was far more advanced than competing PDP-11 operating systems. 16-bit AOS applications ran natively under AOS/VS and AOS/VS II on the 32-bit Eclipse MV line. AOS/VS (Advanced Operating System/Virtual Storage) was the most commonly used DG software product, and included a command-line interpreter (CLI) allowing for complex scripting, DUMP/LOAD, and other custom components. The 16-bit version of the CLI is famous for including an Easter egg meant to honor Xyzzy (which was pronounced "magic"). This was the internal code name of what externally became known as the AOS/VS 32-bit operating system. A user typing in the command "xyzzy" would get back a response from the CLI of "Nothing Happens". When a 32-bit version of the CLI became available under AOS/VS II, the same command instead reported "Twice As Much Happens". A modified version of System V.2 Unix called MV/UX hosted under AOS/VS was also available. A modi
https://en.wikipedia.org/wiki/Bindi
Bindi may refer to: Bindi (decoration), a forehead decoration Bindi (name) See also Bindii (disambiguation), a common name for several plant species Bindiya (disambiguation) Bhindi, term for okra in Indic languages Bindy, a name
https://en.wikipedia.org/wiki/Lorentz%20Medal
Lorentz Medal is a distinction awarded every four years by the Royal Netherlands Academy of Arts and Sciences. It was established in 1925 on the occasion of the 50th anniversary of the doctorate of Hendrik Lorentz. The medal is given for important contributions to theoretical physics, though in the past there have been some experimentalists among its recipients. The first winner, Max Planck, was personally selected by Lorentz. Eleven of the 23 award winners later received a Nobel Prize. The Lorentz medal is ranked fifth in a list of most prestigious international academic awards in physics. Recipients See also List of physics awards
https://en.wikipedia.org/wiki/Ricochet%20%28Internet%20service%29
Ricochet was one of the first wireless Internet access services in the United States, before Wi-Fi, 3G, and other technologies were available to the general public. It was developed and first offered by Metricom Incorporated, which shut down in 2001. The service was originally known as the Micro Cellular Data Network, or MCDN, gaining the Ricochet name when the service was launched to the public. History Metricom was founded in 1985, initially selling radios to electric, gas, oil, and water industrial customers. The company was founded by Dr. David M. Elliott and Paul Baran. Paul Allen took a controlling stake in Metricom in 1997. Service began in 1994 in Cupertino, California, and was deployed throughout Silicon Valley (the northern part of Santa Clara Valley) by 1995, the rest of the San Francisco Bay Area by 1996, and to other cities through the end of the 1990s. By this time, the service was operating at roughly the speed of a 56 kbit/s dialup modem. Ricochet introduced a higher-speed 128 kbit/s service in 1999; however, monthly fees for this service were more than double those for the original service. At its height in early 2001, Ricochet service was available in many areas, including Atlanta, Baltimore, and Dallas. Over 51,000 subscribers paid for the service. In July 2001, however, Ricochet's owner, Metricom, filed for Chapter 11 bankruptcy and shut down its service. Like many companies during the dot-com boom, Metricom had spent more money than it took in and concentrated on a nationwide rollout and marketing instead of developing select markets. Ricochet was reportedly used officially in the immediate disaster recovery following the September 11, 2001 terrorist attacks, partially operated by former employees as volunteers, when even cell phone networks were overloaded. Aftermath After bankruptcy, in November 2001, Aerie Networks, a Denver-based broadband firm, purchased the assets of the company at a liquidation sale. Service was restored t
https://en.wikipedia.org/wiki/182%20%28number%29
182 (one hundred [and] eighty-two) is the natural number following 181 and preceding 183. In mathematics 182 is an even number 182 is a composite number, as it is a positive integer with a positive divisor other than one or itself 182 is a deficient number, as the sum of its proper divisors, 154, is less than 182 182 is a member of the Mian–Chowla sequence: 1, 2, 4, 8, 13, 21, 31, 45, 66, 81, 97, 123, 148, 182 182 is a nontotient number, as there is no integer with exactly 182 coprimes below it 182 is an odious number 182 is a pronic number, oblong number or heteromecic number, a number which is the product of two consecutive integers (13 × 14) 182 is a repdigit in the D'ni numeral system (77), and in base 9 (222) 182 is a sphenic number, the product of three prime factors 182 is a square-free number 182 is an Ulam number Divisors of 182: 1, 2, 7, 13, 14, 26, 91, 182 In astronomy 182 Elsa is a S-type main belt asteroid OGLE-TR-182 is a star in the constellation Carina In the military JDS Ise (DDH-182), a Hyūga-class helicopter destroyer of the Japan Maritime Self-Defense Force The United States Air Force 182d Airlift Wing unit at Greater Peoria Regional Airport, Peoria, Illinois was a United States Navy troop transport during World War II was a United States Navy yacht during World War I was a United States Navy Alamosa-class cargo ship during World War II was a United States Navy during World War II was a United States Navy during World War II was a United States Navy following World War I 182nd Fighter Squadron, Texas Air National Guard unit of the Texas Air National Guard 182nd Infantry Regiment, now known as the 182nd Cavalry Squadron (RSTA), is the oldest combat regiment in the United States Army 182nd Battalion, Canadian Expeditionary Force during World War I In music Blink-182, an American pop punk band Blink-182 (album), their 2003 eponymous album In transportation Alfa Romeo 182 Formula One car 182nd–183rd Street
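A small added check (not part of the article) verifying several of the arithmetic claims above in Python:

```python
n = 182

divisors = [d for d in range(1, n + 1) if n % d == 0]
print(divisors)                    # [1, 2, 7, 13, 14, 26, 91, 182]
print(sum(divisors) - n)           # 154 < 182, so 182 is deficient
print(13 * 14 == n)                # True: pronic (product of consecutive integers)
print(bin(n).count("1") % 2 == 1)  # True: odious (odd number of binary 1s)

# Repdigit check in base 9: 182 = 2*81 + 2*9 + 2 -> digits "222"
digits, m = [], n
while m:
    digits.append(m % 9)
    m //= 9
print(digits[::-1])                # [2, 2, 2]
```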
https://en.wikipedia.org/wiki/The%20Algebra%20of%20Infinite%20Justice
The Algebra of Infinite Justice (2001) is a collection of essays written by Booker Prize winner Arundhati Roy. The book discusses a wide range of issues including political euphoria in India over its successful nuclear bomb tests, the effect of public works projects on the environment, the influence of foreign multinational companies on policy in poorer countries, and the "war on terror". Some of the essays in the collection were republished later, along with later writing, in her book My Seditious Heart. The official introduction to the book by Penguin India states : A few weeks after India detonated a thermonuclear device in 1998, Arundhati Roy wrote ‘The End of Imagination’. The essay attracted worldwide attention as the voice of a brilliant Indian writer speaking out with clarity and conscience against nuclear weapons. Over the next three and a half years, she wrote a series of political essays on a diverse range of momentous subjects: from the illusory benefits of big dams, to the downside of corporate globalization and the US Government’s war against terror. Essays The end of Imagination This is the name of the first essay in the 2001 book. It was later used as the title of a comprehensive collection of Roy's essays in 2016 The greater common good Essay concerning the controversial Sardar Sarovar Dam project in India's Narmada Valley. Power politics This essay examines Indian dam construction and challenges the idea that only "experts" can influence economic policy. It explores the human costs of the privatization of India’s power supply and the construction of monumental dams in India. This is the second essay in the original 2001 book. There is also a 2002 book of Roy's essays with this title Power Politics. The ladies have feelings so... The Algebra of Infinite Justice War is peace The world doesn't have to choose between the Taliban and the US government. All the beauty of the world—literature, music, art—lies between these two fundamentalis
https://en.wikipedia.org/wiki/IBM%20BladeCenter
The IBM BladeCenter was IBM's blade server architecture, until it was replaced by Flex System in 2012. The x86 division was later sold to Lenovo in 2014. History Introduced in 2002, based on engineering work started in 1999, the IBM eServer BladeCenter was relatively late to the blade server market. It differed from prior offerings in that it offered a range of x86 Intel server processors and input/output (I/O) options. The naming was changed to IBM BladeCenter in 2005. In February 2006, IBM introduced the BladeCenter H with switch capabilities for 10 Gigabit Ethernet and InfiniBand 4X. A web site called Blade.org was available for the blade computing community through about 2009. In 2012, the replacement Flex System was introduced. Enclosures IBM BladeCenter (E) The original IBM BladeCenter was later marketed as BladeCenter E. Power supplies have been upgraded through the life of the chassis from the original 1200 to 1400, 1800, 2000 and 2320 watts. The BladeCenter (E) was co-developed by IBM and Intel and included: 14 blade slots in 7U; a shared media tray with optical drive, floppy drive and USB 1.1 port; 1 (upgradable to 2) management modules; two slots for Gigabit Ethernet switches (can also have optical or copper pass-through); two slots for optional switch or pass-through modules, which can have additional Ethernet, Fibre Channel, InfiniBand or Myrinet 2000 functions; power: two (upgradable to four) power supplies, C19/C20 connectors; and two redundant high-speed blowers. IBM BladeCenter T BladeCenter T is the telecommunications company version of the original BladeCenter, available with either AC or DC (48 V) power. It has 8 blade slots in 8U, but uses the same switches and blades as the regular BladeCenter E. To keep the chassis NEBS Level 3 / ETSI compliant, special Network Equipment-Building System (NEBS) compliant blades are available. IBM BladeCenter H Upgraded BladeCenter design with high-speed fabric options, announced in 2006. Backwards compatible with older BladeCente
https://en.wikipedia.org/wiki/Los%20Alamos%20Primer
The Los Alamos Primer is a printed version of the first five lectures on the principles of nuclear weapons given to new arrivals at the top-secret Los Alamos laboratory during the Manhattan Project. The five lectures were given by physicist Robert Serber in April 1943. The notes from the lectures which became the Primer were written by Edward Condon. History The Los Alamos Primer was composed from five lectures given by the physicist Robert Serber to the newcomers at the Los Alamos Laboratory in April 1943, at the start of the Manhattan Project. The aim of the project was to build the first nuclear bomb, and these lectures were a very concise introduction to the principles of nuclear weapon design. Serber was a postdoctoral student of J. Robert Oppenheimer, the leader of the Los Alamos Laboratory, and worked with him on the project from the very start. The five lectures were conducted on April 5, 7, 9, 12, and 14, 1943; according to Serber, between 30 and 50 people attended them. Notes were taken by Edward Condon; the Primer is just 24 pages long. Only 36 copies were printed at the time. Serber later described the lectures: Previously the people working at the separate universities had no idea of the whole story. They only knew what part they were working on. So somebody had to give them the picture of what it was all about and what the bomb was like, what was known about the theory, and some idea why they needed the various experimental numbers. In July 1942, Oppenheimer held a "conference" at his office at Berkeley. No records were preserved, but the Primer arose from all the aspects of bomb design discussed there. Content The Primer, though only 24 pages long, consists of 22 sections, divided into chapters: Preliminaries Neutrons and the fission process Critical mass and efficiency Detonation, pre-detonation, and fizzles Conclusion The first paragraph states the intention of the Los Alamos Laboratory during World War II: The object of the project
https://en.wikipedia.org/wiki/Aerodynamic%20center
In aerodynamics, the torques or moments acting on an airfoil moving through a fluid can be accounted for by the net lift and net drag applied at some point on the airfoil, and a separate net pitching moment about that point whose magnitude varies with the choice of where the lift is chosen to be applied. The aerodynamic center is the point at which the pitching moment coefficient for the airfoil does not vary with lift coefficient (i.e. angle of attack), making analysis simpler: dC_m/dC_L = 0, where C_L is the aircraft lift coefficient. The lift and drag forces can be applied at a single point, the center of pressure, about which they exert zero torque. However, the location of the center of pressure moves significantly with a change in angle of attack and is thus impractical for aerodynamic analysis. Instead the aerodynamic center is used and as a result the incremental lift and drag due to change in angle of attack acting at this point is sufficient to describe the aerodynamic forces acting on the given body. Theory Within the assumptions embodied in thin airfoil theory, the aerodynamic center is located at the quarter-chord (25% chord position) on a symmetric airfoil while it is close but not exactly equal to the quarter-chord point on a cambered airfoil. From thin airfoil theory: c_l = c_l0 + 2πα, where c_l is the section lift coefficient and α is the angle of attack in radians, measured relative to the chord line; and c_m,c/4 = constant, where c_m,c/4 is the moment coefficient taken at the quarter-chord point. Differentiating with respect to angle of attack gives dc_l/dα = 2π and dc_m,c/4/dα = 0. For symmetrical airfoils c_m,c/4 = 0, so the aerodynamic center is at 25% of chord. But for cambered airfoils the aerodynamic center can be slightly less than 25% of the chord from the leading edge, which depends on the slope of the moment coefficient, dc_m/dα. These results are obtained using thin airfoil theory, so their use is warranted only when the assumptions of thin airfoil theory are realistic. In precision experimentation with real airfoils and ad
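As a worked complement to the excerpt above (added here, not from the article), the standard moment-transfer relation shows how the aerodynamic center is located from the slope of the moment coefficient measured about any reference point P; the symbols x_P and c are introduced for this sketch only.

```latex
% Moment transfer between a reference point P (chordwise position x_P,
% measured aft of the leading edge) and the aerodynamic center:
\[
  c_{m,P} \;=\; c_{m,\mathrm{ac}} \;+\; c_l\,\frac{x_P - x_{\mathrm{ac}}}{c},
  \qquad
  \frac{d c_{m,P}}{d c_l} \;=\; \frac{x_P - x_{\mathrm{ac}}}{c}
  \;\;\Longrightarrow\;\;
  \frac{x_{\mathrm{ac}}}{c} \;=\; \frac{x_P}{c} - \frac{d c_{m,P}}{d c_l}.
\]
% With the thin-airfoil values quoted above taken about the quarter chord
% (x_P = c/4 and dc_{m,c/4}/dc_l = 0) this recovers x_ac = c/4.
```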
https://en.wikipedia.org/wiki/PDE%20surface
PDE surfaces are used in geometric modelling and computer graphics for creating smooth surfaces conforming to a given boundary configuration. PDE surfaces use partial differential equations to generate a surface which usually satisfies a mathematical boundary value problem. PDE surfaces were first introduced into the area of geometric modelling and computer graphics by two British mathematicians, Malcolm Bloor and Michael Wilson. Technical details The PDE method involves generating a surface for some boundary by means of solving an elliptic partial differential equation of the form (∂²/∂u² + a² ∂²/∂v²)² X(u, v) = 0. Here X(u, v) is a function parameterised by the two parameters u and v such that X(u, v) = (x(u, v), y(u, v), z(u, v)), where x, y and z are the usual Cartesian coordinates. The boundary conditions on the function X and its normal derivatives are imposed at the edges of the surface patch. With the above formulation it is notable that the elliptic partial differential operator in the above PDE represents a smoothing process in which the value of the function at any point on the surface is, in some sense, a weighted average of the surrounding values. In this way, a surface is obtained as a smooth transition between the chosen set of boundary conditions. The parameter a is a special design parameter which controls the relative smoothing of the surface in the u and v directions. When a = 1, the PDE is the biharmonic equation: (∂²/∂u² + ∂²/∂v²)² X(u, v) = 0. The biharmonic equation is the equation produced by applying the Euler-Lagrange equation to the simplified thin plate energy functional ∫∫ (X_uu² + 2X_uv² + X_vv²) du dv. So solving the PDE with a = 1 is equivalent to minimizing the thin plate energy functional subject to the same boundary conditions. Applications PDE surfaces can be used in many application areas. These include computer-aided design, interactive design, parametric design, computer animation, computer-aided physical analysis and design optimisation. Related publications M.I.G. Bloor and M.J. Wilson, Generating Blend Surfaces using Partial Differential Equations, Computer Aided Desig
https://en.wikipedia.org/wiki/TRPV6
TRPV6 is a membrane calcium (Ca2+) channel protein which is particularly involved in the first step in Ca2+absorption in the intestine. Classification Transient Receptor Potential Vanilloid subfamily member 6 (TRPV6) is an epithelial Ca2+ channel that belongs to the transient receptor potential family (TRP) of proteins. The TRP family is a group of channel proteins critical for ionic homeostasis and the perception of various physical and chemical stimuli. TRP channels can detect temperature, osmotic pressure, olfaction, taste, and mechanical forces. The human genome encodes for 28 TRP channels, which include six TRPV channels. The high Ca2+-selectivity of TRPV5 and TRPV6 makes these channels distinct from the other four TRPV channels (TRPV1-TRPV4). TRPV5 and TRPV6 are involved in Ca2+ transport, whereas TRPV1 through TRPV3 are heat sensors with different temperature threshold for activation, and TRPV4 is involved in sensing osmolarity. Genetic defects in TRPV6 gene are linked to transient neonatal hyperparathyroidism and early-onset chronic pancreatitis. Dysregulation of TRPV6 is also involved in hypercalciuria, kidney stone formation, bone disorders, defects in keratinocyte differentiation, skeletal deformities, osteoarthritis, male sterility, Pendred syndrome, and certain sub-types of Cancer. Identification Peng et al identified TRPV6 in 1999 from rat duodenum in an effort to search for Ca2+ transporting proteins involved in Ca2+absorption. TRPV6 was also called calcium transport protein 1 (CaT1) initially although the names epithelial calcium channel 2 (ECaC2) and CaT1-like (CaT-L) were also used in early studies to describe the channel. The human and mouse orthologs of TRPV6 were cloned by Peng et al and Weber et al, respectively. The name TRPV6 was confirmed in 2005. Gene location, chromosomal location, and phylogeny The human TRPV6 gene is located on chromosomal locus 7q33-34 close to its homolog TRPV5 on 7q35. The TRPV6 gene in human encodes for 2906 bp
https://en.wikipedia.org/wiki/Standards%20for%20Educational%20and%20Psychological%20Testing
The Standards for Educational and Psychological Testing is a set of testing standards developed jointly by the American Educational Research Association (AERA), American Psychological Association (APA), and the National Council on Measurement in Education (NCME). The new edition of The Standards for Educational and Psychological Testing was released in July 2014. Five areas received particular attention in the 2014 revision: 1. Examining accountability issues associated with the uses of tests in educational policy 2. Broadening the concept of accessibility of tests for all examinees 3. Representing more comprehensively the role of tests in the workplace 4. Taking into account the expanding role of technology in testing 5. Improving the structure of the book for better communication of the standards Previous versions Compared with the edition published in 1985, the 1999 Standards for Educational and Psychological Testing has more in-depth background material in each chapter, a greater number of standards, and a significantly expanded glossary and index. The 1999 version of the Standards reflects changes in United States federal law and measurement trends affecting validity; testing individuals with disabilities or different linguistic backgrounds; and new types of tests as well as new uses of existing tests. The Standards is written for the professional and for the educated layperson and addresses professional and technical issues of test development and use in education, psychology and employment. Overview of organization and content Part I: Test Construction, Evaluation, and Documentation 1. Validity 2. Reliability and Errors of Measurement 3. Test Development and Revision 4. Scales, Norms, and Score Comparability 5. Test Administration, Scoring, and Reporting 6. Supporting Documentation for Tests Part II: Fairness in Testing 7. Fairness in Testing and Test Use 8. The Rights and Responsibilities of Test Takers 9. Testing Individuals of Diverse Linguistic Backgrounds 10. Test
https://en.wikipedia.org/wiki/RoboWar
RoboWar is an open-source video game in which the player programs onscreen icon-like robots to battle each other with animation and sound effects. The syntax of the language in which the robots are programmed is a relatively simple stack-based one, based largely on IF, THEN, and simply-defined variables. 25 RoboWar tournaments were held between 1990 and roughly 2003, when tournaments became intermittent and many of the major coders moved on. All robots from all tournaments are available on the RoboWar website. The RoboWar programming language, RoboTalk, is a stack-oriented programming language and is similar in structure to FORTH. Programming features RoboWar for the Macintosh was notable among the genre of autonomous robot programming games for the powerful programming model it exposed to the gamer. By the early 1990s, RoboWar included an integrated debugger that permitted stepping through code and setting breakpoints. Later editions of the RoboTalk language used by the robots (a cognate of the HyperTalk language for Apple's HyperCard) included support for interrupts as well. History RoboWar was originally released as a closed source shareware game in 1990 by David Harris for the Apple Macintosh platform. The source code has since been released and implementations are now also available for Microsoft Windows. It was based upon the same concepts as the 1981 Apple II game RobotWar. Initially tournaments were run by David Harris himself, but were eventually run by Eric Foley. See also Core War RobotWar Crobots Robot Battle
https://en.wikipedia.org/wiki/Holomorphic%20separability
In mathematics, in complex analysis, the concept of holomorphic separability is a measure of the richness of the set of holomorphic functions on a complex manifold or complex-analytic space. Formal definition A complex manifold or complex space X is said to be holomorphically separable if, whenever x ≠ y are two points in X, there exists a holomorphic function f on X such that f(x) ≠ f(y). Often one says the holomorphic functions separate points. Usage and examples All complex manifolds that can be mapped injectively into some C^n are holomorphically separable; in particular, all domains in C^n and all Stein manifolds. A holomorphically separable complex manifold is not compact unless it is discrete and finite. The condition is part of the definition of a Stein manifold.
https://en.wikipedia.org/wiki/Plasmasphere
The plasmasphere, or inner magnetosphere, is a region of the Earth's magnetosphere consisting of low-energy (cool) plasma. It is located above the ionosphere. The outer boundary of the plasmasphere is known as the plasmapause, which is defined by an order of magnitude drop in plasma density. In 1963 American scientist Don Carpenter and Soviet astronomer Konstantin Gringauz proved the plasmasphere and plasmapause's existence from the analysis of very low frequency (VLF) whistler wave data. Traditionally, the plasmasphere has been regarded as a well behaved cold plasma with particle motion dominated entirely by the geomagnetic field and, hence, co-rotating with the Earth. History The discovery of the plasmasphere grew out of the scientific study of whistlers, natural phenomena caused by very low frequency (VLF) radio waves. Whistlers were first heard by radio operators in the 1890s. British scientist Llewelyn Robert Owen Storey had shown that lightning generated whistlers in his 1953 PhD dissertation. Around the same time, Storey had posited that the existence of whistlers meant plasma was present in Earth's atmosphere, and that it moved radio waves in the same direction as Earth's magnetic field lines. From this he deduced but was unable to conclusively prove the existence of the plasmasphere. In 1963 American scientist Don Carpenter and Soviet astronomer Konstantin Gringauz—independently of each other, and the latter using data from the Luna 2 spacecraft—experimentally proved the plasmasphere and plasmapause's existence, building on Storey's thinking. In 1965 Storey and French scientist M. P. Aubry worked on FR-1, a French scientific satellite equipped with instruments for measuring VLF frequencies and the local electron density of plasma. Aubry and Storey's studies of FR-1 VLF and electron density data further corroborated their theoretical models: VLF waves in the ionosphere occasionally passed through a thin layer of plasma into the magnetosphere, normal to the direction of Earth's magneti
https://en.wikipedia.org/wiki/Computational%20problem
In theoretical computer science, a computational problem is a problem that may be solved by an algorithm. For example, the problem of factoring, "Given a positive integer n, find a nontrivial prime factor of n", is a computational problem. A computational problem can be viewed as a set of instances or cases together with a, possibly empty, set of solutions for every instance/case. For example, in the factoring problem, the instances are the integers n, and solutions are prime numbers p that are the nontrivial prime factors of n. Computational problems are one of the main objects of study in theoretical computer science. The field of computational complexity theory attempts to determine the amount of resources (computational complexity) solving a given problem will require and explain why some problems are intractable or undecidable. Computational problems belong to complexity classes that define broadly the resources (e.g. time, space/memory, energy, circuit depth) it takes to compute (solve) them with various abstract machines. Examples include the complexity classes P, problems that consume polynomial time for deterministic classical machines; BPP, problems that consume polynomial time for probabilistic classical machines (e.g. computers with random number generators); and BQP, problems that consume polynomial time for probabilistic quantum machines. Both instances and solutions are represented by binary strings, namely elements of {0, 1}*. For example, natural numbers are usually represented as binary strings using binary encoding. This is important since the complexity is expressed as a function of the length of the input representation. Types Decision problem A decision problem is a computational problem where the answer for every instance is either yes or no. An example of a decision problem is primality testing: "Given a positive integer n, determine if n is prime." A decision problem is typically represented as the set of all instances for which the answer
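To make the instance/solution view above concrete, here is a small added Python sketch (illustrative only, not from the article) contrasting the decision problem of primality with the search problem of factoring; instances are encoded as binary strings, and the solution set for a search instance may be empty or contain several solutions:

```python
def is_prime(n: int) -> bool:
    """Decision problem: 'Given a positive integer n, is n prime?' (yes/no)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def a_nontrivial_factor(n: int):
    """Search problem: 'Given n, find a nontrivial factor of n'.
    The solution set may be empty (e.g. when n is prime)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None

# Instances are encoded as binary strings; complexity is measured
# in the length of this encoding.
n = 91
print(bin(n)[2:], is_prime(n), a_nontrivial_factor(n))  # 1011011 False 7
```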
https://en.wikipedia.org/wiki/Water%E2%80%93cement%20ratio
The water–cement ratio (w/c ratio, or water-to-cement ratio, sometimes also called the Water-Cement Factor) is the ratio of the mass of water (w) to the mass of cement (c) used in a concrete mix: w/c = (mass of water)/(mass of cement). Typical values of this ratio lie in the interval 0.40 to 0.60. The water-cement ratio of the fresh concrete mix is one of the main, if not the most important, factors determining the quality and properties of hardened concrete, as it directly affects the concrete porosity, and good concrete is always as compact and as dense as possible. Good concrete must therefore be prepared with as little water as possible, but with enough water to hydrate the cement minerals and to keep the mix workable. A lower ratio leads to higher strength and durability, but may make the mix more difficult to work with and form; workability can then be restored with plasticizers or super-plasticizers. A higher ratio gives an overly fluid concrete mix, resulting in an excessively porous hardened concrete of poor quality. Often, the concept also refers to the ratio of water to cementitious materials, w/cm. Cementitious materials include cement and supplementary cementitious materials such as ground granulated blast-furnace slag (GGBFS), fly ash (FA), silica fume (SF), rice husk ash (RHA), metakaolin (MK), and natural pozzolans. Most supplementary cementitious materials (SCMs) are byproducts of other industries with useful hydraulic binding properties. After reaction with alkalis (GGBFS activation) and portlandite (Ca(OH)2), they also form calcium silicate hydrates (C-S-H), the "gluing phase" present in the hardened cement paste. These additional C-S-H fill the concrete porosity and thus contribute to strengthening the concrete. SCMs also help reduce the clinker content in concrete, saving energy and minimizing costs, while recycling industrial wastes otherwise destined for landfill. The effect of the water-to-cement (w/c) ratio onto the mechan
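A small illustrative calculation of the ratio defined above, in Python; the function name and the example masses are hypothetical, and the 0.40–0.60 comparison simply restates the typical range mentioned in the text.

```python
def water_cement_ratio(mass_water_kg: float, mass_cement_kg: float) -> float:
    """w/c ratio: mass of water divided by mass of cement in the mix."""
    return mass_water_kg / mass_cement_kg

# Example mix: 160 kg of water per 400 kg of cement.
wc = water_cement_ratio(160.0, 400.0)
print(f"w/c = {wc:.2f}")                                  # 0.40
print("within typical 0.40-0.60 range:", 0.40 <= wc <= 0.60)
```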
https://en.wikipedia.org/wiki/Sodium%20phenylbutyrate
Sodium phenylbutyrate, sold under the brand name Buphenyl among others, is a salt of an aromatic fatty acid, 4-phenylbutyrate (4-PBA) or 4-phenylbutyric acid. The compound is used to treat urea cycle disorders, because its metabolites offer an alternative pathway to the urea cycle to allow excretion of excess nitrogen. Sodium phenylbutyrate is also a histone deacetylase inhibitor and chemical chaperone, leading respectively to research into its use as an anti-cancer agent and in protein misfolding diseases such as cystic fibrosis. Structure and properties Sodium phenylbutyrate is a sodium salt of an aromatic fatty acid, made up of an aromatic ring and butyric acid. The chemical name for sodium phenylbutyrate is 4-phenylbutyric acid, sodium salt. It forms water-soluble off-white crystals. Uses Medical uses Sodium phenylbutyrate is taken orally or by nasogastric intubation as a tablet or powder, and tastes very salty and bitter. It treats urea cycle disorders, genetic diseases in which nitrogen waste builds up in the blood plasma as ammonia and glutamine (a state called hyperammonemia) due to deficiencies in the enzymes carbamoyl phosphate synthetase I, ornithine transcarbamylase, or argininosuccinic acid synthetase. Uncontrolled, this causes intellectual impairment and early death. Sodium phenylbutyrate metabolites allow the kidneys to excrete excess nitrogen in place of urea, and coupled with dialysis, amino acid supplements and a protein-restricted diet, children born with urea cycle disorders can usually survive beyond 12 months. Patients may need treatment for the rest of their lives. The treatment was introduced by researchers in the 1990s, and approved by the U.S. Food and Drug Administration (FDA) in April 1996. Adverse effects Nearly of women may experience an adverse effect of amenorrhea or menstrual dysfunction. Appetite loss is seen in 4% of patients. Body odor due to metabolization of phenylbutyrate affects 3% of patients, and 3% experience unpleasant tastes. G
https://en.wikipedia.org/wiki/Mechanician
A mechanician is an engineer or a scientist working in the field of mechanics, or in a related or sub-field: engineering or computational mechanics, applied mechanics, geomechanics, biomechanics, and mechanics of materials. Names other than mechanician have been used occasionally, such as mechaniker and mechanicist. The term mechanician is also used by the Irish Navy to refer to junior engine room ratings. In the British Royal Navy, Chief Mechanicians and Mechanicians 1st Class were Chief Petty Officers, Mechanicians 2nd and 3rd Class were Petty Officers, Mechanicians 4th Class were Leading Ratings, and Mechanicians 5th Class were Able Ratings. The rate was only applied to certain technical specialists and no longer exists. In the New Zealand Post Office, which provided telephone service prior to the formation of Telecom New Zealand in 1987, "Mechanician" was a job classification for workers who serviced telephone exchange switching equipment. The term seems to have originated in the era of the 7A Rotary system exchange, and was superseded by "Technician" circa 1975, perhaps because "Mechanician" was no longer considered appropriate after the first 2000-type Step-by-Step Strowger switch exchanges began to be introduced in 1952 (in Auckland, at Birkenhead exchange). It is also the term that makers of mechanical automata use to refer to their profession. People who made lasting contributions to mechanics prior to the 20th century Ibn al-Haytham: motion Galileo Galilei: notion of strength Robert Hooke: Hooke's law Isaac Newton: Newton's laws, law of gravitation Guillaume Amontons: laws of friction Leonhard Euler: buckling, rigid body dynamics Jean le Rond d'Alembert: d'Alembert's principle, Wave equation Joseph Louis Lagrange: Lagrangian mechanics Pierre-Simon Laplace: effects of surface tension Sophie Germain: elasticity Siméon Denis Poisson: elasticity Claude-Louis Navier: elasticity, fluid mechanics Augustin Louis Cauchy: elasticity Barré de Saint-Venant
https://en.wikipedia.org/wiki/Gardner%E2%80%93Salinas%20braille%20codes
The Gardner–Salinas braille codes are a method of encoding mathematical and scientific notation linearly using braille cells for tactile reading by the visually impaired. The most common form of Gardner–Salinas braille is the 8-cell variety, commonly called GS8. There is also a corresponding 6-cell form called GS6. The codes were developed as a replacement for Nemeth Braille by John A. Gardner, a physicist at Oregon State University, and Norberto Salinas, an Argentinian mathematician. The Gardner–Salinas braille codes are an example of a compact human-readable markup language. The syntax is based on the LaTeX system for scientific typesetting. Table of Gardner–Salinas 8-dot (GS8) braille The set of lower-case letters, the period, comma, semicolon, colon, exclamation mark, apostrophe, and opening and closing double quotes are the same as in Grade-2 English Braille. Digits Apart from 0, this is the same as the Antoine notation used in French and Luxembourgish Braille. Upper-case letters GS8 upper-case letters are indicated by the same cell as standard English braille (and GS8) lower-case letters, with dot #7 added. Compare Luxembourgish Braille. Greek letters Dot 8 is added to the letter forms of International Greek Braille to derive Greek letters: Characters differing from English Braille ASCII symbols and mathematical operators Text symbols Math and science symbols Markup * Encodes the fraction-slash for the single adjacent digits/letters as numerator and denominator. * Used for any > 1 digit radicand. ** Used for markup to represent inkprint text. Typeface indicators Shape symbols Set theory
https://en.wikipedia.org/wiki/Flux%20tube
A flux tube is a generally tube-like (cylindrical) region of space containing a magnetic field, B, such that the cylindrical sides of the tube are everywhere parallel to the magnetic field lines. It is a graphical visual aid for visualizing a magnetic field. Since no magnetic flux passes through the sides of the tube, the flux through any cross section of the tube is equal, and the flux entering the tube at one end is equal to the flux leaving the tube at the other. Both the cross-sectional area of the tube and the magnetic field strength may vary along the length of the tube, but the magnetic flux inside is always constant. As used in astrophysics, a flux tube generally means an area of space through which a strong magnetic field passes, in which the behavior of matter (usually ionized gas or plasma) is strongly influenced by the field. They are commonly found around stars, including the Sun, which has many flux tubes from tens to hundreds of kilometers in diameter. Sunspots are also associated with larger flux tubes of 2500 km diameter. Some planets also have flux tubes. A well-known example is the flux tube between Jupiter and its moon Io. Definition The flux of a vector field passing through any closed orientable surface is the surface integral of the field over the surface. For example, for a vector field consisting of the velocity of a volume of liquid in motion, and an imaginary surface within the liquid, the flux is the volume of liquid passing through the surface per unit time. A flux tube can be defined passing through any closed, orientable surface in a vector field , as the set of all points on the field lines passing through the boundary of . This set forms a hollow tube. The tube follows the field lines, possibly turning, twisting, and changing its cross sectional size and shape as the field lines converge or diverge. Since no field lines pass through the tube walls there is no flux through the walls of the tube, so all the field lines ent
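A short numerical illustration of the constancy of flux along a tube, assuming the field is approximately uniform over each perpendicular cross-section (Python; the numbers are arbitrary examples):

```python
# Flux through a cross-section of area A1 where the (roughly uniform) field is B1.
B1 = 0.15          # tesla
A1 = 2.0e-6        # m^2, cross-section where the tube is wide
phi = B1 * A1      # magnetic flux in webers

# No flux crosses the tube walls, so the same flux threads a narrower
# cross-section A2; the field there must be correspondingly stronger.
A2 = 0.5e-6        # m^2
B2 = phi / A2
print(f"flux = {phi:.2e} Wb, B2 = {B2:.3f} T")   # B2 = 0.600 T
```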
https://en.wikipedia.org/wiki/187%20%28number%29
187 (one hundred [and] eighty-seven) is the natural number following 186 and preceding 188. In mathematics There are 187 ways of forming a sum of positive integers that adds to 11, counting two sums as equivalent when they are cyclic permutations of each other. There are also 187 unordered triples of 5-bit binary numbers whose bitwise exclusive or is zero. Per Miller's rules, the triakis tetrahedron produces 187 distinct stellations. It is the smallest Catalan solid, dual to the truncated tetrahedron, which only has 9 distinct stellations. In other fields There are 187 chapters in the Hebrew Torah. See also 187 (disambiguation)
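Both of the first two counts are small enough to verify by brute force. A Python sketch, reading "unordered triples" as multisets (repeated values allowed), which is the interpretation that yields 187:

```python
from itertools import combinations_with_replacement

# Compositions of 11 (ordered sums of positive integers), counted up to
# cyclic rotation: canonicalize each composition by its least rotation.
def compositions(n):
    if n == 0:
        yield ()
        return
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            yield (first,) + rest

def canonical_rotation(t):
    return min(t[i:] + t[:i] for i in range(len(t)))

cyclic = {canonical_rotation(c) for c in compositions(11)}
print(len(cyclic))  # 187

# Unordered triples (multisets) of 5-bit numbers whose bitwise XOR is zero.
triples = [t for t in combinations_with_replacement(range(32), 3)
           if t[0] ^ t[1] ^ t[2] == 0]
print(len(triples))  # 187
```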
https://en.wikipedia.org/wiki/Ginzburg%E2%80%93Landau%20equation
The Ginzburg–Landau equation, named after Vitaly Ginzburg and Lev Landau, describes the nonlinear evolution of small disturbances near a finite wavelength bifurcation from a stable to an unstable state of a system. At the onset of finite wavelength bifurcation, the system becomes unstable for a critical wavenumber which is non-zero. In the neighbourhood of this bifurcation, the evolution of disturbances is characterised by the particular Fourier mode for with slowly varying amplitude . The Ginzburg–Landau equation is the governing equation for . The unstable modes can either be non-oscillatory (stationary) or oscillatory. For non-oscillatory bifurcation, satisfies the real Ginzburg–Landau equation which was first derived by Alan C. Newell and John A. Whitehead and by Lee Segel in 1969. For oscillatory bifurcation, satisfies the complex Ginzburg–Landau equation which was first derived by Keith Stewartson and John Trevor Stuart in 1971. See also Davey–Stewartson equation Stuart–Landau equation Gross–Pitaevskii equation
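For reference, the amplitude equations described here are usually quoted in the following rescaled forms (a common textbook normalization, not taken verbatim from this text); the real equation governs non-oscillatory bifurcations and the complex one oscillatory bifurcations:

```latex
% Real Ginzburg--Landau (Newell--Whitehead--Segel) equation, rescaled form
\partial_t A = A + \partial_x^2 A - |A|^2 A
% Complex Ginzburg--Landau equation, rescaled form
\partial_t A = A + (1 + i\alpha)\,\partial_x^2 A - (1 + i\beta)\,|A|^2 A
```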
https://en.wikipedia.org/wiki/Food%20fortification
Food fortification or enrichment is the process of adding micronutrients (essential trace elements and vitamins) to food. It can be carried out by food manufacturers, or by governments as a public health policy which aims to reduce the number of people with dietary deficiencies within a population. The predominant diet within a region can lack particular nutrients due to the local soil or from inherent deficiencies within the staple foods; the addition of micronutrients to staples and condiments can prevent large-scale deficiency diseases in these cases. As defined by the World Health Organization (WHO) and the Food and Agricultural Organization of the United Nations (FAO), fortification refers to "the practice of deliberately increasing the content of an essential micronutrient, i.e. vitamins and minerals (including trace elements) in a food, to improve the nutritional quality of the food supply and to provide a public health benefit with minimal risk to health", whereas enrichment is defined as "synonymous with fortification and refers to the addition of micronutrients to a food which are lost during processing". Food fortification has been identified as the second strategy of four by the WHO and FAO to begin decreasing the incidence of nutrient deficiencies at the global level. As outlined by the FAO, the most commonly fortified foods are cereals and cereal-based products; milk and dairy products; fats and oils; accessory food items; tea and other beverages; and infant formulas. Undernutrition and nutrient deficiency is estimated globally to cause the deaths of between 3 and 5 million people per year. Types Fortification is present in common food items in two different ways: adding back and addition. Flour loses nutritional value due to the way grains are processed; Enriched Flour has iron, folic acid, niacin, riboflavin, and thiamine added back to it. Conversely, other fortified foods have micronutrients added to them that don't naturally occur in those subs
https://en.wikipedia.org/wiki/TV2Me
TV2Me is a device that allows TV viewers to watch their home's cable or satellite television programs on their own computers, mobile phones, television sets and projector screens anywhere in the world. "This technology gives users the ability to shift space, and to watch all the cable or satellite TV channels of any place they choose - live, in full motion, with unparalleled television-quality - on any Internet connected device." History TV2Me was invented by Ken Schaffer, who began working on it in 2001, when he was working overseas. His goal was to watch his favorite American shows through any kind of device from wherever he was. With a team of Turkish and Russian programmers he developed circuitry that allows the MPEG-4 encoder to operate more efficiently and to generate a better picture. Schaffer, who was known for having previously invented the Schaffer–Vega diversity system, the first practical wireless guitar and microphone system for major rock bands, and for developing satellite tracking systems that allowed U.S. agencies and universities to monitor internal television of the then Soviet Union, launched TV2Me on December, 2003. TV2Me introduced the concept of placeshifting and started an entire industry. Operation To set up TV2Me, the cable or satellite box and a broadband internet connection are plugged in the device. "The server requires an Internet connection with an upstream speed of 512 kb/s or higher." On the receiving end (for example the computer), any browser can be used to view in real-time or with a 6-second delay. The delayed mode uses the extra time to produce a slightly better picture. No additional software needs to be installed. "The "target" (receiving location) can be anywhere on earth − anywhere there's wired or wireless broadband. The viewer can use virtually any PC running Windows, Mac, Linux - even Solaris." Copyrights No copyright infringement has been set for this placeshifting device but this technology is problematic to many cop
https://en.wikipedia.org/wiki/Universal%20extra%20dimensions
In particle physics, models with universal extra dimensions include one or more spatial dimensions beyond the three spatial and one temporal dimensions that are observed. Overview Models with universal extra dimensions, studied in 2001 assume that all fields propagate universally in the extra dimensions; in contrast, the ADD model requires that the fields of the Standard Model be confined to a four-dimensional membrane, while only gravity propagates in the extra dimensions. The universal extra dimensions are assumed to be compactified with radii much larger than the traditional Planck length, although smaller than in the ADD model, ~10−18 m. Generically, the—so far unobserved—Kaluza–Klein resonances of the Standard Model fields in such a theory would appear at an energy scale that is directly related to the inverse size ("compactification scale") of the extra dimension, The experimental bounds (based on Large Hadron Collider data) on the compactification scale of one or two universal extra dimensions are about 1 TeV. Other bounds come from electroweak precision measurements at the Z pole, the muon's magnetic moment, and limits on flavor-changing neutral currents, and reach several hundred GeV. Using universal extra dimensions to explain dark matter yields an upper limit on the compactification scale of several TeV. See also Large extra dimensions Kaluza–Klein theory Randall–Sundrum model Notes
https://en.wikipedia.org/wiki/Effective%20potential
The effective potential (also known as effective potential energy) combines multiple, perhaps opposing, effects into a single potential. In its basic form, it is the sum of the 'opposing' centrifugal potential energy with the potential energy of a dynamical system. It may be used to determine the orbits of planets (both Newtonian and relativistic) and to perform semi-classical atomic calculations, and often allows problems to be reduced to fewer dimensions. Definition The basic form of potential is defined as: where L is the angular momentum r is the distance between the two masses μ is the reduced mass of the two bodies (approximately equal to the mass of the orbiting body if one mass is much larger than the other); and U(r) is the general form of the potential. The effective force, then, is the negative gradient of the effective potential: where denotes a unit vector in the radial direction. Important properties There are many useful features of the effective potential, such as To find the radius of a circular orbit, simply minimize the effective potential with respect to , or equivalently set the net force to zero and then solve for : After solving for , plug this back into to find the maximum value of the effective potential . A circular orbit may be either stable or unstable. If it is unstable, a small perturbation could destabilize the orbit, but a stable orbit would return to equilibrium. To determine the stability of a circular orbit, determine the concavity of the effective potential. If the concavity is positive, the orbit is stable: The frequency of small oscillations, using basic Hamiltonian analysis, is where the double prime indicates the second derivative of the effective potential with respect to and it is evaluated at a minimum. Gravitational potential Consider a particle of mass m orbiting a much heavier object of mass M. Assume Newtonian mechanics, which is both classical and non-relativistic. The conservation of energy and angul
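A sketch of the gravitational case in Python, assuming a light body of mass m orbiting a much heavier mass M (so the reduced mass is approximately m); the effective potential is V_eff(r) = L^2/(2 m r^2) − G M m/r, the circular-orbit radius minimizes it, and a positive second derivative at that radius indicates stability. All numerical values here are illustrative.

```python
import numpy as np

G = 6.674e-11          # gravitational constant, SI
M = 5.972e24           # central mass (Earth-like), kg
m = 1000.0             # orbiting mass, kg
L = 5.3e13             # angular momentum of the orbiting body, kg m^2/s

def v_eff(r):
    """Effective potential: centrifugal term plus gravitational potential."""
    return L**2 / (2.0 * m * r**2) - G * M * m / r

# Circular-orbit radius from dV_eff/dr = 0:  r_c = L^2 / (G M m^2)
r_c = L**2 / (G * M * m**2)

# Stability check: V_eff'' > 0 at r_c means the circular orbit is stable.
h = 1e-3 * r_c
second_deriv = (v_eff(r_c + h) - 2 * v_eff(r_c) + v_eff(r_c - h)) / h**2
omega = np.sqrt(second_deriv / m)   # frequency of small radial oscillations

print(f"r_c = {r_c:.3e} m, V_eff''(r_c) = {second_deriv:.3e} (stable if > 0)")
print(f"small-oscillation frequency ~ {omega:.3e} rad/s")
```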
https://en.wikipedia.org/wiki/Net%20capital%20outflow
Net capital outflow (NCO) is the net flow of funds being invested abroad by a country during a certain period of time (usually a year). A positive NCO means that the country invests outside more than the world invests in it. NCO is one of two major ways of characterizing the nature of a country's financial and economic interaction with the other parts of the world (the other being the balance of trade). Explanation NCO is linked to the market for loanable funds and the international foreign exchange market. This relationship is often summarized by graphing the NCO curve with the quantity of country A's currency in the x-axis and the country's domestic real interest rate in the y-axis. The NCO curve gets a negative slope because an increased interest rate domestically means an incentive for savers to save more at home and less abroad. NCO also represents the quantity of country A's currency available on the foreign exchange market, and as such can be viewed as the supply-half that determines the real exchange rate, the demand-half being demand for A's currency in the foreign exchange market. As can be seen in the graph, NCO serves as the perfectly inelastic supply curve for this market. Thus, changes in the demand for A's currency (e.g. change from an increase in foreign demand for products made in country A) only cause changes in the exchange rate and not in the net amount of A's currency available for exchange. By an accounting identity, Country A's NCO is always equal to A's Net Exports, because the value of net exports is equal to the amount of capital spent abroad (i.e. outflow) for goods that are imported in A. It is also equal to the net amount of A's currency traded in the foreign exchange market over that time period. The value of exports (bananas, ice cream, clothing) produced in country A is always matched by the value of reciprocal payments of some asset (cash, stocks, real estate) made by buyers in other countries to the producers in country A. Th
https://en.wikipedia.org/wiki/Photon%20epoch
In physical cosmology, the photon epoch was the period in the evolution of the early universe in which photons dominated the energy of the universe. The photon epoch started after most leptons and anti-leptons were annihilated at the end of the lepton epoch, about 10 seconds after the Big Bang. Atomic nuclei were created in the process of nucleosynthesis, which occurred during the first few minutes of the photon epoch. For the remainder of the photon epoch, the universe contained a hot dense plasma of nuclei, electrons and photons. At the start of this period, many photons had sufficient energy to photodissociate deuterium, so those atomic nuclei that formed were quickly separated back into protons and neutrons. By the ten second mark, ever fewer high energy photons were available to photodissociate deuterium, and thus the abundance of these nuclei began to increase. Heavier atoms began to form through nuclear fusion processes: tritium, helium-3, and helium-4. Finally, trace amounts of lithium and beryllium began to appear. Once the thermal energy dropped below 0.03 MeV, nucleosynthesis effectively came to an end. Primordial abundances were now set, with the measured amounts in the modern epoch providing checks on the physical models of this period. 370,000 years after the Big Bang, the temperature of the universe fell to the point where nuclei could combine with electrons to create neutral atoms. As a result, photons no longer interacted frequently with matter, the universe became transparent and the cosmic microwave background radiation was created and then structure formation took place. This is referred to as the surface of last scattering, as it corresponds to a virtual outer surface of the spherical observable universe. See also Timeline of the early universe Big Bang nucleosynthesis Timeline of the Big Bang
https://en.wikipedia.org/wiki/Fermi%20point
The term Fermi point has two applications but refers to the same phenomenon (special relativity): Fermi point (quantum field theory) Fermi point (nanotechnology) In both applications, the symmetry between particles and anti-particles in weak interactions is violated: at this point the particle energy is zero. In nanotechnology this concept can be applied to electron behavior. An electron as a single particle is a fermion obeying the Pauli exclusion principle. Fermi point (quantum field theory) Fermionic systems that have a Fermi surface (FS) belong to a universality class in quantum field theory. Any collection of fermions with weak repulsive interactions belongs to this class. At the Fermi point, the breaking of symmetry can be explained by assuming that a vortex or singularity will appear as a result of the spin of a Fermi particle (quasiparticle, fermion) in one dimension of the three-dimensional momentum space. Fermi point (nanoscience) The Fermi point is one particular electron state. The Fermi point refers to the combination of electron chirality and carbon nanotube diameter for which the nanotube becomes metallic. As the structure of a carbon nanotube determines the energy levels that the carbon's electrons may occupy, the structure affects the macroscopic properties of the nanotube, most notably its electrical and thermal conductivity. Flat graphite is a conductor except when rolled up into small cylinders. This circular structure inhibits the internal flow of electrons and the graphite becomes a semiconductor; a transition point forms between the valence band and conduction band. This point is called the Fermi point. If the diameter of the carbon nanotube is sufficiently great, the necessary transition phase disappears and the nanotube may be considered a conductor. See also Fermi energy Fermi surface Bandgap
https://en.wikipedia.org/wiki/Pacific%20electric%20ray
Tetronarce californica also known as the Pacific electric ray is a species of electric ray in the family Torpedinidae, endemic to the coastal waters of the northeastern Pacific Ocean from Baja California to British Columbia. It generally inhabits sandy flats, rocky reefs, and kelp forests from the surface to a depth of , but has also been known to make forays into the open ocean. Measuring up to long, this species has smooth-rimmed spiracles (paired respiratory openings behind the eyes) and a dark gray, slate, or brown dorsal coloration, sometimes with dark spots. Its body form is typical of the genus, with a rounded pectoral fin disc wider than long and a thick tail bearing two dorsal fins of unequal size and a well-developed caudal fin. Solitary and nocturnal, the Pacific electric ray can generate up to 45 volts of electricity for the purposes of subduing prey or self-defense. It feeds mainly on bony fishes, ambushing them from the substrate during the day and actively hunting for them at night. Reproduction is aplacental viviparous, meaning that the embryos are initially nourished by yolk, later supplemented by histotroph ("uterine milk") produced by the mother. Females bear litters of 17–20 pups, probably once every other year. Care should be exercised around the Pacific electric ray, as it has been known to act aggressively if provoked and its electric shock can potentially incapacitate a diver. It and other electric rays are used as model organisms for biomedical research. The International Union for Conservation of Nature (IUCN) has listed this species under Least Concern, as it is not fished in any significant numbers. Taxonomy The Pacific electric ray was described by American ichthyologist William Orville Ayres, the first Curator of Ichthyology at the California Academy of Sciences, who named it after the U.S. state where it was first discovered by science. Ayers published his account in 1855, in the inaugural volume of the academy's Proceedings; no typ
https://en.wikipedia.org/wiki/Spectral%20theory%20of%20compact%20operators
In functional analysis, compact operators are linear operators on Banach spaces that map bounded sets to relatively compact sets. In the case of a Hilbert space H, the compact operators are the closure of the finite rank operators in the uniform operator topology. In general, operators on infinite-dimensional spaces feature properties that do not appear in the finite-dimensional case, i.e. for matrices. The compact operators are notable in that they share as much similarity with matrices as one can expect from a general operator. In particular, the spectral properties of compact operators resemble those of square matrices. This article first summarizes the corresponding results from the matrix case before discussing the spectral properties of compact operators. The reader will see that most statements transfer verbatim from the matrix case. The spectral theory of compact operators was first developed by F. Riesz. Spectral theory of matrices The classical result for square matrices is the Jordan canonical form, which states the following: Theorem. Let A be an n × n complex matrix, i.e. A is a linear operator acting on C^n. If λ1...λk are the distinct eigenvalues of A, then C^n can be decomposed into the invariant subspaces of A, C^n = Y1 ⊕ ... ⊕ Yk. The subspace Yi = Ker(λi − A)^m, where m is the smallest integer such that Ker(λi − A)^m = Ker(λi − A)^(m+1). Furthermore, the poles of the resolvent function ζ → (ζ − A)^−1 coincide with the set of eigenvalues of A. Compact operators Statement Proof Preliminary Lemmas The theorem claims several properties of the operator λ − C where λ ≠ 0. Without loss of generality, it can be assumed that λ = 1. Therefore we consider I − C, I being the identity operator. The proof will require two lemmas. This fact will be used repeatedly in the argument leading to the theorem. Notice that when X is a Hilbert space, the lemma is trivial. Concluding the Proof Invariant subspaces As in the matrix case, the above spectral properties lead to a decomposition of X into invariant subspaces o
https://en.wikipedia.org/wiki/Initial
In a written or published work, an initial capital (also referred to as a drop capital or simply an initial cap, initial, initcapital, initcap or init or a drop cap or drop) is a letter at the beginning of a word, a chapter, or a paragraph that is larger than the rest of the text. The word is derived from the Latin initialis, which means standing at the beginning. An initial is often several lines in height; in older books or manuscripts, initials containing human or animal figures are known as "inhabited" initials. Certain initials were especially important, such as the Beatus initial, the "B" of Beatus vir... at the opening of Psalm 1 in the Latin Vulgate. These specific initials in an illuminated manuscript were also called initiums. In the present, the word "initial" commonly refers to the first letter of any word or name, the latter normally capitalized in English usage; it is generally that of a first given name or a middle one or ones. History The classical tradition was slow to use capital letters for initials at all; in surviving Roman texts it often is difficult even to separate the words as spacing was not used either. In late antiquity (–6th century) both came into common use in Italy; the initials usually were set in the left margin, as though to cut them off from the rest of the text, and about twice as tall as the other letters. The radical innovation of insular manuscripts was to make initials much larger, not indented, and for the letters immediately following the initial also to be larger, but diminishing in size (called the "diminuendo" effect, after the musical term). Subsequently, they became larger still, coloured, and penetrated farther and farther into the rest of the text, until the whole page might be taken over. The decoration of insular initials, especially large ones, was generally abstract and geometrical, or featured animals in patterns. Historiated initials were an Insular invention, but did not come into their own until the later developments
https://en.wikipedia.org/wiki/William%20Poduska
John William Poduska Sr. is an American engineer and entrepreneur. He was a founder of Prime Computer, Apollo Computer, and Stellar Computer. Prior to that he headed the Electronics Research Lab at NASA's Cambridge, Massachusetts, facility and also worked at Honeywell. Poduska has been involved in a number of other high-tech startups. He also has served on the boards of Novell, Anadarko Petroleum, Anystream, Boston Ballet, Wang Center and the Boston Lyric Opera. Poduska was elected a member of the National Academy of Engineering in 1986 for technical and entrepreneurial leadership in computing, including development of Prime, the first virtual memory minicomputer, and Apollo, the first distributed, co-operating workstation. Education Poduska was born in Memphis, Tennessee. In 1955, he graduated from Central High School in Memphis. He went on to earn a S.B. and S.M. in electrical engineering, both in 1960, from MIT. He also earned a Sc.D. in EECS from MIT in 1962. Awards Recipient of the McDowell Award, National Academy of Engineering, 1986
https://en.wikipedia.org/wiki/Obnoxio%20the%20Clown
Obnoxio the Clown is a character appearing in American comic books published by Marvel Comics. The character appears in the humor magazine Crazy and served as its mascot. He was created by Larry Hama. Character Obnoxio was portrayed as a slovenly, vulgar, cigar-puffing middle-aged man in a torn and dirty clown suit, with a dyspeptic and cynical attitude. Background Larry Hama created Obnoxio immediately after he became the editor of Crazy. He explained, "I thought the old mascot was too 'nebbishy.' I wanted someone proactive, and somebody who actually had a voice, unlike all the other humor magazine mascots." The character's face was modeled from Al Milgrom. Artist Alan Kupperberg, who would become heavily associated with the character, recounted, "Obnoxio's first appearance was in a one-panel illustration to accompany a subscription ad in Crazy, written by Larry and calling for likenesses of P. T. Barnum and Marcy Tweed among others. This was right up my alley, so I pulled the reference and really went to town, doing a very nice half-tone illo. I think the piece impressed Larry quite a bit, because if my memory is correct, Larry left me strictly alone on anything and everything Obnoxio the Clown-related." Most of the Obnoxio features were written by Virgil Diamond, who according to Hama "was a high school English teacher in Brooklyn. I heard from him a few years ago when he retired. He really labored on those pages and was constantly fussing with them." In other media Comic books Obnoxio the Clown appeared in a number of single-page gags in What If? #34 (August 1982). Marvel also published a one-shot Obnoxio the Clown (titled Obnoxio the Clown vs. the X-Men on the cover) comic book in April 1983, despite the fact that Crazy had already been cancelled. The plot centered on Obnoxio the Clown as a villain and unlikely ally of the X-Men. He and the group, at the X-Mansion, 'team-up' against Eye Scream, a villain who can transform into various types of ice cream.
https://en.wikipedia.org/wiki/Ivan%20Petrovsky
Ivan Georgievich Petrovsky () (18 January 1901 – 15 January 1973) (the family name is also transliterated as Petrovskii or Petrowsky) was a Soviet mathematician working mainly in the field of partial differential equations. He greatly contributed to the solution of Hilbert's 19th and 16th problems, and discovered what are now called Petrovsky lacunas. He also worked on the theories of boundary value problems, probability, and on the topology of algebraic curves and surfaces. Biography Petrovsky was a student of Dmitri Egorov. Among his students were Olga Ladyzhenskaya, Yevgeniy Landis, Olga Oleinik and Sergei Godunov. Petrovsky taught at Steklov Institute of Mathematics. He was a member of the Soviet Academy of Sciences since 1946 and was awarded Hero of Socialist Labor in 1969. He was the president of Moscow State University (1951–1973) and the head of the International Congress of Mathematicians (Moscow, 1966). He is buried in the cemetery of the Novodevichy Convent in Moscow.
https://en.wikipedia.org/wiki/Neutral%20particle
In physics, a neutral particle is a particle without an electric charge, such as a neutron. The term neutral particles should not be confused with truly neutral particles, the subclass of neutral particles that are also identical to their own antiparticles. Stable or long-lived neutral particles Long-lived neutral particles provide a challenge in the construction of particle detectors, because they do not interact electromagnetically, except possibly through their magnetic moments. This means that they do not leave tracks of ionized particles or curve in magnetic fields. Examples of such particles include photons, neutrons, and neutrinos. Other neutral particles Other neutral particles are very short-lived and decay before they could be detected even if they were charged. They have been observed only indirectly. They include: Z bosons Dozens of heavy neutral hadrons: Neutral mesons such as the and The neutral Delta baryon (), and other neutral baryons, such as the and See also Neutral particle oscillation Truly neutral particle
https://en.wikipedia.org/wiki/Stochastic%20cooling
Stochastic cooling is a form of particle beam cooling. It is used in some particle accelerators and storage rings to control the emittance of the particle beams in the machine. This process uses the electrical signals that the individual charged particles generate in a feedback loop to reduce the tendency of individual particles to move away from the other particles in the beam. The technique was invented and applied at the Intersecting Storage Rings, and later the Super Proton Synchrotron (SPS), at CERN in Geneva, Switzerland, by Simon van der Meer, a physicist from the Netherlands. It was used to collect and cool antiprotons—these particles were injected into the Proton-Antiproton Collider, a modification of the SPS, with counter-rotating protons and collided at a particle physics experiment. For this work, van der Meer was awarded the Nobel Prize in Physics in 1984. He shared this prize with Carlo Rubbia of Italy, who proposed the Proton-Antiproton Collider. This experiment discovered the W and Z bosons, fundamental particles that carry the weak nuclear force. Before the shutdown of the Tevatron on the 30th of September 2011, Fermi National Accelerator Laboratory used stochastic cooling in its antiproton source. The accumulated antiprotons were sent to the Tevatron to collide with protons at two collision points: the CDF and the D0 experiment. Stochastic cooling in the Tevatron at Fermilab was attempted, but was not fully successful. The equipment was subsequently transferred to Brookhaven National Laboratory, where it was successfully used in a longitudinal cooling system in RHIC, operationally used beginning in 2006. Since 2012 RHIC has 3D operational stochastic cooling, i.e. cooling the horizontal, vertical, and longitudinal planes. Technical details Stochastic cooling uses the electrical signals produced by individual particles in a group of particles (called a "bunch" of particles) to drive an electro-magnet device, usually an electric kicker, that
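A toy model of the feedback principle in Python; this is not the actual CERN or RHIC implementation, and the sample size, gain, and perfect re-mixing between turns are simplifying assumptions. Each turn a "pickup" measures the mean offset of a small random sample of particles and a "kicker" removes a fraction of it, so the rms spread of the whole beam shrinks over many turns.

```python
import numpy as np

rng = np.random.default_rng(1)

n_particles = 20_000
sample_size = 100        # particles seen together by the pickup each turn
gain = 1.0               # fraction of the measured sample mean removed
n_turns = 500

# Transverse offsets of the particles (arbitrary units).
x = rng.normal(0.0, 1.0, n_particles)

rms_history = [x.std()]
for _ in range(n_turns):
    rng.shuffle(x)                                           # good mixing between turns
    samples = x.reshape(-1, sample_size)
    samples -= gain * samples.mean(axis=1, keepdims=True)    # kicker correction
    x = samples.ravel()
    rms_history.append(x.std())

print(f"initial rms = {rms_history[0]:.3f}, final rms = {rms_history[-1]:.3f}")
# The rms spread decreases by roughly a factor (1 - g(2-g)/(2N)) per turn,
# where g is the gain and N the sample size.
```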
https://en.wikipedia.org/wiki/General%20Coordinates%20Network
The General Coordinates Network (GCN), formerly known as the Gamma-ray burst Coordinates Network, is an open-source platform created by NASA to receive and transmit alerts about astronomical transient phenomena. This includes neutrino detections by observatories such as IceCube or Super-Kamiokande, gravitational wave events from the LIGO, Virgo and KAGRA interferometers, and gamma-ray bursts observed by Fermi, Swift or INTEGRAL. One of the main goals is to allow for follow-up observations of an event by other observatories, in hope to observe multi-messenger events. GCN has its origins in the BATSE coordinates distribution network (BACODINE). The Burst And Transient Source Experiment (BATSE) was a scientific instrument on the Compton Gamma-Ray Observatory (CGRO), and BACODINE monitored the BATSE real-time telemetry from CGRO. The first function of BACODINE was calculating the right ascension (RA) and declination (dec) locations for GRBs that it detected, and distributing those locations to sites around the world in real-time. Since the de-orbiting of the CGRO, this function of BACODINE is no longer operational. The second function of BACODINE was collecting right ascension and declination locations of GRBs detected by spacecraft other than CGRO, and then distributing that information. With this functionality, the original BACODINE name was changed to the more general name GCN. It later evolved to include alerts from non-GRB observatories and was sometimes referred to as GCN/TAN (for Transient Astronomy Network). Design The GCN relies on two types of alerts: notices and circulars. Notices are machine-readable alerts, which are distributed in real time; they typically include only basic information about the event. Circulars are brief human-readable alerts, which are distributed (typically by e-mail) with a low latency but not in real time; they can also contain predictions, requests for follow-up observations from other observatories, or advertise observing plans.
https://en.wikipedia.org/wiki/Leigh%20Canham
Leigh Canham is a British scientist who has pioneered the optoelectronic and biomedical applications of porous silicon. Leigh Canham graduated from University College London in 1979 with a BSc in Physics and completed his PhD at King's College London in 1983. His early work in this area took place at the Royal Signals and Radar Establishment in Malvern, Worcestershire. Canham and his colleagues showed that electrochemically etched silicon could be made porous. This porous material could emit visible light when a current was passed through it (electroluminescence). Later the group demonstrated the biocompatibility of porous silicon. Canham now works as Chief Scientific Officer of psiMedica (part of pSiVida). According to the pSiVida web site, Canham is the most cited author on porous silicon. In a study of most cited physicists up to 1997 Canham ranked at 771. Bibliography Selected papers Porous silicon-based scaffolds for tissue engineering and other biomedical applications, Jeffery L. Coffer, Melanie A. Whitehead, Dattatri K. Nagesha, Priyabrata Mukherjee, Giridhar Akkaraju, Mihaela Totolici, Roghieh S. Saffie, Leigh T. Canham, Physica Status Solidi A Vol. 202, Issue 8, Pages 1451 - 1455 Gaining light from silicon, Leigh Canham, Nature vol. 408, pp. 411 – 412 (2000) Progress towards silicon optoelectronics using porous silicon technology, L. T. Canham, T. I. Cox, A. Loni and A. J. Simons, Applied Surface Science, Volume 102, Pages 436-441 (1996) Porous silicon multilayer optical waveguides, A. Loni, L. T. Canham, M. G. Berger, R. Arens-Fischer, H. Munder, H. Luth, H. F. Arrand and T. M. Benson, Thin Solid Films, Vol. 276, Issues 1-2, pages 143-146 (1996) The origin of efficient luminescence in highly porous silicon, K. J. Nash, P. D. J. Calcott, L. T. Canham, M. J. Kane and D. Brumhead, J. of Luminescence, Volumes 60-61, Pages 297-301 (1994) Electronic quality of vapour phase epitaxial Si grown at reduced temperature, W. Y. Leong, L. T. Canham, I. M. Yo
https://en.wikipedia.org/wiki/Autocorrelation%20technique
The autocorrelation technique is a method for estimating the dominating frequency in a complex signal, as well as its variance. Specifically, it calculates the first two moments of the power spectrum, namely the mean and variance. It is also known as the pulse-pair algorithm in radar theory. The algorithm is both computationally faster and significantly more accurate compared to the Fourier transform, since the resolution is not limited by the number of samples used. Derivation The autocorrelation of lag 1 can be expressed using the inverse Fourier transform of the power spectrum : If we model the power spectrum as a single frequency , this becomes: where it is apparent that the phase of equals the signal frequency. Implementation The mean frequency is calculated based on the autocorrelation with lag one, evaluated over a signal consisting of N samples: The spectral variance is calculated as follows: Applications Estimation of blood velocity and turbulence in color flow imaging used in medical ultrasonography. Estimation of target velocity in pulse-doppler radar External links A covariance approach to spectral moment estimation, Miller et al., IEEE Transactions on Information Theory. Doppler Radar Meteorological Observations Doppler Radar Theory. Autocorrelation technique described on p.2-11 Real-Time Two-Dimensional Blood Flow Imaging Using an Autocorrelation Technique, by Chihiro Kasai, Koroku Namekawa, Akira Koyano, and Ryozo Omoto, IEEE Transactions on Sonics and Ultrasonics, Vol. SU-32, No.3, May 1985. Radar theory Signal processing Autocorrelation
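A compact Python sketch of the pulse-pair (lag-one autocorrelation) estimator of the mean frequency, applied to a synthetic complex signal; the sampling rate, tone frequency, and noise level are arbitrary test values:

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1000.0                  # sampling rate, Hz
f0 = 123.4                   # true dominant frequency, Hz
n = 256
t = np.arange(n) / fs

# Complex signal: single tone plus complex white noise.
x = np.exp(2j * np.pi * f0 * t) + 0.3 * (rng.normal(size=n) + 1j * rng.normal(size=n))

# Lag-one autocorrelation; its phase encodes the mean frequency.
r1 = np.mean(x[1:] * np.conj(x[:-1]))
f_est = np.angle(r1) * fs / (2.0 * np.pi)

print(f"estimated mean frequency: {f_est:.2f} Hz (true {f0} Hz)")
# Note: the estimate is unambiguous only for |f0| < fs/2; the spectral width
# can additionally be estimated from |r1| relative to the lag-zero power.
```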
https://en.wikipedia.org/wiki/MU%20puzzle
The MU puzzle is a puzzle stated by Douglas Hofstadter and found in Gödel, Escher, Bach involving a simple formal system called "MIU". Hofstadter's motivation is to contrast reasoning within a formal system (i.e., deriving theorems) against reasoning about the formal system itself. MIU is an example of a Post canonical system and can be reformulated as a string rewriting system. The puzzle Suppose there are the symbols M, I, and U which can be combined to produce strings of symbols. The MU puzzle asks one to start with the "axiomatic" string MI and transform it into the string MU using in each step one of the following transformation rules: 1. xI → xIU (add a U to the end of any string ending in I; for example, MI to MIU); 2. Mx → Mxx (double the string after the M; for example, MIU to MIUIU); 3. xIIIy → xUy (replace any III with a U; for example, MUIIIU to MUUU); 4. xUUy → xy (remove any UU; for example, MUUU to MU). Solution The puzzle cannot be solved: it is impossible to change the string MI into MU by repeatedly applying the given rules. In other words, MU is not a theorem of the MIU formal system. To prove this, one must step "outside" the formal system itself. In order to prove assertions like this, it is often beneficial to look for an invariant; that is, some quantity or property that doesn't change while applying the rules. In this case, one can look at the total number of I's in a string. Only the second and third rules change this number. In particular, rule two will double it while rule three will reduce it by 3. Now, the invariant property is that the number of I's is not divisible by 3: In the beginning, the number of I's is 1, which is not divisible by 3. Doubling a number that is not divisible by 3 does not make it divisible by 3. Subtracting 3 from a number that is not divisible
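A small Python sketch that applies the four rules and checks the invariant; the breadth-first search and its length bound are only for demonstration, since the divisibility argument above rules out MU at every length.

```python
from collections import deque

def successors(s):
    """All strings reachable from s in one application of an MIU rule."""
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                         # rule 1: xI -> xIU
    if s.startswith("M"):
        out.add(s + s[1:])                       # rule 2: Mx -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])     # rule 3: xIIIy -> xUy
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])           # rule 4: xUUy -> xy
    return out

# Explore all theorems derivable from MI up to a modest length bound.
max_len, seen, queue = 12, {"MI"}, deque(["MI"])
while queue:
    s = queue.popleft()
    for t in successors(s):
        if len(t) <= max_len and t not in seen:
            seen.add(t)
            queue.append(t)

# Every reachable string keeps a count of I's that is not divisible by 3.
assert all(t.count("I") % 3 != 0 for t in seen)
print("MU derivable within the bound?", "MU" in seen)   # False
```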
https://en.wikipedia.org/wiki/Electromagnetic%20stress%E2%80%93energy%20tensor
In relativistic physics, the electromagnetic stress–energy tensor is the contribution to the stress–energy tensor due to the electromagnetic field. The stress–energy tensor describes the flow of energy and momentum in spacetime. The electromagnetic stress–energy tensor contains the negative of the classical Maxwell stress tensor that governs the electromagnetic interactions. Definition SI units In free space and flat space–time, the electromagnetic stress–energy tensor in SI units is where is the electromagnetic tensor and where is the Minkowski metric tensor of metric signature . When using the metric with signature , the expression on the right of the equals sign will have opposite sign. Explicitly in matrix form: where is the Poynting vector, is the Maxwell stress tensor, and c is the speed of light. Thus, is expressed and measured in SI pressure units (pascals). CGS unit conventions The permittivity of free space and permeability of free space in cgs-Gaussian units are then: and in explicit matrix form: where Poynting vector becomes: The stress–energy tensor for an electromagnetic field in a dielectric medium is less well understood and is the subject of the unresolved Abraham–Minkowski controversy. The element of the stress–energy tensor represents the flux of the μth-component of the four-momentum of the electromagnetic field, , going through a hyperplane ( is constant). It represents the contribution of electromagnetism to the source of the gravitational field (curvature of space–time) in general relativity. Algebraic properties The electromagnetic stress–energy tensor has several algebraic properties: The symmetry of the tensor is as for a general stress–energy tensor in general relativity. The trace of the energy–momentum tensor is a Lorentz scalar; the electromagnetic field (and in particular electromagnetic waves) has no Lorentz-invariant energy scale, so its energy–momentum tensor must have a vanishing trace. This tracelessness
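For reference, the standard SI-unit expression referred to above can be written as follows (the overall sign depends on the metric signature, as the text notes); the time–time and time–space components reproduce the familiar energy density and Poynting flux:

```latex
T^{\mu\nu} = \frac{1}{\mu_0}\left[ F^{\mu\alpha} F^{\nu}{}_{\alpha}
            - \tfrac{1}{4}\,\eta^{\mu\nu} F_{\alpha\beta} F^{\alpha\beta} \right],
\qquad
T^{00} = \tfrac{1}{2}\left(\varepsilon_0 E^2 + \frac{B^2}{\mu_0}\right),
\qquad
T^{0i} = \frac{S_i}{c}
```

Here S denotes the Poynting vector, consistent with the matrix form mentioned in the text.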
https://en.wikipedia.org/wiki/Zoophyte
A zoophyte (animal-plant) is an obsolete term for an organism thought to be intermediate between animals and plants, or an animal with plant-like attributes or appearance. In the 19th century they were reclassified as Radiata which included various taxa, a term superseded by Coelenterata referring more narrowly to the animal phyla Cnidaria (coral animals, true jellies, sea anemones, sea pens, and their allies), sponges, and Ctenophora (comb jellies). A group of strange creatures that exist somewhere on, or between, the boundaries of plants and animals kingdoms were the subject of considerable debate in the eighteenth century. Some naturalists believed that they were a blend of plant and animal; other naturalists considered them to be entirely either plant or animal (such as sea anemones). Ancient and medieval to early modern era In Eastern cultures such as Ancient China fungi were classified as plants in the Traditional Chinese Medicine texts, and cordyceps, and in particular Ophiocordyceps sinensis, were considered zoophytes. Zoophytes are common in medieval and renaissance era herbals, notable examples including the Tartar Lamb, a legendary plant which grew sheep as fruit. Zoophytes appeared in many influential early medical texts, such as Dioscorides's De Materia Medica and subsequent adaptations and commentaries on that work, notably Mattioli's Discorsi. Zoophytes are frequently seen as medieval attempts to explain the origins of exotic, unknown plants with strange properties (such as cotton, in the case of the Tartar Lamb). Reports of zoophytes continued into the seventeenth century and were commented on by many influential thinkers of the time period, including Francis Bacon. It was not until 1646 that claims of zoophytes began to be concretely refuted, and skepticism towards claims of zoophytes mounted throughout the seventeenth and eighteenth centuries. 18th to 19th century, natural history As natural history and natural philosophy developed in the
https://en.wikipedia.org/wiki/Herv%C3%A9%20This
Hervé This (; born 5 June 1955 in Suresnes, Hauts-de-Seine, sometimes named Hervé This-Benckhard, or Hervé This vo Kientza) is a French physical chemist who works for the Institut National de la Recherche Agronomique at AgroParisTech, in Paris, France. His main area of scientific research is molecular gastronomy, that is, the science of culinary phenomena (more precisely, looking for the mechanisms of phenomena occurring during culinary transformations). Career With the late Nicholas Kurti, he coined the scientific term "Molecular and Physical Gastronomy" in 1988, which he shortened to "Molecular Gastronomy" after Kurti's death in 1998. A graduate of ESPCI Paris, he obtained a Ph.D. from the Pierre and Marie Curie University with a thesis titled "La gastronomie moléculaire et physique". He has written many scientific publications, as well as several books on the subject, which can be understood even by those who have little or no knowledge of chemistry, but so far only four have been translated into English. He also collaborates with the magazine Pour la Science (the French edition of Scientific American), the aim of which is to present scientific concepts to the general public. A member of the Académie d'agriculture de France since 2010, he was the president of its "Human Food" section for 9 years. In 2004, he was invited by the French Academy of Sciences to create the Foundation "Food Science & Culture", of which he was appointed the Scientific Director. The same year, he was asked to create the Institute for Advanced Studies of Taste ("Hautes Etudes du Goût") with the University of Reims Champagne Ardenne, of which he is the President of the Educational Program. In 2011, he was appointed as a Consulting Professor of AgroParisTech, and he was also asked to create courses on science and technology at Sciences Po Paris. On 3 June 2014, he was asked to create the "International Center for Molecular Gastronomy AgroParisTech-Inrae", to which he was appointed director. The
https://en.wikipedia.org/wiki/Loudspeaker%20measurement
Loudspeaker measurement is the practice of determining the behaviour of loudspeakers by measuring various aspects of performance. This measurement is especially important because loudspeakers, being transducers, have a higher level of distortion than other audio system components used in playback or sound reinforcement. Anechoic measurement One way to test a loudspeaker requires an anechoic chamber, with an acoustically transparent floor-grid. The measuring microphone is normally mounted on an unobtrusive boom (to avoid reflections) and positioned 1 metre in front of the drive units on the axis with the high-frequency driver. While this can produce repeatable results, such a 'free-space' measurement is not representative of performance in a room, especially a small room. For valid results at low frequencies, a very large anechoic chamber is needed, with large absorbent wedges on all sides. Most anechoic chambers are not designed for accurate measurement down to 20 Hz and most are not capable of measuring below 80 Hz. Tetrahedral chamber A tetrahedral chamber is capable of measuring the low frequency limit of the driver without the large footprint required by an anechoic chamber. This compact measurement system for loudspeaker drivers is defined in IEC 60268-21:2018, IEC 60268-22:2020 and AES73id-2019. Half-space measurement An alternative is to simply lay the speaker on its back pointing at the sky on open grass. Ground reflection will still interfere but will be greatly reduced in the mid-range because most speakers are directional, and only radiate very low frequencies backward. Putting absorbent material around the speaker will reduce mid-range ripple by absorbing rear radiation. At low frequencies, the ground reflection is always in-phase, so that the measured response will have increased bass, but this is what generally happens in a room anyway, where the rear wall and the floor both provide a similar effect. There is a good case, therefore, using such
https://en.wikipedia.org/wiki/Covariant%20formulation%20of%20classical%20electromagnetism
The covariant formulation of classical electromagnetism refers to ways of writing the laws of classical electromagnetism (in particular, Maxwell's equations and the Lorentz force) in a form that is manifestly invariant under Lorentz transformations, in the formalism of special relativity using rectilinear inertial coordinate systems. These expressions both make it simple to prove that the laws of classical electromagnetism take the same form in any inertial coordinate system, and also provide a way to translate the fields and forces from one frame to another. However, this is not as general as Maxwell's equations in curved spacetime or non-rectilinear coordinate systems. This article uses the classical treatment of tensors and Einstein summation convention throughout and the Minkowski metric has the form . Where the equations are specified as holding in a vacuum, one could instead regard them as the formulation of Maxwell's equations in terms of total charge and current. For a more general overview of the relationships between classical electromagnetism and special relativity, including various conceptual implications of this picture, see Classical electromagnetism and special relativity. Covariant objects Preliminary four-vectors Lorentz tensors of the following kinds may be used in this article to describe bodies or particles: four-displacement: Four-velocity: where γ(u) is the Lorentz factor at the 3-velocity u. Four-momentum: where is 3-momentum, is the total energy, and is rest mass. Four-gradient: The d'Alembertian operator is denoted , The signs in the following tensor analysis depend on the convention used for the metric tensor. The convention used here is , corresponding to the Minkowski metric tensor: Electromagnetic tensor The electromagnetic tensor is the combination of the electric and magnetic fields into a covariant antisymmetric tensor whose entries are B-field quantities. and the result of raising its indices is where E is the elec
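Under the conventions stated here (metric signature (+,−,−,−)), the covariant field tensor and its components take the usual form below; the component signs would change under a different convention:

```latex
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu,
\qquad
F_{0i} = \frac{E_i}{c},
\qquad
F_{ij} = -\epsilon_{ijk}\, B_k
```

Here A_μ is the electromagnetic four-potential; raising both indices with the metric flips the sign of the electric-field entries while leaving the magnetic-field entries unchanged.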
https://en.wikipedia.org/wiki/Maxwell%27s%20equations%20in%20curved%20spacetime
In physics, Maxwell's equations in curved spacetime govern the dynamics of the electromagnetic field in curved spacetime (where the metric may not be the Minkowski metric) or where one uses an arbitrary (not necessarily Cartesian) coordinate system. These equations can be viewed as a generalization of the vacuum Maxwell's equations which are normally formulated in the local coordinates of flat spacetime. But because general relativity dictates that the presence of electromagnetic fields (or energy/matter in general) induce curvature in spacetime, Maxwell's equations in flat spacetime should be viewed as a convenient approximation. When working in the presence of bulk matter, distinguishing between free and bound electric charges may facilitate analysis. When the distinction is made, they are called the macroscopic Maxwell's equations. Without this distinction, they are sometimes called the "microscopic" Maxwell's equations for contrast. The electromagnetic field admits a coordinate-independent geometric description, and Maxwell's equations expressed in terms of these geometric objects are the same in any spacetime, curved or not. Also, the same modifications are made to the equations of flat Minkowski space when using local coordinates that are not rectilinear. For example, the equations in this article can be used to write Maxwell's equations in spherical coordinates. For these reasons, it may be useful to think of Maxwell's equations in Minkowski space as a special case of the general formulation. Summary In general relativity, the metric tensor is no longer a constant (like as in Examples of metric tensor) but can vary in space and time, and the equations of electromagnetism in a vacuum become where is the density of the Lorentz force, is the inverse of the metric tensor , and is the determinant of the metric tensor. Notice that and are (ordinary) tensors, while , , and are tensor densities of weight +1. Despite the use of partial derivatives, the
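A reference form of these equations in standard index notation (the usual textbook form, not copied from this text): the source-free pair involves only partial derivatives of F, and the sourced pair can also be written with ordinary partial derivatives because F is antisymmetric and √−g supplies the density factor mentioned above:

```latex
\partial_\gamma F_{\alpha\beta} + \partial_\beta F_{\gamma\alpha} + \partial_\alpha F_{\beta\gamma} = 0,
\qquad
\frac{1}{\sqrt{-g}}\,\partial_\alpha\!\left(\sqrt{-g}\, F^{\alpha\beta}\right) = \mu_0\, J^\beta
```

Here J^β is the four-current and g the determinant of the metric tensor, matching the quantities listed in the excerpt.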
https://en.wikipedia.org/wiki/Monogenism
Monogenism or sometimes monogenesis is the theory of human origins which posits a common descent for all human races. The negation of monogenism is polygenism. This issue was hotly debated in the Western world in the nineteenth century, as the assumptions of scientific racism came under scrutiny both from religious groups and in the light of developments in the life sciences and human science. It was integral to the early conceptions of ethnology. Modern scientific views favor this theory, with the most widely accepted model for human origins being the "Out of Africa" theory. In the Abrahamic religions The belief that all humans are descended from Adam is central to traditional Judaism, Christianity and Islam. Christian monogenism played an important role in the development of an African-American literature on race, linked to theology rather than science, up to the time of Martin Delany and his Principia of Ethnology (1879). Scriptural ethnology is a term applied to debate and research on the biblical accounts, both of the early patriarchs and migration after Noah's Flood, to explain the diverse peoples of the world. Monogenism as a Bible-based theory required both the completeness of the narratives and the fullness of their power of explanation. These time-honored debates were sharpened by the rise of polygenist skeptical claims; when Louis Agassiz set out his polygenist views in 1847, they were opposed on biblical grounds by John Bachman, and by Thomas Smyth in his Unity of the Human Races. The debates also saw the participation of Delany, and George Washington Williams defended monogenesis as the starting point of his pioneer history of African-Americans. Environmentalist monogenism Environmentalist monogenism describes a theory current in the first half of the nineteenth century, in particular, according to which there was a single human origin, but that subsequent migration of groups of humans had subjected them to different environmental conditions. Envir
https://en.wikipedia.org/wiki/188%20%28number%29
188 (one hundred [and] eighty-eight) is the natural number following 187 and preceding 189. In mathematics There are 188 different four-element semigroups, and 188 ways a chess queen can move from one corner of a board to the opposite corner by a path that always moves closer to its goal. The sides and diagonals of a regular dodecagon form 188 equilateral triangles. In other fields The number 188 figures prominently in the film The Parallel Street (1962) by German experimental film director Ferdinand Khittl. The opening frame of the film is just an image of this number. See also The year AD 188 or 188 BC List of highways numbered 188
https://en.wikipedia.org/wiki/Inhomogeneous%20electromagnetic%20wave%20equation
In electromagnetism and applications, an inhomogeneous electromagnetic wave equation, or nonhomogeneous electromagnetic wave equation, is one of a set of wave equations describing the propagation of electromagnetic waves generated by nonzero source charges and currents. The source terms in the wave equations make the partial differential equations inhomogeneous, if the source terms are zero the equations reduce to the homogeneous electromagnetic wave equations. The equations follow from Maxwell's equations. Maxwell's equations For reference, Maxwell's equations are summarized below in SI units and Gaussian units. They govern the electric field E and magnetic field B due to a source charge density ρ and current density J: {| class="wikitable" style="text-align: center;" |- ! scope="col" style="width: 15em;" | Name ! scope="col" | SI units ! scope="col" | Gaussian units |- ! scope="row" | Gauss's law | | |- ! scope="row" | Gauss's law for magnetism | | |- ! scope="row" | Maxwell–Faraday equation (Faraday's law of induction) | | |- ! scope="row" | Ampère's circuital law (with Maxwell's addition) | | |- |} where ε0 is the vacuum permittivity and μ0 is the vacuum permeability. Throughout, the relation is also used. SI units E and B fields Maxwell's equations can directly give inhomogeneous wave equations for the electric field E and magnetic field B. Substituting Gauss' law for electricity and Ampère's Law into the curl of Faraday's law of induction, and using the curl of the curl identity (The last term in the right side is the vector Laplacian, not Laplacian applied on scalar functions.) gives the wave equation for the electric field E: Similarly substituting Gauss's law for magnetism into the curl of Ampère's circuital law (with Maxwell's additional time-dependent term), and using the curl of the curl identity, gives the wave equation for the magnetic field B: The left hand sides of each equation correspond to wave motion (the D'Alembert operator act
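The table and displayed equations above lost their mathematical content in extraction. As a hedged reconstruction in SI units, Maxwell's equations and the inhomogeneous wave equations they lead to are usually written as:

```latex
% Hedged reconstruction (SI units) of the equations referred to above.
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}

% Taking the curl of the two curl equations and using
% curl(curl V) = grad(div V) - laplacian(V) gives the inhomogeneous wave equations:
\frac{1}{c^2}\frac{\partial^2 \mathbf{E}}{\partial t^2} - \nabla^2 \mathbf{E}
  = -\frac{1}{\varepsilon_0}\nabla\rho - \mu_0 \frac{\partial \mathbf{J}}{\partial t},
\qquad
\frac{1}{c^2}\frac{\partial^2 \mathbf{B}}{\partial t^2} - \nabla^2 \mathbf{B}
  = \mu_0 \nabla \times \mathbf{J},
\qquad c^2 = \frac{1}{\mu_0 \varepsilon_0}
```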
https://en.wikipedia.org/wiki/Parallel%20Ocean%20Program
The Parallel Ocean Program (POP) is a three-dimensional ocean circulation model designed primarily for studying the ocean climate system. The model is developed and supported primarily by researchers at Los Alamos National Laboratory (LANL).
https://en.wikipedia.org/wiki/Journal%20of%20Virology
The Journal of Virology is a biweekly peer-reviewed scientific journal that covers research concerning all aspects of virology. It was established in 1967 and is published by the American Society for Microbiology. Research papers are available free online four months after print publication. The current editors-in-chief are Felicia Goodrum (University of Arizona) and Stacey Schultz-Cherry (St. Jude Children's Research Hospital). Past editors-in-chief include Rozanne M. Sandri-Goldin (University of California, Irvine, California) (2012-2022), Lynn W. Enquist (2002–2012), Thomas Shenk (1994–2002), and Arnold J. Levine (1984–1994). Abstracting and indexing The journal is abstracted and indexed in AGRICOLA, Biological Abstracts, BIOSIS Previews, Chemical Abstracts, Current Contents, EMBASE, MEDLINE/Index Medicus/PubMed, and the Science Citation Index Expanded. Its 2015 impact factor was 4.606, ranking it fifth out of 33 journals in the category "Virology".
https://en.wikipedia.org/wiki/MicrobeLibrary
MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States. Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students; the remainder require a subscription. The service is currently suspended with the message: "Please check back with us in 2017".
https://en.wikipedia.org/wiki/Commutation%20cell
The commutation cell is the basic structure in power electronics. It is composed of two electronic switches (today, a high-power semiconductor, not a mechanical switch). It was traditionally referred to as a chopper, but since switching power supplies became a major form of power conversion, this new term has become more popular. The purpose of the commutation cell is to "chop" DC power into square wave alternating current. This is done so that an inductor and a capacitor can be used in an LC circuit to change the voltage. This is, in theory, a lossless process; in practice, efficiencies above 80-90% are routinely achieved. The output is usually run through a filter to produce clean DC power. By controlling the on and off times (the duty cycle) of the switch in the commutation cell, the output voltage can be regulated. This basic principle is the core of most modern power supplies, from tiny DC-DC converters in portable devices to massive switching stations for high voltage DC power transmission. Connection of two power elements A Commutation cell connects two power elements, often referred to as sources, although they can either produce or absorb power. Some requirements to connect power sources exist. The impossible configurations are listed in figure 1. They are basically: a voltage source cannot be shorted, as the short circuit would impose a zero voltage which would contradict the voltage generated by the source; in an identical way, a current source cannot be placed in an open circuit; two (or more) voltage sources cannot be connected in parallel, as each of them would try to impose the voltage on the circuit; two (or more) current sources cannot be connected in series, as each of them would try to impose the current in the loop. This applies to classical sources (battery, generator) and capacitors and inductors: At a small time scale, a capacitor is identical to a voltage source and an inductor to a current source. Connecting two capacitors with dif
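As an illustration of the duty-cycle regulation described above, here is a minimal sketch in Python of the ideal, lossless relation for a buck-type (step-down) cell; the 48 V input and the specific duty cycles are illustrative assumptions, not values from the text.

```python
# Minimal sketch: average output voltage of an ideal buck-type commutation cell.
# The cell "chops" a DC input into a square wave; after LC filtering, the average
# (DC) output is duty_cycle * V_in in continuous-conduction operation.
# All numbers below are illustrative assumptions, not values from the article.

def buck_output_voltage(v_in: float, duty_cycle: float) -> float:
    """Ideal (lossless) buck cell: V_out = D * V_in, with 0 <= D <= 1."""
    if not 0.0 <= duty_cycle <= 1.0:
        raise ValueError("duty cycle must lie between 0 and 1")
    return duty_cycle * v_in

if __name__ == "__main__":
    v_in = 48.0  # assumed input DC bus voltage
    for d in (0.25, 0.5, 0.75):
        print(f"D = {d:.2f} -> V_out = {buck_output_voltage(v_in, d):.1f} V")
```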
https://en.wikipedia.org/wiki/Frontal%20vein
The frontal vein (supratrochlear vein) begins on the forehead in a venous plexus which communicates with the frontal branches of the superficial temporal vein. The veins converge to form a single trunk, which runs downward near the middle line of the forehead parallel with the vein of the opposite side. The two veins are joined, at the root of the nose, by a transverse branch, called the nasal arch, which receives some small veins from the dorsum of the nose. At the root of the nose the veins diverge, and, each at the medial angle of the orbit, joins the supraorbital vein, to form the angular vein. Occasionally the frontal veins join to form a single trunk, which bifurcates at the root of the nose into the two angular veins. See also Glabella
https://en.wikipedia.org/wiki/Supraorbital%20vein
The supraorbital vein is a vein of the forehead. It communicates with the frontal branch of the superficial temporal vein. It passes through the supraorbital notch, and merges with the angular vein to form the superior ophthalmic vein. The supraorbital vein helps to drain blood from the forehead, eyebrow, and upper eyelid. Structure The supraorbital vein begins on the forehead, where it communicates with the frontal branch of the superficial temporal vein. It runs downward superficial to the frontalis muscle. It merges with the angular vein to form the superior ophthalmic vein. Previous to its junction with the angular vein, it passes through the supraorbital notch into the orbit around the eye. As this vessel passes through the notch, it receives the frontal diploic vein through a foramen at the bottom of the notch. Function The supraorbital vein helps to drain blood from the forehead, eyebrow, and upper eyelid. Additional images
https://en.wikipedia.org/wiki/Superficial%20temporal%20vein
The superficial temporal vein is a vein of the side of the head which collects venous blood from the region of the temple. It arises from an anastomosing venous plexus on the side and vertex of the head. The superficial temporal vein terminates within the substance of the parotid gland by uniting with the maxillary vein to form the retromandibular vein. Structure It begins on the side and vertex of the skull in a network () which communicates with the frontal vein and supraorbital vein, with the corresponding vein of the opposite side, and with the posterior auricular vein and occipital vein. From this network frontal and parietal branches arise, and join above the zygomatic arch to form the trunk of the vein, which is joined by the middle temporal vein emerging from the temporalis muscle. It then crosses the posterior root of the zygomatic arch, enters the substance of the parotid gland where it unites with the internal maxillary vein to form the posterior facial vein. Tributaries Tributaries of the superficial temporal vein drain venous blood from the temple. Tributaries of the superficial temporal vein include: some parotid veins articular veins of the temporomandibular joint anterior auricular veins from the auricula the transverse facial vein from the side of the face
https://en.wikipedia.org/wiki/Angular%20vein
The angular vein is a vein of the face. It is the upper part of the facial vein, above its junction with the superior labial vein. It is formed by the junction of the supratrochlear vein and supraorbital vein, and joins with the superior labial vein. It drains the medial canthus, and parts of the nose and the upper lip. It can be a route of spread of infection from the danger triangle of the face to the cavernous sinus. Structure The angular vein is the upper part of the facial vein, above its junction with the superior labial vein. It anastomoses with the supratrochlear vein, and the supraorbital vein. Its connection with the supraorbital vein forms the superior ophthalmic vein that drains through the orbit. This also connects it with the inferior ophthalmic vein and the cavernous sinus. These do not have valves. The angular vein itself may not contain valves. It receives the lateral nasal veins from the ala of the nose, and the inferior palpebral vein. The angular vein lies lateral to the angular nerve. It runs obliquely downward by the side of the nose. It passes under zygomaticus major muscle. It joins with the superior labial vein. Function The angular vein drains the medial canthus, and parts of the nose and the upper lip. Clinical significance The angular vein may be affected by a thrombus. This can create problems for endovascular treatment. Cavernous sinus thrombosis Any infection of the mouth or face (such as the danger triangle of the face) can spread to the cavernous sinus via the angular veins. This is particularly as the veins are valveless. This can cause thrombosis. Squeezing pimples in this area should be avoided. Additional images
https://en.wikipedia.org/wiki/Retromandibular%20vein
The retromandibular vein (temporomaxillary vein, posterior facial vein) is a major vein of the face. It is formed within the parotid gland by the confluence of the maxillary vein, and superficial temporal vein. It descends in the gland and splits into two branches upon emerging from the gland. Its anterior branch then joins the (anterior) facial vein forming the common facial vein, while its posterior branch joins the posterior auricular vein forming the external jugular vein. Anatomy Origin The retromandibular vein is formed within the parotid gland by the confluence of the maxillary vein, and superficial temporal vein. Course It descends inside parotid gland, superficial to the external carotid artery (but beneath the facial nerve), between the sternocleidomastoideus muscle and ramus of mandible. It emerges from the parotid gland inferiorly, then immediately divides into two branches: an anterior branch which passes anterior-ward to unite with the (anterior) facial vein forming the common facial vein (which then empties into the internal jugular vein). a posterior branch which penetrates the investing layer of the deep cervical fascia before uniting with the posterior auricular vein forming the external jugular vein. Function The retromandibular vein provides venous drainage to the superior cranium, and significant drainage to the ear. Clinical significance Parrot's sign is a sensation of pain when pressure is applied to the retromandibular region. Additional images
https://en.wikipedia.org/wiki/Food%20Union%20NNF
The Food Union NNF () is a trade union representing food and tobacco workers in Denmark. The union was founded in 1980, when the Bakery, Pastry and Mill Workers' Union merged with the Danish Union of Slaughterhouse Workers, the Danish Tobacco Workers' Union, and the Confectionery and Chocolate Workers' Union. They formed the Danish Food and Allied Workers' Union (NNF), and in 1983, the Association of Dairy Workers also merged in. In 2009, the union shortened its name, to become the "Food Union NNF". Like its predecessors, the union affiliated to the Danish Confederation of Trade Unions, and since 2019 has been a member of its successor, the Danish Trade Union Confederation. In 1997, it had 41,913 members, but by 2018, membership had dropped to only 17,095.
https://en.wikipedia.org/wiki/Posterior%20auricular%20vein
The posterior auricular vein is a vein of the head. It begins from a plexus with the occipital vein and the superficial temporal vein, descends behind the auricle, and drains into the external jugular vein. Structure The posterior auricular vein begins upon the side of the head, in a plexus which communicates with the tributaries of the occipital vein and the superficial temporal vein. It descends behind the auricle. It joins the posterior division of the retromandibular vein. It drains into the external jugular vein. It receives the stylomastoid vein, and some tributaries from the cranial surface of the auricle. Variation The posterior auricular vein may drain into the internal jugular vein or a posterior jugular vein if there are variations in the external jugular vein. Clinical significance Skin from the auriculomastoid region of the head may be grafted as a flap, keeping the posterior auricular vein with it.
https://en.wikipedia.org/wiki/Occipital%20vein
The occipital vein is a vein of the scalp. It originates from a plexus around the external occipital protuberance and superior nuchal line to the back part of the vertex of the skull. It usually drains into the internal jugular vein, but may also drain into the posterior auricular vein (which joins the external jugular vein). It drains part of the scalp. Structure The occipital vein lies within the scalp. It begins as a plexus at the posterior aspect of the scalp, extending from the external occipital protuberance and superior nuchal line to the back part of the vertex of the skull. It pierces the cranial attachment of the trapezius and, dipping into the venous plexus of the suboccipital triangle, joins the deep cervical vein and the vertebral vein. Occasionally it follows the course of the occipital artery, and ends in the internal jugular vein. Alternatively, it joins the posterior auricular vein, and ends in the external jugular vein. The parietal emissary vein connects it with the superior sagittal sinus. As the occipital vein passes across the mastoid portion of the temporal bone, it usually receives the mastoid emissary vein, which connects it with the sigmoid sinus. The occipital diploic vein sometimes joins it. Function The occipital vein drains blood from part of the scalp.
https://en.wikipedia.org/wiki/Internal%20pudendal%20veins
The internal pudendal veins (internal pudic veins) are a set of veins in the pelvis. They are the venae comitantes of the internal pudendal artery. Internal pudendal veins are enclosed by pudendal canal, with internal pudendal artery and pudendal nerve. They begin in the deep veins of the vulva and of the penis, issuing from the bulb of the vestibule and the bulb of the penis, respectively. They accompany the internal pudendal artery, and unite to form a single vessel, which ends in the internal iliac vein. They receive the veins from the urethral bulb, the perineal and inferior hemorrhoidal veins. The deep dorsal vein of the penis communicates with the internal pudendal veins, but ends mainly in the pudendal plexus.
https://en.wikipedia.org/wiki/Latent%20Dirichlet%20allocation
In natural language processing, Latent Dirichlet Allocation (LDA) is a Bayesian network (and, therefore, a generative statistical model) that explains a set of observations through unobserved groups, and each group explains why some parts of the data are similar. The LDA is an example of a Bayesian topic model. In this, observations (e.g., words) are collected into documents, and each word's presence is attributable to one of the document's topics. Each document will contain a small number of topics. History In the context of population genetics, LDA was proposed by J. K. Pritchard, M. Stephens and P. Donnelly in 2000. LDA was applied in machine learning by David Blei, Andrew Ng and Michael I. Jordan in 2003. Overview Evolutionary biology and bio-medicine In evolutionary biology and bio-medicine, the model is used to detect the presence of structured genetic variation in a group of individuals. The model assumes that alleles carried by individuals under study have origin in various extant or past populations. The model and various inference algorithms allow scientists to estimate the allele frequencies in those source populations and the origin of alleles carried by individuals under study. The source populations can be interpreted ex-post in terms of various evolutionary scenarios. In association studies, detecting the presence of genetic structure is considered a necessary preliminary step to avoid confounding. Clinical psychology, mental health, and social science In clinical psychology research, LDA has been used to identify common themes of self-images experienced by young people in social situations. Other social scientists have used LDA to examine large sets of topical data from discussions on social media (e.g., tweets about prescription drugs). Musicology In the context of computational musicology, LDA has been used to discover tonal structures in different corpora. Machine learning One application of LDA in machine learning - specifically, top
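A minimal sketch of fitting an LDA topic model in Python, assuming a recent version of scikit-learn; the four-document toy corpus and the choice of two topics are illustrative assumptions only.

```python
# Minimal sketch: fitting an LDA topic model with scikit-learn.
# The tiny corpus and all parameter choices are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the patient reported side effects of the prescription drug",
    "the new drug trial measured dosage and side effects",
    "the orchestra performed a symphony in a minor key",
    "the concert featured a piano sonata and a symphony",
]

# LDA operates on word counts (bag of words), not tf-idf weights.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)        # per-document topic mixtures

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):   # per-topic word weights
    top_words = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {top_words}")
print(doc_topics.round(2))
```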
https://en.wikipedia.org/wiki/Forest%E2%80%93savanna%20mosaic
Forest–savanna mosaic is a transitory ecotone between the tropical moist broadleaf forests of Equatorial Africa and the drier savannas and open woodlands to the north and south of the forest belt. The forest–savanna mosaic consists of drier forests, often gallery forest, interspersed with savannas and open grasslands. Flora This band of marginal savannas bordering the dense dry forest extends from the Atlantic coast of Guinea to South Sudan, corresponding to a climatic zone with relatively high rainfall, between 800 and 1400 mm. It is an often unresolvable, complex of secondary forests and mixed savannas, resulting from intense erosion of primary forests by fire and clearing. The vegetation ceases to have an evergreen character, and becomes more and more seasonal. A species of acacia, Faidherbia albida, marks, with its geographical distribution, the Guinean area of the savannas together with the area of the forest-savanna, arboreal and shrub, and a good part of the dense dry forest with prevalently deciduous trees. Ecoregions The World Wildlife Fund recognizes several distinct forest-savanna mosaic ecoregions: The Guinean forest–savanna mosaic is the transition between the Upper and Lower Guinean forests of West Africa and the West Sudanian savanna. The ecoregion extends from Senegal on the west to the Cameroon Highlands on the east. The Dahomey Gap is a region of Togo and Benin where the forest-savanna mosaic extends to the coast, separating the Upper and Lower Guinean forests. The Northern Congolian forest–savanna mosaic lies between the Congolian forests of Central Africa and the East Sudanian savanna. It extends from the Cameroon Highlands in the west to the East African Rift in the east, encompassing portions of Cameroon, Central African Republic, Democratic Republic of the Congo, and southwestern Sudan. The Western Congolian forest–savanna mosaic lies southwest of the Congolian forest belt, covering portions of southern Gabon, southern Republic of the Co
https://en.wikipedia.org/wiki/Erythronium%20dens-canis
Erythronium dens-canis, the dog's-tooth-violet or dogtooth violet, is a bulbous herbaceous perennial flowering plant in the family Liliaceae, growing to . It is native to central and southern Europe from Portugal to Ukraine. It is the only naturally occurring species of Erythronium in Europe. Despite its common name, it is not closely related to the true violets of genus Viola. Description Erythronium dens-canis produces a solitary white, pink or lilac flower at the beginning of spring. The petals (growing to approx. 3 cm) are reflexed at the top and yellow tinted at the base. The brown spotted leaves are ovate to lanceolate and grow in pairs. The white bulb is oblong and resembles a dog's tooth, hence the common name "dog's tooth violet" and the Latin specific epithet dens-canis, which translates as "dog's tooth". Ecology Erythronium dens-canis is found in damp, lightly shaded settings such as deciduous woodland. Uses Its leaves may be consumed raw in salad, or boiled as a leaf vegetable. The bulb is also the source of a starch used in making vermicelli. Varieties formerly included Numerous names have been coined at the varietal level for plants once considered to be included within Erythronium dens-canis. None of the European varieties is now recognized as meriting recognition but some of the Asian species are now regarded as distinct species. Erythronium dens-canis var. japonicum, now called Erythronium japonicum Erythronium dens-canis var. parviflorum, now called Erythronium sibiricum Erythronium dens-canis var. sibiricum, now called Erythronium sibiricum
https://en.wikipedia.org/wiki/Canadian%20Association%20for%20Physical%20Anthropology
The Canadian Association for Physical Anthropology / L'Association Canadienne D'Anthropologie Physique (CAPA/ACAP) is a learned society of international scholars and students of Physical Anthropology in Canada. The Associations's mission is to promote and increase awareness and understanding of physical (biological) anthropology among its membership and to support institutions, agencies, and the public at large. The Association is guided by a constitution and code of ethics, holds annual meetings, distributes biannual newsletters to its membership, and offers funding opportunities for student members. The current president in 2017 is Dr. Ian Colquhoun (University of Western Ontario). Mission statement The mission statement of CAPA/ACAP, as listed on the website, is as follows: "The Canadian Association of Physical Anthropology / L'Association Canadienne D'Anthropologie Physique is a learned society of international scholars and students whose aim is to promote and increase awareness and understanding of physical (biological) anthropology among its membership, as well as to supporting institutions and agencies and the public at large. Physical anthropologists study adaptation, variability and evolution in a biocultural context. Our members recognize and celebrate the geographic and temporal diversity and complexity of ancient and modern humankind, our hominid forebears and our nonhuman primate cousins and their ancestors. As such, our discipline is inherently multidisciplinary, crossing the boundaries between the natural and social sciences, in order to provide a richer understanding of human diversity and complexity." This statement is upheld by the CAPA-ACAP Constitution and Code of Ethics (2015). Founding of the Association The Canadian Association for Physical Anthropology/L'Association Canadienne d'Anthropologie Physique (CAPA-ACAP) was founded during the 1972 meeting of the American Association of Physical Anthropologists. Founding members Drs. Frank Auger
https://en.wikipedia.org/wiki/Crash%20Bandicoot%20%28character%29
Crash Bandicoot is the title character and main protagonist of the Crash Bandicoot series. Introduced in the 1996 video game Crash Bandicoot, Crash is a mutant eastern barred bandicoot who was genetically enhanced by the series' main antagonist Doctor Neo Cortex and soon escaped from Cortex's castle after a failed experiment in the "Cortex Vortex". Throughout the series, Crash acts as the opposition against Cortex and his schemes for world domination. While Crash has a number of offensive maneuvers at his disposal, his most distinctive technique is one in which he spins like a tornado at high speeds and knocks away almost anything that he strikes. Crash was created by Andy Gavin and Jason Rubin, and was originally designed by Charles Zembillas. Crash was intended to be a mascot character for Sony to use to compete against Nintendo's Mario and Sega's Sonic the Hedgehog. Before Crash was given his name (which stems from the visceral reaction to the character's destruction of boxes), he was referred to as "Willie the Wombat" for much of the duration of the first game's production. Crash has drawn comparisons to mascots such as Mario and Sonic the Hedgehog by reviewers. His animations have been praised, while his voice has faced criticism. He has been redesigned several times throughout many games, which have drawn mixed reactions. Concept and creation One of the main reasons Naughty Dog chose to develop Crash Bandicoot (at the time jokingly codenamed "Sonic's Ass Game") for the Sony PlayStation was Sony's lack of an existing mascot character that could compete with Sega's Sonic the Hedgehog and Nintendo's Mario. By this time video game mascots were seen as increasingly unimportant, since they were overshadowed by cross-licensing and the aging games market meant most gamers were too old to find mascots appealing, but Sony were nonetheless interested in covering all bases. Naughty Dog desired to do what Sega and Warner Bros. did with the hedgehog (Sonic) and the Tasma
https://en.wikipedia.org/wiki/Diagonal%20morphism
In category theory, a branch of mathematics, for any object in any category where the product exists, there exists the diagonal morphism satisfying for where is the canonical projection morphism to the -th component. The existence of this morphism is a consequence of the universal property that characterizes the product (up to isomorphism). The restriction to binary products here is for ease of notation; diagonal morphisms exist similarly for arbitrary products. The image of a diagonal morphism in the category of sets, as a subset of the Cartesian product, is a relation on the domain, namely equality. For concrete categories, the diagonal morphism can be simply described by its action on elements of the object . Namely, , the ordered pair formed from . The reason for the name is that the image of such a diagonal morphism is diagonal (whenever it makes sense), for example the image of the diagonal morphism on the real line is given by the line that is the graph of the equation . The diagonal morphism into the infinite product may provide an injection into the space of sequences valued in ; each element maps to the constant sequence at that element. However, most notions of sequence spaces have convergence restrictions that the image of the diagonal map will fail to satisfy. See also Diagonal functor Diagonal embedding
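The displayed formulas are missing from the extract above. A hedged reconstruction of the usual statement, for the binary product, is:

```latex
% Hedged reconstruction of the definitions referred to above.
\delta_X : X \longrightarrow X \times X,
\qquad \pi_k \circ \delta_X = \mathrm{id}_X \quad (k = 1, 2)

% In a concrete category such as Set, the action on elements is
\delta_X(x) = (x, x),
% so the image of \delta_{\mathbb{R}} is the line y = x in the plane.
```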
https://en.wikipedia.org/wiki/Diagonal%20functor
In category theory, a branch of mathematics, the diagonal functor is given by , which maps objects as well as morphisms. This functor can be employed to give a succinct alternate description of the product of objects within the category : a product is a universal arrow from to . The arrow comprises the projection maps. More generally, given a small index category , one may construct the functor category , the objects of which are called diagrams. For each object in , there is a constant diagram that maps every object in to and every morphism in to . The diagonal functor assigns to each object of the diagram , and to each morphism in the natural transformation in (given for every object of by ). Thus, for example, in the case that is a discrete category with two objects, the diagonal functor is recovered. Diagonal functors provide a way to define limits and colimits of diagrams. Given a diagram , a natural transformation (for some object of ) is called a cone for . These cones and their factorizations correspond precisely to the objects and morphisms of the comma category , and a limit of is a terminal object in , i.e., a universal arrow . Dually, a colimit of is an initial object in the comma category , i.e., a universal arrow . If every functor from to has a limit (which will be the case if is complete), then the operation of taking limits is itself a functor from to . The limit functor is the right-adjoint of the diagonal functor. Similarly, the colimit functor (which exists if the category is cocomplete) is the left-adjoint of the diagonal functor. For example, the diagonal functor described above is the left-adjoint of the binary product functor and the right-adjoint of the binary coproduct functor. See also Diagram (category theory) Cone (category theory) Diagonal morphism
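The formulas dropped from the extract above can be sketched as follows; this is a hedged reconstruction of the usual definitions and the limit/colimit adjunctions.

```latex
% Hedged reconstruction of the constructions described above.
% Binary case:
\Delta : \mathcal{C} \to \mathcal{C} \times \mathcal{C},
\qquad \Delta(a) = (a, a), \qquad \Delta(f) = (f, f)

% For a small index category J, the constant-diagram functor:
\Delta : \mathcal{C} \to \mathcal{C}^{\mathcal{J}},
\qquad \Delta(a)(j) = a \ \text{ for every object } j \text{ of } \mathcal{J}

% Limits and colimits (when they exist) are its adjoints:
\operatorname{colim} \dashv \Delta \dashv \lim,
\qquad
\mathrm{Hom}_{\mathcal{C}^{\mathcal{J}}}\bigl(\Delta(c), F\bigr) \;\cong\; \mathrm{Hom}_{\mathcal{C}}\bigl(c, \lim F\bigr)
```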
https://en.wikipedia.org/wiki/Waterfall%20plot
Waterfall plots are often used to show how two-dimensional phenomena change over time. A three-dimensional spectral waterfall plot is a plot in which multiple curves of data, typically spectra, are displayed simultaneously. Typically the curves are staggered both across the screen and vertically, with "nearer" curves masking the ones behind. The result is a series of "mountain" shapes that appear to be side by side. The waterfall plot is often used to show how two-dimensional information changes over time or some other variable such as rotational speed. Waterfall plots are also often used to depict spectrograms or cumulative spectral decay (CSD). Uses The results of spectral density estimation, showing the spectrum of the signal at successive intervals of time. The delayed response from a loudspeaker or listening room produced by impulse response testing or MLSSA. Spectra at different engine speeds when testing engines. See also Loudspeaker acoustics Loudspeaker measurement
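A minimal sketch of drawing such a staggered waterfall plot with Python's matplotlib; the synthetic drifting-tone signal, sample rate, and stagger offsets are illustrative assumptions, not data from the article.

```python
# Minimal sketch: a waterfall plot of successive spectra, staggered so that
# nearer curves sit lower and in front, partially masking the ones behind.
# The synthetic signal and all numbers are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt

fs = 1000                      # assumed sample rate in Hz
n_slices, n_samples = 20, 512
t = np.arange(n_samples) / fs
freqs = np.fft.rfftfreq(n_samples, d=1 / fs)

fig, ax = plt.subplots()
for k in reversed(range(n_slices)):           # draw far slices first, near ones last
    f0 = 50 + 5 * k                           # tone frequency drifts across slices
    x = np.sin(2 * np.pi * f0 * t)
    spectrum = np.abs(np.fft.rfft(x * np.hanning(n_samples)))
    x_off = freqs + 2 * k                     # stagger across the axes
    y_off = spectrum + 10 * k                 # stagger up the axes
    ax.fill_between(x_off, y_off, 0, color="white")   # mask curves behind
    ax.plot(x_off, y_off, color="black", linewidth=0.8)

ax.set_xlabel("frequency (Hz, offset per slice)")
ax.set_ylabel("magnitude (offset per slice)")
ax.set_title("Waterfall plot of successive spectra")
plt.show()
```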
https://en.wikipedia.org/wiki/Climate%20change%20adaptation
Climate change adaptation is the process of adjusting to the effects of climate change. These can be both current or expected impacts. Adaptation aims to moderate or avoid harm for people. It also aims to exploit opportunities. Humans may also intervene to help adjustment for natural systems. There are many adaptation strategies or options. They can help manage impacts and risks to people and nature. We can classify adaptation actions in four ways. These are infrastructural and technological; institutional; behavioural and cultural; and nature-based options. The need for adaptation varies from place to place. It depends on the risk to human or ecological systems. Adaptation is particularly important in developing countries. This is because developing countries are most vulnerable to climate change. So they bear the brunt of the effects of climate change. Adaptation needs are high for food and water. They are high for other sectors that are important for economic output, jobs and incomes. Adaptation planning is important to help countries manage climate risks. Plans, policies or strategies are in place in more than 70% of countries. Other levels of government like cities and provinces also use adaptation planning. So do economic sectors. Developing countries can receive international funding to help develop national adaptation plans. This is important to help them implement more adaptation. The adaptation carried out so far is not enough to manage risks at current levels of climate change. And adaptation must also anticipate future risks of climate change. The costs of climate change adaptation are likely to cost billions of dollars a year for the coming decades. In many cases the cost will be less than the damage that it avoids. Definition The IPCC defines climate change adaptation in this way: "In human systems, as the process of adjustment to actual or expected climate and its effects in order to moderate harm or take advantage of beneficial opportunities."
https://en.wikipedia.org/wiki/UNIVAC%20Series%2090
The Univac Series 90 is an obsolete family of mainframe class computer systems from UNIVAC first introduced in 1973. The low end family members included the 90/25, 90/30 and 90/40 that ran the OS/3 operating system. The intermediate members of the family were the 90/60 and 90/70, while the 90/80, announced in 1976, was the high end system. The 90/60 through 90/80 systems all ran the Univac’s virtual memory operating system, VS/9. The Series 90 systems were the replacement for the UNIVAC 9000 series of low end, mainframe systems marketed by Sperry Univac during the 1960s. The 9000 series systems were byte-addressable machines with an instruction set that was compatible with the IBM System/360. The family included the 9200, 9300, 9400, and 9480 systems. The 9200 and 9300 ran the Minimum Operating system. This system was loaded from cards, but thereafter also supported magnetic tape or magnetic disk for programs and data. The 9400 and 9480 ran a real memory operating system called OS/4. As Sperry moved into the 1970s, they expanded the 9000 family with the introduction of the 9700 system in 1971. They were also developing a new real memory operating system for the 9700 called OS/7. In January 1972, Sperry officially took over the RCA customer base, offering the Spectra 70 and RCA Series computers as the UNIVAC Series 70. They redesigned the 9700, adding virtual memory, and renamed the processor the 90/70. They cancelled development of OS/7 in favor of VS/9, a renamed RCA VMOS. A number of the RCA customers continued with Sperry, and the 90/60 and 90/70 would provide an upgrade path for the customers with 70/45, 70/46, RCA 2 and 3 systems. In 1976, Sperry added the 90/80 at the top end of the Series 90 Family, based on an RCA design, providing an upgrade path for the 70/60, 70/61, RCA 6 and 7 systems. The RCA base was very profitable for Sperry and Sperry was able to put together a string of 40 quarters of profit. Sperry also offered their own 1100 family of s
https://en.wikipedia.org/wiki/Oceanic%20plateau
An oceanic or submarine plateau is a large, relatively flat elevation that is higher than the surrounding relief with one or more relatively steep sides. There are 184 oceanic plateaus in the world, covering an area of or about 5.11% of the oceans. The South Pacific region around Australia and New Zealand contains the greatest number of oceanic plateaus (see map). Oceanic plateaus produced by large igneous provinces are often associated with hotspots, mantle plumes, and volcanic islands — such as Iceland, Hawaii, Cape Verde, and Kerguelen. The three largest plateaus, the Caribbean, Ontong Java, and Mid-Pacific Mountains, are located on thermal swells. Other oceanic plateaus, however, are made of rifted continental crust, for example the Falkland Plateau, Lord Howe Rise, and parts of Kerguelen, Seychelles, and Arctic ridges. Plateaus formed by large igneous provinces were formed by the equivalent of continental flood basalts such as the Deccan Traps in India and the Snake River Plain in the United States. In contrast to continental flood basalts, most igneous oceanic plateaus erupt through young and thin () mafic or ultra-mafic crust and are therefore uncontaminated by felsic crust and representative for their mantle sources. These plateaus often rise above the surrounding ocean floor and are more buoyant than oceanic crust. They therefore tend to withstand subduction, more-so when thick and when reaching subduction zones shortly after their formations. As a consequence, they tend to "dock" to continental margins and be preserved as accreted terranes. Such terranes are often better preserved than the exposed parts of continental flood basalts and are therefore a better record of large-scale volcanic eruptions throughout Earth's history. This "docking" also means that oceanic plateaus are important contributors to the growth of continental crust. Their formations often had a dramatic impact on global climate, such as the most recent plateaus formed, the three, l
https://en.wikipedia.org/wiki/Australian%20Mathematics%20Competition
The Australian Mathematics Competition is a mathematics competition run by the Australian Maths Trust for students from year 3 up to year 12 in Australia, and their equivalent grades in other countries. Since its inception in 1976 in the Australian Capital Territory, the participation numbers have increased to around 600,000, with around 100,000 being from outside Australia, making it the world's largest mathematics competition. History The forerunner of the competition, first held in 1976, was open to students within the Australian Capital Territory, and attracted 1,200 entries. In 1976 and 1977 the outstanding entrants were awarded the Burroughs medal. In 1978, the competition became a nationwide event, and became known as the Australian Mathematics Competition for the Wales awards with 60,000 students from Australia and New Zealand participating. In 1983 the medals were renamed the Westpac awards following a change to the name of the title sponsor Westpac. Other sponsors since the inception of the competition have been the Canberra Mathematical Association and the University of Canberra (previously known as the Canberra College of Advanced Education). The competition has since spread to countries such as New Zealand, Singapore, Fiji, Tonga, Taiwan, China and Malaysia, which submit thousands of entries each. A French translation of the paper has been available since the current competition was established in 1978, with Chinese translation being made available to Hong Kong (Traditional Chinese Characters) and Taiwan (Traditional Chinese Characters) students in 2000. Large print and braille versions are also available. In 2004, the competition was expanded to allow two more divisions, one for year five and six students, and another for year three and four students. In 2005, students from 38 different countries entered the competition. Format The competition paper consists of twenty-five multiple-choice questions and five integer questions, which are ordered
https://en.wikipedia.org/wiki/Liquid%20metal%20cooled%20reactor
A liquid metal cooled nuclear reactor, or LMR is a type of nuclear reactor where the primary coolant is a liquid metal. Liquid metal cooled reactors were first adapted for breeder reactor power generation. They have also been used to power nuclear submarines. Due to their high thermal conductivity, metal coolants remove heat effectively, enabling high power density. This makes them attractive in situations where size and weight are at a premium, like on ships and submarines. Most water-based reactor designs are highly pressurized to raise the boiling point (thereby improving cooling capabilities), which presents safety and maintenance issues that liquid metal designs lack. Additionally, the high temperature of the liquid metal can be used to drive power conversion cycles with high thermodynamic efficiency. This makes them attractive for improving power output, cost effectiveness, and fuel efficiency in nuclear power plants. Liquid metals, being electrically highly conductive, can be moved by electromagnetic pumps. Disadvantages include difficulties associated with inspection and repair of a reactor immersed in opaque molten metal, and depending on the choice of metal, fire hazard risk (for alkali metals), corrosion and/or production of radioactive activation products may be an issue. Design Liquid metal coolant has been applied to both thermal- and fast-neutron reactors. To date, most fast neutron reactors have been liquid metal cooled fast reactors (LMFRs). When configured as a breeder reactor (e.g. with a breeding blanket), such reactors are called liquid metal fast breeder reactors (LMFBRs). Coolant properties Suitable liquid metal coolants must have a low neutron capture cross section, must not cause excessive corrosion of the structural materials, and must have melting and boiling points that are suitable for the reactor's operating temperature. Liquid metals generally have high boiling points, reducing the probability that the coolant can boil, which c
https://en.wikipedia.org/wiki/The%20Duel%3A%20Test%20Drive%20II
The Duel: Test Drive II is a 1989 racing video game developed by Distinctive Software and published by Accolade for Amiga, Amstrad CPC, Apple IIGS, Commodore 64, MS-DOS, MSX, ZX Spectrum, Atari ST, Sega Genesis and SNES. Gameplay Like the original Test Drive, the focus of The Duel is driving exotic cars through dangerous highways, evading traffic, and trying to escape police pursuits. While the first game in the series had the player simply racing for time in a single scenario, Test Drive II improves upon its predecessor by introducing varied scenery, and giving the player the option of racing against the clock or competing against a computer-controlled opponent. The player initially is given the opportunity to choose a car to drive and a level of difficulty, which in turn determines whether the car will use an automatic or manual transmission—the number of difficulty options varies between gaming platforms. Levels begin with the player's car (and the computer opponent, if selected) idling on a roadway. Primarily these are two to four lane public highways with many turns; each level is different, and they include obstacles such as bridges, cliffs, and tunnels in addition to the other cars already on the road. Each level also has one or more police cars along the course. The goal of each level is to reach the gas station at the end of the course in the least amount of time. Stopping at the gas station is not mandatory, and one could drive past it if inattentive. The consequence of not stopping is running out of gas, and thus losing a car (life). The player begins the game with five lives, one of which is lost each time the player crashes into something. The player is awarded a bonus life for completing a level without crashing or running out of gas. In addition to losing a life, crashing adds thirty seconds to the player's time. Cars could crash into other traffic or off-road obstacles such as trees or by falling off the cliff on one of the mountain levels. They c
https://en.wikipedia.org/wiki/Magnetic%20resonance%20force%20microscopy
Magnetic resonance force microscopy (MRFM) is an imaging technique that acquires magnetic resonance images (MRI) at nanometer scales, and possibly at atomic scales in the future. MRFM is potentially able to observe protein structures which cannot be seen using X-ray crystallography and protein nuclear magnetic resonance spectroscopy. Detection of the magnetic spin of a single electron has been demonstrated using this technique. The sensitivity of a current MRFM microscope is 10 billion times greater than a medical MRI used in hospitals. Basic principle The MRFM concept combines the ideas of magnetic resonance imaging (MRI) and atomic force microscopy (AFM). Conventional MRI employs an inductive coil as an antenna to sense resonant nuclear or electronic spins in a magnetic field gradient. MRFM uses a cantilever tipped with a ferromagnetic (iron cobalt) particle to directly detect a modulated spin gradient force between sample spins and the tip. The magnetic particle is characterized using the technique of cantilever magnetometry. As the ferromagnetic tip moves close to the sample, the atoms' nuclear spins become attracted to it and generate a small force on the cantilever. The spins are then repeatedly flipped, causing the cantilever to gently sway back and forth in a synchronous motion. That displacement is measured with an interferometer (laser beam) to create a series of 2-D images of the sample, which are combined to generate a 3-D image. The interferometer measures resonant frequency of the cantilever. Smaller ferromagnetic particles and softer cantilevers increase the signal-to-noise ratio. Unlike the inductive coil approach, MRFM sensitivity scales favorably as device and sample dimensions are reduced. Because the signal-to-noise ratio is inversely proportional to the sample size, Brownian motion is the primary source of noise at the scale in which MRFM is useful. Accordingly, MRFM devices are cryogenically cooled. MRFM was specifically devised to determine
https://en.wikipedia.org/wiki/Sternocostal%20triangle
The sternocostal triangles (foramina of Morgagni, Larrey's spaces, sternocostal hiatuses, etc.) are small zones lying between the costal and sternal attachments of the thoracic diaphragm. No vascular elements are present within this space. The borders of this space are: Medial: the lateral border of the sternal part of the diaphragm Lateral: the medial border of the costal part of the diaphragm Anterior: the musculoaponeurotic plane formed by a confluence of the transversus thoracis superiorly and the transversus abdominis inferiorly The superior epigastric artery passes in front of the aponeurotic plane that forms the anterior border and enters the abdomen anterior to the diaphragm. Eponym It is named for Giovanni Battista Morgagni. Pathology It can be a site of Morgagni's hernia.
https://en.wikipedia.org/wiki/Pleuroperitoneal
Pleuroperitoneal is a term denoting the pleural and peritoneal serous membranes or the cavities they line. It is divided from the pericardial cavity by the transverse septum. A congenital defect or traumatic injury of the pleuroperitoneal membrane can lead to a diaphragmatic hernia.
https://en.wikipedia.org/wiki/Peters%27s%20elephantnose%20fish
Peters's elephant-nose fish (Gnathonemus petersii) is an African freshwater elephantfish in the genus Gnathonemus. Other names in English include elephantnose fish, long-nosed elephant fish, and Ubangi mormyrid, after the Ubangi River. The Latin name is probably for the German naturalist Wilhelm Peters. The fish uses electrolocation to find prey, and has the largest brain-to-body oxygen use ratio of all known vertebrates (around 0.6). Description Peters's elephantnose fish is native to the rivers of West and Central Africa, in particular the lower Niger River basin, the Ogun River basin and the upper Chari River. It prefers muddy, slowly moving rivers and pools with cover such as submerged branches. The fish is a dark brown to black in colour, laterally compressed (averaging ), with a rear dorsal fin and anal fin of the same length. Its caudal or tail fin is forked. It has two stripes on its lower pendicular. Its most striking feature, as its names suggest, is a trunk-like protrusion on the head. This is not actually a nose, but a sensitive extension of the mouth, that it uses for self-defense, communication, navigation, and finding worms and insects to eat. This organ, called the Schnauzenorgan, is covered in electroreceptors, as is much of the rest of its body. The elephantnose uses a weak electric field, which it generates with specialized cells called electrocytes, which evolved from muscle cells, to find food, to navigate in dark or turbid waters, and to find a mate. Peters's elephantnose fish live to about 6–10 years. Electrolocation The elephant nose fish is weakly electric, meaning that it can detect moving prey and worms in the substrate by generating brief electric pulses with the electric organ in its tail. The electroreceptors around its body are sensitive enough to detect the different distortions of the electric field made by objects that conduct or resist electricity. The weak electric fields generated by this fish can be made audible by pl
https://en.wikipedia.org/wiki/Holliday%20junction
A Holliday junction is a branched nucleic acid structure that contains four double-stranded arms joined. These arms may adopt one of several conformations depending on buffer salt concentrations and the sequence of nucleobases closest to the junction. The structure is named after Robin Holliday, the molecular biologist who proposed its existence in 1964. In biology, Holliday junctions are a key intermediate in many types of genetic recombination, as well as in double-strand break repair. These junctions usually have a symmetrical sequence and are thus mobile, meaning that the four individual arms may slide through the junction in a specific pattern that largely preserves base pairing. Additionally, four-arm junctions similar to Holliday junctions appear in some functional RNA molecules. Immobile Holliday junctions, with asymmetrical sequences that lock the strands in a specific position, were artificially created by scientists to study their structure as a model for natural Holliday junctions. These junctions also later found use as basic structural building blocks in DNA nanotechnology, where multiple Holliday junctions can be combined into specific designed geometries that provide molecules with a high degree of structural rigidity. Structure Holliday junctions may exist in a variety of conformational isomers with different patterns of coaxial stacking between the four double-helical arms. Coaxial stacking is the tendency of nucleic acid blunt ends to bind to each other, by interactions between the exposed bases. There are three possible conformers: an unstacked (or open-X) form and two stacked forms. The unstacked form dominates in the absence of divalent cations such as Mg2+, because of electrostatic repulsion between the negatively charged backbones of the strands. In the presence of at least about 0.1 mM Mg2+, the electrostatic repulsion is counteracted and the stacked structures predominate. As of 2000, it was not known with certainty whether the electro