https://en.wikipedia.org/wiki/Arum%20italicum
Arum italicum is a species of flowering herbaceous perennial plant in the family Araceae, also known as Italian arum and Italian lords-and-ladies. It is native to the British Isles and much of the Mediterranean region, the Caucasus, Canary Islands, Madeira and northern Africa. It is also naturalized in Belgium, the Netherlands, Austria, Argentina, North Island New Zealand and scattered locations in North America. Description It grows high, with equal spread. It blooms in spring with white flowers that turn to showy red fruit. It is cultivated as an ornamental plant for traditional and woodland shade gardens. Subspecies italicum (the one normally grown in horticulture) has distinctive pale veins on the leaves, whilst subspecies neglectum (known as late cuckoo pint) has faint pale veins, and the leaves may have dark spots. Nonetheless, intermediates between these two subspecies also occur, and their distinctiveness has been questioned. Some gardeners use this arum to underplant with Hosta, as they produce foliage sequentially: when the Hosta withers away, the arum replaces it in early winter, maintaining ground-cover. Numerous cultivars have been developed for garden use, of which A. italicum subsp. italicum 'Marmoratum' has gained the Royal Horticultural Society's Award of Garden Merit. Arum italicum can be invasive in some areas. Arum italicum may hybridize with Arum maculatum. The status of two subspecies currently included in Arum italicum, subsp. albispathum (Crimea to the Caucasus) and subsp. canariense (Macaronesia), is uncertain and they may represent independent species. In 1778, Lamarck noticed that the inflorescence of this plant produces heat. Leaves, fruits and rhizomes contain compounds that make them poisonous. Notably, leaves are rich in oxalic acid; other active principles are present in other parts. The ingestion of berries, which are showy and red, can be fatal for babies and young children, as well as dogs. Gallery Taxonomy Within the genus
https://en.wikipedia.org/wiki/Hard%20inheritance
Hard inheritance was a model of heredity that explicitly excludes the inheritance of any acquired characteristics, such as those proposed by Lamarckism. It is the exact opposite of soft inheritance, a term coined by Ernst Mayr to contrast the two ideas about inheritance. Hard inheritance states that characteristics of an organism's offspring (passed on through DNA) will not be affected by the actions that the parental organism performs during its lifetime. For example: a medieval blacksmith who uses only his right arm to forge steel will not sire a son with a stronger right arm than left, because the blacksmith's actions do not alter his genetic code. Inheritance due to usage and non-usage is excluded. Inheritance works as described in the modern synthesis of evolutionary biology. The existence of inherited epigenetic variants has led to renewed interest in soft inheritance.
https://en.wikipedia.org/wiki/Martianoids
Martianoids is a ZX Spectrum video game developed by Ultimate Play the Game and released in 1987. Gameplay Although it uses isometric projection, as with Ultimate's second-generation isometric releases such as Nightshade and Gunfright, Martianoids used a scrolling display rather than the flip-screen of earlier titles such as Knight Lore and Alien 8. As with the contemporary Ultimate title Bubbler, Martianoids was not written by the partnership of Tim Stamper and Chris Stamper. It was instead programmed by a team at U.S. Gold, and was therefore an Ultimate game in name only. It was Ultimate's penultimate title for 8-bit home computers. In 1988 the company became Rare, embarking on a long-running partnership with Nintendo to develop console games. Plot Players are given a cryptic introduction describing an attack by aliens known as "The Martianoids". The Martianoids enter the player's ship and attack the brain of the ship with "photon weapons". The player must act to prevent further damage. The player has lasers for defence which destroy internal walls, computer components and so forth, and the aliens.
https://en.wikipedia.org/wiki/Hydrophilic%20interaction%20chromatography
Hydrophilic interaction chromatography (or hydrophilic interaction liquid chromatography, HILIC) is a variant of normal phase liquid chromatography that partly overlaps with other chromatographic applications such as ion chromatography and reversed phase liquid chromatography. HILIC uses hydrophilic stationary phases with reversed-phase type eluents. The name was suggested by Andrew Alpert in his 1990 paper on the subject. He described the chromatographic mechanism for it as liquid-liquid partition chromatography where analytes elute in order of increasing polarity, a conclusion supported by a review and re-evaluation of published data. Surface Any polar chromatographic surface can be used for HILIC separations. Even non-polar bonded silicas have been used with extremely high organic solvent composition, thanks to the exposed patches of silica in between the bonded ligands on the support, which can affect the interactions. With that exception, HILIC phases can be grouped into five categories of neutral, polar or ionic surfaces: simple unbonded silica; silanol or diol bonded phases; amino or anionic bonded phases; amide bonded phases; cationic bonded phases; zwitterionic bonded phases. Mobile phase A typical mobile phase for HILIC chromatography includes acetonitrile ("MeCN", also designated as "ACN") with a small amount of water. However, any aprotic solvent miscible with water (e.g. THF or dioxane) can be used. Alcohols can also be used; however, their concentration must be higher to achieve the same degree of retention for an analyte relative to an aprotic solvent - water combination. (See also Aqueous Normal Phase Chromatography.) It is commonly believed that in HILIC, the mobile phase forms a water-rich layer on the surface of the polar stationary phase vs. the water-deficient mobile phase, creating a liquid/liquid extraction system. The analyte is distributed between these two layers. However, HILIC is more than just simple partitioning and includes hydrogen d
https://en.wikipedia.org/wiki/KXLK-CD
KXLK-CD (channel 40) is a low-power, Class A television station in Austin, Texas, United States. It is owned by TelevisaUnivision alongside Killeen-licensed Univision owned-and-operated station KAKW-DT (channel 62). KXLK-CD's transmitter is located at the West Austin Antenna Farm north of West Lake Hills. History The station was affiliated with The Word Network, a religious African-American network. Until September 2014, KXLK-CA was the Home Shopping Network affiliate for the Austin area. Due to the 2016–2017 FCC TV spectrum auction, KXLK-CD moved from RF channel 23 to RF channel 14 on June 21, 2019. On October 17, 2017, Univision announced its intent to purchase KXLK-CD from Radio Spectrum Partners for $2.55 million. ATSC 3.0 lighthouse KXLK-CD transitioned to an ATSC 3.0 lighthouse station on May 19, 2021, and carries ATSC 3.0 simulcasts of KTBC, KAKW, and KTFO. The following ATSC 3.0 subchannels broadcast on RF channel 14:
https://en.wikipedia.org/wiki/Ben%20Pridmore
Ben Pridmore (born October 14, 1976) is a former world memory champion, memory sport competitor and accountant. Achievements Pridmore is a three-time World Memory Champion, winning the title in 2004, 2008 and 2009. From Derby in the United Kingdom, Pridmore achieved this by winning a 10-discipline competition, the World Memory Championship, which has taken place every year since 1991. He has also earned the prestigious title of Master of Memory. He held the official world record for memorizing the order of a randomly shuffled 52-card deck, and has memorised a pack in a time of 24.68 seconds on television. This record was beaten in 2010 by German memory athlete and lawyer Simon Reinhard. Pridmore's victory at the 2009 World Championship was his eighth consecutive memory competition win since coming second at the 2007 World Championship. He is the title holder for the UK Memory Champion for the years 2007–2011 and 2013 and Welsh Open Memory Champion 2009–2012 and 2014. Besides memory sports he is famous for his mental calculation skills and took part in the Mental Calculation World Cup in 2004, 2006 and 2010. He has also won several medals at the Mind Sports Olympiad, including becoming the 2001 World Champion at the decamentathlon, a ten-discipline mind sport competition whose events include chess and reversi. He also participated in the Memoriad World Mental Olympics in 2012 in Antalya and won one gold medal with the title "Memoriad Speed Cards World Memory Champion". He is thought to have an IQ of 159, putting him in the genius range. He was prominently featured in the music video for DJ Shadow's single "Scale It Back". Method Like most memory experts, he creates a mental story comprising a sequence of images, in a variation of the Mnemonic Major System. In Pridmore's system for cards, two cards are represented as a three-letter word by the first consonant derived from the suits, the vowel from the first card's number, and the final consonant from the second ca
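The paragraph above describes an encoding of card pairs into pronounceable words. The Python sketch below illustrates that general idea; the lookup tables and the pair-to-consonant assignment are invented for this example and are not Pridmore's actual system.

```python
# Illustrative sketch only: a two-card -> three-letter-word mnemonic encoding
# in the spirit described above. The tables below are hypothetical.

SUITS = ["S", "H", "D", "C"]                      # spades, hearts, diamonds, clubs
PAIR_CONSONANTS = list("bdfgklmnprstvzjc")        # one consonant per (suit1, suit2) pair (hypothetical)
RANK_VOWELS = "aeiou"                             # vowel chosen from the first card's rank (hypothetical)
RANK_CONSONANTS = list("dnmrlskfpzbgv")           # final consonant from the second card's rank (hypothetical)

def encode_pair(card1, card2):
    """Encode two cards, each given as (rank 1-13, suit letter), as a three-letter 'word'."""
    rank1, suit1 = card1
    rank2, suit2 = card2
    first = PAIR_CONSONANTS[SUITS.index(suit1) * 4 + SUITS.index(suit2)]
    vowel = RANK_VOWELS[(rank1 - 1) % 5]
    last = RANK_CONSONANTS[rank2 - 1]
    return first + vowel + last

if __name__ == "__main__":
    # Ace of Spades followed by Seven of Hearts becomes a pronounceable token
    # that can be placed as an image in a mental story.
    print(encode_pair((1, "S"), (7, "H")))   # -> "dak"
```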
https://en.wikipedia.org/wiki/Parser%20Grammar%20Engine
The Parser Grammar Engine (PGE, originally the Parrot Grammar Engine) is a compiler and runtime for Raku rules for the Parrot virtual machine. PGE uses these rules to convert a parsing expression grammar into Parrot bytecode. It is therefore compiling rules into a program, unlike most virtual machines and runtimes, which store regular expressions in a secondary internal format that is then interpreted at runtime by a regular expression engine. The rules format used by PGE can express any regular expression and most formal grammars, and as such it forms the first link in the compiler chain for all of Parrot's front-end languages. When executed, the bytecode generated by PGE will parse text as described in the input rules, generating a parse tree. The parse tree can be manipulated directly, or fed into the next stage of the Parrot compiler toolchain in order to generate an AST from which code generation can occur (if the grammar describes a programming language). History Originally named P6GE and written in C, PGE was translated to native Parrot and renamed not long after its initial release in November 2004. Its author is Patrick R. Michaud. PGE was written in order to reduce the amount of work required to implement a compiler on top of Parrot. It was also written to allow Perl 6 to easily self-host, though current Pugs development no longer uses PGE as its primary rules back-end in favor of a native engine called PCR. Internals PGE combines three styles of parsing: Raku rules an operator precedence parser custom parse subroutines The primary form is Raku rules, so a PGE rule might look like this for an addition-only grammar: rule term { <number> | \( <expr> \) } rule number { \d+ } rule expr { <term> ( '+' <term> )* } The operator precedence parser allows an operator table to be built and used directly in a Perl 6 rule style parser like so: rule expr is optable { ... } rule term { <number> | \( <expr> \) } rule number { \d+ } proto term: is prece
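For readers less familiar with rule-style grammars, here is a minimal sketch of the same addition-only grammar written as a hand-rolled recursive-descent parser in Python; it only illustrates what the three rules above recognize and is unrelated to PGE's actual implementation or the Parrot bytecode it emits.

```python
import re

# Recursive-descent parser for the addition-only grammar shown above:
#   expr   -> term ('+' term)*
#   term   -> number | '(' expr ')'
#   number -> \d+

def tokenize(text):
    return re.findall(r"\d+|[()+]", text)

def parse_expr(tokens, pos):
    node, pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos] == "+":
        right, pos = parse_term(tokens, pos + 1)
        node = ("+", node, right)
    return node, pos

def parse_term(tokens, pos):
    tok = tokens[pos]
    if tok == "(":
        node, pos = parse_expr(tokens, pos + 1)
        assert tokens[pos] == ")", "expected closing parenthesis"
        return node, pos + 1
    assert tok.isdigit(), f"expected a number, got {tok!r}"
    return ("number", int(tok)), pos + 1

if __name__ == "__main__":
    tree, _ = parse_expr(tokenize("1 + (2 + 3)"), 0)
    print(tree)   # ('+', ('number', 1), ('+', ('number', 2), ('number', 3)))
```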
https://en.wikipedia.org/wiki/Addison-Wesley%20Secondary%20Math%3A%20An%20Integrated%20Approach%3A%20Focus%20on%20Algebra
Focus on Algebra was a widely cited, 812-page algebra textbook which contained significant content outside the traditional field of mathematics. The real-life context, intended to make mathematics more relevant, included chili recipes, ancient myths, and photographs of famous people. Although it was a widely used textbook, it made headlines when it was dubbed "rainforest algebra" by critics. Senate Testimony Senator Robert Byrd, Democrat from West Virginia, joined critics of reform mathematics on the floor of the Senate by dubbing Addison-Wesley Secondary Math: An Integrated Approach: Focus on Algebra the "Texas rainforest algebra book". It had received an "F" grade on a report card produced by Mathematically Correct, a back-to-basics group, which claimed that it had no algebraic content in the first hundred pages. Structure Each of the 10 chapters was composed of two or three themes, or "Superlessons," each of which connected the algebraic content to another discipline. Each Superlesson began with an opening page with discussion questions relating to the theme. Critics of the program cited these questions as evidence of the lack of math in the books. Examples of questions cited: What other kinds of pollution besides air pollution might threaten our planet? [page 163, in the introduction to 3-1 Functional Relationships] Each year the Oilfield Chili Appreciation Society holds a chili cook-off. . . . 1. The chili cook-off raises money for charity. Describe some ways the organizers could raise money in the cook-off. 2. What is the hottest kind of pepper that you have eaten? People who have tasted them agree that cayenne peppers are hotter than pimento peppers. How would you set up a hotness scale for peppers? . . . . [page 217, in the introduction to 4-1 Solving Linear Equations] What role should zoos play in today's society? . . . . [page 233, in the introduction to 4-2 Other Techniques for Solving Linear Equations] Exercises within lessons that rel
https://en.wikipedia.org/wiki/Stem%20cell%20research%20policy
Stem cell research policy varies significantly throughout the world. There are overlapping jurisdictions of international organizations, nations, and states or provinces. Some government policies determine what is allowed versus prohibited, whereas others outline what research can be publicly financed. Of course, all practices not prohibited are implicitly permitted. Some organizations have issued recommended guidelines for how stem cell research is to be conducted. International bodies The United Nations adopted a declaration on human cloning that can be interpreted as calling on member states to prohibit somatic cell nuclear transfer, or therapeutic cloning. In 2005, in a divided vote, "Member States were called on to adopt all measures necessary to prohibit all forms of human cloning in as much as they are incompatible with human dignity and the protection of human life." The World Health Organization has opposed a ban on cloning techniques in stem cell research. The Council of Europe's Convention on Human Rights and Biomedicine seems to ban the creation of embryos solely for research purposes. It has been signed by 31 countries and ratified by 19: Bulgaria, Croatia, Cyprus, the Czech Republic, Denmark, Estonia, Georgia, Greece, Hungary, Iceland, Lithuania, Moldova, Portugal, Romania, San Marino, Slovakia, Slovenia, Spain, and Turkey. The Hinxton Group Researchers, ethicists and assorted spokespersons from 14 different countries have published a set of legal and ethical guidelines relating to stem cell research, in an effort to address conflicting international laws in this area. The ‘Hinxton Group’ met recently for the first time, in Cambridge, and published a consensus statement calling for a ‘flexible’ regulatory framework, which can simultaneously accommodate rapid scientific advance and at the same time accommodate the diversity of international approaches towards stem cell science. It also recommends that, in countries which oppose embryonic stem cell
https://en.wikipedia.org/wiki/Congenic
In genetics, two organisms that differ in only one locus and a linked segment of chromosome are defined as congenic. Similarly, organisms that are coisogenic differ in one locus only and not in the surrounding chromosome. Unlike congenic organisms, coisogenic organisms cannot be bred and only occur through spontaneous or targeted mutation at the locus. Generating congenic strains Congenic strains are generated in the laboratory by mating two inbred strains (usually rats or mice), and back-crossing the descendants 5–10 generations with one of the original strains, known as the recipient strain. Typically selection for either phenotype or genotype is performed prior to each back-cross generation. In this manner either an interesting phenotype, or a defined chromosomal region assayed by genotype, is passed from the donor strain onto an otherwise uniform recipient background. Congenic mice or rats can then be compared to the pure recipient strain to determine whether they are phenotypically different if selection was for a genotypic region, or to identify the critical genetic locus, if selection was for a phenotype. Speed congenics can be produced in as little as five back-cross generations, through the selection at each generation of offspring that not only retain the desired chromosomal fragment, but also 'lose' the maximum amount of background genetic information from the donor strain. This is also known as marker-assisted congenics, due to the use of genetic markers, typically microsatellite markers, but now, more commonly, single nucleotide polymorphism markers (SNPs). The process can be further aided by the superovulation of females, to produce many more eggs. See also Gene knockout Notes and references Further reading Congenic strains are discussed in detail in Lee Silver's online book Mouse Genetics: Concepts and Applications: Genetics
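As a rough quantitative illustration of why 5–10 backcross generations are used, the expected fraction of unlinked donor genome halves with each generation of backcrossing; the short Python sketch below computes this standard (1/2)^n expectation, which is background knowledge rather than a figure taken from the text above.

```python
# Expected fraction of unlinked donor genome remaining at each generation of
# backcrossing to the recipient strain (N1 = F1 hybrid, N2 = first backcross, ...).
# This is the standard (1/2)**n expectation; marker-assisted ("speed congenic")
# selection aims to do better than this average at each generation.

def expected_donor_fraction(generation: int) -> float:
    """Expected unlinked donor-genome fraction at generation N<generation>."""
    return 0.5 ** generation

for n in range(1, 11):
    print(f"N{n}: {expected_donor_fraction(n):.4%} donor background expected")
# N2 -> 25%, N5 -> ~3.1%, N10 -> ~0.1% (i.e. roughly 99.9% recipient background)
```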
https://en.wikipedia.org/wiki/List%20of%20common%20physics%20notations
This is a list of common physical constants and variables, and their notations. Note that bold text indicates that the quantity is a vector. Latin characters Greek characters Other characters See also List of letters used in mathematics and science Glossary of mathematical symbols List of mathematical uses of Latin letters Greek letters used in mathematics, science, and engineering Physical constant Physical quantity International System of Units ISO 31
https://en.wikipedia.org/wiki/Richards%20equation
The Richards equation represents the movement of water in unsaturated soils, and is attributed to Lorenzo A. Richards who published the equation in 1931. It is a quasilinear partial differential equation; its analytical solution is often limited to specific initial and boundary conditions. Proof of the existence and uniqueness of solution was given only in 1983 by Alt and Luckhaus. The equation is based on the Darcy-Buckingham law representing flow in porous media under variably saturated conditions, which is stated as $\vec{q} = -K(h)\,(\nabla h + \nabla z)$, where $\vec{q}$ is the volumetric flux; $\theta$ is the volumetric water content; $h$ is the liquid pressure head, which is negative for unsaturated porous media; $K(h)$ is the unsaturated hydraulic conductivity; $\nabla z$ is the geodetic head gradient, which is assumed as $\nabla z = (0, 0, 1)^{T}$ for three-dimensional problems. Considering the law of mass conservation for an incompressible porous medium and constant liquid density, expressed as $\frac{\partial \theta}{\partial t} = -\nabla \cdot \vec{q} - S$, where $S$ is the sink term [T$^{-1}$], typically root water uptake, and then substituting the flux by the Darcy-Buckingham law, the following mixed-form Richards equation is obtained: $\frac{\partial \theta}{\partial t} = \nabla \cdot \left[ K(h)\,(\nabla h + \nabla z) \right] - S$. For modeling of one-dimensional infiltration this divergence form reduces to $\frac{\partial \theta}{\partial t} = \frac{\partial}{\partial z} \left[ K(h) \left( \frac{\partial h}{\partial z} + 1 \right) \right] - S$. Although attributed to L. A. Richards, the equation was originally introduced 9 years earlier by Lewis Fry Richardson in 1922. Formulations The Richards equation appears in many articles in the environmental literature because it describes the flow in the vadose zone between the atmosphere and the aquifer. It also appears in pure mathematical journals because it has non-trivial solutions. The above-given mixed formulation involves two unknown variables: $\theta$ and $h$. This can be easily resolved by considering the constitutive relation $\theta(h)$, which is known as the water retention curve. Applying the chain rule, the Richards equation may be reformulated as either the $h$-form (head based) or the $\theta$-form (saturation based) Richards equation. Head-based Applying the chain rule to the temporal derivative leads to $C(h)\frac{\partial h}{\partial t} = \nabla \cdot \left[ K(h)\,(\nabla h + \nabla z) \right] - S$, where $C(h) = \frac{\partial \theta}{\partial h}$ is known as the retention water capacity.
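Before the equation can be solved, the retention curve θ(h) and the conductivity K(h) must be supplied by a constitutive model. The Python sketch below uses the van Genuchten-Mualem parameterization, a commonly used choice that is assumed here rather than specified in the text, with illustrative "loam-like" parameter values.

```python
import numpy as np

# van Genuchten-Mualem constitutive relations theta(h), K(h), C(h) for use with
# the Richards equation. This is an assumed, commonly used parameterization;
# the parameter values below are illustrative numbers, not values from the article.

theta_r, theta_s = 0.078, 0.43    # residual / saturated water content [-]
alpha, n_vg = 3.6, 1.56           # van Genuchten shape parameters [1/m], [-]
K_s = 0.25                        # saturated hydraulic conductivity [m/day]
m_vg = 1.0 - 1.0 / n_vg

def theta(h):
    """Water retention curve theta(h); h is pressure head [m], negative when unsaturated."""
    h = np.asarray(h, dtype=float)
    Se = np.where(h < 0.0, (1.0 + (alpha * np.abs(h)) ** n_vg) ** (-m_vg), 1.0)
    return theta_r + (theta_s - theta_r) * Se

def K(h):
    """Unsaturated hydraulic conductivity K(h) from the Mualem model."""
    Se = (theta(h) - theta_r) / (theta_s - theta_r)
    return K_s * np.sqrt(Se) * (1.0 - (1.0 - Se ** (1.0 / m_vg)) ** m_vg) ** 2

def C(h):
    """Retention water capacity C(h) = d(theta)/dh, approximated by a central difference."""
    eps = 1e-6
    return (theta(h + eps) - theta(h - eps)) / (2.0 * eps)

if __name__ == "__main__":
    for head in (-0.1, -1.0, -10.0):   # progressively drier soil
        print(f"h = {head:6.1f} m   theta = {float(theta(head)):.3f}   K = {float(K(head)):.2e} m/day")
```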
https://en.wikipedia.org/wiki/Quasicontraction%20semigroup
In mathematical analysis, a C0-semigroup Γ(t), t ≥ 0, is called a quasicontraction semigroup if there is a constant ω such that ||Γ(t)|| ≤ exp(ωt) for all t ≥ 0. Γ(t) is called a contraction semigroup if ||Γ(t)|| ≤ 1 for all t ≥ 0. See also Contraction (operator theory) Hille–Yosida theorem Lumer–Phillips theorem
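A standard textbook illustration of the definition (not taken from the article above) is the bounded perturbation of a contraction semigroup:

```latex
% If A generates a contraction semigroup on a Banach space X (for example the heat
% semigroup generated by the Laplacian on L^2(\mathbb{R}^n)) and B is a bounded
% operator on X, then A + B generates a quasicontraction semigroup:
\[
  \|e^{tA}\| \le 1 \ \text{for all } t \ge 0
  \quad\Longrightarrow\quad
  \|e^{t(A+B)}\| \le e^{t\|B\|} \ \text{for all } t \ge 0,
\]
% so the quasicontraction bound holds with \omega = \|B\|.
```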
https://en.wikipedia.org/wiki/Hydrologic%20Evaluation%20of%20Landfill%20Performance
The Hydrologic Evaluation of Landfill Performance (HELP) model is a quasi-two-dimensional hydrologic numerical model for conducting water balance analysis of landfills, cover systems, and other solid waste containment facilities; it was developed for the United States Environmental Protection Agency. Versions Public domain (free) version: HELP v4.0 – Microsoft Excel-based version developed by EPA ORD's Center for Environmental Solutions and Emergency Management, HELP 4 user manual and further documentation for HELP v3.0 Commercial versions: Visual HELP – based on the HELP version 3.07, offers a Microsoft Windows GUI to view and edit soil profiles and to generate weather data. HELP 3.95 D – by Dr. Klaus Berger at the University of Hamburg; offers a Microsoft Windows UI and includes the model HELP 3.07 and the enhanced model HELP 3.95 D.
https://en.wikipedia.org/wiki/Michael%20Coey
John Michael David Coey (born 24 February 1945), known as Michael Coey, is a Belfast-born experimental physicist working in the fields of magnetism and spintronics. He received a BA in Physics at Jesus College, Cambridge (1966), and a PhD from the University of Manitoba (1971) for a thesis on "Mössbauer Effect of 57Fe in Magnetic Oxides" with advisor Allan H. Morrish. Trinity College Dublin (TCD), where he has been in the physics department since 1978, awarded him an ScD (1987), and the University of Grenoble awarded him a Dip. d'Habilitation (1986) and an honorary doctorate (1994). He served as Erasmus Smith's Professor of Natural and Experimental Philosophy at TCD from 2007 to 2012. Career Mike Coey has been a Professor of Physics at TCD since 1987, and was the last appointed Erasmus Smith's Professor of Natural and Experimental Philosophy (2007–2012), a chair that dates from 1724. He has supervised over 50 PhD students, and authored or edited 5 volumes. Recognised as a distinguished European specialist in magnetic materials, he continues to be an international leader in the field of magnetism. In 1994 Coey founded Magnetic Solutions and went on to be the cofounder of CRANN, Ireland's nanoscience research institute (2002), and conceived Dublin's unique Science Gallery (2006). He has published over 700 scientific articles on diverse aspects of magnetism, many of which have had significant impact on the scientific community. Ireland's most highly cited scientist, with an h-index of 109, Mike Coey continues to make an impact both at the cutting edge of his chosen areas of specialisation and in the wider scientific community. His recent textbook Magnetism and Magnetic Materials has met the need for a general, tangible text about modern magnetism. He delivered a public lecture on the History of Magnetism in Paris in 2010. Currently Mike holds positions at the National University of Singapore and the Max Planck Institute for Chemical Physics of Solids in Dresden. His belief in advan
https://en.wikipedia.org/wiki/Skewness%20risk
Skewness risk in financial modeling is the risk that results when observations are not spread symmetrically around an average value, but instead have a skewed distribution. As a result, the mean and the median can be different. Skewness risk can arise in any quantitative model that assumes a symmetric distribution (such as the normal distribution) but is applied to skewed data. Ignoring skewness risk, by assuming that variables are symmetrically distributed when they are not, will cause any model to understate the risk of variables with high skewness. Skewness risk plays an important role in hypothesis testing. The analysis of variance, one of the most common tests used in hypothesis testing, assumes that the data is normally distributed. If the variables tested are not normally distributed because they are too skewed, the test cannot be used. Instead, nonparametric tests can be used, such as the Mann–Whitney test for the unpaired situation or the sign test for the paired situation. Skewness risk and kurtosis risk also have technical implications in the calculation of value at risk. If either is ignored, the value at risk calculations will be flawed. Benoît Mandelbrot, a French mathematician, extensively researched this issue. He felt that the extensive reliance on the normal distribution for much of the body of modern finance and investment theory is a serious flaw of any related models (including the Black–Scholes model and CAPM). He explained his views and alternative finance theory in a book: The (Mis)Behavior of Markets: A Fractal View of Risk, Ruin and Reward. In options markets, the difference in implied volatility at different strike prices represents the market's view of skew, and is called volatility skew. (In pure Black–Scholes, implied volatility is constant with respect to strike and time to maturity.) Skewness for bonds Bonds have a skewed return. A bond will either pay the full amount on time (very likely to much less likely depending on quality)
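A quick numerical illustration of the opening point that the mean and median diverge under skew (a simulation invented for illustration, not data from the article): drawing from a right-skewed lognormal distribution and comparing the two location measures alongside the sample skewness.

```python
import numpy as np

# Illustrative simulation: a right-skewed (lognormal) sample has mean > median
# and a clearly positive sample skewness, whereas a symmetric model would
# predict the two location measures to coincide.
rng = np.random.default_rng(0)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

mean = sample.mean()
median = np.median(sample)
skewness = np.mean(((sample - mean) / sample.std()) ** 3)   # standardized third moment

print(f"mean     = {mean:.3f}")     # ~1.65 (exp(sigma^2 / 2) in theory)
print(f"median   = {median:.3f}")   # ~1.00
print(f"skewness = {skewness:.2f}") # strongly positive
```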
https://en.wikipedia.org/wiki/Pyrrolizidine%20alkaloid
Pyrrolizidine alkaloids (PAs), sometimes referred to as necine bases, are a group of naturally occurring alkaloids based on the structure of pyrrolizidine. Pyrrolizidine alkaloids are produced by plants as a defense mechanism against insect herbivores. More than 660 PAs and PA N-oxides have been identified in over 6,000 plants, and about half of them exhibit hepatotoxicity. They are found frequently in plants in the Boraginaceae, Asteraceae, Orchidaceae and Fabaceae families; less frequently in the Convolvulaceae and Poaceae, and in at least one species in the Lamiaceae. It has been estimated that 3% of the world’s flowering plants contain pyrrolizidine alkaloids. Honey can contain pyrrolizidine alkaloids, as can grains, milk, offal and eggs. To date (2011), there is no international regulation of PAs in food, unlike those for herbs and medicines. Unsaturated pyrrolizidine alkaloids are hepatotoxic, that is, damaging to the liver. PAs also cause hepatic veno-occlusive disease and liver cancer. PAs are tumorigenic. Disease associated with consumption of PAs is known as pyrrolizidine alkaloidosis. In humans, the use of medicinal herbs containing PAs, notably borage leaf, comfrey and coltsfoot in the West, and some Chinese medicinal herbs, has been shown to pose health risks. The degree of toxicity can vary based on age and gender, with fetuses and neonates showing high sensitivity, including instances of death. Some ruminant animals, for example cattle, show no change in liver enzyme activities or any clinical signs of poisoning when fed low concentrations of plants containing pyrrolizidine alkaloids. Yet, Australian studies have demonstrated toxicity. Sheep and goats especially, and to a lesser degree cattle, are much more resistant and tolerate much higher PA dosages, thought to be due to thorough detoxification via PA-destroying rumen microbes. PA is also used as a defense mechanism by some organisms such as Utetheisa ornatrix. Utetheisa ornatrix caterp
https://en.wikipedia.org/wiki/TOM%20%28mascot%29
TOM (Tigers Of Memphis) is the name of three Bengal Tigers which have served as the mascot of the Memphis Tigers since 1972. The most recent, TOM III, was a beloved Bengal Tiger mascot for the University of Memphis during one of the most glorious periods in University and athletics history. He died on September 18, 2020, less than three weeks after his 12th birthday. The Tigers' football team also has a costumed mascot called Pouncer. TOM III was housed and cared for by the Tiger Guard, a committee of the Highland Hundred football booster club. University funds are not used to provide for the tiger's needs. The University of Memphis was one of two universities in the United States that use a live tiger as a mascot (the other being LSU) and has received criticism from animal welfare organizations. Until Tom II Memphis was the only school to have a live tiger mascot present at football games. TOM attends Memphis Tiger home games at Liberty Bowl Memorial Stadium in a special sound proof, air conditioned trailer. History TOM I The first tiger, TOM, was purchased for $1,500 by the Highland Hundred Football Boosters in 1972. TOM was placed in a dog kennel and flown to Memphis on November 9, 1972. The tiger cub was taken to Athletic Director Billy J. Murphy's office for a press conference and was officially presented to Memphis University in a Liberty Bowl Memorial Stadium ceremony during the November 11, 1972, football game against the University of Cincinnati. TOM was initially named Shane at the suggestion of the breeder’s daughter. Once in Memphis, a contest was held to rename the mascot and over 2,500 entries were submitted to a committee chaired by Harry Pierotti. The list was reduced to two choices, Shane, and TOM, which stands for Tigers Of Memphis and TOM was the victor. The winning entry was submitted by Mrs. Lauraine Huddleston of Memphis. During his first few months in Memphis, TOM was housed in Highland Hundred member Bill Proctor's garage. TOM was lat
https://en.wikipedia.org/wiki/History%20of%20the%20present%20illness
Following the chief complaint in medical history taking, a history of the present illness (abbreviated HPI) (termed history of presenting complaint (HPC) in the UK) refers to a detailed interview prompted by the chief complaint or presenting symptom (for example, pain). Questions to include Different sources include different questions to be asked while conducting an HPI. Several acronyms have been developed to categorize the appropriate questions to include. The Centers for Medicare and Medicaid Services has published criteria for what constitutes a reimbursable HPI. A "brief HPI" constitutes one to three of these elements. An "extended HPI" includes four or more of these elements. Also usable is SOCRATES. For chronic pain, the Stanford Five may be assessed to understand the pain experience from the patient's primary belief system. See also Medical record Medical history Pain scale
https://en.wikipedia.org/wiki/Default%20password
Where a device needs a username and/or password to log in, a default password is usually provided to access the device during its initial setup, or after resetting to factory defaults. Manufacturers of such equipment typically use a simple password, such as admin or password, on all equipment they ship, expecting users to change the password during configuration. The default username and password are usually found in the instruction manual (common for all devices) or on the device itself. Default passwords are one of the major contributing factors to large-scale compromises of home routers. Leaving such a password on devices available to the public is a major security risk. Some devices (such as wireless routers) will have a unique default username and password printed on a sticker, which is more secure than a common default password. Some vendors will, however, derive the password from the device's MAC address using a known algorithm, in which case the password can also be easily reproduced by attackers. Default access To access an internet-connected device on a network, a user must know its default IP address. Manufacturers typically use 192.168.1.1 or 10.0.0.1 as default router IP addresses. However, some will have variations on this. Similarly to login details, leaving this unchanged can lead to security issues. See also Backdoor (computing) Internet of things Cyber-security regulation
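As a purely hypothetical illustration of such a derivation (the scheme below is invented for this example and does not correspond to any particular vendor's algorithm), a password derived deterministically from the MAC address can be regenerated by anyone who knows the MAC and the algorithm:

```python
import hashlib

# Hypothetical example only: derive a "default password" from a device MAC address.
# Because the derivation is deterministic and the algorithm is assumed to be known,
# anyone who can see the MAC (e.g. in Wi-Fi beacon frames) can reproduce the password.
def derive_default_password(mac: str) -> str:
    normalized = mac.lower().replace(":", "").replace("-", "")
    digest = hashlib.sha256(normalized.encode("ascii")).hexdigest()
    return digest[:8]   # first 8 hex characters used as the password

if __name__ == "__main__":
    print(derive_default_password("00:11:22:33:44:55"))   # same MAC -> same password, every time
```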
https://en.wikipedia.org/wiki/Attack%20rate
In epidemiology, the attack rate is the proportion of an at-risk population that contracts the disease during a specified time interval. It is used in hypothetical predictions and during actual outbreaks of disease. An at-risk population is defined as one that has no immunity to the attacking pathogen, which can be either a novel pathogen or an established pathogen. It is used to project the number of infections to expect during an epidemic. This aids in marshalling resources for delivery of medical care as well as production of vaccines and/or anti-viral and anti-bacterial medicines. The rate is arrived at by taking the number of new cases in the population at risk and dividing by the number of persons at risk in the population. See also Incidence (epidemiology) Compartmental models in epidemiology Herd immunity Risk assessment in public health Vaccine-naive
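A minimal worked example of that calculation, with numbers made up for illustration:

```python
def attack_rate(new_cases: int, population_at_risk: int) -> float:
    """Attack rate = new cases during the interval / persons at risk in the population."""
    return new_cases / population_at_risk

# Hypothetical outbreak: 45 new cases among 300 susceptible attendees of an event.
print(f"{attack_rate(45, 300):.1%}")   # 15.0%
```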
https://en.wikipedia.org/wiki/Geminin
Geminin, DNA replication inhibitor, also known as GMNN, is a protein in humans encoded by the GMNN gene. Geminin is a nuclear protein present in most eukaryotes and highly conserved across species; numerous functions have been elucidated for it, including roles in the metazoan cell cycle, cellular proliferation, cell lineage commitment, and neural differentiation. One example of its function is the inhibition of Cdt1. History Geminin was originally identified as an inhibitor of DNA replication and a substrate of the anaphase-promoting complex. Coincidentally, geminin was also shown to expand the neural plate in the developing Xenopus embryo. Structure Geminin is a nuclear protein made up of about 200 amino acids, with a molecular weight of approximately 25 kDa. It contains an atypical leucine zipper coiled-coil domain. It has no known enzymatic activity or DNA binding motifs. Function Cell cycle control Geminin is absent during the G1 phase and accumulates through the S, G2 and M phases of the cell cycle. Geminin levels drop at the metaphase-anaphase transition of mitosis, when it is degraded by the anaphase-promoting complex. S phase During S phase, geminin is a negative regulator of DNA replication. In many cancer cell lines, inhibition of geminin by RNA interference results in re-replication of portions of the genome, which leads to aneuploidy. In these cell lines, geminin knockdown leads to markedly slowed growth and apoptosis within several days. However, the same is not true for primary and immortalized human cell lines, where other mechanisms exist to prevent DNA re-replication. Since geminin knockdown leads to cell death in many cancer cell lines but not primary cell lines, it has been proposed as a potential therapeutic target for cancer treatment. Mitosis From the start of S phase until late mitosis, geminin inhibits the replication factor Cdt1, preventing the assembly of the pre-replication complex. In early G1, the anaphase promoting complex trigg
https://en.wikipedia.org/wiki/Tom%20Fink
Thomas A. Fink (August 26, 1928 – June 4, 2021) was an American Republican politician from Alaska. He was Mayor of Anchorage from 1987 to 1994 and Speaker of the Alaska House of Representatives from 1973 to 1975. He was also a member of the Federal Retirement Thrift Investment Board, serving from 1997 to 2010. Personal life Fink was born in Peoria, Illinois. He received a Bachelor of Science from Bradley University in 1950, and a J.D. from the University of Illinois Law School in 1952. He moved to Anchorage, Alaska in 1952, and worked as a life insurance salesman from 1958. He received his Chartered Life Underwriter certification from American College in 1963. Fink was in partnership with Don Schroer for 20 years, doing business as The Schroer-Fink Agency. Schroer was also often involved in Fink's various campaigns for office. He died on June 4, 2021, at the age of 92. Political career Fink was elected to the Alaska House of Representatives in 1966, and became Speaker of the House in 1973. In 1975, he resigned in protest of a new law that would have required him to release a list of his insurance clients. Fink mounted an unsuccessful bid to succeed term-limited Republican Governor of Alaska Jay Hammond in 1982. He ran on a platform promoting the relocation of the state capital from Juneau to Willow, but was defeated by Democrat Bill Sheffield. In 1986, Fink was elected Mayor of Anchorage in the wake of a dramatic drop in the price of oil, which devastated the local economy. During his term in office, he advocated the sale of ATU, the municipal telephone utility. He received national attention for his stance on gay rights when he vetoed a 1993 municipal ordinance that would have protected city employees from discrimination on the basis of sexual orientation. The same year, he called for the cancellation of funding for a local theater group that included works with homosexual themes in their repertoire. In both cases, he was overridden by the Anchorage Assembly
https://en.wikipedia.org/wiki/Streaking%20%28microbiology%29
In microbiology, streaking is a technique used to isolate a pure strain from a single species of microorganism, often bacteria. Samples can then be taken from the resulting colonies and a microbiological culture can be grown on a new plate so that the organism can be identified, studied, or tested. The modern streak plate method has progressed from the efforts of Robert Koch and other microbiologists to obtain microbiological cultures of bacteria in order to study them. The dilution or isolation by streaking method was first developed by Loeffler and Gaffky in Koch's laboratory; it involves the dilution of bacteria by systematically streaking them over the surface of the agar in a Petri dish to obtain isolated colonies, each of which then grows into a large quantity of cells. If the agar surface grows microorganisms which are all genetically the same, the culture is considered a pure microbiological culture. Technique Streaking is rapid and ideally a simple process of isolation dilution. The technique is done by diluting a comparatively large concentration of bacteria to a smaller concentration. The decrease in bacteria should show that colonies are sufficiently spread apart to effect the separation of the different types of microbes. Streaking is done using a sterile tool, such as a cotton swab or, commonly, an inoculation loop. Aseptic techniques are used to maintain microbiological cultures and to prevent contamination of the growth medium. There are many different types of methods used to streak a plate. Picking a technique is a matter of individual preference and can also depend on how large a number of microbes the sample contains. The three-phase streaking pattern, known as the T-Streak, is recommended for beginners. The inoculation loop is first sterilized by passing it through a flame. When the loop is cool, it is dipped into an inoculum such as
https://en.wikipedia.org/wiki/Patch%20point
In electronic audio technology, a patch point is a connection that allows a signal to be withdrawn from a device, modified in some way, and returned. This can, for example, be done using a phone connector, using the tip of the plug for the outgoing mono signal and the ring for the returning signal, a configuration known as "tip send, ring return". On professional audio mixing consoles, such a patch point is commonly known as an insert.
https://en.wikipedia.org/wiki/American%20Photonics
American Photonics, Inc. (API) was a very early developer of local area network technologies in the 1980s, based first in Brewster, New York, moving later to Brookfield Center, Connecticut. History American Photonics, Inc., was founded in 1982 by James Walyus (1938–2000) while he was employed by Exxon Optical Information Systems (Exxon OIS) of Elmsford, New York. His intention was to create an organization that would develop leading-edge, yet commercially viable, optical communication technologies that could be sold into large potential markets. After some initial research in networking technologies, API was contracted by Interlan (another early Ethernet networking company, subsequently acquired by Micom and then by Racal Electronics PLC) to develop an adjunct to its 10BASE5 Ethernet transceivers and network interface cards (NICs, or network cards). This adjunct product was to extend the distance between the transceiver and the NIC by way of fiber optics, as the distance was severely limited by the 15-pin Attachment Unit Interface (AUI) cable used in this connection. Building upon this early success, API developed the RL1000 line of Ethernet 10BASE5 transceivers. The RL1000 physical design was patterned on the rugged 3Com 3C107 transceiver, with the added feature of indicator lamps much like the Cabletron Systems ST500 transceiver, and it became relatively popular as a result. Another early Ethernet product designed by API was the RL6000 Ethernet Repeater. This unit directly competed with the Digital Equipment Corporation (DEC) DEREP-AA repeater, but had the advantages of being modular (allowing for fiber interfaces, Thinnet or AUI Cable interfaces) and smaller (occupying less than half the space of a DEREP-AA). Consequently, API was able to overtake DEC in sales of this product in 1984, a significant feat for a start-up in the Ethernet industry. One of the last Ethernet products developed by API was the RL8000 Modular Ethernet Hub. This unit was released at
https://en.wikipedia.org/wiki/Teleavia
Teleavia was a French manufacturer of televisions in the mid-20th century. It was created by the aviation company Sud Aviation as a diversification operation to employ a redundant workforce, and was later absorbed by Thomson SA. The brand was best known for the innovative television set designs created by the French designer Roger Tallon in the 1950s and 1960s. The 1968 edition, known as the Portavia 111, was the year's most photographed appliance. Tallon later said, in explanation of the trademark's success and longevity: "It was the first time the shape, the function and the material absolutely intertwined". The brand's motto, from 1963, was "TELEAVIA, design also matters".
https://en.wikipedia.org/wiki/Web%20Services%20Distributed%20Management
Web Services Distributed Management (WSDM, pronounced wisdom) is a web service standard for managing and monitoring the status of other services. The goal of WSDM is to allow a well-defined network protocol for controlling any other service that is WSDM-compliant. For example, a third-party digital dashboard or network management system could be used to monitor the status or performance of other services, and potentially take corrective actions to restart services if failures occur. Some aspects of WSDM overlap or displace functionality of SNMP. Specifications WSDM 1.0 was approved as an OASIS standard on March 9, 2005. The approval of the WSDM 1.1 specification occurred on September 7, 2006. WSDM consists of two specifications: Management Using Web Services (MUWS) — WSDM MUWS defines how to represent and access the manageability interfaces of resources as Web services. It defines a basic set of manageability capabilities, such as resource identity, metrics, configuration, and relationships, which can be composed to express the capability of the management instrumentation. WSDM MUWS also provides a standard management event format to improve interoperability and correlation. Management Of Web Services (MOWS) — WSDM MOWS defines how to manage Web services as resources and how to describe and access that manageability using MUWS. MOWS provides mechanisms and methodologies that enable manageable Web services applications to interoperate across enterprise and organizational boundaries. See also OASIS (organization) (Organization for the Advancement of Structured Information Standards)
https://en.wikipedia.org/wiki/Automated%20vacuum%20collection
An automated vacuum waste collection system, also known as pneumatic refuse collection, or automated vacuum collection (AVAC), transports waste at high speed through underground pneumatic tubes to a collection station where it is compacted and sealed in containers. When the container is full, it is transported away and then emptied. The system helps facilitate the separation and recycling of waste. The process begins with the deposition of trash into intake hatches, called portholes, which may be specialized for waste, recycling, or compost. Portholes are located in public areas and on private property where the owner has opted in. The waste is then pulled through an underground pipeline by an air pressure difference created by large industrial fans, in response to porthole sensors that indicate when the trash needs to be emptied and help ensure that only one kind of waste material travels through the pipe at a time. The pipelines converge on a central processing facility that uses automated software to direct the waste to the proper container, after which it is trucked to its final location, such as a landfill or a composting plant. History The first system was created in Sweden in the 1960s, designed by the Swedish corporation Envac AB (formerly known as Centralsug AB). The first installation was in 1961 at Sollefteå Hospital. The first vacuum system for household waste was installed in the new residential district of Ör-Hallonbergen, Sweden in 1965. Overview Pneumatic waste collection systems provide a number of environmental benefits. These systems can decrease emissions from transit of waste by up to 90%. Systems in Europe provide separate outlets for food, recycling, and non-recycling, making waste separation and recycling more efficient. Some systems require household ID cards to use, and limit the amount of non-recyclable waste allowed per month, issuing a tax if the threshold is crossed. In Bergen, Norway, this system resulted in a 29% increase in plastic
https://en.wikipedia.org/wiki/List%20of%20eukaryotic%20picoplankton%20species
List of eukaryotic species that belong to picoplankton, meaning one of their cell dimensions is smaller than 3 μm. Autotrophic species Chlorophyta Chlorophyceae Stichococcus cylindricus Butcher, 3 – 4.5 μm, brackish Pedinophyceae Marsupiomonas pelliculata Jones et al., 3 – 3 μm, brackish-marine Resultor micron Moestrup, 1.5 – 2.5 μm, marine Prasinophyceae Bathycoccus prasinos Eikrem et Throndsen, 1.5 – 2.5 μm, marine Crustomastix stigmatica Zingone, 3 – 5 μm, marine Dolichomastix lepidota Manton, 2.5 – 2.5 μm, marine Dolichomastix eurylepidea Manton, 3 μm, marine Dolichomastix tenuilepis Throndsen et Zingone, 3 – 4.5 μm, marine Mantoniella squamata Desikachary, 3 – 5 μm, marine Micromonas pusilla Manton et Parke, 1 – 3 μm, marine Ostreococcus tauri Courties et Chrétiennot-Dinet, 0.8 – 1.1 μm, marine Picocystis salinarum Lewin, 2 – 3 μm, hypersaline Prasinococcus capsulatus Miyashita et Chihara, 3 – 5.5 μm, marine Prasinoderma coloniale Hasegawa et Chihara, 2.5 – 5.5 μm, marine Pseudoscourfieldia marina Manton, 3 – 3.5 μm, marine Pycnococcus provasolii Guillard, 1.5 – 4 μm, marine Pyramimonas virginica Pennick, 2.7 – 3.5 μm, marine Trebouxiophyceae Chlorella nana Andreoli et al., 1.5 – 3 μm, marine Picochlorum oklahomensis Henley et al., 2 – 2 μm, hypersaline Picochlorum atomus Henley et al., 2 – 3 μm, brackish Picochlorum eukaryotum Henley et al., 3 – 3 μm, marine Picochlorum maculatus Henley et al., 3 – 3 μm, brackish Cryptophyta Cryptophyceae Hillea marina Butcher, 1.5 – 2.5 μm, marine Haptophyta Prymnesiaceae Chrysochromulina tenuisquama Estep et al., 2 – 5 μm, marine Chrysochromulina minor Parke et Manton, 2.5 – 7.5 μm, marine Chrysochromulina apheles Moestrup et Thomsen, 3 – 4 μm, marine Dicrateria inornata Parke, 3 – 5.5 μm, marine Ericiolus spiculiger Thomsen, 3 – 3.8 μm, marine Imantonia rotunda Reynolds, 2 – 4 μm, marine Phaeocystis cordata Zingone, 3 – 4 μm, marine Phaeocystis pouchetii Lagerheim, 3 – 8 μm, marine
https://en.wikipedia.org/wiki/Matita
Matita is an experimental proof assistant under development at the Computer Science Department of the University of Bologna. It is a tool aiding the development of formal proofs by man-machine collaboration, providing a programming environment where formal specifications, executable algorithms and automatically verifiable correctness certificates naturally coexist. Matita is based on a dependent type system known as the Calculus of (Co)Inductive Constructions (a derivative of the Calculus of Constructions), and is compatible, to some extent, with Coq. The word "matita" means "pencil" in Italian (a simple and widespread editing tool). It is a reasonably small and simple application, whose architectural and software complexity is meant to be mastered by students, providing a tool particularly suited for testing innovative ideas and solutions. Matita adopts a tactic-based editing mode; (XML-encoded) proof objects are produced for storage and exchange. Main features Existential variables are native in Matita, allowing a simpler management of dependent goals. Matita implements a bidirectional type inference algorithm exploiting both inferred and expected types. The power of the type inference system (refiner) is further augmented by a mechanism of hints that helps in synthesizing unifiers in particular situations specified by the user. Matita supports a sophisticated disambiguation strategy based on a dialog between the parser and the typechecker. At the interactive level, the system implements a small-step execution of structured tactics, allowing a much better management of the proof development and naturally leading to more structured and readable scripts. Applications Matita has been employed in CerCo (Certified Complexity), an FP7 European project focused on the development of a formally verified, complexity-preserving compiler from a large subset of C to the assembler of an MCS-51 microprocessor. Documentation The Matita tutorial provides a pragmatic introduc
https://en.wikipedia.org/wiki/Filesystem-level%20encryption
Filesystem-level encryption, often called file-based encryption, FBE, or file/folder encryption, is a form of disk encryption where individual files or directories are encrypted by the file system itself. This is in contrast to full disk encryption, where the entire partition or disk in which the file system resides is encrypted. Types of filesystem-level encryption include: the use of a 'stackable' cryptographic filesystem layered on top of the main file system; a single general-purpose file system with encryption. The advantages of filesystem-level encryption include: flexible file-based key management, so that each file can be and usually is encrypted with a separate encryption key; individual management of encrypted files, e.g. incremental backups of the individual changed files even in encrypted form, rather than backup of the entire encrypted volume; access control that can be enforced through the use of public-key cryptography; and the fact that cryptographic keys are held in memory only while the file they decrypt is held open. General-purpose file systems with encryption Unlike cryptographic file systems or full disk encryption, general-purpose file systems that include filesystem-level encryption do not typically encrypt file system metadata, such as the directory structure, file names, sizes or modification timestamps. This can be problematic if the metadata itself needs to be kept confidential. In other words, if files are stored with identifying file names, anyone who has access to the physical disk can know which documents are stored on the disk, although not the contents of the documents. One exception to this is the encryption support being added to the ZFS filesystem. Filesystem metadata such as filenames, ownership, ACLs, and extended attributes are all stored encrypted on disk. The ZFS metadata relating to the storage pool is stored in plaintext, so it is possible to determine how many filesystems (datasets) are available in the p
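The "separate encryption key per file" idea can be sketched in a few lines of Python using the cryptography package's Fernet recipe; this is a stand-alone illustration of the key-management concept, not how any particular encrypting filesystem is actually implemented.

```python
from cryptography.fernet import Fernet

# Sketch of per-file keys: each file gets its own randomly generated key, so
# individual files can be backed up, shared, or re-keyed independently.
# Real filesystem-level encryption additionally wraps these per-file keys with a
# master key and handles metadata, which this toy example ignores.

file_keys = {}   # filename -> key (in a real system, stored wrapped/protected)

def encrypt_file_contents(name: str, plaintext: bytes) -> bytes:
    key = Fernet.generate_key()
    file_keys[name] = key
    return Fernet(key).encrypt(plaintext)

def decrypt_file_contents(name: str, ciphertext: bytes) -> bytes:
    return Fernet(file_keys[name]).decrypt(ciphertext)

token = encrypt_file_contents("notes.txt", b"meet at noon")
print(decrypt_file_contents("notes.txt", token))   # b'meet at noon'
```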
https://en.wikipedia.org/wiki/List%20of%20cryptographic%20file%20systems
This is a list of filesystems with support for filesystem-level encryption. Not to be confused with full-disk encryption. General-purpose filesystems with encryption: AdvFS on Digital Tru64 UNIX; Novell Storage Services on Novell NetWare and Linux; NTFS with Encrypting File System (EFS) for Microsoft Windows; ZFS since Pool Version 30; Ext4, added in Linux kernel 4.1 in June 2015; F2FS, added in Linux 4.2; APFS, macOS High Sierra (10.13) and later. Cryptographic filesystems: FUSE-based file systems; integrated into the Linux kernel: eCryptfs, Rubberhose filesystem (discontinued), StegFS (discontinued); integrated into other UNIXes: geli on FreeBSD, EFS (Encrypted File System) on AIX. See also Comparison of disk encryption software
https://en.wikipedia.org/wiki/System-specific%20impulse
The term System-specific Impulse, Issp, is a measure that describes the performance of jet propulsion systems. A reference number is introduced, defined as the total impulse, Itot, delivered by the system, divided by the system mass, mPS: Issp = Itot/mPS. Because of the resulting dimension - delivered impulse per kilogram of system mass mPS - this number is called 'System-specific Impulse'. In SI units, impulse is measured in newton-seconds (N·s) and Issp in N·s/kg. The Issp allows a more accurate determination of the propulsive performance of jet propulsion systems than the commonly used Specific Impulse, Isp, which only takes into account the propellant and the thrust engine performance characteristics. Therefore, the Issp permits an objective and comparative performance evaluation of systems of different designs and with different propellants. The Issp can be derived directly from actual jet propulsion systems by determining the total impulse delivered by the mass of contained propellant, divided by the known total (wet) mass of the propulsion system. This allows a quantitative comparison of, for example, built systems. In addition, the Issp can be derived analytically, for example for spacecraft propulsion systems, in order to facilitate a preliminary selection of systems (chemical, electrical) for spacecraft missions of given impulse and velocity-increment requirements. A more detailed presentation of derived mathematical formulas for Issp and their applications for spacecraft propulsion is given in the cited references. In 2019, Koppel and others used Issp as a criterion in the selection of electric thrusters. See also Specific Impulse
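A small worked example of the definition, with propulsion-system numbers invented for illustration:

```python
# Worked example of Issp = Itot / mPS with made-up numbers.
# A thruster delivering 220 N for 500 s from a propulsion system with a wet mass of 180 kg:
thrust_N = 220.0
burn_time_s = 500.0
system_wet_mass_kg = 180.0             # tanks, thruster, propellant, plumbing: the whole system

I_tot = thrust_N * burn_time_s         # total impulse in N*s
I_ssp = I_tot / system_wet_mass_kg     # system-specific impulse in N*s per kg of system mass

print(f"Itot = {I_tot:.0f} N*s, Issp = {I_ssp:.0f} N*s/kg")   # Itot = 110000 N*s, Issp = 611 N*s/kg
```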
https://en.wikipedia.org/wiki/Stellar%20triangulation
Stellar triangulation is a method of geodesy and of its subdiscipline space geodesy used to measure Earth's geometric shape. Stars were first used for this purpose by the Finnish astronomer Yrjö Väisälä in 1959, who made astrometric photographs of the night sky at two stations together with a lighted balloon probe between them. Even this first step showed the potential of the method, as Väisälä got the azimuth between Helsinki and Turku (a distance of 150 km) with an accuracy of 1″. Soon the method was successfully tested by ballistic rockets and for some special satellites. Adequate computer programs were written for the astrometric reduction of the photographic plates, the intersection of the "observation planes" containing the stations and the targets, and the least-squares adjustment of stellar-terrestrial networks with redundancy. The advantages of stellar triangulation were the ability to span great distances (terrestrial observations are restricted to approximately 30 km, and even in high mountains to 60 km) and independence from the Earth's gravity field. The results are azimuths between the stations in the stellar-inertial navigation system, despite there being no direct line of sight. In 1960, the first appropriate space probe was launched: Project Echo, a 30 m diameter balloon satellite. By then the whole of Western Europe could be linked together geodetically with accuracies 2–10 times better than by classical triangulation. During the late 1960s, a global project was begun by H.H. Schmid (Switzerland) to connect 45 stations all over the continents, with distances of 3000–5000 km. It was finished in 1974 by precise reduction of some 3000 stellar plates and network adjustment of 46 stations (2 additional ones in Germany and the Pacific, but without the areas of Russia and China). The mean accuracy was between ±5 m (Europe, USA) and 7–10 m (Africa, Antarctica), depending on weather and infrastructure conditions. Combined with Doppler measurements (such as f
https://en.wikipedia.org/wiki/PAGEOS
PAGEOS (PAssive Geodetic Earth Orbiting Satellite) was a balloon satellite which was launched by NASA in June 1966. Design PAGEOS had a diameter of exactly , consisted of a thick mylar plastic film coated with vapour deposited aluminium enclosing a volume of and was used for the Weltnetz der Satellitentriangulation (Worldwide Satellite Triangulation Network) – a global cooperation organized by Hellmut Schmid (Switzerland & USA) 1969–1973. Finished in 1974, the network connected 46 stations (3000–5000 km distance) of all continents with an accuracy of 3–5 m (approx. 20 times better than terrestrial triangulations at that time). Orbit The PAGEOS spacecraft was placed into a polar orbit (inclination 85–86°) with a height of approx. 4000 km, which had gradually lowered during its 9 years of operation. The satellite partly disintegrated in July 1975, which was followed by a second break-up that occurred in January 1976 resulting in the release of a large number of fragments. Most of these re-entered during the following decade. PAGEOS data have been released in 11 data sets. PAGEOS' predecessors in satellite triangulation were the balloons Echo 1 (1960, 30 m) and Echo 2 (1964, 40 m) which were also used for passive telecommunication. Their apparent magnitude (brightness) was 1 mag, that of Pageos 2 mag (like Polaris) due to its higher orbit. Pageos could therefore be observed simultaneously e.g. from the ground in places such as Europe and North America. PAGEOS appeared as a slow-moving star (at first glance it would appear to be stationary). Its orbital period was approximately three hours. Because of its high orbit and polar inclination it would avoid the Earth's shadow and be observed any time of the night (low-orbit satellites are only observable shortly after sunset and before sunrise). In the early 1970s PAGEOS varied from 2nd apparent magnitude to beyond visibility over a period of a few minutes. In 2016, one of the largest fragments of PAGEOS de-orbited.
https://en.wikipedia.org/wiki/ProDigi
Mitsubishi's ProDigi was a professional audio, reel-to-reel, digital audio tape format with a stationary head position, similar to Sony's Digital Audio Stationary Head, which competed against ProDigi when the format was available in the mid-1980s through the early 1990s. Audio was digitally recorded linearly on the tape and is guarded by a powerful error correction scheme of cyclic redundancy checks to ensure integrity of the signal even if data is lost during playback. Prodigi recorders were available in 2-track variations, which used 1/4" tape; 32-track variations, which used 1" tape, and a 16-track version using 1/2" tape. All of the machines require the use of metal particle tape. 2-track recorders (1/4"): X-86 X-86HS (capable of recording and playing back at 88.2 kHz and 96 kHz sample rates as well as the X-86's 44.1 kHz and 48 kHz) X-86C (for "compatible"; the X-86C could play back 50.4 kHz tapes made on the X-80 as well as normal X-86 tapes) 16-track recorder (1/2"): X-400 32 track recorders (1"): X-800 X-850 X-880 Otari DTR-900 (an X-850, rebadged for Otari). Mitsubishi and Otari collaborated on the design of the X-850 and X-880. The tape transport of both machines was derived from the Otari MTR90 Mk II, modified to handle 1" tape. Some mechanical parts were interchangeable between the X-850 and MTR90, the PC cards in the transport control section were manufactured by Otari and with two exceptions (the capstan servo and master CPU cards) were interchangeable between the Mitsubishi and Otari machines. The section of the X-850 service manual concerning transport adjustments was a verbatim reprint of the corresponding section of the MTR90 service manual. The ProDigi format was extremely popular for use in country music. Specifically, at studios in Nashville, Tennessee, where nearly all of the large recording studios used Prodigi machines. The format fell from favor by the mid-1990s with the popularity of Digidesign's Pro Tools hard drive-based multi-trac
https://en.wikipedia.org/wiki/Communications%20server
Communications servers are open, standards-based computing systems that operate as a carrier-grade common platform for a wide range of communications applications and allow equipment providers to add value at many levels of the system architecture. Based on industry-managed standards such as AdvancedTCA, MicroTCA, Carrier Grade Linux and Service Availability Forum specifications, communications servers are the foundational platform upon which equipment providers build network infrastructure elements for deployments such as IP Multimedia Subsystem (IMS), IPTV and wireless broadband (e.g. WiMAX). Support for communications servers as a category of server is developing rapidly throughout the communications industry. Standards bodies, industry associations, vendor alliance programs, hardware and software manufacturers, communications server vendors and users are all part of an increasingly robust communications server ecosystem. Regardless of their specific, differentiated features, communications servers have the following attributes: open, flexible, carrier-grade, and communications-focused. Attributes Open Based on industry-managed open standards Broad, multi-vendor ecosystem Industry certified interoperability Availability of tools that facilitate development and integration of applications at the standardized interfaces Multiple competitive options for standards-based modules Flexible Designed to easily incorporate application-specific added value at all levels of the solution Can be rapidly repurposed as needs change to protect customer investment Multi-level, scalable, bladed architecture Meets needs of multiple industries beyond telecommunications, such as medical imaging, defense and aerospace Carrier grade Designed for Longevity of supply Extended lifecycle (>10 years) support High availability (>5NINES) “Non-disruptively” upgradeable and updatable Hard real time capability to ensure quality of service for critical traffic Meets network building
https://en.wikipedia.org/wiki/NextWave%20Wireless
NextWave Wireless Inc. is a wireless technology company that produces mobile multimedia solutions and speculates in the wireless spectrum market. The company consists principally of various wireless spectrum holdings. The company is most notable for successfully suing the U.S. government for improperly seizing its assets while under bankruptcy protection. AT&T announced its acquisition of NextWave in 2012. History The company was originally spun out of QUALCOMM in 1995 and began life as the biggest bidder in the FCC C-Block. NextWave originally won the licenses in an auction intended for small businesses with limited resources in 1996. NextWave, which bid $4.7 billion for the licenses, made the minimum 10 percent down payment of $500 million for the spectrum. But shortly thereafter NextWave filed for bankruptcy protection and defaulted on its payments for the licenses. The FCC, in turn, confiscated the licenses and re-sold them to Verizon Wireless and the subsidiaries of AT&T Wireless and Cingular Wireless, among others, for $17 billion in an auction that ended in January 2001. Ultimately NextWave prevailed in the Supreme Court, 8-1, and was permitted to keep the PCS licenses. NextWave's bankruptcy protection lasted approximately ten years, during which time the asset value of the licenses had dramatically increased and NextWave was able to repay the original debt and sell its spectrum assets to Verizon Wireless, Cingular (now AT&T) and MetroPCS. It re-emerged as NextWave Wireless with $550M in capital. The reborn company had several areas of focus: development of a 4G broadband network through its Network Solutions Group in Las Vegas, NV, development of WiMAX baseband and RF integrated circuits and related technology in its Advanced Technology Group in San Diego, CA, and accumulation of spectrum and other carrier assets both in the U.S. and internationally. NextWave made several significant acquisitions that shaped its business and technology strategy. P
https://en.wikipedia.org/wiki/Global%20Area%20Reference%20System
The Global Area Reference System (GARS) is a standardized geospatial reference system developed by the National Geospatial-Intelligence Agency (NGA) for use across the United States Department of Defense. Under the Chairman of the Joint Chiefs of Staff Instruction CJCSI 3900.01C dated 30 June 2007, GARS was adopted for use by the US DoD as "the “area-centric” counterpart to the “point-centric” MGRS". It uses the WGS 1984 Datum and is based on lines of longitude (LONG) and latitude (LAT). It is intended to provide an integrated common frame of reference for joint force situational awareness to facilitate air-to-ground coordination, deconfliction, integration, and synchronization. This area reference system provides a common language between the components and simplifies communications. GARS is primarily designed as a battlespace management tool and not to be used for navigation or targeting. Design GARS divides the surface of the earth into 30-minute by 30-minute cells. Each cell is identified by a five-character designation. (ex. 006AG) The first three characters designate a 30-minute wide longitudinal band. Beginning with the 180-degree meridian and proceeding eastward, the bands are numbered from 001 to 720, so that 180 E to 179 30’W is band 001; 179 30’W to 179 00’W is band 002; and so on. The fourth and fifth characters designate a 30-minute wide latitudinal band. Beginning at the south pole and proceeding northward, the bands are lettered from AA to QZ (omitting I and O) so that 90 00’S to 89 30’S is band AA; 89 30’S to 89 00’S is band AB; and so on. Each 30-minute cell is divided into four 15-minute by 15-minute quadrants. The quadrants are numbered sequentially, from west to east, starting with the northernmost band. Specifically, the northwest quadrant is “1”; the northeast quadrant is “2”; the southwest quadrant is “3”; the southeast quadrant is “4”. Each quadrant is identified by a six-character designation. (ex. 006AG3) The first five charact
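As a sketch of how the scheme above maps a coordinate to a cell designation (my own illustrative implementation of the rules quoted above, not an official NGA routine; pole and antimeridian edge cases are ignored):

def gars_cell(lat, lon):
    # 5-character 30-minute cell plus 6-character 15-minute quadrant, per the
    # band/lettering/quadrant rules above; lat in [-90, 90), lon in [-180, 180).
    letters = "ABCDEFGHJKLMNPQRSTUVWXYZ"          # A-Z omitting I and O
    lon_band = int((lon + 180.0) // 0.5) + 1      # 001..720, eastward from 180
    lat_index = int((lat + 90.0) // 0.5)          # 0..359, northward from the south pole
    lat_band = letters[lat_index // 24] + letters[lat_index % 24]   # AA..QZ
    cell = f"{lon_band:03d}{lat_band}"
    north = (lat + 90.0) % 0.5 >= 0.25            # upper half of the 30-minute cell
    east = (lon + 180.0) % 0.5 >= 0.25            # eastern half of the 30-minute cell
    quadrant = (2 if east else 1) if north else (4 if east else 3)  # NW=1, NE=2, SW=3, SE=4
    return cell, cell + str(quadrant)

print(gars_cell(-89.9, -179.9))   # ('001AA', '001AA3'): just east of 180, near the south pole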
https://en.wikipedia.org/wiki/Exponential-Golomb%20coding
An exponential-Golomb code (or just Exp-Golomb code) is a type of universal code. To encode any nonnegative integer x using the exp-Golomb code: Write down x+1 in binary Count the bits written, subtract one, and write that number of starting zero bits preceding the previous bit string. The first few values of the code are: 0 ⇒ 1 ⇒ 1 1 ⇒ 10 ⇒ 010 2 ⇒ 11 ⇒ 011 3 ⇒ 100 ⇒ 00100 4 ⇒ 101 ⇒ 00101 5 ⇒ 110 ⇒ 00110 6 ⇒ 111 ⇒ 00111 7 ⇒ 1000 ⇒ 0001000 8 ⇒ 1001 ⇒ 0001001 ... In the above examples, consider the case 3. For 3, x+1 = 3 + 1 = 4. 4 in binary is '100'. '100' has 3 bits, and 3-1 = 2. Hence add 2 zeros before '100', which is '00100' Similarly, consider 8. '8 + 1' in binary is '1001'. '1001' has 4 bits, and 4-1 is 3. Hence add 3 zeros before 1001, which is '0001001'. This is identical to the Elias gamma code of x+1, allowing it to encode 0. Extension to negative numbers Exp-Golomb coding is used in the H.264/MPEG-4 AVC and H.265 High Efficiency Video Coding video compression standards, in which there is also a variation for the coding of signed numbers by assigning the value 0 to the binary codeword '0' and assigning subsequent codewords to input values of increasing magnitude (and alternating sign, if the field can contain a negative number): 0 ⇒ 0 ⇒ 1 ⇒ 1 1 ⇒ 1 ⇒ 10 ⇒ 010 −1 ⇒ 2 ⇒ 11 ⇒ 011 2 ⇒ 3 ⇒ 100 ⇒ 00100 −2 ⇒ 4 ⇒ 101 ⇒ 00101 3 ⇒ 5 ⇒ 110 ⇒ 00110 −3 ⇒ 6 ⇒ 111 ⇒ 00111 4 ⇒ 7 ⇒ 1000 ⇒ 0001000 −4 ⇒ 8 ⇒ 1001 ⇒ 0001001 ... In other words, a non-positive integer x≤0 is mapped to an even integer −2x, while a positive integer x>0 is mapped to an odd integer 2x−1. Exp-Golomb coding is also used in the Dirac video codec. Generalization to order k To encode larger numbers in fewer bits (at the expense of using more bits to encode smaller numbers), this can be generalized using a nonnegative integer parameter  k. To encode a nonnegative integer x in an order-k exp-Golomb code: Encode ⌊x/2k⌋ using order-0 exp-Golomb code describe
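A small sketch of the order-0 encoder described above, together with the order-k generalization and the signed mapping used by H.264/H.265 (illustrative code, not taken from any codec implementation):

def exp_golomb(x, k=0):
    # Order-k exp-Golomb: encode floor(x / 2**k) with the order-0 code,
    # then append the low k bits of x.
    if x < 0:
        raise ValueError("x must be nonnegative")
    q = x >> k
    body = bin(q + 1)[2:]                 # q + 1 in binary
    prefix = "0" * (len(body) - 1)        # (bit count - 1) leading zeros
    suffix = format(x & ((1 << k) - 1), "0{}b".format(k)) if k else ""
    return prefix + body + suffix

def signed_exp_golomb(x):
    # Signed mapping: 0 -> 0, 1 -> 1, -1 -> 2, 2 -> 3, -2 -> 4, ...
    return exp_golomb(2 * x - 1 if x > 0 else -2 * x)

assert [exp_golomb(i) for i in range(4)] == ["1", "010", "011", "00100"]
assert signed_exp_golomb(-2) == "00101"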
https://en.wikipedia.org/wiki/Scanning%20acoustic%20microscope
A scanning acoustic microscope (SAM) is a device which uses focused sound to investigate, measure, or image an object (a process called scanning acoustic tomography). It is commonly used in failure analysis and non-destructive evaluation. It also has applications in biological and medical research. The semiconductor industry has found the SAM useful in detecting voids, cracks, and delaminations within microelectronic packages. History The first scanning acoustic microscope (SAM), with a 50 MHz ultrasonic lens, was developed in 1974 by R. A. Lemons and C. F. Quate at the Microwave Laboratory of Stanford University. A few years later, in 1980, the first high-resolution (with a frequency up to 500 MHz) through-transmission SAM was built by R.Gr. Maev and his students at his Laboratory of Biophysical Introscopy of the Russian Academy of Sciences. The first commercial SAM, the ELSAM, with a broad frequency range from 100 MHz up to 1.8 GHz, was built at Ernst Leitz GmbH by the group led by Martin Hoppe and his consultants Abdullah Atalar (Stanford University), Roman Maev (Russian Academy of Sciences) and Andrew Briggs (Oxford University). Since then, many improvements to such systems have been made to enhance resolution and accuracy. Most of them are described in detail in the monograph Advances in Acoustic Microscopy, ed. by Andrew Briggs, 1992, Oxford University Press, and in the monograph by Roman Maev, Acoustic Microscopy: Fundamentals and Applications, Wiley-VCH, 291 pages, August 2008, as well as in more recent literature. C-SAM versus other Techniques There are many methods for failure analysis of damage in microelectronic packages, including but not limited to laser decapsulation, wet-etch decapsulation, optical microscopy, SEM microscopy, and X-ray imaging. The problem with most of these methods is that they are destructive. This means it's possible that the damage itself will be do
https://en.wikipedia.org/wiki/Secrets%20of%20the%20Alchemist%20Dar
Secrets of the Alchemist Dar is a book written by Michael Stadther and published in September 2006 by the author's company, Treasure Trove, Inc. A story about fairies and other imaginary and fantastic creatures, the book includes hidden puzzles for an armchair treasure hunt with one hundred rings (valued at more than two million dollars) as prizes. The book is a sequel to A Treasure's Trove, another armchair treasure hunt and contains the same characters, although the author has stated at book signings that the puzzles in the two books are not connected. In 2007, Treasure Trove, Inc. was put into bankruptcy because of a dispute with its distributor, Simon and Schuster. None of the rings are known to have been claimed.
https://en.wikipedia.org/wiki/Krogh%27s%20principle
Krogh's principle states that "for such a large number of problems there will be some animal of choice, or a few such animals, on which it can be most conveniently studied." This concept is central to those disciplines of biology that rely on the comparative method, such as neuroethology, comparative physiology, and more recently functional genomics. History Krogh's principle is named after the Danish physiologist August Krogh, winner of the Nobel Prize in Physiology or Medicine for his contributions to understanding the anatomy and physiology of the capillary system, who described it in The American Journal of Physiology in 1929. However, the principle was first elucidated nearly 60 years prior to this, and in almost the same words as Krogh, in 1865 by Claude Bernard, the French pioneer of experimental medicine, on page 27 of his "Introduction à l'étude de la médecine expérimentale". Krogh articulated the principle in his 1929 treatise on the then-current 'status' of physiology. "Krogh's principle" was not utilized as a formal term until 1975, when the biochemist Hans Adolf Krebs (who first described the citric acid cycle) referred to it. More recently, at the International Society for Neuroethology meeting in Nyborg, Denmark in 2004, Krogh's principle was cited as a central principle by the group at their 7th Congress. Krogh's principle has also been receiving attention in the area of functional genomics, where there has been increasing pressure and desire to expand genomics research to a wider variety of organisms beyond the traditional scope of the field. Philosophy and applications A central concept to Krogh's principle is evolutionary adaptation. Evolutionary theory maintains that organisms are suited to particular niches, some of which are highly specialized for solving particular biological problems. These adaptations are typically exploited by biologists in several ways: Methodology: (e.g. Taq polymerase and PCR): The need to manipulat
https://en.wikipedia.org/wiki/Autosomal%20dominant%20nocturnal%20frontal%20lobe%20epilepsy
Autosomal dominant nocturnal frontal lobe epilepsy is an epileptic disorder that causes frequent violent seizures during sleep. These seizures often involve complex motor movements, such as hand clenching, arm raising/lowering, and knee bending. Vocalizations such as shouting, moaning, or crying are also common. ADNFLE is often misdiagnosed as nightmares. Attacks often occur in clusters and typically first manifest in childhood. There are four known loci for ADNFLE, three with known causative genes. These genes, CHRNA4, CHRNB2, and CHRNA2, encode various nicotinic acetylcholine receptor α and β subunits. Signs and symptoms ADNFLE is a partial epilepsy disorder characterized by brief violent seizures during sleep. Seizures are complex, consisting of arm and leg movements, fist clenching, and vocalizations such as yelling and moaning. These seizures often occur in clusters and can first manifest in childhood. The condition is often initially misdiagnosed as nightmares, night terrors, other parasomnias, or various psychiatric disorders. Causes While not well understood, it is believed that malfunction in thalamocortical loops plays a vital role in ADNFLE. The reasons for this belief are threefold. Firstly, thalamocortical loops are important in sleep, and the frontal cortex is the origin of ADNFLE seizures. Secondly, both the thalamus and cortex receive cholinergic inputs, and the three known causative genes for ADNFLE encode acetylcholine receptor subunits. Thirdly, K-complexes are almost invariably present at the start of seizures. It is thought that seizures arise because these receptor subunits are expressed presynaptically by neurons that release the inhibitory transmitter GABA; the mutation in the α4 subunit could therefore lead to reduced GABA release, causing hyperexcitability. Pathophysiology CHRNA4 The first mutation associated with ADNFLE is a serine-to-phenylalanine substitution at position 248 (S248F), located in the second transmembrane spanning region of
https://en.wikipedia.org/wiki/Single-ended%20primary-inductor%20converter
The single-ended primary-inductor converter (SEPIC) is a type of DC/DC converter that allows the electrical potential (voltage) at its output to be greater than, less than, or equal to that at its input. The output of the SEPIC is controlled by the duty cycle of the control switch (S1). A SEPIC is essentially a boost converter followed by an inverted buck-boost converter, therefore it is similar to a traditional buck-boost converter, but has advantages of having non-inverted output (the output has the same electrical polarity as the input), using a series capacitor to couple energy from the input to the output (and thus can respond more gracefully to a short-circuit output), and being capable of true shutdown: when the switch S1 is turned off enough, the output (V0) drops to 0 V, following a fairly hefty transient dump of charge. SEPICs are useful in applications in which a battery voltage can be above and below that of the regulator's intended output. For example, a single lithium ion battery typically discharges from 4.2 volts to 3 volts; if other components require 3.3 volts, then the SEPIC would be effective. Circuit operation The schematic diagram for a basic SEPIC is shown in Figure 1. As with other switched mode power supplies (specifically DC-to-DC converters), the SEPIC exchanges energy between the capacitors and inductors in order to convert from one voltage to another. The amount of energy exchanged is controlled by switch S1, which is typically a transistor such as a MOSFET. MOSFETs offer much higher input impedance and lower voltage drop than bipolar junction transistors (BJTs), and do not require biasing resistors as MOSFET switching is controlled by differences in voltage rather than a current, as with BJTs. Continuous mode A SEPIC is said to be in continuous-conduction mode ("continuous mode") if the currents through inductors L1 and L2 never fall to zero during an operating cycle. During a SEPIC's steady-state operation, the average voltage a
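The ability to regulate both above and below the input follows from the idealized continuous-conduction-mode relation Vout = Vin · D / (1 − D), a standard textbook result for lossless components that is not stated explicitly in the excerpt above; a small sketch using the Li-ion example:

def sepic_vout(v_in, duty):
    # Idealized steady-state SEPIC output in continuous-conduction mode
    # (lossless components, diode drop neglected).
    if not 0.0 <= duty < 1.0:
        raise ValueError("duty cycle must be in [0, 1)")
    return v_in * duty / (1.0 - duty)

# Regulating a 3.3 V rail from a Li-ion cell as it discharges from 4.2 V to 3.0 V;
# inverting the relation gives the required duty cycle D = Vout / (Vin + Vout).
for v_batt in (4.2, 3.7, 3.0):
    d = 3.3 / (v_batt + 3.3)
    print(f"Vin={v_batt:.1f} V  D={d:.2f}  Vout={sepic_vout(v_batt, d):.2f} V")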
https://en.wikipedia.org/wiki/Plummer%20Terrier
The Plummer Terrier is a working terrier. It was originally bred by Brian Plummer to hunt vermin, especially rats. The breed, while unrecognized by any kennel club, is known for its rugged determination and hardiness. Origins and history In the late 1960s and throughout the 1970s, Brian Plummer worked as a somewhat reluctant teacher of several schools throughout southern Yorkshire and the Midlands. He was already well known in his local neighborhood for going around with a pack of terriers to catch rats, when he decided to create his own terrier breed in the 1970s. Well-versed in breeding, he strove to produce a unique strain of terrier by mixing the Jack Russell Terrier with the Beagle, Fell Terrier, and Bull Terrier. These terriers were worked hard and as the breed developed, so too did Plummer's reputation as a breeder of hardy terriers that bred true to type. Initially known as the Huddlesford Rat Pack, the breed is now named after him. The Beagle introduced to Plummer's lines in the 1960s was out of Catherine Sutton's Rossut show-bred strain that originated from U.S. imports, brought to the U.K. to tidy up British exhibits. It was owned by Philip Ainsley, a fellow teacher friend of Plummer's. Further outcrosses were introduced along the way. The addition of Fell Terrier blood, Jaeger from Nigel Hinchcliffe's lines and Flint from Brian Nuttall's lines, both noted working lines and most likely descending from Cyril Breay and Frank Buck's stock, infused refinement of shape and to a certain extent contributed to fixing type, like that seen in Pagan, a black and tan terrier, acknowledged as one of the early pillars of the breed. Further additions included a Jack Russell terrier known as Errol Forsyth's Pip, Alan Thomas's Hamish, and Laddie from the Chiddingfold and Leconfield foxhound kennels. Performance as an earth dog was and is an essential prerequisite of most, if not all terrier breeds and Plummers are no exception to this rule. These three dogs were kno
https://en.wikipedia.org/wiki/Atomic%20tourism
Atomic tourism or nuclear tourism is a recent form of tourism in which visitors learn about the Atomic Age by traveling to significant sites in atomic history such as nuclear test reactors, museums with nuclear weapon artifacts, delivery vehicles, sites where atomic weapons were detonated, and nuclear power plants. In the United States, the Center for Land Use Interpretation has conducted tours of the Nevada Test Site, Trinity Site, Hanford Site, and other historical atomic age sites, to explore the cultural significance of these Cold War nuclear zones. The book Overlook: Exploring the Internal Fringes of America describes the purpose of this tourism as "windows into the American psyche, landmarks that manifest the rich ambiguities of the nation's cultural history." A Bureau of Atomic Tourism was proposed by American photographer Richard Misrach and writer Myriam Weisang Misrach in 1990. Visitors to the Chernobyl Exclusion Zone often visit the nearby deserted city of Pripyat. The Hiroshima Peace Memorial (Genbaku Dome), which survived the destruction of Hiroshima, is now a UNESCO World Heritage Site at the center of Hiroshima Peace Memorial Park. Bikini Atoll was at one time the site of a diving tourism initiative. As of 2012, China planned to build a tourist destination at its first atomic test site, the Malan Base at Lop Nur in the Xinjiang Uyghur Autonomous Region. During the early atomic age when fission was viewed as a sign of progress and modernity, the city of Las Vegas and its Chamber of Commerce nicknamed Vegas as the "Atomic City" in the mid-1940s and early 1950s in an attempt to attract tourists. So called "bomb viewing parties" took place on desert hilltops, or more famously at the panoramic Sky Room at the Desert Inn, and casinos held Miss Atomic pageants while serving Atomic Cocktails. Several nuclear power plants offer tours of the facilities or provide education at visitor centers. Atomic museums Research and production Los Alamos Historical
https://en.wikipedia.org/wiki/Topological%20censorship
The topological censorship theorem (if valid) states that general relativity does not allow an observer to probe the topology of spacetime: any topological structure collapses too quickly to allow light to traverse it. More precisely, in a globally hyperbolic, asymptotically flat spacetime satisfying the null energy condition, every causal curve from past null infinity to future null infinity is fixed-endpoint homotopic to a curve in a topologically trivial neighbourhood of infinity. A 2013 paper by Sergey Krasnikov claims that the topological censorship theorem was not proven in the original article because of a gap in the proof.
https://en.wikipedia.org/wiki/Ecophysiology
Ecophysiology (from Greek , oikos, "house(hold)"; , physis, "nature, origin"; and , -logia), environmental physiology or physiological ecology is a biological discipline that studies the response of an organism's physiology to environmental conditions. It is closely related to comparative physiology and evolutionary physiology. Ernst Haeckel's coinage bionomy is sometimes employed as a synonym. Plants Plant ecophysiology is concerned largely with two topics: mechanisms (how plants sense and respond to environmental change) and scaling or integration (how the responses to highly variable conditions—for example, gradients from full sunlight to 95% shade within tree canopies—are coordinated with one another), and how their collective effect on plant growth and gas exchange can be understood on this basis. In many cases, animals are able to escape unfavourable and changing environmental factors such as heat, cold, drought or floods, while plants are unable to move away and therefore must endure the adverse conditions or perish (animals go places, plants grow places). Plants are therefore phenotypically plastic and have an impressive array of genes that aid in acclimating to changing conditions. It is hypothesized that this large number of genes can be partly explained by plant species' need to live in a wider range of conditions. Light Light is the food of plants, i.e. the form of energy that plants use to build themselves and reproduce. The organs harvesting light in plants are leaves and the process through which light is converted into biomass is photosynthesis. The response of photosynthesis to light is called light response curve of net photosynthesis (PI curve). The shape is typically described by a non-rectangular hyperbola. Three quantities of the light response curve are particularly useful in characterising a plant's response to light intensities. The inclined asymptote has a positive slope representing the efficiency of light use, and is called quantum
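The non-rectangular hyperbola mentioned above is commonly written as the lower root of θA² − (φI + Amax)A + φI·Amax = 0; the sketch below uses this standard textbook form with made-up parameter values (none of the numbers come from the article):

import math

def net_photosynthesis(irradiance, phi=0.05, a_max=20.0, theta=0.9, r_d=1.0):
    # phi: initial slope (light-use efficiency), a_max: light-saturated gross rate,
    # theta: curvature (0 < theta <= 1), r_d: dark respiration.
    s = phi * irradiance + a_max
    a_gross = (s - math.sqrt(s * s - 4.0 * theta * phi * irradiance * a_max)) / (2.0 * theta)
    return a_gross - r_d

for i in (0, 100, 500, 1500):
    print(i, round(net_photosynthesis(i), 2))   # rises from -r_d toward a_max - r_d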
https://en.wikipedia.org/wiki/Half%20sphere%20exposure
Half-sphere exposure (HSE) is a protein solvent exposure measure, first introduced by Hamelryck in 2005. Like all solvent exposure measures it measures how buried amino acid residues are in a protein. It is found by counting the number of amino acid neighbors within two half-spheres of chosen radius around the amino acid. HSE is calculated by dividing a contact number (CN) sphere into two halves by the plane perpendicular to the Cβ-Cα vector. This simple division of the CN sphere results in two strikingly different measures, HSE-up and HSE-down. HSE-up is defined as the number of Cα atoms in the upper half (containing the pseudo-Cβ atom) and, analogously, HSE-down is defined as the number of Cα atoms in the opposite half-sphere. If only Cα atoms are available (as is the case for many simplified representations of protein structure), a related measure, called HSEα, can be used. HSEα uses a pseudo-Cβ instead of the real Cβ atom for its calculation. The position of this pseudo-Cβ atom (pCβ) is derived from the positions of the preceding Cα−1 and the following Cα+1. The Cα-pCβ vector is calculated by adding the Cα−1-Cα0 and Cα+1-Cα0 vectors. HSE is used in predicting discontinuous B-cell epitopes. Song et al. have developed an online webserver termed HSEpred to predict half-sphere exposure from protein primary sequences. The HSEpred server achieves correlation coefficients of 0.72 and 0.68 between the predicted and observed HSE-up and HSE-down measures, respectively, when evaluated on a well-prepared non-homologous protein structure dataset. Moreover, residue contact number (CN) can also be accurately predicted by the HSEpred webserver using the summation of the predicted HSE-up and HSE-down values, which further extends the applicability of this new solvent exposure measure. Recently, Heffernan et al. have developed the most accurate predictor for both HSEα and HSEβ, using a large dataset and multiple-step iterative deep neural-network learning. The predicted HSEa
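A minimal numpy sketch of the HSEα variant described above, assuming only Cα coordinates are available; the 13 Å sphere radius below is an illustrative choice, not a value prescribed by the text:

import numpy as np

def hse_alpha(ca_coords, radius=13.0):
    # ca_coords: (N, 3) array of Calpha positions along the chain.
    # For each residue i (chain ends skipped), the pseudo-Cbeta direction is the
    # sum of the vectors from Ca(i-1) to Ca(i) and from Ca(i+1) to Ca(i); neighbours
    # within `radius` are split into the half-sphere containing that direction
    # (HSE-up) and the opposite half-sphere (HSE-down).
    ca = np.asarray(ca_coords, dtype=float)
    n = len(ca)
    hse_up = np.zeros(n, dtype=int)
    hse_down = np.zeros(n, dtype=int)
    for i in range(1, n - 1):
        up_dir = (ca[i] - ca[i - 1]) + (ca[i] - ca[i + 1])
        for j in range(n):
            if j == i:
                continue
            d = ca[j] - ca[i]
            if np.linalg.norm(d) <= radius:
                if np.dot(d, up_dir) > 0:
                    hse_up[i] += 1
                else:
                    hse_down[i] += 1
    return hse_up, hse_down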
https://en.wikipedia.org/wiki/Connected-component%20labeling
Connected-component labeling (CCL), connected-component analysis (CCA), blob extraction, region labeling, blob discovery, or region extraction is an algorithmic application of graph theory, where subsets of connected components are uniquely labeled based on a given heuristic. Connected-component labeling is not to be confused with segmentation. Connected-component labeling is used in computer vision to detect connected regions in binary digital images, although color images and data with higher dimensionality can also be processed. When integrated into an image recognition system or human-computer interaction interface, connected component labeling can operate on a variety of information. Blob extraction is generally performed on the resulting binary image from a thresholding step, but it can be applicable to gray-scale and color images as well. Blobs may be counted, filtered, and tracked. Blob extraction is related to but distinct from blob detection. Overview A graph, containing vertices and connecting edges, is constructed from relevant input data. The vertices contain information required by the comparison heuristic, while the edges indicate connected 'neighbors'. An algorithm traverses the graph, labeling the vertices based on the connectivity and relative values of their neighbors. Connectivity is determined by the medium; image graphs, for example, can be 4-connected neighborhood or 8-connected neighborhood. Following the labeling stage, the graph may be partitioned into subsets, after which the original information can be recovered and processed . Definition The usage of the term connected-components labeling (CCL) and its definition is quite consistent in the academic literature, whereas connected-components analysis (CCA) varies in terms of both terminology and problem definition. Rosenfeld et al. define connected components labeling as the “[c]reation of a labeled image in which the positions associated with the same connected component of the
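A small illustration of the labeling step on a binary image, using a breadth-first flood fill (one simple realization of the idea; the classic two-pass, union-find formulation is not shown here):

from collections import deque

def label_components(image, connectivity=4):
    # image: 2-D list of 0/1 values; returns (label image, number of components).
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if connectivity == 8:
        offsets += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] and not labels[r][c]:
                count += 1
                queue = deque([(r, c)])
                labels[r][c] = count
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in offsets:
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return labels, count

labels, count = label_components([[1, 1, 0, 0],
                                  [0, 1, 0, 1],
                                  [0, 0, 0, 1]])
print(count)   # 2 foreground components under 4-connectivity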
https://en.wikipedia.org/wiki/Institute%20of%20Physics%2C%20Bhubaneswar
Institute of Physics, Bhubaneswar () is an autonomous research institution of the Department of Atomic Energy (DAE), Government of India. The institute was founded by Professor Bidhu Bhusan Das, who was Director of Public Instruction, Odisha, at that time. Das set up the institute in 1972, supported by the Government of Odisha under the patronage of Odisha's education minister Banamali Patnaik, and chose Dr. Trilochan Pradhan as its first director, when the Institute started theoretical research programs in the various branches of physics. Other notable physicists in the institute's early days included Prof. T. P. Das, of SUNY, Albany, New York, USA and Prof. Jagdish Mohanty of IIT Kanpur and Australian National University, Canberra. In 1981, the Institute moved to its present campus near Chandrasekharpur, Bhubaneswar. It was taken over by the Department of Atomic Energy, India on 25 March 1985 and started functioning as an autonomous body. Research The institute conducts research in theoretical and experimental physics. Theoretical physics Research areas in theoretical physics include condensed matter theory, nuclear and high energy physics. High-energy theorists at IOP have made contributions to field theories, phase transitions in early universe, cosmology, the Planck scale phenomena, string theory and high-energy nuclear physics such as qgp, equation of state and nuclear astrophysics, neutron stars, high-energy phenomenology and neutrino physics phenomenology. In theoretical condensed matter physics, research is centered on disordered systems, magnetism, superconductivity, low-dimensional systems, statistical physics, strongly correlated systems, phase transitions, clusters and nanomaterials. Experimental physics The experimental physics group encompasses accelerator-based research for advanced chemical and radioisotope analysis. The ion beam laboratory (IBL) is equipped with a 3MV tandem accelerator (NEC 9SDH-2). Research at the IBL includes Rutherford bac
https://en.wikipedia.org/wiki/Hellmut%20Schmid
Hellmut H. Schmid (12 September 1914 – 27 April 1998) was a Swiss professor of geodesy and photogrammetry. He taught at ETH Zürich (Switzerland). In the 1950s, he worked on space exploration in the United States. Between 1968 and 1974, he promoted the first intercontinental network of satellite geodesy. Research Geodetic measurement methods at the V2 project in Peenemünde (~1942) Beginnings of satellite geodesy (1959) Theory of analytical photogrammetry and matrix/IT developments (1950s, USA) High-precision evaluation of photograms (ca. 1965–1978) Worldwide Satellite Triangulation Network (1969–1973, publ. 1974): first regular intercontinental network, 46 stations (3000–5000 km apart), pioneering accuracy (±3 m) Contributions to least-squares adjustment, network optimization, block triangulation Optimization of coordinate transformations (~1975) Development of 3D intersection methods in analytical photogrammetry See also PAGEOS, balloon satellites, stellar triangulation Global reference ellipsoid, "Earth polyhedron", geodetic system, WGS84 ETH Zürich, Ohio State University, Friedrich Hopfner Literature K. Ledersteger: "Astronomische und Physikalische Geodäsie (Erdmessung)", JEK Vol. V (870 pp., esp. §§ 2, 5, 13), J.B. Metzler, Stuttgart 1968 H.H. Schmid: "Das Weltnetz der Satellitentriangulation". Wiss. Mitteilungen ETH Zürich and Journal of Geophysical Research, 1974. Klaus Schnädelbach et al.: Western European Satellite Triangulation Programme (WEST), 2nd Experimental Computation. Mitteilungen Geodät. Inst. Graz 11/1, Graz 1972 Nothnagel, Schlüter, Seeger: Die Geschichte der geodätischen VLBI in Deutschland, Bonn 2000 ZfV 1998: Hellmut H. Schmid † (obituary), Professor Dr. h.c. Hellmut H. Schmid (1914–1998). "Dr. Schmid leaves Aberdeen after 12 years to join staff at GIMRADA".
https://en.wikipedia.org/wiki/Insertion%20time
The term insertion time is used to describe the length of time which is required to rearrange a subcritical mass of fissile material into a prompt critical mass. This is one of the three main requirements in a nuclear weapon design to create a working fission atomic bomb. The need for a short insertion time with plutonium-239 is the reason the implosion method was chosen for the first plutonium bomb, while with uranium-235 it is possible to use a gun design. The basic requirements are: To start with a subcritical system To create a super prompt critical system To make the change between these two states in a length of time (insertion time) which is shorter than the time between the random appearance of a neutron in the fissile material through spontaneous fission or by other random processes. Also at the right moment in time, neutrons must be injected into the fissile material to start up the fission process. This can be done by several methods. Alpha emitters such as polonium or plutonium-238 can be rapidly combined with beryllium to create a neutron source. Neutrons can be generated using an electrostatic discharge tube, this tube uses the D-T reaction.
https://en.wikipedia.org/wiki/Ferroelectret
A ferroelectret, also known as a piezoelectret, is a thin film of polymer foams, exhibiting piezoelectric and pyroelectric properties after electric charging. Ferroelectret foams usually consist of a cellular polymer structure filled with air. Polymer-air composites are elastically soft due to their high air content as well as due to the size and shape of the polymer walls. Their elastically soft composite structure is one essential key for the working principle of ferroelectrets, besides the permanent trapping of electric charges inside the polymer voids. The elastic properties allow large deformations of the electrically charged voids. However, the composite structure can also possibly limit the stability and consequently the range of applications. How it works The most common effect related to ferroelectrets is the direct and inverse piezoelectricity, but in these materials, the effect occurs in a way different from the respective effect in ferroelectric polymers. In ferroelectric polymers, a stress in the 3-direction mainly decreases the distance between the molecular chains, due to the relatively weak van der Waals and electrostatic interactions between chains in comparison to the strong covalent bonds within the chain. The thickness decrease thus results in an increase of the dipole density and thus in an increase of the charges on the electrodes, yielding a negative d33 coefficient from dipole-density (or secondary) piezoelectricity. In cellular polymers (ferroelectrets), stress in the 3-direction also decreases the thickness of the sample. The thickness decrease occurs dominantly across the voids, the macroscopic dipole moments decrease, and so do the electrode charges, yielding a positive d33 (intrinsic or direct (quasi-)piezoelectricity). New features In recent years, alternatives to the cellular-foam ferroelectrets were developed. In the new polymer systems, the required cavities are formed by means of e.g. stamps, templates, laser cutting, etc. Ther
https://en.wikipedia.org/wiki/Weak%20symbol
A weak symbol denotes a specially annotated symbol during linking of Executable and Linkable Format (ELF) object files. By default, without any annotation, a symbol in an object file is strong. During linking, a strong symbol can override a weak symbol of the same name. In contrast, in the presence of two strong symbols by the same name, the linker resolves the symbol in favor of the first one found. This behavior allows an executable to override standard library functions, such as malloc(3). When linking a binary executable, a weakly declared symbol does not need a definition. In comparison, (by default) a declared strong symbol without a definition triggers an undefined symbol link error. Weak symbols are not mentioned by the C or C++ language standards; as such, inserting them into code is not very portable. Even if two platforms support the same or similar syntax for marking symbols as weak, the semantics may differ in subtle points, e.g. whether weak symbols during dynamic linking at runtime lose their semantics or not. Syntax The GNU Compiler Collection and the Solaris Studio C compiler share the same syntax for annotating symbols as weak, namely a special #pragma, #pragma weak, and, alternatively, a function and variable attribute, __attribute__((weak)).

Pragma

// function declaration
#pragma weak power2
int power2(int x);

Attribute

// function declaration
int __attribute__((weak)) power2(int x);
// or
int power2(int x) __attribute__((weak));

// variable declaration
extern int __attribute__((weak)) global_var;

Tools support The nm command identifies weak symbols in object files, libraries, and executables. On Linux a weak function symbol is marked with "W" if a weak default definition is available, and with "w" if it is not. Weakly defined variable symbols are marked with "V" and "v". On Solaris "nm" prints "WEAK" instead of "GLOB" for a weak symbol. Examples The following examples work on Linux and Solaris with GCC and Solaris Studi
https://en.wikipedia.org/wiki/WS-CAF
Web Services Composite Application Framework (WS-CAF) is an open framework developed by OASIS. Its purpose is to define a generic and open framework for applications that contain multiple services used together, which are sometimes referred to as composite applications. WS-CAF characteristics include interoperability, ease of implementation and ease of use. Scope The scope of WS-CAF includes: Provision of WSDL definitions for context, coordination and transactions. Message formats will be specified as SOAP headers and/or body content. The specification is to be programming language-neutral and platform-neutral. Demonstrated composability with other Web Service specifications that are being developed as open, recognized standards The goals of promoting convergence, consistent use, and a coherent architecture. Support composability as a critical architectural characteristic of Web service specifications. WS-CAF and WS-Context are targeted to become building blocks for other Web service specifications and standards. Input specifications The WS-CAF accepts the following Web services specifications as input: WS-Context WS-Coordination Framework (WS-CF) WS-Transaction Management (WS-TXM) Benefits The benefits and results of CAF are intended to be standard and interoperable ways to: Demarcate and coordinate web service activities Propagate and coordinate context information Notify participants of changes in an activity Define the relationship of coordinators to each other Recover transactions predictably and consistently in a business process execution. Interact across multiple transaction models (such as are used in CORBA, CICS, Enterprise JavaBeans or .NET environments). See also WS-Coordination - an alternative transaction standard Enterprise service bus External links NetBeans SOA Composite Application Project Home camelse Running Apache Camel in OpenESB
https://en.wikipedia.org/wiki/Gross%E2%80%93Pitaevskii%20equation
The Gross–Pitaevskii equation (GPE, named after Eugene P. Gross and Lev Petrovich Pitaevskii) describes the ground state of a quantum system of identical bosons using the Hartree–Fock approximation and the pseudopotential interaction model. A Bose–Einstein condensate (BEC) is a gas of bosons that are in the same quantum state, and thus can be described by the same wavefunction. A free quantum particle is described by a single-particle Schrödinger equation. Interaction between particles in a real gas is taken into account by a pertinent many-body Schrödinger equation. In the Hartree–Fock approximation, the total wave-function of the system of bosons is taken as a product of single-particle functions : where is the coordinate of the -th boson. If the average spacing between the particles in a gas is greater than the scattering length (that is, in the so-called dilute limit), then one can approximate the true interaction potential that features in this equation by a pseudopotential. At sufficiently low temperature, where the de Broglie wavelength is much longer than the range of boson–boson interaction, the scattering process can be well approximated by the s-wave scattering (i.e. in the partial-wave analysis, a.k.a. the hard-sphere potential) term alone. In that case, the pseudopotential model Hamiltonian of the system can be written as where is the mass of the boson, is the external potential, is the boson–boson s-wave scattering length, and is the Dirac delta-function. The variational method shows that if the single-particle wavefunction satisfies the following Gross–Pitaevskii equation the total wave-function minimizes the expectation value of the model Hamiltonian under normalization condition Therefore, such single-particle wavefunction describes the ground state of the system. GPE is a model equation for the ground-state single-particle wavefunction in a Bose–Einstein condensate. It is similar in form to the Ginzburg–Landau equation and is sometim
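The displayed formulas were lost from this excerpt; for reference, the standard textbook form of the resulting time-independent Gross–Pitaevskii equation, in the notation named above (m the boson mass, V the external potential, a_s the s-wave scattering length, with μ denoting the chemical potential), is:

\left( -\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r})
       + \frac{4\pi\hbar^{2}a_{s}}{m}\,|\psi(\mathbf{r})|^{2} \right)\psi(\mathbf{r})
  = \mu\,\psi(\mathbf{r}),
\qquad \int |\psi(\mathbf{r})|^{2}\,\mathrm{d}^{3}r = N .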
https://en.wikipedia.org/wiki/Side%20population
A side population (SP) in flow cytometry is a sub-population of cells that is distinct from the main population on the basis of the markers employed. By definition, cells in a side population have distinguishing biological characteristics (for example, they may exhibit stem cell-like characteristics), but the exact nature of this distinction depends on the markers used in identifying the side population. Examples Side populations were first identified in hematopoietic stem cells by Dr. Margaret Goodell. SPs have been identified in hepatocellular carcinomas and may be the cells that efflux chemotherapy drugs, accounting for the resistance of cancer to chemotherapy. Recent studies on testicular stem cells indicate that more than 40% of the SP (defined in this case as cells that show higher efflux of DNA-binding dye Hoechst 33342) was undifferentiated spermatogonia, while other differentiated fractions were represented by only 0.2%. SP cells can rapidly efflux lipophilic fluorescent dyes to produce a characteristic profile based on fluorescence-activated flow cytometric analysis. Previous studies have demonstrated SP cells in bone marrow obtained from patients with acute myeloid leukemia, suggesting that these cells might be candidate leukemic stem cells, and recent studies have found a SP of tumor progenitor cells in human solid tumors. These new data indicate that the ability of malignant SP cells to expel anticancer drugs may directly improve their survival and sustain their clonogenicity during exposure to cytostatic drugs, allowing disease recurrence when therapy is withdrawn. Identification of a tumor progenitor population with intrinsic mechanisms for cytostatic drug resistance might also provide clues for improved therapeutic intervention. The molecules involved in effluxing Hoechst 33342 are members of the ATP-binding cassette family, such as MDR1 (P-glycoprotein) and ABCG2.
https://en.wikipedia.org/wiki/Barley%20honey
Barley honey is a Japanese product prepared with barley starch, and it is typically combined with rice flour. It is often consumed as part of breakfast.
https://en.wikipedia.org/wiki/Unity%20amplitude
A sinusoidal waveform is said to have unity amplitude when the amplitude of the wave is equal to 1. This terminology is most commonly used in digital signal processing and is usually associated with Fourier series and Fourier transform sinusoids that involve a duty cycle and a defined fundamental period. Analytic signals with unit amplitude satisfy the Bedrosian theorem.
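The inline formula was stripped from this excerpt; assuming the usual real sinusoid is intended, the condition can be written as:

x(t) = A \sin(2\pi f t + \varphi), \qquad \text{unity amplitude: } A = 1, \ \text{so } |x(t)| \le 1 .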
https://en.wikipedia.org/wiki/Nonequilibrium%20Gas%20and%20Plasma%20Dynamics%20Laboratory
The Nonequilibrium Gas and Plasma Dynamics Laboratory (NGPDL) at the Aerospace Engineering Department of the University of Colorado Boulder is headed by Professor Iain D. Boyd and performs research on nonequilibrium gases and plasmas, involving the development of physical models for various gas systems of interest, numerical algorithms for the latest supercomputers, and their application to challenging flows in several projects. The lab places a great deal of emphasis on comparison of simulation with external experimental and theoretical results, having ongoing collaborative studies with colleagues at the University of Michigan such as the Plasmadynamics and Electric Propulsion Laboratory, other universities, and government laboratories such as NASA, the United States Air Force Research Laboratory, and the United States Department of Defense. Current research areas of the NGPDL include electric propulsion, hypersonic aerothermodynamics, flows involving very small length scales (MEMS devices), and materials processing (jets used in the deposition of thin films for advanced materials). Due to nonequilibrium effects, these flows cannot always be computed accurately with the macroscopic equations of gas dynamics and plasma physics. Instead, the lab has adopted a microscopic approach in which the atoms/molecules in a gas and the ions/electrons in a plasma are simulated computationally using a large number of model particles within sophisticated Monte Carlo methods. The lab has developed a general 2D/axi-symmetric/3D code, MONACO, for simulating nonequilibrium neutral flows that can run either on scalar workstations or in a parallel computing environment. The lab also has developed a general 2D/axi-symmetric/3D code, LeMANS, to numerically solve the Navier-Stokes equations using computational fluid dynamics when the Knudsen number is sufficiently small. This allows lab members to explore flows that would otherwise be too computationally expensive with a particle method. W
https://en.wikipedia.org/wiki/Methyllycaconitine
Methyllycaconitine (MLA) is a diterpenoid alkaloid found in many species of Delphinium (larkspurs). In common with many other diterpenoid alkaloids, it is toxic to animals, although the acute toxicity varies with species. Early research was focused on identifying, and characterizing the properties of methyllycaconitine as one of the principal toxins in larkspurs responsible for livestock poisoning in the mountain rangelands of North America. Methyllycaconitine has been explored as a possible therapeutic agent for the treatment of spastic paralysis, and it has been shown to have insecticidal properties. Most recently, it has become an important molecular probe for studying the pharmacology of the nicotinic acetylcholine receptor. Isolation The first isolation of MLA, from Delphinium brownii, Rydb., was probably made by Richard Manske at the National Research Laboratories in Ottawa, Canada, in 1938. Presumably because he did not obtain the compound in sufficiently pure form, Manske declined to give it a name. The name "methyl-lycaconitine" was assigned by John Goodson, working at the Wellcome Chemical Research Laboratories in London, England, when he isolated the alkaloid, in purer form, from seeds of Delphinium elatum, L. in 1943. A more modern isolation procedure is described by Pelletier and his co-workers, who used seeds of the "garden larkspur", Consolida ambigua (also referred to as Delphinium ajacis) as their plant source. Structure determination The complete molecular structure for MLA, correct in all but one detail, was first published by Kuzovkov and Platonova in 1959. This structure, supported in part by X-ray crystallography (considered usually to be a "definitive" analytical technique) of a chemical derivative of MLA performed by Maria Przybylska, was accepted as correct until the early 1980s. At that time, the research groups of Pelletier and of Edwards and Przybylska independently corrected the stereochemistry of the methoxy group at C-1 from the β- t
https://en.wikipedia.org/wiki/Per%20Enflo
Per H. Enflo (; born 20 May 1944) is a Swedish mathematician working primarily in functional analysis, a field in which he solved problems that had been considered fundamental. Three of these problems had been open for more than forty years: The basis problem and the approximation problem and later the invariant subspace problem for Banach spaces. In solving these problems, Enflo developed new techniques which were then used by other researchers in functional analysis and operator theory for years. Some of Enflo's research has been important also in other mathematical fields, such as number theory, and in computer science, especially computer algebra and approximation algorithms. Enflo works at Kent State University, where he holds the title of University Professor. Enflo has earlier held positions at the Miller Institute for Basic Research in Science at the University of California, Berkeley, Stanford University, École Polytechnique, (Paris) and The Royal Institute of Technology, Stockholm. Enflo is also a concert pianist. Enflo's contributions to functional analysis and operator theory In mathematics, Functional analysis is concerned with the study of vector spaces and operators acting upon them. It has its historical roots in the study of functional spaces, in particular transformations of functions, such as the Fourier transform, as well as in the study of differential and integral equations. In functional analysis, an important class of vector spaces consists of the complete normed vector spaces over the real or complex numbers, which are called Banach spaces. An important example of a Banach space is a Hilbert space, where the norm arises from an inner product. Hilbert spaces are of fundamental importance in many areas, including the mathematical formulation of quantum mechanics, stochastic processes, and time-series analysis. Besides studying spaces of functions, functional analysis also studies the continuous linear operators on spaces of functions.
https://en.wikipedia.org/wiki/Borel%20conjecture
In mathematics, specifically geometric topology, the Borel conjecture (named for Armand Borel) asserts that an aspherical closed manifold is determined by its fundamental group, up to homeomorphism. It is a rigidity conjecture, asserting that a weak, algebraic notion of equivalence (namely, homotopy equivalence) should imply a stronger, topological notion (namely, homeomorphism). Precise formulation of the conjecture Let and be closed and aspherical topological manifolds, and let be a homotopy equivalence. The Borel conjecture states that the map is homotopic to a homeomorphism. Since aspherical manifolds with isomorphic fundamental groups are homotopy equivalent, the Borel conjecture implies that aspherical closed manifolds are determined, up to homeomorphism, by their fundamental groups. This conjecture is false if topological manifolds and homeomorphisms are replaced by smooth manifolds and diffeomorphisms; counterexamples can be constructed by taking a connected sum with an exotic sphere. The origin of the conjecture In a May 1953 letter to Jean-Pierre Serre, Armand Borel raised the question whether two aspherical manifolds with isomorphic fundamental groups are homeomorphic. A positive answer to the question "Is every homotopy equivalence between closed aspherical manifolds homotopic to a homeomorphism?" is referred to as the "so-called Borel Conjecture" in a 1986 paper of Jonathan Rosenberg. Motivation for the conjecture A basic question is the following: if two closed manifolds are homotopy equivalent, are they homeomorphic? This is not true in general: there are homotopy equivalent lens spaces which are not homeomorphic. Nevertheless, there are classes of manifolds for which homotopy equivalences between them can be homotoped to homeomorphisms. For instance, the Mostow rigidity theorem states that a homotopy equivalence between closed hyperbolic manifolds is homotopic to an isometry—in particular, to a homeomorphism. The Borel conjecture is a
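The symbols in the precise formulation above were stripped in extraction; the standard statement reads:

\text{If } M \text{ and } N \text{ are closed aspherical topological manifolds and }
f \colon M \to N \text{ is a homotopy equivalence, then } f
\text{ is homotopic to a homeomorphism.}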
https://en.wikipedia.org/wiki/Primary%20pseudoperfect%20number
In mathematics, and particularly in number theory, N is a primary pseudoperfect number if it satisfies the Egyptian fraction equation where the sum is over only the prime divisors of N. Properties Equivalently, N is a primary pseudoperfect number if it satisfies Except for the primary pseudoperfect number N = 2, this expression gives a representation for N as the sum of distinct divisors of N. Therefore, each primary pseudoperfect number N (except N = 2) is also pseudoperfect. The eight known primary pseudoperfect numbers are 2, 6, 42, 1806, 47058, 2214502422, 52495396602, 8490421583559688410706771261086 . The first four of these numbers are one less than the corresponding numbers in Sylvester's sequence, but then the two sequences diverge. It is unknown whether there are infinitely many primary pseudoperfect numbers, or whether there are any odd primary pseudoperfect numbers. The prime factors of primary pseudoperfect numbers sometimes may provide solutions to Znám's problem, in which all elements of the solution set are prime. For instance, the prime factors of the primary pseudoperfect number 47058 form the solution set {2,3,11,23,31} to Znám's problem. However, the smaller primary pseudoperfect numbers 2, 6, 42, and 1806 do not correspond to solutions to Znám's problem in this way, as their sets of prime factors violate the requirement that no number in the set can equal one plus the product of the other numbers. Anne (1998) observes that there is exactly one solution set of this type that has k primes in it, for each k ≤ 8, and conjectures that the same is true for larger k. If a primary pseudoperfect number N is one less than a prime number, then N × (N + 1) is also primary pseudoperfect. For instance, 47058 is primary pseudoperfect, and 47059 is prime, so 47058 × 47059 = 2214502422 is also primary pseudoperfect. History Primary pseudoperfect numbers were first investigated and named by Butske, Jaje, and Mayernik (2000). Using computational search te
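The defining Egyptian-fraction condition 1/N + Σ_{p ∣ N} 1/p = 1 can be checked exactly in integer arithmetic by multiplying through by N; a short illustrative script:

def prime_factors(n):
    # Distinct prime factors of n by trial division.
    factors, d = [], 2
    while d * d <= n:
        if n % d == 0:
            factors.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def is_primary_pseudoperfect(n):
    # 1/n + sum(1/p for primes p dividing n) == 1  <=>  1 + sum(n/p) == n.
    return n >= 2 and 1 + sum(n // p for p in prime_factors(n)) == n

print([n for n in range(2, 50000) if is_primary_pseudoperfect(n)])   # [2, 6, 42, 1806, 47058]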
https://en.wikipedia.org/wiki/Cahen%27s%20constant
In mathematics, Cahen's constant is defined as the value of an infinite series of unit fractions with alternating signs: Here denotes Sylvester's sequence, which is defined recursively by Combining these fractions in pairs leads to an alternative expansion of Cahen's constant as a series of positive unit fractions formed from the terms in even positions of Sylvester's sequence. This series for Cahen's constant forms its greedy Egyptian expansion: This constant is named after Eugène Cahen (also known for the Cahen–Mellin integral), who was the first to introduce it and prove its irrationality. Continued fraction expansion The majority of naturally occurring mathematical constants have no known simple patterns in their continued fraction expansions. Nevertheless, the complete continued fraction expansion of Cahen's constant is known: it is where the sequence of coefficients is defined by the recurrence relation All the partial quotients of this expansion are squares of integers. Davison and Shallit made use of the continued fraction expansion to prove that is transcendental. Alternatively, one may express the partial quotients in the continued fraction expansion of Cahen's constant through the terms of Sylvester's sequence: To see this, we prove by induction on that . Indeed, we have , and if holds for some , then where we used the recursion for in the first step and the recursion for in the final step. As a consequence, holds for every , from which it is easy to conclude that . Best approximation order Cahen's constant has best approximation order . That means there exist constants such that the inequality has infinitely many solutions , while the inequality has at most finitely many solutions . This implies (but is not equivalent to) the fact that has irrationality measure 3. To give a proof, denote by the sequence of convergents to Cahen's constant (that means, ). But now it follows from and the recursi
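The series definitions above lost their formulas in extraction; under the standard definitions (Sylvester's sequence s₀ = 2, s_{k+1} = s_k² − s_k + 1, and Cahen's constant Σ (−1)^k / (s_k − 1)), the pairing of consecutive terms into the even-position expansion can be verified exactly:

from fractions import Fraction

def sylvester(n):
    # 2, 3, 7, 43, 1807, ... with s_{k+1} = s_k**2 - s_k + 1.
    seq, s = [], 2
    for _ in range(n):
        seq.append(s)
        s = s * s - s + 1
    return seq

s = sylvester(8)
alternating = sum(Fraction((-1) ** k, s[k] - 1) for k in range(8))
even_terms = sum(Fraction(1, s[k]) for k in range(0, 8, 2))
assert alternating == even_terms     # pairing: 1/(s_k - 1) - 1/(s_{k+1} - 1) = 1/s_k
print(float(alternating))            # ~0.6434105463, Cahen's constant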
https://en.wikipedia.org/wiki/Category%20of%20finite-dimensional%20Hilbert%20spaces
In mathematics, the category FdHilb has all finite-dimensional Hilbert spaces for objects and the linear transformations between them as morphisms. Whereas the theory described by the normal category of Hilbert spaces, Hilb, is ordinary quantum mechanics, the corresponding theory on finite-dimensional Hilbert spaces is called fdQM. Properties This category is monoidal, possesses finite biproducts, and is dagger compact. According to a theorem of Selinger, the category of finite-dimensional Hilbert spaces is complete for dagger compact categories: an equation between morphisms expressible in the language of dagger compact categories holds in every dagger compact category if and only if it holds in FdHilb. Many ideas from Hilbert spaces, such as the no-cloning theorem, hold in general for dagger compact categories. See that article for additional details.
https://en.wikipedia.org/wiki/Uniformly%20Cauchy%20sequence
In mathematics, a sequence of functions from a set S to a metric space M is said to be uniformly Cauchy if: For all ε > 0, there exists N such that for all x ∈ S: d(fn(x), fm(x)) < ε whenever m, n ≥ N. Another way of saying this is that d(fn, fm) → 0 as m, n → ∞, where the uniform distance between two functions is defined by d(f, g) = sup{ d(f(x), g(x)) : x ∈ S }. Convergence criteria A sequence of functions {fn} from S to M is pointwise Cauchy if, for each x ∈ S, the sequence {fn(x)} is a Cauchy sequence in M. This is a weaker condition than being uniformly Cauchy. In general a sequence can be pointwise Cauchy and not pointwise convergent, or it can be uniformly Cauchy and not uniformly convergent. Nevertheless, if the metric space M is complete, then any pointwise Cauchy sequence converges pointwise to a function from S to M. Similarly, any uniformly Cauchy sequence will tend uniformly to such a function. The uniform Cauchy property is frequently used when S is not just a set, but a topological space, and M is a complete metric space. The following theorem holds: Let S be a topological space and M a complete metric space. Then any uniformly Cauchy sequence of continuous functions fn : S → M tends uniformly to a unique continuous function f : S → M. Generalization to uniform spaces A sequence of functions from a set S to a uniform space U is said to be uniformly Cauchy if: For any entourage E, there exists N such that, for every x ∈ S, (fn(x), fm(x)) ∈ E whenever m, n ≥ N. See also Modes of convergence (annotated index)
https://en.wikipedia.org/wiki/Extended%20precision
Extended precision refers to floating-point number formats that provide greater precision than the basic floating-point formats. Extended precision formats support a basic format by minimizing roundoff and overflow errors in intermediate values of expressions on the base format. In contrast to extended precision, arbitrary-precision arithmetic refers to implementations of much larger numeric types (with a storage count that usually is not a power of two) using special software (or, rarely, hardware). Extended precision implementations There is a long history of extended floating-point formats reaching back nearly to the middle of the last century. Various manufacturers have used different formats for extended precision for different machines. In many cases the format of the extended precision is not quite the same as a scale-up of the ordinary single- and double-precision formats it is meant to extend. In a few cases the implementation was merely a software-based change in the floating-point data format, but in most cases extended precision was implemented in hardware, either built into the central processor itself, or more often, built into the hardware of an optional, attached processor called a "floating-point unit" (FPU) or "floating-point processor" (FPP), accessible to the CPU as a fast input / output device. IBM extended precision formats The IBM 1130, sold in 1965, offered two floating-point formats: A 32-bit "standard precision" format and a 40-bit "extended precision" format. Standard precision format contains a 24-bit two's complement significand while extended precision utilizes a 32-bit two's complement significand. The latter format makes full use of the CPU's 32-bit integer operations. The characteristic in both formats is an 8-bit field containing the power of two biased by 128. Floating-point arithmetic operations are performed by software, and double precision is not supported at all. The extended format occupies three 16-bit words, with the ext
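As an illustration of the IBM 1130 description above, here is a hypothetical Python decoder for the 32-bit standard precision format. It assumes a particular layout, the 24-bit two's complement significand in the high bits interpreted as a fraction and the 8-bit characteristic (power of two biased by 128) in the low byte, which is consistent with the text but not spelled out there, so treat it as a sketch rather than a faithful reimplementation:

def decode_ibm1130_standard(word32):
    # Hypothetical decoder for the 32-bit standard precision format described
    # above. ASSUMED layout: 24-bit two's complement significand (read as a
    # fraction) in the high bits, 8-bit characteristic (power of two biased
    # by 128) in the low byte.
    characteristic = word32 & 0xFF
    significand = word32 >> 8              # high 24 bits
    if significand & 0x800000:             # sign bit of the 24-bit field
        significand -= 1 << 24             # two's complement value
    return (significand / float(1 << 23)) * 2.0 ** (characteristic - 128)

# Example: under these assumptions, +0.5 * 2**0 is encoded with
# significand 0x400000 and characteristic 128.
print(decode_ibm1130_standard((0x400000 << 8) | 128))  # 0.5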
https://en.wikipedia.org/wiki/Prostatic%20urethra
The prostatic urethra, the widest and most dilatable part of the urethral canal, is about 3 cm long. It runs almost vertically through the prostate from its base to its apex, lying nearer its anterior than its posterior surface; the form of the canal is spindle-shaped, being wider in the middle than at either extremity, and narrowest below, where it joins the membranous portion. A transverse section of the canal as it lies in the prostate is horseshoe-shaped, with the convexity directed forward. On ultrasound, the keyhole sign is associated with a dilated bladder and prostatic urethra. Additional images
https://en.wikipedia.org/wiki/Spongy%20urethra
The spongy urethra (cavernous portion of urethra, penile urethra) is the longest part of the male urethra, and is contained in the corpus spongiosum of the penis. It is about 15 cm long, and extends from the termination of the membranous portion to the external urethral orifice. Commencing below the inferior fascia of the urogenital diaphragm it passes forward and upward to the front of the pubic symphysis; and then, in the flaccid condition of the penis, it bends downward and forward. It is narrow, and of uniform size in the body of the penis, measuring about 6 mm in diameter; it is dilated behind, within the bulb, and again anteriorly within the glans penis, where it forms the fossa navicularis urethrae. The spongy urethra runs along the length of the penis on its ventral (underneath) surface. It is about 15–16 cm in length, and travels through the corpus spongiosum. The ducts from the urethral gland (gland of Littré) enter here. The openings of the bulbourethral glands are also found here. Some textbooks will subdivide the spongy urethra into two parts, the bulbous and pendulous urethra. The urethral lumen (interior) runs effectively parallel to the penis, except at the narrowest point, the external urethral meatus, where it is vertical. This produces a spiral stream of urine and has the effect of cleaning the external urethral meatus. The lack of an equivalent mechanism in the female urethra partly explains why urinary tract infections occur so much more frequently in females. Epithelium Pseudostratified columnar – proximally, Stratified squamous – distally Additional images
https://en.wikipedia.org/wiki/Membranous%20urethra
The membranous urethra or intermediate part of male urethra is the shortest, least dilatable, and, with the exception of the urinary meatus, the narrowest part of the urethra. It extends downward and forward, with a slight anterior concavity, between the apex of the prostate and the bulb of the urethra, perforating the urogenital diaphragm about 2.5 cm below and behind the pubic symphysis. The hinder part of the urethral bulb lies in apposition with the inferior fascia of the urogenital diaphragm, but its upper portion diverges somewhat from this fascia: the anterior wall of the membranous urethra is thus prolonged for a short distance in front of the urogenital diaphragm; it measures about 2 cm in length, while the posterior wall which is between the two fasciæ of the diaphragm is only 1.25 cm long. The anatomical variation in membranous urethral length measurements in men have been reported to range from 0.5 cm to 3.4 cm. The membranous portion of the urethra is surrounded by the fibers of the Sphincter urethrae membranaceae. In front of it the deep dorsal vein of the penis enters the pelvis between the transverse ligament of the pelvis and the arcuate pubic ligament; on either side near its termination are the bulbourethral glands. Additional images
https://en.wikipedia.org/wiki/Dorsal%20veins%20of%20the%20penis
In human male anatomy, the dorsal veins of the penis are blood vessels that drain the shaft (corpora cavernosa, corpus spongiosum), the skin and the glans of the human penis. They are typically located in the midline on the dorsal aspect of the penis and they comprise the superficial dorsal vein of the penis, that lies in the subcutaneous tissue of the shaft, and the deep dorsal vein of the penis, that lies beneath the deep fascia. Superficial dorsal vein The superficial dorsal vein of the penis belongs to the superficial drainage system. It is located within the superficial dartos fascia, a continuation of the Colles fascia, on the dorsal surface of the penis and, in contrast to the deep dorsal vein, it lies outside the deeper Buck's fascia. It is formed by smaller superficial veins that merge on the dorsolateral aspect of the penis. It drains the prepuce and the skin of the shaft, and, running backward in the subcutaneous tissue, inclines to the right or left, and opens into the corresponding superficial external pudendal vein, a tributary of the great saphenous vein. Deep dorsal vein The deep dorsal vein of the penis belongs to the intermediate drainage system of the penis, along with the circumflex veins and their emissary veins. It runs directly beneath the superficial dorsal vein, with a layer of connective tissue, the deep fascia of the penis, separating the two vessels. It receives oxygen-depleted blood from the glans and corpora cavernosa and courses backward in the middle line accompanied by the dorsal arteries on each side. Near the root of the penis it passes between the two parts of the suspensory ligament and then through an aperture between the arcuate pubic ligament and the transverse ligament of the pelvis, and divides into two branches, which enter the vesical and prostatic plexuses. From the prostatic plexus, the deoxygenated blood travels through the venal system until it arrives at the center of the circulatory system for resupply with oxygen
https://en.wikipedia.org/wiki/Facet%20%28geometry%29
In geometry, a facet is a feature of a polyhedron, polytope, or related geometric structure, generally of dimension one less than the structure itself. More specifically: In three-dimensional geometry, a facet of a polyhedron is any polygon whose corners are vertices of the polyhedron, and is not a face. To facet a polyhedron is to find and join such facets to form the faces of a new polyhedron; this is the reciprocal process to stellation and may also be applied to higher-dimensional polytopes. In polyhedral combinatorics and in the general theory of polytopes, a face that has dimension n − 1 (an (n − 1)-face or hyperface) is also called a facet. A facet of a simplicial complex is a maximal simplex, that is a simplex that is not a face of another simplex of the complex. For (boundary complexes of) simplicial polytopes this coincides with the meaning from polyhedral combinatorics.
https://en.wikipedia.org/wiki/IUP%20%28software%29
The IUP Portable User Interface is a computer software development kit that provides a portable, scriptable toolkit to build graphical user interfaces (GUIs) using the programming languages C, Perl, Lua, Nim and Zig, among others. This allows rapid, zero-compile prototyping and refinement of deployable GUI applications. IUP's purpose is to allow programs user interface to run in different systems in unmodified form. It provides this ability by binding Lua with its C/C++ code, or simply writing C to the application programming interface (API). It handles user interface elements by using native controls provided by native APIs, such as Windows API in Windows, GTK+ in Linux, and Motif-LessTif in older Unices. It also provides some custom developed controls using graphics APIs such as CD - Canvas Draw or OpenGL. Features IUP's distinguishing features include: ANSI C API, one of the few plain C-capable toolkits, Single API for Windows or Linux, Built in support for Lua scripts calling IUP functions (controlled by Lua script), Removal of the restriction of class/instance object types, but retaining the prototype Lua-style hierarchy of inheritance. An abstract layout model, in which sizes and positions are calculated from horizontal and vertical containers, rather than explicit X and Y coordinates. Coordinate-based layout is also supported with a third container type. Small API, on the order of about 100 functions. Use of an event loop-callback mechanism. This main loop can be called inside Lua. Interface elements are created before they are mapped to the native elements. This is the reverse of the usual situation for assembling GUI elements. Available as source or pre-built static or dynamic libraries for a wide variety of compilers, including turnkey example source. The Lua scripting is done by binding Lua and IUPLua in (at least) a small C program called a host application. This program creates a Lua state, passes the Lua state to IUPLua for initializatio
https://en.wikipedia.org/wiki/Immortal%20DNA%20strand%20hypothesis
The immortal DNA strand hypothesis was proposed in 1975 by John Cairns as a mechanism for adult stem cells to minimize mutations in their genomes. This hypothesis proposes that instead of segregating their DNA during mitosis in a random manner, adult stem cells divide their DNA asymmetrically, and retain a distinct template set of DNA strands (parental strands) in each division. By retaining the same set of template DNA strands, adult stem cells would pass mutations arising from errors in DNA replication on to non-stem cell daughters that soon terminally differentiate (end mitotic divisions and become a functional cell). Passing on these replication errors would allow adult stem cells to reduce their rate of accumulation of mutations that could lead to serious genetic disorders such as cancer. Although evidence for this mechanism exists, whether it is a mechanism acting in adult stem cells in vivo is still controversial. Methods Two main assays are used to detect immortal DNA strand segregation: label-retention and label-release pulse/chase assays. In the label-retention assay, the goal is to mark 'immortal' or parental DNA strands with a DNA label such as tritiated thymidine or bromodeoxyuridine (BrdU). These types of DNA labels will incorporate into the newly synthesized DNA of dividing cells during S phase. A pulse of DNA label is given to adult stem cells under conditions where they have not yet delineated an immortal DNA strand. During these conditions, the adult stem cells are either dividing symmetrically (thus with each division a new 'immortal' strand is determined and in at least one of the stem cells the immortal DNA strand will be marked with DNA label), or the adult stem cells have not yet been determined (thus their precursors are dividing symmetrically, and once they differentiate into adult stem cells and choose an 'immortal' strand, the 'immortal strand' will already have been marked). Experimentally, adult stem cells are undergoing symmetric d
https://en.wikipedia.org/wiki/Wiggers%20diagram
A Wiggers diagram, named after its developer, Carl Wiggers, is a unique diagram that has been used in teaching cardiac physiology for more than a century. In the Wiggers diagram, the X-axis is used to plot time subdivided into the cardiac phases, while the Y-axis typically contains the following on a single grid: Blood pressure Aortic pressure Ventricular pressure Atrial pressure Ventricular volume Electrocardiogram Arterial flow (optional) Heart sounds (optional) The Wiggers diagram clearly illustrates the coordinated variation of these values as the heart beats, assisting one in understanding the entire cardiac cycle. Events Note that during isovolumetric/isovolumic contraction and relaxation, all the heart valves are closed; at no time are all the heart valves open. *S3 and S4 heart sounds are associated with pathologies and are not routinely heard. Additional images See also Pressure volume diagram
https://en.wikipedia.org/wiki/Benzethonium%20chloride
Benzethonium chloride, also known as hyamine is a synthetic quaternary ammonium salt. This compound is an odorless white solid, soluble in water. It has surfactant, antiseptic, and anti-infective properties, and it is used as a topical antimicrobial agent in first aid antiseptics. It is also found in cosmetics and toiletries such as soap, mouthwashes, anti-itch ointments, and antibacterial moist towelettes. Benzethonium chloride is also used in the food industry as a hard surface disinfectant. Uses Antimicrobial Benzethonium chloride exhibits a broad spectrum of microbiocidal activity against bacteria, fungi, mold and viruses. Independent testing shows that benzethonium chloride is highly effective against such pathogens as methicillin-resistant Staphylococcus aureus, Salmonella, Escherichia coli, Clostridium difficile, hepatitis B virus, hepatitis C virus, herpes simplex virus (HSV), human immunodeficiency virus (HIV), respiratory syncytial virus (RSV), and norovirus. The US Food and Drug Administration (FDA) specifies that the safe and effective concentrations for benzethonium chloride are 0.1-0.2% in first aid products. Aqueous solutions of benzethonium chloride are not absorbed through the skin. It is not approved in the US and Europe for use as a food additive. Being a quaternary ammonium salt, it is more toxic than negatively charged surfactants. However, in a two-year study on rats, there was no evidence of carcinogenic activity. It is available under trade names Salanine, BZT, Diapp, Quatrachlor, Polymine D, Phemithyn, Antiseptol, Disilyn, Phermerol, and others. It is also found in several grapefruit seed extract preparations and can be used as a preservative, as in the anaesthetics ketamine and alfaxalone. Other uses In addition to its highly effective antimicrobial activity, benzethonium chloride contains a positively charged nitrogen atom covalently bonded to four carbon atoms. This positive charge attracts it to the skin and hair. This contri
https://en.wikipedia.org/wiki/Populus%20trichocarpa
Populus trichocarpa, the black cottonwood, western balsam-poplar or California poplar, is a deciduous broadleaf tree species native to western North America. It is used for timber, and is notable as a model organism in plant biology. Description It is a large tree, growing to a height of and a trunk diameter over . It ranks 3rd in poplar species in the American Forests Champion Tree Registry. It is normally fairly short-lived, but some trees may live up to 400 years. A cottonwood in Willamette Mission State Park near Salem, Oregon, holds the national and world records. Last measured in April 2008, this black cottonwood was found to be standing at tall, around, with 527 points. The bark is grey and covered with lenticels, becoming thick and deeply fissured on old trees. The bark can become hard enough to cause sparks when cut with a chainsaw. The stem is grey in the older parts and light brown in younger parts. The crown is usually roughly conical and quite dense. In large trees, the lower branches droop downwards. Spur shoots are common. The wood has a light coloring and a straight grain. The leaves are usually long with a glossy, dark green upper side and glaucous, light grey-green underside; larger leaves may be up to long and may be produced on stump sprouts and very vigorous young trees. The leaves are alternate, elliptical with a crenate margin and an acute tip, and reticulate venation. The petiole is reddish. The buds are conical, long, narrow, and sticky, with a strong balsam scent in spring when they open. P. trichocarpa has an extensive and aggressive root system, which can invade and damage drainage systems. Sometimes, the roots can even damage the foundations of buildings by drying out the soil. Reproduction Flowering and fruiting P. trichocarpa is normally dioecious; male and female catkins are borne on separate trees. The species reaches flowering age around 10 years. Flowers may appear in early March to late May in Washington and Oregon,
https://en.wikipedia.org/wiki/Tacit%20collusion
Tacit collusion is collusion between competitors who do not explicitly exchange information but nevertheless reach an understanding about coordinating their conduct. There are two types of tacit collusion – concerted action and conscious parallelism. In concerted action, also known as concerted activity, competitors exchange some information without reaching any explicit agreement, while conscious parallelism implies no communication. In both types of tacit collusion, competitors agree to play a certain strategy without explicitly saying so. It is also referred to as oligopolistic price coordination or tacit parallelism. A dataset of gasoline prices of BP, Caltex, Woolworths, Coles, and Gull from Perth gathered in the years 2001 to 2015 was used to show by statistical analysis the tacit collusion between these retailers. BP emerged as a price leader and influenced the behavior of the competitors. As a result, the timing of price jumps became coordinated and the margins started to grow in 2010. Conscious parallelism In competition law, some sources use conscious parallelism as a synonym for tacit collusion in order to describe pricing strategies among competitors in an oligopoly that occur without an actual agreement, or at least without any evidence of an actual agreement, between the players. As a result, one competitor will take the lead in raising or lowering prices. The others will then follow suit, raising or lowering their prices by the same amount, with the understanding that greater profits result. This practice can be harmful to consumers who, if the market power of the firm is used, can be forced to pay monopoly prices for goods that should be selling for only a little more than the cost of production. Nevertheless, it is very hard to prosecute because it may occur without any collusion between the competitors. Courts have held that no violation of the antitrust laws occurs where firms independently raise or lower prices, but that a violation can be shown when plus fac
https://en.wikipedia.org/wiki/Chipcom
Chipcom was an early pioneer in the Ethernet hub industry. Their products allowed local networks to be aggregated in a single place instead of being distributed across the length of a single coaxial cable. They competed with now-gone companies such as Cabletron Systems, SynOptics, Ungermann-Bass, David Systems, Digital Equipment Corporation, and American Photonics, all of which were early entrants in the "LAN Hub" industry. Chipcom also was involved in Token Ring, FDDI, and Asynchronous Transfer Mode (ATM). Some of Chipcom's innovations at the time are well-documented in the trade press of the era, such as Computerworld. In 1995, Chipcom was acquired by 3Com for $700 million in stock, although Cabletron was also interested in buying the company. 3Com was acquired by Hewlett-Packard in 2011. The firm's CEO at the time was John Robert Held.
https://en.wikipedia.org/wiki/Optimal%20stopping
In mathematics, the theory of optimal stopping or early stopping is concerned with the problem of choosing a time to take a particular action, in order to maximise an expected reward or minimise an expected cost. Optimal stopping problems can be found in areas of statistics, economics, and mathematical finance (related to the pricing of American options). A key example of an optimal stopping problem is the secretary problem. Optimal stopping problems can often be written in the form of a Bellman equation, and are therefore often solved using dynamic programming. Definition Discrete time case Stopping rule problems are associated with two objects: A sequence of random variables , whose joint distribution is something assumed to be known A sequence of 'reward' functions which depend on the observed values of the random variables in 1: Given those objects, the problem is as follows: You are observing the sequence of random variables, and at each step , you can choose to either stop observing or continue If you stop observing at step , you will receive reward You want to choose a stopping rule to maximize your expected reward (or equivalently, minimize your expected loss) Continuous time case Consider a gain process defined on a filtered probability space and assume that is adapted to the filtration. The optimal stopping problem is to find the stopping time which maximizes the expected gain where is called the value function. Here can take value . A more specific formulation is as follows. We consider an adapted strong Markov process defined on a filtered probability space where denotes the probability measure where the stochastic process starts at . Given continuous functions , and , the optimal stopping problem is This is sometimes called the MLS (which stand for Mayer, Lagrange, and supremum, respectively) formulation. Solution methods There are generally two approaches to solving optimal stopping problems. When the underlying process
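To make the discrete-time formulation concrete, here is an illustrative Python sketch (not from the article) that solves a toy stopping problem by backward induction on the Bellman equation: observe up to n i.i.d. Uniform(0,1) draws, stop at most once, and collect the value of the draw you stop on. With t draws remaining, the value satisfies V_1 = 1/2 and V_t = E[max(X, V_{t−1})] = (1 + V_{t−1}²)/2, and the optimal rule is to stop when the current draw exceeds the value of continuing:

def uniform_stopping_values(horizon):
    # Backward induction (Bellman recursion) for a toy optimal stopping
    # problem: observe up to `horizon` i.i.d. Uniform(0,1) draws, stop at
    # most once, and receive the draw you stop on (the last draw must be
    # accepted). V[t] is the expected reward with t draws still available;
    # the optimal rule stops on a draw x as soon as x >= V[t - 1].
    V = [0.0] * (horizon + 1)
    V[1] = 0.5                        # expected value of the forced last draw
    for t in range(2, horizon + 1):
        v = V[t - 1]
        V[t] = (1.0 + v * v) / 2.0    # E[max(X, v)] for X ~ Uniform(0,1)
    return V

print(uniform_stopping_values(5)[1:])
# [0.5, 0.625, 0.6953125, 0.7417..., 0.7750...]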
https://en.wikipedia.org/wiki/Dailymotion
Dailymotion is a French video-sharing technology platform owned by Vivendi. North American launch partners included Vice Media, Bloomberg and Hearst Digital Media. It is among the earliest known platforms to support HD (720p) resolution video. Dailymotion is available worldwide in 183 languages and 43 localised versions featuring local home pages and local content. History In March 2005, Benjamin Bejbaum and Olivier Poitrey founded the website, pooling €6,000 (US $9,271) from six individuals to start it. In September 2006, Dailymotion raised funds in collaboration with Atlas Ventures and Partech International. They raised €7 million, which was considered to be the most funds raised in 2006 from the French Web 2.0. In 2007 Dailymotion created ASIC, together with other companies in the sector. Dailymotion supports a high-definition video resolution of 720p since February 2008, making it one of the earliest known HD video platforms. October 2009, the French government invested in Dailymotion through the Strategic Investment Fund. On 25 January 2011, Orange acquired a 49% stake in Dailymotion for €62 million, valuing the company at €120 million. On 10 January 2013, Orange bought the remaining 51% for €61 million. On or about 2 May 2013, the French government blocked Yahoo's acquisition of a majority stake in Dailymotion. On 25 February 2014, Orange revealed it was in discussions with Microsoft about a deal that could see Dailymotion extend into the US market. In an interview with a local television station in Barcelona Stéphane Richard, CEO of Orange, said there was "great hope" an agreement would be reached. Any deal would see Orange retain majority ownership of Dailymotion. Richard said his company was in talks with other potential partners as well with a view to expanding Dailymotion's international appeal, but said discussions with others were more in relation to content. In 2015, Vivendi purchased an 80% stake in Dailymotion from Orange S.A. Vivendi increas
https://en.wikipedia.org/wiki/Sandy%20Bridge
Sandy Bridge is the codename for Intel's 32 nm microarchitecture used in the second generation of the Intel Core processors (Core i7, i5, i3). The Sandy Bridge microarchitecture is the successor to Nehalem and Westmere microarchitecture. Intel demonstrated a Sandy Bridge processor in 2009, and released first products based on the architecture in January 2011 under the Core brand. Sandy Bridge is manufactured in the 32 nm process and has a soldered contact with the die and IHS (Integrated Heat Spreader), while Intel's subsequent generation Ivy Bridge uses a 22 nm die shrink and a TIM (Thermal Interface Material) between the die and the IHS. Technology Intel demonstrated a Sandy Bridge processor with A1 stepping at 2 GHz during the Intel Developer Forum in September 2009. Upgraded features from Nehalem include: CPU Intel Turbo Boost 2.0 32 KB data + 32 KB instruction L1 cache and 256 KB L2 cache per core Shared L3 cache which includes the processor graphics (LGA 1155) 64-byte cache line size New µOP cache, up to 1536-entry Improved 3 integer ALU, 2 vector ALU and 2 AGU per core Two load/store operations per CPU cycle for each memory channel Decoded micro-operation cache, and enlarged, optimized branch predictor Sandy Bridge retains the four branch predictors found in Nehalem: the branch target buffer (BTB), indirect branch target array, loop detector and renamed return stack buffer (RSB). Sandy Bridge has a single BTB that holds twice as many branch targets as the L1 and L2 BTBs in Nehalem. Improved performance for transcendental mathematics, AES encryption (AES instruction set), and SHA-1 hashing 256-bit/cycle ring bus interconnect between cores, graphics, cache and System Agent Domain Advanced Vector Extensions (AVX) 256-bit instruction set with wider vectors, new extensible syntax and rich functionality Up to 8 physical cores, or 16 logical cores through hyper-threading (From 6 core/12 thread) Integration of the GMCH (integrated graphics and memo
https://en.wikipedia.org/wiki/PPS.tv
PPS.tv (PPStream) is a Chinese peer-to-peer video streaming application. Since the target users are on the Chinese mainland, there is no official English version, and the vast majority of channels are from East Asia, mostly Mainland China, Japan, South Korea, Hong Kong, Taiwan, Malaysia, and Singapore. Programmes range from Chinese movies and Japanese anime to sports channels and popular American TV and films. It had an 8.9% market share in China in Q3 2010, placing it third, behind Youku and Tudou. In May 2013, the online video business of PPS.tv was purchased by Baidu for $370 million. After the acquisition, PPS.tv continued to operate as a sub-brand under iQIYI, Baidu's online video platform. Applications The nature of peer-to-peer serving means that each user of the system is also a server. The upload speed of standard home broadband connections is usually a fraction of the download speed, so several upload sources may be required by each additional peer. Additionally, on services with high contention ratios or poorly configured switches, large numbers of people attempting to use the service may slow all internet usage to unusable speeds. Acting as an upload server to the limit of one's upload bandwidth increases the round-trip time for webpage requests, making web browsing while using PPS.tv difficult. See also PPLive
https://en.wikipedia.org/wiki/Verdier%20duality
In mathematics, Verdier duality is a cohomological duality in algebraic topology that generalizes Poincaré duality for manifolds. Verdier duality was introduced in 1965 by as an analog for locally compact topological spaces of Alexander Grothendieck's theory of Poincaré duality in étale cohomology for schemes in algebraic geometry. It is thus (together with the said étale theory and for example Grothendieck's coherent duality) one instance of Grothendieck's six operations formalism. Verdier duality generalises the classical Poincaré duality of manifolds in two directions: it applies to continuous maps from one space to another (reducing to the classical case for the unique map from a manifold to a one-point space), and it applies to spaces that fail to be manifolds due to the presence of singularities. It is commonly encountered when studying constructible or perverse sheaves. Verdier duality Verdier duality states that (subject to suitable finiteness conditions discussed below) certain derived image functors for sheaves are actually adjoint functors. There are two versions. Global Verdier duality states that for a continuous map of locally compact Hausdorff spaces, the derived functor of the direct image with compact (or proper) supports has a right adjoint in the derived category of sheaves, in other words, for (complexes of) sheaves (of abelian groups) on and on we have Local Verdier duality states that in the derived category of sheaves on Y. It is important to note that the distinction between the global and local versions is that the former relates morphisms between complexes of sheaves in the derived categories, whereas the latter relates internal Hom-complexes and so can be evaluated locally. Taking global sections of both sides in the local statement gives the global Verdier duality. These results hold subject to the compactly supported direct image functor having finite cohomological dimension. This is the case if the there is a bound
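The displayed formulas did not survive extraction; restating the standard statements for a continuous map f : X → Y of locally compact Hausdorff spaces, with F a complex of sheaves on X and G a complex of sheaves on Y, global Verdier duality is the adjunction

\operatorname{Hom}_{D(Y)}(Rf_{!}\,\mathcal{F},\ \mathcal{G}) \;\cong\; \operatorname{Hom}_{D(X)}(\mathcal{F},\ f^{!}\,\mathcal{G}),

and local Verdier duality is the isomorphism

R\,\mathcal{H}om(Rf_{!}\,\mathcal{F},\ \mathcal{G}) \;\cong\; Rf_{*}\,R\,\mathcal{H}om(\mathcal{F},\ f^{!}\,\mathcal{G})

in the derived category of sheaves on Y; taking global sections of the latter recovers the former, as noted above.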
https://en.wikipedia.org/wiki/Four-dimensionalism
In philosophy, four-dimensionalism (also known as the doctrine of temporal parts) is the ontological position that an object's persistence through time is like its extension through space. Thus, an object that exists in time has temporal parts in the various subregions of the total region of time it occupies, just like an object that exists in a region of space has at least one part in every subregion of that space. Four-dimensionalists typically argue for treating time as analogous to space, usually leading them to endorse the doctrine of eternalism. This is a philosophical approach to the ontological nature of time, according to which all points in time are equally "real", as opposed to the presentist idea that only the present is real. As some eternalists argue by analogy, just as all spatially distant objects and events are as real as those close to us, temporally distant objects and events are as real as those currently present to us. Perdurantism—or perdurance theory—is a closely related philosophical theory of persistence and identity, according to which an individual has distinct temporal parts throughout its existence, and the persisting object is the sum or set of all of its temporal parts. This sum or set is colloquially referred to as a "space-time worm", which has earned the perdurantist view the moniker of "the worm view". While all perdurantists are plausibly considered four dimensionalists, at least one variety of four dimensionalism does not count as perdurantist in nature. This variety, known as exdurantism or the "stage view", is closely akin to the perdurantist position. They also countenance a view of persisting objects which have temporal parts that succeed one another through time. However, instead of identifying the persisting object as the entire set or sum of its temporal parts, the exdurantist argues that any object under discussion is a single stage (time-slice, temporal part, etc.), and that the other stages or parts which compose the
https://en.wikipedia.org/wiki/Intel%20Paragon
The Intel Paragon is a discontinued series of massively parallel supercomputers that was produced by Intel in the 1990s. The Paragon XP/S is a productized version of the experimental Touchstone Delta system that was built at Caltech, launched in 1992. The Paragon superseded Intel's earlier iPSC/860 system, to which it is closely related. The Paragon series is based on the Intel i860 RISC microprocessor. Up to 2048 (later, up to 4096) i860s are connected in a 2D grid. In 1993, an entry-level Paragon XP/E variant was announced with up to 32 compute nodes. The system architecture is partitioned, with the majority of the system comprising diskless compute nodes and a small number of I/O nodes and interactive service nodes. Since the bulk of the nodes have no permanent storage, it is possible to "Red/Black switch" the compute partition from classified to unclassified by disconnecting one set of I/O nodes with classified disks and then connecting an unclassified I/O partition. Intel intended the Paragon to run the OSF/1 AD distributed operating system on all processors. However, this was found to be inefficient in practice, and a light-weight kernel called SUNMOS was developed at Sandia National Laboratories to replace OSF/1 AD on the Paragon's compute processors. Oak Ridge National Laboratory operated a Paragon XP/S 150 MP, one of the largest Paragon systems, for several years. The prototype for the Intel Paragon was the Intel Delta, built by Intel with funding from DARPA and installed operationally at the California Institute of Technology in the late 1980s with funding from the National Science Foundation. The Delta was one of the few computers to sit significantly above the curve of Moore's Law. Compute nodes The compute node boards were produced in two variants: the GP16 with 16 MB of memory and two CPUs, and the MP16 with three CPUs. Each node has a B-NIC interface that connects to the mesh routers on the backplane. The compute nodes are diskless and per
https://en.wikipedia.org/wiki/Warped%20linear%20predictive%20coding
Warped linear predictive coding (warped LPC or WLPC) is a variant of linear predictive coding in which the spectral representation of the system is modified, for example by replacing the unit delays used in an LPC implementation with first-order all-pass filters. This can have advantages in reducing the bitrate required for a given level of perceived audio quality/intelligibility, especially in wideband audio coding. History Warped LPC was first proposed in 1980 by Hans Werner Strube.
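The substitution alluded to above is usually written, in LaTeX notation, as

z^{-1} \;\longrightarrow\; D(z) \;=\; \frac{z^{-1} - \lambda}{1 - \lambda\, z^{-1}}, \qquad -1 < \lambda < 1,

where each unit delay of the predictor is replaced by the first-order all-pass section D(z) and the warping parameter λ controls how the frequency axis is stretched: positive values allocate more modelling resolution to low frequencies, and λ = 0 recovers ordinary LPC. This is the standard textbook form of the construction, supplied here for illustration rather than quoted from the article.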
https://en.wikipedia.org/wiki/Promegakaryocyte
A promegakaryocyte is a precursor cell for a megakaryocyte. It arises from a megakaryoblast and develops into a megakaryocyte, pieces of which will eventually break off and become platelets. The developmental stages of the megakaryocyte are: CFU-Me (pluripotential hemopoietic stem cell or hemocytoblast) → megakaryoblast → promegakaryocyte → megakaryocyte. When the megakaryoblast matures into the promegakaryocyte, it undergoes endoreduplication and forms a promegakaryocyte which has multiple nuclei, azurophilic granules, and a basophilic cytoplasm. The promegakaryocyte has rotary motion, but no forward migration. Promegakaryocytes and other precursor cells to megakaryocytes arise from pluripotential hematopoietic progenitors. The megakaryoblast is then produced, followed by the promegakaryocyte, the granular megakaryocyte, and then the mature megakaryocyte. When it is in its promegakaryocyte stage, it is considered an undifferentiated cell. Megakaryocyte pieces will eventually break off and begin circulating the body as platelets. Platelets are very important because of their role in blood clotting, immune response, and the formation of new blood vessels.
https://en.wikipedia.org/wiki/Promonocyte
A promonocyte (or premonocyte) is a cell arising from a monoblast and developing into a monocyte. See also Pluripotential hemopoietic stem cell Additional images External links "Monocyte Development" at tulane.edu Slide at marist.edu - "Bone marrow smear" "Maturation Sequence" at hematologyatlas.com (Promonocyte is in seventh row.)
https://en.wikipedia.org/wiki/Cryo
Cryo- is from the Ancient Greek κρύος (krúos, “ice, icy cold, chill, frost”). Uses of the prefix Cryo- include: Physics and geology Cryogenics, the study of the production and behaviour of materials at very low temperatures and the study of producing extremely low temperatures Cryoelectronics, the study of superconductivity under cryogenic conditions and its applications Cryosphere, those portions of Earth's surface where water ice naturally occurs Cryotron, a switch that uses superconductivity Cryovolcano, a theoretical type of volcano that erupts volatiles instead of molten rock Biology and medicine Cryobiology, the branch of biology that studies the effects of low temperatures on living things Cryonics, the low-temperature preservation of people who cannot be sustained by contemporary medicine Cryoprecipitate, a blood-derived protein product used to treat some bleeding disorders Cryotherapy, medical treatment using cold Cryoablation, tissue removal using cold Cryosurgery, surgery using cold Cryo-electron microscopy (cryoEM), a technique that fires beams of electrons at proteins that have been frozen in solution, to deduce the biomolecules’ structure Other uses Cryo Interactive, a video game company Cryos, a planet in the video game Darkspore See also Kryo, a brand of CPUs by Qualcomm External links Cryogenics Cryobiology Cryonics Superconductivity
https://en.wikipedia.org/wiki/Left%20anterior%20descending%20artery
The left anterior descending artery (also LAD, anterior interventricular branch of left coronary artery, or anterior descending branch) is a branch of the left coronary artery. It supplies the anterior portion of the left ventricle. It provides about half of the arterial supply to the left ventricle and is thus considered the most important vessel supplying the left ventricle. Blockage of this artery is often called the widow-maker infarction due to a high risk of death. Structure Course It first passes at posterior to the pulmonary artery, then passes anteriorward between that pulmonary artery and the left atrium to reach the anterior interventricular sulcus, along which it descends to the notch of cardiac apex.In 78% of cases, it reaches the apex of the heart. Although rare, multiple anomalous courses of the LAD have been described. These include the origin of the artery from the right aortic sinus. Branches The LAD gives off two types of branches: septals and diagonals. Septals originate from the LAD at 90 degrees to the surface of the heart, perforating and supplying the anterior 2/3 of the interventricular septum. Diagonals run along the surface of the heart and supply the lateral wall of the left ventricle and the anterolateral papillary muscle. Segments Proximal: from LAD origin to, and including, the origin of the first septal branch (some definitions say to first diagonal, or to whichever comes first) Middle: from proximal segment to halfway of remaining distance to apex. A more technical definition is from the proximal segment to the point where the LAD forms an angle, as seen from a right anterior oblique view on angiography, which is often close to the origin of the second diagonal branch. Distal: from middle segment to apex, or in some cases beyond. Function The artery supplies the anterior region of the left ventricle, including: the anterolateral myocardium, apex, anterior interventricular septum, and anterolateral papillary muscle. The
https://en.wikipedia.org/wiki/Theta%20characteristic
In mathematics, a theta characteristic of a non-singular algebraic curve C is a divisor class Θ such that 2Θ is the canonical class. In terms of holomorphic line bundles L on a connected compact Riemann surface, it is therefore L such that L2 is the canonical bundle, here also equivalently the holomorphic cotangent bundle. In terms of algebraic geometry, the equivalent definition is as an invertible sheaf, which squares to the sheaf of differentials of the first kind. Theta characteristics were introduced by History and genus 1 The importance of this concept was realised first in the analytic theory of theta functions, and geometrically in the theory of bitangents. In the analytic theory, there are four fundamental theta functions in the theory of Jacobian elliptic functions. Their labels are in effect the theta characteristics of an elliptic curve. For that case, the canonical class is trivial (zero in the divisor class group) and so the theta characteristics of an elliptic curve E over the complex numbers are seen to be in 1-1 correspondence with the four points P on E with 2P = 0; this is counting of the solutions is clear from the group structure, a product of two circle groups, when E is treated as a complex torus. Higher genus For C of genus 0 there is one such divisor class, namely the class of -P, where P is any point on the curve. In case of higher genus g, assuming the field over which C is defined does not have characteristic 2, the theta characteristics can be counted as 22g in number if the base field is algebraically closed. This comes about because the solutions of the equation on the divisor class level will form a single coset of the solutions of 2D = 0. In other words, with K the canonical class and Θ any given solution of 2Θ = K, any other solution will be of form Θ + D. This reduces counting the theta characteristics to finding the 2-rank of the Jacobian variety J(C) of C. In the complex case, again, the result follows since J(C) is
https://en.wikipedia.org/wiki/Carbon%20detonation
Carbon detonation or carbon deflagration is the violent reignition of thermonuclear fusion in a white dwarf star that was previously slowly cooling. It involves a runaway thermonuclear process which spreads through the white dwarf in a matter of seconds, producing a type Ia supernova which releases an immense amount of energy as the star is blown apart. The carbon detonation/deflagration process leads to a supernova by a different route than the better known type II (core-collapse) supernova (the type II is caused by the cataclysmic explosion of the outer layers of a massive star as its core implodes). White dwarf density and mass increase A white dwarf is the remnant of a small to medium size star (the Sun is an example of these). At the end of its life, the star has burned its hydrogen and helium fuel, and thermonuclear fusion processes cease. The star does not have enough mass to either burn much heavier elements, or to implode into a neutron star or type II supernova as a larger star can, from the force of its own gravity, so it gradually shrinks and becomes very dense as it cools, glowing white and then red, for a period many times longer than the present age of the Universe. Occasionally, a white dwarf gains mass from another source – for example, a binary star companion that is close enough for the dwarf star to siphon sufficient amounts of matter onto itself; or from a collision with other stars, the siphoned matter having been expelled during the process of the companion's own late stage stellar evolution. If the white dwarf gains enough matter, its internal pressure and temperature will rise enough for carbon to begin fusing in its core. Carbon detonation generally occurs at the point when the accreted matter pushes the white dwarf's mass close to the Chandrasekhar limit of roughly 1.4 solar masses, the mass at which gravity can overcome the electron degeneracy pressure that prevents it from collapsing during its lifetime. This also happens when two whit
https://en.wikipedia.org/wiki/Butyrate%20esterase
α-Naphthyl butyrate esterase, also referred to as naphthyl butyrate esterase or butyrate esterase, is a histological stain specific for white blood cells of the monocytic proliferation line. It is used in the diagnosis of leukemia when staining touch preparation type slides of bone marrow. It is instrumental in the diagnosis of monocytic leukemias and the myelomonocytic variant of acute myelocytic leukemia.
https://en.wikipedia.org/wiki/Driven%20right%20leg%20circuit
A Driven Right Leg circuit or DRL circuit, also known as Right Leg Driving technique, is an electric circuit that is often added to biological signal amplifiers to reduce common-mode interference. Biological signal amplifiers such as ECG (electrocardiogram), EEG (electroencephalogram) or EMG circuits measure very small electrical signals emitted by the body, often as small as several microvolts (millionths of a volt). However, the patient's body can also act as an antenna which picks up electromagnetic interference, especially 50/60 Hz noise from electrical power lines. This interference can obscure the biological signals, making them very hard to measure. Right leg driver circuitry is used to eliminate interference noise by actively cancelling the interference. Other methods of noise control include: Faraday cage Twisting Wires High Gain Instrumentation Amplifier Filtering Further reading J.G. Webster, "Medical Instrumentation", 3rd ed, New York: John Wiley & Sons, 1998. B. B. Winter and J. G. Webster, "Driven-right-leg circuit design," IEEE Trans. Biomed. Eng., vol. BME-30, no. 1, pp. 62–66, Jan. 1983. "Improving Common-Mode Rejection Using the Right-Leg Drive Amplifier" by Texas Instruments.
https://en.wikipedia.org/wiki/All-interval%20tetrachord
An all-interval tetrachord is a tetrachord, a collection of four pitch classes, containing all six interval classes. There are only two possible all-interval tetrachords (to within inversion), when expressed in prime form. In set theory notation, these are [0,1,4,6] (4-Z15) and [0,1,3,7] (4-Z29). Their inversions are [0,2,5,6] (4-Z15b) and [0,4,6,7] (4-Z29b). The interval vector for all all-interval tetrachords is [1,1,1,1,1,1]. Table of interval classes as relating to all-interval tetrachords In the examples below, the tetrachords [0,1,4,6] and [0,1,3,7] are built on E. Use in modern music The unique qualities of the all-interval tetrachord have made it very popular in 20th-century music. Composers including Elliott Carter (First String Quartet) and George Perle used it extensively. See also All-interval twelve-tone row All-trichord hexachord Perfect ruler Serialism Trichord
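A quick illustrative check (not from the article) that both prime forms really contain every interval class can be written in a few lines of Python:

from itertools import combinations

def interval_vector(pcs):
    # Interval-class vector: counts of interval classes 1..6 over all
    # unordered pairs of pitch classes, reduced mod 12.
    vector = [0] * 6
    for a, b in combinations(pcs, 2):
        step = abs(a - b) % 12
        ic = min(step, 12 - step)      # fold into interval class 1..6
        vector[ic - 1] += 1
    return vector

print(interval_vector([0, 1, 4, 6]))   # [1, 1, 1, 1, 1, 1]
print(interval_vector([0, 1, 3, 7]))   # [1, 1, 1, 1, 1, 1]

Both calls print [1, 1, 1, 1, 1, 1], the all-interval vector quoted above.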