https://en.wikipedia.org/wiki/Artery%20to%20the%20ductus%20deferens
|
The artery to the ductus deferens (deferential artery) is an artery in males that provides blood to the ductus deferens.
Anatomy
Origin
The artery arises from the superior vesical artery (usually), or from the inferior vesical artery.
Course, anastomoses, and distribution
It accompanies the ductus deferens into the testis, where it anastomoses with the testicular artery; in this way it also supplies blood to the testis and epididymis. A small branch also supplies the ureter.
See also
Spermatic cord
|
https://en.wikipedia.org/wiki/Autonym%20%28botany%29
|
In botanical nomenclature, autonyms are automatically created names, as regulated by the International Code of Nomenclature for algae, fungi, and plants. They are created for certain subdivisions of genera and species, namely those that include the type of the genus or species. An autonym may not even be mentioned in the publication that creates it as a side-effect. Autonyms "repeat unaltered" the genus name or species epithet of the taxon being subdivided, and no other name for that same subdivision is validly published (Article 22.2). For example, Rubus subgenus Eubatus is not validly published, and the subgenus is known as Rubus subgen. Rubus.
Autonyms are cited without an author. The publication date of the autonym is taken to be the same as that of the subdivision(s) that automatically established it, with some special provisions: notably, the autonym is considered to have priority over other names of the same rank established at the same time (Article 11.6).
Articles 6.8, 22.1-3 and 26.1-3 relate to establishing autonyms.
Autonyms are not created if the name of the genus or species being subdivided is illegitimate.
Definition
The definition of an autonym is in Art. 6.8 of the ICN:
"6.8. Autonyms are such names as can be established automatically under Art. 22.3 and 26.3, whether or not they appear in print in the publication in which they are created"
"22.3. The first instance of valid publication of a name of a subdivision of a genus under a legitimate generic name automatically establishes the corresponding autonym (see also Art. 11.6 and 32.8)." The form of this autonym is described in the earlier Art. 22.1: "The name of any subdivision of a genus that includes the type of the [...] name of the genus to which it is assigned is to repeat the generic name unaltered as its epithet, not followed by an author citation [...] Such names are termed autonyms".
"26.3. The first instance of valid publication of a name of an infraspecific taxon under a legitimate spe
|
https://en.wikipedia.org/wiki/Plantar%20arch
|
The plantar arch is a circulatory anastomosis formed from:
deep plantar artery, from the dorsalis pedis - a.k.a. dorsal artery of the foot
lateral plantar artery
The plantar arch supplies the underside, or sole, of the foot.
The plantar arch runs from the 5th metatarsal and extends medially to the 1st metatarsal (of the big toe).
The arch is formed when the lateral plantar artery turns medially to the interval between the bases of the first and second metatarsal bones, where it unites with the deep plantar branch of the dorsalis pedis artery, thus completing the plantar arch (or deep plantar arch).
|
https://en.wikipedia.org/wiki/Soy%20molasses
|
Soy molasses is a brown, viscous syrup with a typical bittersweet flavor. A by-product of aqueous alcohol soy protein concentrate production, soy molasses is a concentrated, desolventized, aqueous alcohol extract of defatted soybean flakes.
The term "soy molasses" was coined by Daniel Chajuss, the founder of Hayes Ashdod Ltd., which first commercially produced and marketed soy molasses in the early 1960s. The name was intended to distinguish the product from “soybean whey” or “condensed soybean solubles”, the by-products available at the time from soy protein isolate and acid washed soy protein concentrate production.
Manufacture
The alcohols are removed from the liquid extract by evaporation and the distillation residue is an aqueous solution of the sugars and other soy solubles. This solution is concentrated to viscous honey-like consistency to yield soy molasses.
Composition
Typically, soy molasses contains 50% total soluble solids. These solids consist of carbohydrates (60%), proteins and other nitrogenous substances (10%), minerals (10%), fats and lipoids (20%). The major constituents of soy molasses are sugars that include oligosaccharides (stachyose and raffinose), disaccharides (sucrose) and minor amounts of monosaccharides (fructose and glucose). Minor constituents include saponins, protein, lipid, minerals (ash), isoflavones, and other organic materials.
Use
Soy molasses is used as a feed ingredient in mixed feeds as pelleting aid, added to soybean meal (e.g. by spraying it into the soybean meal desolventizer toaster), mixed with soy hulls, and used in liquid animal feed diets. Soy molasses can be used as a fermentation aid, as a prebiotic, and as an ingredient in specialized breads.
It is also possible to burn soy molasses in a dedicated boiler to generate process steam. In combination with a support fuel (e.g. natural gas), this low-calorific liquid can be valorized in a steam boiler.
Soy molasses is an important commercial and biological product,
|
https://en.wikipedia.org/wiki/Poly-MVA
|
Poly-MVA (or Lipoic Acid Mineral Complex) is a dietary supplement created by Merrill Garnett (1931–), a former dentist turned biochemist. Poly-MVA is an ineffective alternative cancer treatment.
Description
The "MVA" in "Poly-MVA" means "minerals vitamins and amino acids". Poly-MVA contains lipoic acid, acetylcysteine, palladium, B vitamins, and other ingredients. The substance is red-brown liquid that is taken by mouth.
In 2004, a year's supply of Poly-MVA was reported as costing US$19,800. As of 2019, the cost appears to fluctuate according to an individual's situation and dosage.
Alternative medicine
Poly-MVA is promoted with claims that it can treat a variety of human diseases, including cancer and HIV/AIDS. The promotional effort is supported by customer testimonials, but there is no medical evidence that Poly-MVA confers any health benefit and some concern it may inhibit the effectiveness of mainstream cancer treatments if used at the same time.
In 2005, Poly-MVA was listed as one of the ineffective alternative cancer treatments being sold by the clinics clustered in and around Tijuana, Mexico. None of the information referenced in this review is specific to Poly-MVA.
See also
Antioxidant
List of unproven and disproven cancer treatments
|
https://en.wikipedia.org/wiki/Uncacheable%20speculative%20write%20combining
|
Uncacheable speculative write combining (USWC) is a computer BIOS setting for memory communication between a CPU and graphics card. It allows faster communication than the "uncacheable" setting (the alternative to USWC in the BIOS), as long as the graphics card supports write combining (which most modern cards do), allowing data to be temporarily stored in write combine buffers (WCB) and released in burst mode rather than as single bits.
See also
Bus (computing)
|
https://en.wikipedia.org/wiki/Occipital%20cryoneurolysis
|
Occipital cryoneurolysis is a procedure used to treat nerve pain generated by peripheral nerves (nerves located outside of the spinal column and skull), commonly due to the condition occipital neuralgia. A probe (no larger than a small needle) is carefully placed adjacent to the specific nerve. Once in the appropriate area, the probe is first used to stimulate the affected nerve, helping to verify positioning. Once certain of proper placement, the tip is cooled by nitrous oxide to sub-freezing temperatures, enveloping the nerve in an ice ball and thereby interrupting transmission. The nerve is still functional and returns to its normal (un-frozen) state immediately after the procedure is completed.
Side effects and adverse reactions are rare. Potential side effects or complications could include soreness from the procedure for a few days, trauma to the nerve, which may cause worsening of the pain or loss of nerve function, as well as infection or bleeding complications.
|
https://en.wikipedia.org/wiki/Thrifty%20gene%20hypothesis
|
The thrifty gene hypothesis is an attempt by geneticist James V. Neel to explain why certain populations and subpopulations in the modern day are prone to diabetes mellitus type 2. He proposed the hypothesis in 1962 to resolve a fundamental problem: diabetes is clearly a very harmful medical condition, yet it is quite common, and it was already evident to Neel that it likely had a strong genetic basis. The problem is to understand how a disease with a likely genetic component and with such negative effects may have been favoured by the process of natural selection. Neel suggested the resolution to this problem is that genes which predispose to diabetes (called 'thrifty genes') were historically advantageous, but they became detrimental in the modern world. In his words they were "rendered detrimental by 'progress'". Neel's primary interest was in diabetes, but the idea was soon expanded to encompass obesity as well. Thrifty genes are genes which enable individuals to efficiently collect and process food to deposit fat during periods of food abundance in order to provide for periods of food shortage (feast and famine).
According to the hypothesis, the 'thrifty' genotype would have been advantageous for hunter-gatherer populations, especially child-bearing women, because it would allow them to fatten more quickly during times of abundance. Fatter individuals carrying the thrifty genes would thus better survive times of food scarcity. However, in modern societies with a constant abundance of food, this genotype efficiently prepares individuals for a famine that never comes. The result of this mismatch, between the environment in which the thrifty genotype evolved and the environment of today, is widespread chronic obesity and related health problems like diabetes.
The hypothesis has received various criticisms and several modified or alternative hypotheses have been proposed.
Hypothesis and research by Neel
James Neel, a professor of Human Genetics at t
|
https://en.wikipedia.org/wiki/Superior%20phrenic%20vein
|
The superior phrenic vein, i.e., the vein accompanying the pericardiacophrenic artery, usually opens into the azygos vein.
|
https://en.wikipedia.org/wiki/Nasofrontal%20vein
|
The nasofrontal vein is a vein in the orbit around the eye. It drains into the superior ophthalmic vein. It can be used for endovascular access to the cavernous sinus.
Structure
The nasofrontal vein drains into the superior ophthalmic vein.
Clinical significance
The nasofrontal vein can be used to access the superior ophthalmic vein and the cavernous sinus with endovascular tools.
|
https://en.wikipedia.org/wiki/Deep%20facial%20vein
|
The anterior facial vein receives a branch of considerable size, the deep facial vein, from the pterygoid venous plexus.
|
https://en.wikipedia.org/wiki/Common%20digital%20veins
|
On the dorsum of the foot the dorsal digital veins receive, in the clefts between the toes, the intercapitular veins from the plantar venous arch and join to form short common digital veins which unite across the distal ends of the metatarsal bones in a dorsal venous arch.
|
https://en.wikipedia.org/wiki/Topoisomerase%20inhibitor
|
Topoisomerase inhibitors are chemical compounds that block the action of topoisomerases, which are divided into two broad subtypes: type I topoisomerases (TopI) and type II topoisomerases (TopII). Topoisomerases play important roles in cellular reproduction and DNA organization, as they mediate the cleavage of single- and double-stranded DNA to relax supercoils, untangle catenanes, and condense chromosomes in eukaryotic cells. Topoisomerase inhibitors influence these essential cellular processes. Some topoisomerase inhibitors prevent topoisomerases from performing DNA strand breaks while others, deemed topoisomerase poisons, associate with topoisomerase-DNA complexes and prevent the re-ligation step of the topoisomerase mechanism. These topoisomerase-DNA-inhibitor complexes are cytotoxic agents, as the unrepaired single- and double-stranded DNA breaks they cause can lead to apoptosis and cell death. Because of this ability to induce apoptosis, topoisomerase inhibitors have gained interest as therapeutics against infectious and cancerous cells.
History
In the 1940s, great strides were made in the field of antibiotic discovery by researchers like Albert Schatz, Selman A. Waksman, and H. Boyd Woodruff, inspiring significant effort in the search for novel antibiotics. Studies searching for antibiotic and anticancer agents in the mid to late 20th century illuminated the existence of numerous unique families of both TopI and TopII inhibitors, with the 1960s alone resulting in the discovery of the camptothecin, anthracycline and epipodophyllotoxin classes. Knowledge of the first topoisomerase inhibitors, and their medical potential as anticancer drugs and antibiotics, predates the discovery of the first topoisomerase (Escherichia coli omega protein, a TopI) by Jim Wang in 1971. In 1976, Gellert et al. detailed the discovery of the bacterial TopII DNA gyrase and discussed its inhibition when introduced to coumarin and quinolone class inhibitors, sp
|
https://en.wikipedia.org/wiki/Bacterivore
|
A bacterivore is an organism which obtains energy and nutrients primarily or entirely from the consumption of bacteria. The term is most commonly used to describe free-living, heterotrophic, microscopic organisms such as nematodes as well as many species of amoeba and numerous other types of protozoans, but some macroscopic invertebrates are also bacterivores, including sponges, polychaetes, and certain molluscs and arthropods. Many bacterivorous organisms are adapted for generalist predation on any species of bacteria, but not all bacteria are easily digested; the spores of some species, such as Clostridium perfringens, will never be prey because of their cellular attributes.
In microbiology
Bacterivores can sometimes be a problem in microbiology studies. For instance, when scientists seek to assess microorganisms in samples from the environment (such as freshwater), the samples are often contaminated with microscopic bacterivores, which interfere with the culturing of bacteria for study. Adding cycloheximide can inhibit the growth of bacterivores without affecting some bacterial species, but it has also been shown to inhibit the growth of some anaerobic prokaryotes.
Examples of bacterivores
Caenorhabditis elegans
Ceriodaphnia quadrangula
Diaphanosoma brachyurum
Vorticella
Paramecium
Many species of protozoa
Many benthic meiofauna, e.g. gastrotrichs
Springtails
Many sponges, e.g. Aplysina aerophoba
Many crustaceans
Many polychaetes, e.g. feather duster worms
Some marine molluscs
See also
Microbivory
|
https://en.wikipedia.org/wiki/Mating%20connection
|
A mating connection is any method of assembling two or more component parts with mutually complementing shapes that, with some imagination, resembles the way two animals, male and female, are physically connected during the act of mating. In such connections one of the two components acts as male and the other as female, although more complex relationships exist. Electrical connectors, bolted joints, and jigsaw puzzles are all examples of assembly based on mating connection.
See also
Gender of connectors and fasteners
|
https://en.wikipedia.org/wiki/Subclinical%20infection
|
A subclinical infection—sometimes called a preinfection or inapparent infection—is an infection by a pathogen that causes few or no signs or symptoms of infection in the host. Subclinical infections can occur in both humans and animals. Depending on the pathogen, which can be a virus or intestinal parasite, the host may be infectious and able to transmit the pathogen without ever developing symptoms; such a host is called an asymptomatic carrier. Many pathogens, including HIV, the agent of typhoid fever, and coronaviruses such as SARS-CoV-2 (the virus that causes COVID-19), spread in their host populations through subclinical infection.
Not all hosts of asymptomatic subclinical infections will become asymptomatic carriers. For example, hosts of Mycobacterium tuberculosis bacteria will only develop active tuberculosis in approximately one-tenth of cases; the majority of those infected by Mtb bacteria have latent tuberculosis, a non-infectious type of tuberculosis that does not produce symptoms in individuals with sufficient immune responses.
Because subclinical infections often occur without eventual overt signs, in some cases their presence is only identified by microbiological culture or DNA techniques such as polymerase chain reaction (PCR) tests.
Transmission
In humans
Many pathogens are transmitted through their host populations by hosts with few or no symptoms, including sexually transmitted infections such as syphilis and genital warts. In other cases, a host may develop more symptoms as the infection progresses beyond its incubation period. These hosts create a natural reservoir of individuals that can transmit a pathogen to other individuals. Because cases often do not come to clinical attention, health statistics frequently are unable to measure the true prevalence of an infection in a population. This prevents accurate modeling of its transmissibility.
In animals
Some animal pathogens are also transmitted through subclinical infections. The A(H5) and A(H7) strains of avian influenza are divided int
|
https://en.wikipedia.org/wiki/Zeo%2C%20Inc.
|
Zeo, Inc., formerly Axon Labs, was a private company founded by Brown University students. Established December 29, 2003 in Providence, Rhode Island and later headquartered in Boston, Massachusetts, it developed a smart alarm clock with a sleep monitor (tracking sleep stages such as REM). Sleep-state data could be used to sound the wake-up alarm only when the sleeper was in a light stage of sleep, likely to awake more refreshed. Details of sleep could be uploaded to the MyZeo Web site, where they were stored; detailed historical charts of sleep patterns could be downloaded, and email suggestions on improving sleep could be sent. The state of sleep was detected by a headband, essentially comprising three long-lasting electrodes made of electrically conductive fabric and a wireless unit, which transmitted data to a Zeo bedside clock unit or Apple iPhone that displayed the data and sounded the wake alarm. The company also developed and marketed a personal sleep coaching Web service which allowed users of the clock to upload their sleep data, then measure and analyze their sleep patterns; this was later made available without charge.
Founding
The founders and board members include Daniel Rothman, Ben Rubin, Eric Shashoua, Jason Donahue, Terri Alpert from Stony Creek Brands, David Barone from Sleep Labs, Inc. and Jeff Stibel from Web.com.
Closure
By late 2012 the company was apparently in financial trouble, and it closed down in early 2013, although this was not officially reported. The Web site initially became inaccessible, Twitter tweets stopped and the Zeo Community Forum became 'currently unavailable and down for maintenance'. By May 2013 the content of the MyZeo website had been removed and the URL was for sale. According to the Better Business Bureau "this business has no rating because it is out of business".
Effect on users
Although the services provided by the MyZeo Web site and emails have stopped, functions that do not rely on the web site or Zeo's support staff are still functional. These
|
https://en.wikipedia.org/wiki/Neanderthal%20genome%20project
|
The Neanderthal genome project, founded in July 2006, is an effort by a group of scientists to sequence the Neanderthal genome.
It was initiated by 454 Life Sciences, a biotechnology company based in Branford, Connecticut in the United States and is coordinated by the Max Planck Institute for Evolutionary Anthropology in Germany. In May 2010 the project published their initial draft of the Neanderthal genome (Vi33.16, Vi33.25, Vi33.26) based on the analysis of four billion base pairs of Neanderthal DNA. The study determined that some mixture of genes occurred between Neanderthals and anatomically modern humans and presented evidence that elements of their genome remain in modern humans outside Africa.
In December 2013, a high-coverage genome of a Neanderthal was reported for the first time. DNA was extracted from a toe fragment of a female Neanderthal that researchers have dubbed the "Altai Neandertal". It was found in Denisova Cave in the Altai Mountains of Siberia and is estimated to be 50,000 years old.
Findings
The researchers recovered ancient DNA of Neanderthals by extracting the DNA from the femur bones of three 38,000-year-old female Neanderthal specimens from Vindija Cave, Croatia, and other bones found in Spain, Russia, and Germany. Only about half a gram of the bone samples (or 21 samples each 50–100 mg) was required for the sequencing, but the project faced many difficulties, including contamination of the samples by the bacteria that had colonized the Neanderthal's body and by the humans who handled the bones at the excavation site and at the laboratory.
In February 2009, the Max Planck Institute's team led by Svante Pääbo announced that they had completed the first draft of the Neanderthal genome. An early analysis of the data suggested "no significant trace of Neanderthal genes in modern humans". New results suggested that some adult Neanderthals were lactose intolerant. On the questi
|
https://en.wikipedia.org/wiki/Groma%20%28surveying%29
|
The groma (as standardized in imperial Latin; sometimes croma, or gruma in the literature of republican times) was a Roman surveying instrument. The groma allowed surveyors to project right angles and straight lines, thus enabling centuriation (the setting up of a rectangular grid). It is the only Roman surveying tool of which examples survive to the present day.
Construction
The tool utilizes a rotating horizontal cross with plumb bobs hanging down from all four ends. The center of the cross represents the umbilicus soli (reference point). The cross is mounted on a vertical Jacob's staff, the so-called ferramentum. The umbilicus is offset with respect to the ferramentum by using a bracket pivoting on the top of the staff (frequently, ferramentum is used to describe the whole tool). The purpose of offsetting the reference point from the Jacob's staff (vertical pole) is twofold: it enables sighting of lines on the ground through a pair of strings (used to suspend an opposite pair of plumbs from the cross) without the staff obscuring the view, and it allows placing the reference point over a sturdy object (like a boundary stone), where the staff cannot be inserted.
Bracket controversy
The pivoting bracket on the top of the staff was suggested in the 1912 reconstruction by Adolf Schulten and confirmed soon afterwards. However, as asserted by Thorkild Schiöler in 1994, the 5-kilogram cross found in Pompeii is too heavy to be supported in this way, and thus the bracket can never have existed. Furthermore, there is no archeological evidence of the bracket, and the images of gromas on tombstones do not show it. The archeologists rejecting the bracket suggest that the staff was slightly angled to permit sighting without the pole obscuring the view.
Use
Despite a great deal of surviving information about the groma (and the simplicity of the tool itself), the details of its operation are not entirely clear. The general idea is straightforward: the staff was inserted into
|
https://en.wikipedia.org/wiki/Compact%20operator%20on%20Hilbert%20space
|
In the mathematical discipline of functional analysis, the concept of a compact operator on Hilbert space is an extension of the concept of a matrix acting on a finite-dimensional vector space; in Hilbert space, compact operators are precisely the closure of finite-rank operators (representable by finite-dimensional matrices) in the topology induced by the operator norm. As such, results from matrix theory can sometimes be extended to compact operators using similar arguments. By contrast, the study of general operators on infinite-dimensional spaces often requires a genuinely different approach.
For example, the spectral theory of compact operators on Banach spaces takes a form that is very similar to the Jordan canonical form of matrices. In the context of Hilbert spaces, a square matrix is unitarily diagonalizable if and only if it is normal. A corresponding result holds for normal compact operators on Hilbert spaces. More generally, the compactness assumption can be dropped. As stated above, the techniques used to prove results, e.g., the spectral theorem, in the non-compact case are typically different, involving operator-valued measures on the spectrum.
Some results for compact operators on Hilbert space will be discussed, starting with general properties before considering subclasses of compact operators.
Definition
Let $H$ be a Hilbert space and let $L(H)$ be the set of bounded operators on $H$. An operator $T \in L(H)$ is said to be a compact operator if the image of each bounded set under $T$ is relatively compact.
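As a concrete illustration (a standard example, stated here under the assumption of an orthonormal basis $(e_n)$ of $H$; it is not spelled out in this excerpt), a diagonal operator with eigenvalues tending to zero is compact, because it is the operator-norm limit of its finite-rank truncations:

```latex
T e_n = \lambda_n e_n \quad (\lambda_n \to 0), \qquad
T_N e_n =
\begin{cases}
  \lambda_n e_n, & n \le N, \\
  0,             & n > N,
\end{cases}
\qquad
\|T - T_N\| = \sup_{n > N} |\lambda_n| \longrightarrow 0 .
```

Since each $T_N$ has finite rank and $T_N \to T$ in operator norm, $T$ is compact by the closure characterization given in the introduction.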
Some general properties
We list in this section some general properties of compact operators.
If X and Y are separable Hilbert spaces (in fact, X Banach and Y normed will suffice), then T : X → Y is compact if and only if it is sequentially continuous when viewed as a map from X with the weak topology to Y with the norm topology. (Note that uniform boundedness applies in the situation where F ⊆ X satisfies (∀φ ∈ Hom(X, K)) sup
|
https://en.wikipedia.org/wiki/The%20Amazing%20Transparent%20Man
|
The Amazing Transparent Man is a 1960 American science fiction thriller B-movie starring Marguerite Chapman in her final feature film. The plot follows an insane ex–U.S. Army major who uses an escaped criminal to steal materials to improve the invisibility machine his scientist prisoner made. It was one of two sci-fi films shot back-to-back in Dallas, Texas by director Edgar G. Ulmer (the other was Beyond the Time Barrier, also released that same year). The combined filming schedule for both films was only two weeks. The film was later featured in an episode of Mystery Science Theater 3000.
Plot
Former U.S. Army major Paul Krenner plans to conquer the world with an army of invisible soldiers and will do anything to achieve that goal. With the help of his hired muscle Julian, Krenner forces Dr. Peter Ulof to perfect the invisibility machine that Ulof invented. He imprisons Ulof's daughter Maria to keep Ulof in line.
The nuclear materials that Ulof needs to improve his invisibility machine are extremely rare and kept under guard in government facilities. Krenner arranges the prison break of notorious safecracker Joey Faust to steal the materials that he needs. Faust will do the jobs while invisible. Krenner offers Faust money for the jobs and Faust expresses his grievances against working for him. Faust tells him that he will snitch if he is returned to prison, but Krenner informs Faust that he is wanted dead or alive, so Faust reluctantly complies. However, when he meets Krenner's woman, Laura Matson, he slowly charms her into a double cross.
Faust continues attempting to escape and tries to get one over on Krenner. It looks as if Faust may have the edge when he attacks Krenner while invisible. However, Dr. Ulof's guinea pig dies and, the second time that Faust is made invisible, he uncontrollably reverts from invisible to visible and back again. Despite these drawbacks, Faust forges ahead, intent on breaking free from Krenner's control.
Dr. Ulof rev
|
https://en.wikipedia.org/wiki/Medical%20underwriting
|
Medical underwriting is a health insurance term referring to the use of medical or health information in the evaluation of an applicant for coverage, typically for life or health insurance. As part of the underwriting process, an individual's health information may be used in making two decisions: whether to offer or deny coverage and what premium rate to set for the policy. The two most common methods of medical underwriting are known as moratorium underwriting, a relatively simple process, and full medical underwriting, a more in-depth analysis of a client's health information. The use of medical underwriting may be restricted by law in certain insurance markets. If allowed, the criteria used should be objective, clearly related to the likely cost of providing coverage, practical to administer, consistent with applicable law, and designed to protect the long-term viability of the insurance system.
It is the process in which an underwriter weighs the health conditions of the person applying for insurance, keeping in mind factors such as health status, age, nature of work, and geographical zone. After considering all the factors, the underwriter suggests whether a policy should be given to the person and at what price, or premium.
Health insurance
Underwriting is the process that a health insurer uses to weigh potential health risks in its pool of insured people against potential costs of providing coverage.
To carry out medical underwriting, an insurer asks people who apply for coverage (typically people applying for individual or family coverage) about pre-existing medical conditions. In most US states, insurance companies are allowed to ask questions about a person's medical history to decide to whom to offer coverage, whom to deny, and whether additional charges should apply to individually-purchased coverage.
While most discussions of medical underwriting in health insurance are about medical expense insurance, similar considerations apply for o
|
https://en.wikipedia.org/wiki/Hostages%20%28video%20game%29
|
Hostages is a 1988 tactical shooter video game developed and published by Infogrames for the Acorn Electron, Archimedes, Atari ST, Amiga, Apple IIGS, Amstrad CPC, BBC Micro, Commodore 64, MS-DOS, MSX, Nintendo Entertainment System, and ZX Spectrum. The game depicts a terrorist attack and hostage crisis at an embassy in Paris, with the player controlling a six-man GIGN counterterrorist team as they are deployed to defeat the terrorists and free their hostages.
An indirect sequel, Alcatraz, was released for the Amiga, Atari ST, and MS-DOS in 1992.
Gameplay
Hostages is split into two or three (depending on platform) distinct sections with different gameplay styles.
In the first section, the player controls GIGN snipers Delta, Echo, and Mike (names vary between versions, such as "Mike", "Steve", and "Jumbo" in the NES version) as they attempt to reach designated vantage points in buildings across the street from the embassy to cover the main assault; however, the terrorists have searchlights set up and are scanning the street for movement. The player controls one operative at a time in a side-scroller segment where they must reach one of the vantage points while avoiding the searchlights. To do so, the player must time their movements, take cover behind fences or in buildings, or roll, crawl, and dive to avoid the searchlights. If an operative is spotted by a searchlight, the terrorists will shoot at them; the player must roll, dive, or enter cover to avoid getting hit. Once an operative enters a building containing a vantage point, the player takes control of the next operative at the starting area. The section ends when all three operatives have reached a vantage point.
In the second section (linked to the first section in some versions), the player controls Delta, Echo, and Mike from the vantage points as they besiege the embassy with their sniper rifles while the entry team, Hotel, Tango, and Bravo (names again vary between versions, such as "Ron", "Dick", and "
|
https://en.wikipedia.org/wiki/Ran%20%28protein%29
|
Ran (RAs-related Nuclear protein) also known as GTP-binding nuclear protein Ran is a protein that in humans is encoded by the RAN gene. Ran is a small 25 kDa protein that is involved in transport into and out of the cell nucleus during interphase and also involved in mitosis. It is a member of the Ras superfamily.
Ran is a small G protein that is essential for the translocation of RNA and proteins through the nuclear pore complex. The Ran protein has also been implicated in the control of DNA synthesis and cell cycle progression, as mutations in Ran have been found to disrupt DNA synthesis.
Function
Ran cycle
Ran exists in the cell in two nucleotide-bound forms: GDP-bound and GTP-bound. RanGDP is converted into RanGTP through the action of RCC1, the nucleotide exchange factor for Ran. RCC1 is also known as RanGEF (Ran Guanine nucleotide Exchange Factor). Ran's intrinsic GTPase-activity is activated through interaction with Ran GTPase activating protein (RanGAP), facilitated by complex formation with Ran-binding protein (RanBP). GTPase-activation leads to the conversion of RanGTP to RanGDP, thus closing the Ran cycle.
Ran can diffuse freely within the cell, but because RCC1 and RanGAP are located in different places in the cell, the concentration of RanGTP and RanGDP differs locally as well, creating concentration gradients that act as signals for other cellular processes. RCC1 is bound to chromatin and therefore located inside the nucleus. RanGAP is cytoplasmic in yeast and bound to the nuclear envelope in plants and animals. In mammalian cells, it is SUMO modified and attached to the cytoplasmic side of the nuclear pore complex via interaction with the nucleoporin RANBP2 (Nup358). This difference in location of the accessory proteins in the Ran cycle leads to a high RanGTP to RanGDP ratio inside the nucleus and an inversely low RanGTP to RanGDP ratio outside the nucleus. In addition to a gradient of the nucleotide bound state of Ran, there is a gradient of t
|
https://en.wikipedia.org/wiki/Largest-scale%20trends%20in%20evolution
|
The history of life on Earth seems to show a clear trend; for example, it seems intuitive that there is a trend towards increasing complexity in living organisms. More recently evolved organisms, such as mammals, appear to be much more complex than organisms, such as bacteria, which have existed for a much longer period of time. However, there are theoretical and empirical problems with this claim. From a theoretical perspective, it appears that there is no reason to expect evolution to result in any largest-scale trends, although small-scale trends, limited in time and space, are expected (Gould, 1997). From an empirical perspective, it is difficult to measure complexity and, when it has been measured, the evidence does not support a largest-scale trend (McShea, 1996).
History
Many of the founding figures of evolution supported the idea of evolutionary progress, which has since fallen from favour but, as the work of Francisco J. Ayala and Michael Ruse suggests, is still influential.
Hypothetical largest-scale trends
McShea (1998) discusses eight features of organisms that might indicate largest-scale trends in evolution: entropy, energy intensiveness, evolutionary versatility, developmental depth, structural depth, adaptedness, size, complexity. He calls these "live hypotheses", meaning that trends in these features are currently being considered by evolutionary biologists. McShea observes that the most popular hypothesis, among scientists, is that there is a largest-scale trend towards increasing complexity.
Evolutionary theorists agree that there are local trends in evolution, such as increasing brain size in hominids, but these directional changes do not persist indefinitely, and trends in opposite directions also occur (Gould, 1997). Evolution causes organisms to adapt to their local environment; when the environment changes, the direction of the trend may change. The question of whether there is evolutionary progress is better formulated as the question of whether
|
https://en.wikipedia.org/wiki/Fielden%20Professor%20of%20Pure%20Mathematics
|
The Fielden Chair of Pure Mathematics is an endowed professorial position in the School of Mathematics, University of Manchester, England.
History
In 1870 Samuel Fielden, a wealthy mill owner from Todmorden, donated £150 to Owens College (as the Victoria University of Manchester was then called) for the teaching of evening classes and a further £3000 for the development of natural sciences at the college. From 1877 this supported the Fielden Lecturer, subsequently to become the Fielden Reader with the appointment of L. J. Mordell in 1922 and then Fielden Professor in 1923. Alex Wilkie FRS was appointed to the post in 2007.
Holders
Previous holders of the Fielden Chair (and lectureship) are:
A. T. Bentley (1876–1880) Lecturer in Pure Mathematics
J. E. A. Steggall (1880–1883) Lecturer in Pure Mathematics
R. F. Gwyther (1883–1907) Lecturer in Mathematics
F. T. Swanwick (1907–1912) Lecturer in Mathematics
H. R. Hasse (1912–1918) Lecturer in Mathematics
George Henry Livens (1920–1922) Lecturer in Mathematics
Louis Mordell (1923–1945)
Max Newman (1945–1964)
Frank Adams (1964–1971)
Ian G. Macdonald (1972–1976)
Norman Blackburn (1978–1994)
Mark Pollicott (1996–2004)
Alex Wilkie (2007–)
Related chairs
The other endowed chairs in mathematics at the University of Manchester are the Beyer Chair of Applied Mathematics, the Sir Horace Lamb Chair and the Richardson Chair of Applied Mathematics.
|
https://en.wikipedia.org/wiki/Hypersalivation
|
Hypersalivation or hypersialosis is the excessive production of saliva. It has also been defined as increased amount of saliva in the mouth, which may also be caused by decreased clearance of saliva.
Hypersalivation can contribute to drooling if there is an inability to keep the mouth closed or difficulty in swallowing (dysphagia) the excess saliva, which can lead to excessive spitting.
Hypersalivation also often precedes emesis (vomiting), where it accompanies nausea (a feeling of needing to vomit).
Causes
Excessive production
Conditions that can cause saliva overproduction include:
Rabies
Pellagra (niacin or Vitamin B3 deficiency)
Gastroesophageal reflux disease, in such cases specifically called water brash (a loosely defined lay term), characterized by a sour fluid or almost tasteless saliva in the mouth
Gastroparesis (main symptoms are nausea, vomiting, and reflux)
Pregnancy
Fluoride therapy
Excessive starch intake
Anxiety (common sign of separation anxiety in dogs)
Pancreatitis
Liver disease
Serotonin syndrome
Mouth ulcers
Oral infections
Sjögren syndrome (an early symptom in some patients)
Medications that can cause overproduction of saliva include:
aripiprazole
clozapine
pilocarpine
ketamine
potassium chlorate
risperidone
pyridostigmine
Substances that can cause hypersalivation include:
mercury
copper
organophosphates (insecticide)
arsenic
nicotine
thallium
Decreased clearance
Causes of decreased clearance of saliva include:
Infections such as tonsillitis, retropharyngeal and peritonsillar abscesses, epiglottitis and mumps.
Problems with the jaw, e.g., fracture or dislocation
Radiation therapy
Neurologic disorders such as amyotrophic lateral sclerosis, myasthenia gravis, Parkinson's disease, multiple system atrophy, rabies, bulbar paralysis, bilateral facial nerve palsy, and hypoglossal nerve palsy
Treatment
Hypersalivation is optimally treated by treating or avoiding the underlying cause. Mouthwash and tooth brushing may have drying effects.
|
https://en.wikipedia.org/wiki/Skolem%20arithmetic
|
In mathematical logic, Skolem arithmetic is the first-order theory of the natural numbers with multiplication, named in honor of Thoralf Skolem. The signature of Skolem arithmetic contains only the multiplication operation and equality, omitting the addition operation entirely.
Skolem arithmetic is weaker than Peano arithmetic, which includes both addition and multiplication operations. Unlike Peano arithmetic, Skolem arithmetic is a decidable theory. This means it is possible to effectively determine, for any sentence in the language of Skolem arithmetic, whether that sentence is provable from the axioms of Skolem arithmetic. The asymptotic running-time computational complexity of this decision problem is triply exponential.
Expressive power
First-order logic with equality and multiplication of positive integers can express the relation $a \cdot b = c$. Using this relation and equality, we can define the following relations on positive integers:
Divisibility: $a \mid b \iff \exists c\,(a \cdot c = b)$
Greatest common divisor: $\gcd(a, b) = c \iff c \mid a \land c \mid b \land \forall d\,((d \mid a \land d \mid b) \to d \mid c)$
Least common multiple: $\operatorname{lcm}(a, b) = c \iff a \mid c \land b \mid c \land \forall d\,((a \mid d \land b \mid d) \to c \mid d)$
The constant $1$: $a = 1 \iff \forall b\,(a \cdot b = b)$
Prime number: $\operatorname{prime}(p) \iff p \neq 1 \land \forall a\,(a \mid p \to (a = 1 \lor a = p))$
Number is a product of $n$ primes (for a fixed $n$): $\exists p_1 \cdots \exists p_n\,(\operatorname{prime}(p_1) \land \cdots \land \operatorname{prime}(p_n) \land a = p_1 \cdots p_n)$
Number is a power of some prime: $\exists p\,(\operatorname{prime}(p) \land \forall d\,((d \mid a \land d \neq 1) \to p \mid d))$
Number is a product of exactly $n$ prime powers (for a fixed $n$): $\exists q_1 \cdots \exists q_n\,(a = q_1 \cdots q_n \land \bigwedge_i \operatorname{primepower}(q_i) \land \bigwedge_{i \neq j} \gcd(q_i, q_j) = 1)$, where $\operatorname{primepower}(q)$ abbreviates "$q \neq 1$ and $q$ is a power of some prime"
Idea of decidability
The truth value of formulas of Skolem arithmetic can be reduced to the truth value of sequences of non-negative integers constituting their prime factor decomposition, with multiplication becoming point-wise addition of sequences. The decidability then follows from the Feferman–Vaught theorem, which can be shown using quantifier elimination. Another way of stating this is that the first-order theory of positive integers is isomorphic to the first-order theory of finite multisets of non-negative integers with the multiset sum operation, whose decidability reduces to the decidability of the theory of the elements.
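A minimal sketch of this encoding (my own illustration, not from the article): each positive integer maps to its prime-exponent decomposition, and multiplication of integers becomes point-wise addition of exponents.

```python
# Sketch: integers as prime-exponent maps; multiplication = point-wise addition.

def factor_exponents(n):
    """Return {prime: exponent} for a positive integer n (trial division)."""
    exps = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            exps[d] = exps.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:  # any leftover factor is prime
        exps[n] = exps.get(n, 0) + 1
    return exps

def pointwise_add(a, b):
    """Multiset sum of two exponent maps."""
    out = dict(a)
    for p, e in b.items():
        out[p] = out.get(p, 0) + e
    return out

m, n = 12, 45  # 12 = 2^2 * 3 and 45 = 3^2 * 5
assert pointwise_add(factor_exponents(m), factor_exponents(n)) == factor_exponents(m * n)
print(factor_exponents(m * n))  # {2: 2, 3: 3, 5: 1}, i.e. 540 = 2^2 * 3^3 * 5
```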
In more detail, according to the fundamental theorem of arithmetic, a positive integer can be represented as a product of prime powers: $n = p_1^{k_1} p_2^{k_2} \cdots p_m^{k_m}$.
If a prime number does not appear as a fact
|
https://en.wikipedia.org/wiki/Posterior%20spinal%20artery
|
The posterior spinal artery (dorsal spinal arteries) arises from the vertebral artery in 25% of humans or the posterior inferior cerebellar artery in 75% of humans, adjacent to the medulla oblongata. It is usually double, and spans the length of the spinal cord. It supplies the grey and white posterior columns of the spinal cord.
Structure
The posterior spinal artery arises above the foramen magnum. It passes posteriorly, descending the medulla to pass through and behind the posterior roots of the spinal nerves. Along its course it is reinforced by a succession of segmental or radiculopial branches, which enter the vertebral canal through the intervertebral foramina, forming with the anterior spinal arteries a plexus called the vasocorona.
Below the medulla spinalis and upper cervical spine, the posterior spinal arteries are rather discontinuous; unlike the anterior spinal artery, which can be traced as a distinct channel throughout its course, the posterior spinal arteries are seen as somewhat larger longitudinal channels of an extensive pial arterial meshwork. At the level of the conus medullaris, the posterior spinals are more frequently seen as distinct arteries, communicating with the anterior spinal artery to form a characteristic "basket" which angiographically defines the caudal extent of the spinal cord and its transition to the cauda equina.
Branches
Close to its origin each posterior spinal artery gives off an ascending branch, which ends ipsilaterally near the fourth ventricle.
Anastomoses
Branches from the posterior spinal arteries form a free anastomosis around the posterior roots of the spinal nerves, and communicate, by means of very tortuous transverse branches, with the vessels of the opposite side. There are few anastomoses with the anterior spinal artery except at the inferior extremity of the spinal cord.
Functions
Most cranially, the posterior spinal artery supplies the dorsal column of the closed medulla containing fasciculus grac
|
https://en.wikipedia.org/wiki/Double%20harmonic%20scale
|
The double harmonic major scale is a musical scale with flattened second and sixth degrees. It is also known as Mayamalavagowla, Bhairav Raga, Byzantine scale, Arabic (Hijaz Kar), and Gypsy major. It can be likened to a gypsy scale because of the diminished step between the 1st and 2nd degrees. "Arabic scale" may also refer to any Arabic mode, the simplest of which, to Westerners, resembles the double harmonic major scale.
Details
The sequence of steps comprising the double harmonic scale is:
half, augmented second, half, whole, half, augmented second, half
Or, in relation to the tonic note:
minor second, major third, perfect fourth and fifth, minor sixth, major seventh, octave
However, this scale is commonly represented with the first and last half steps each rendered as quarter tones.
The non-quarter tone form is identical, in terms of notes, to the North Indian Thaat named Bhairav and the South Indian (Carnatic) Melakarta named Mayamalavagowla.
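To make the step pattern above concrete, here is a small sketch (my own illustration, assuming twelve-tone equal temperament and flat-based note spellings) that builds the scale from any tonic pitch class:

```python
# Sketch: build the double harmonic major scale from its step pattern.
NOTE_NAMES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]
STEPS = [1, 3, 1, 2, 1, 3, 1]  # semitones: half, aug 2nd, half, whole, half, aug 2nd, half

def double_harmonic(tonic):
    """Return the scale, tonic through octave, from a pitch-class index (0 = C)."""
    pcs = [tonic]
    for step in STEPS:
        pcs.append((pcs[-1] + step) % 12)
    return [NOTE_NAMES[pc] for pc in pcs]

print(double_harmonic(0))  # ['C', 'Db', 'E', 'F', 'G', 'Ab', 'B', 'C']
```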
The double harmonic scale is arrived at by either:
lowering both the second and sixth of the Ionian mode by a semitone.
lowering the second note and raising the third note of the harmonic minor scale by one semitone.
raising the seventh of the Phrygian dominant scale (a mode of the harmonic minor scale) by a semitone. The Phrygian dominant in turn is produced by raising the third of the diatonic Phrygian mode (a mode of the major scale) by a semitone.
raising the third of the neapolitan minor scale by a semitone.
lowering the second note of the harmonic major scale by a semitone.
combining the lower half of the Phrygian dominant scale with the upper half of harmonic minor.
It is referred to as the "double harmonic" scale because it contains two harmonic tetrads featuring augmented seconds. By contrast, both the harmonic major and harmonic minor scales contain only one augmented second, located between their sixth and seventh degrees.
The scale contains a built-in tritone substitution, a domina
|
https://en.wikipedia.org/wiki/Meningeal%20branches%20of%20vertebral%20artery
|
The meningeal branch of the vertebral artery (posterior meningeal branch) springs from the vertebral artery opposite the foramen magnum, ramifies between the bone and dura mater in the cerebellar fossa, and supplies the falx cerebelli.
It is frequently represented by one or two small branches.
|
https://en.wikipedia.org/wiki/List%20of%20misnamed%20theorems
|
This is a list of misnamed theorems in mathematics. It includes theorems (and lemmas, corollaries, conjectures, laws, and perhaps even the odd object) that are well known in mathematics but are not named for their originator. That is, the items on this list illustrate Stigler's law of eponymy (which is not, of course, due to Stephen Stigler, who credits Robert K. Merton).
Applied mathematics
Benford's law. This was first stated in 1881 by Simon Newcomb, and rediscovered in 1938 by Frank Benford. The first rigorous formulation and proof seems to be due to Ted Hill in 1988; see also the contribution by Persi Diaconis.
Bertrand's ballot theorem. This result concerning the probability that the winner of an election was ahead at each step of ballot counting was first published by W. A. Whitworth in 1878, but named after Joseph Louis François Bertrand who rediscovered it in 1887. A common proof uses André's reflection method, though the proof by Désiré André did not use any reflections.
Algebra
Burnside's lemma. This was stated and proved without attribution in Burnside's 1897 textbook, but it had previously been discussed by Augustin Cauchy, in 1845, and by Georg Frobenius in 1887.
Cayley–Hamilton theorem. The theorem was first proved in the easy special case of 2×2 matrices by Cayley, and later for the case of 4×4 matrices by Hamilton. But it was only proved in general by Frobenius in 1878.
Hölder's inequality. This inequality was first established by Leonard James Rogers, and published in 1888. Otto Hölder discovered it independently, and published it in 1889.
Marden's theorem. This theorem relating the location of the zeros of a complex cubic polynomial to the zeros of its derivative was named by Dan Kalman after Kalman read it in a 1966 book by Morris Marden, who had first written about it in 1945. But, as Marden had himself written, its original proof was by Jörg Siebeck in 1864.
Pólya enumeration theorem. This was proven in 1927 in a difficult pape
|
https://en.wikipedia.org/wiki/Loop%20heat%20pipe
|
A loop heat pipe (LHP) is a two-phase heat transfer device that uses capillary action to remove heat from a source and passively move it to a condenser or radiator. LHPs are similar to heat pipes but have the advantage of providing reliable operation over long distances and the ability to operate against gravity. They can transport a large heat load over a long distance with a small temperature difference. Different designs of LHPs, from powerful, large-size LHPs to miniature LHPs (micro-loop heat pipes), have been developed and successfully employed in a wide range of applications, both ground- and space-based.
Construction
The most common coolants used in LHPs are anhydrous ammonia and propylene. LHPs are made by carefully controlling the volumes of the reservoir, condenser, and vapor and liquid lines so that liquid is always available to the wick. The reservoir volume and fluid charge are set so that there is always fluid in the reservoir even if the condenser and vapor and liquid lines are completely filled.
Generally, a small pore size and a large capillary pumping capability are necessary in a wick. There must be a balance between the wick pumping capability and the wick permeability when designing a heat pipe or loop heat pipe.
Mechanism
In a loop heat pipe, heat first enters the evaporator and vaporizes the working fluid at the wick outer surface. The vapor then flows down the system of grooves and through the vapor line towards the condenser, where it condenses as heat is removed by the radiator. The two-phase reservoir (or compensation chamber) at the end of the evaporator is specifically designed to operate at a slightly lower temperature than the evaporator (and the condenser). The lower saturation pressure in the reservoir draws the condensate through the condenser and liquid return line. The fluid then flows into a central pipe where it feeds the wick. A secondary wick hydraulically links the reservo
|
https://en.wikipedia.org/wiki/List%20of%20battery%20types
|
This list is a summary of notable electric battery types composed of one or more electrochemical cells. Three lists are provided in the table. The primary (non-rechargeable) and secondary (rechargeable) cell lists are lists of battery chemistry. The third list is a list of battery applications.
Battery cell types
Batteries by application
Automotive battery
Backup battery
Battery (vacuum tube)
Battery pack
Battery room
Battery-storage power station
Biobattery
Button cell
CMOS battery
Common battery
Commodity cell
Electric-vehicle battery
Flow battery
Home energy storage
Inverter battery
Lantern battery
Nanobatteries
Nanowire battery
Local battery
Polapulse battery
Photoflash battery
Reserve battery
Smart battery system
Watch battery
Water-activated battery
See also
Baghdad Battery
Battery nomenclature
Carnot battery
Comparison of commercial battery types
History of the battery
List of battery sizes
List of energy densities
Search for the Super Battery (2017 PBS film)
Fuel cell
|
https://en.wikipedia.org/wiki/Antiprotozoal
|
Antiprotozoal agents (ATC code: P01) are a class of pharmaceuticals used in the treatment of protozoan infections.
A paraphyletic group, protozoans have little in common with each other. For example, Entamoeba histolytica, a unikont eukaryotic organism, is more closely related to Homo sapiens (humans), which also belongs to the unikont phylogenetic group, than it is to Naegleria fowleri, a "protozoan" bikont. As a result, agents effective against one pathogen may not be effective against another.
Antiprotozoal agents can be grouped by mechanism or by organism. Recent papers have also proposed the use of viruses to treat infections caused by protozoa.
Overuse or misuse of antiprotozoals can lead to the development of antiprotozoal resistance.
Medical uses
Antiprotozoals are used to treat protozoal infections, which include amebiasis, giardiasis, cryptosporidiosis, microsporidiosis, malaria, babesiosis, trypanosomiasis, Chagas disease, leishmaniasis, and toxoplasmosis. Currently, many of the treatments for these infections are limited by their toxicity.
Outdated terminology
Protists were once considered protozoans, but the categorization of unicellular organisms has lately undergone rapid development; in the literature, including the scientific literature, however, the term antiprotozoal tends to persist when anti-protist is really meant. Protists are a broad category of eukaryotes which includes the protozoa.
Mechanism
The mechanisms of antiprotozoal drugs differ significantly from drug to drug. For example, it appears that eflornithine, a drug used to treat trypanosomiasis, inhibits ornithine decarboxylase, while the aminoglycoside antibiotics/antiprotozoals used to treat leishmaniasis are thought to inhibit protein synthesis.
Examples
Eflornithine
Furazolidone
Hydroxychloroquine
Melarsoprol
Metronidazole
Nifursemizone
Nitazoxanide
Ornidazole
Paromomycin sulfate
Pentamidine
Pyrimethamine
Quinapyramine
Ronidazole
Tinidazole
|
https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20for%20Brain%20Research
|
The Max Planck Institute for Brain Research is located in Frankfurt, Germany. It was founded as the Kaiser Wilhelm Institute for Brain Research in Berlin in 1914, moved to Frankfurt-Niederrad in 1962, and more recently moved into a new building in Frankfurt-Riedberg. It is one of 83 institutes of the Max Planck Society (Max-Planck-Gesellschaft).
Research
Research at the Max Planck Institute for Brain Research focuses on the operation of networks of neurons in the brain. The institute hosts three scientific departments (with directors Moritz Helmstaedter of the Helmstaedter Department, Gilles Laurent of the Laurent Department, and Erin Schuman of the Schuman Department), the Singer Emeritus Group, two Max Planck Research Groups, namely Johannes Letzkus' Neocortical Circuits Group and Tatjana Tchumatchenko's Theory of Neural Dynamics Group, as well as several additional research units. The common research goal of the Institute is a mechanistic understanding of neurons and synapses, of the structural and functional circuits which they form, of the computational rules which describe their operations, and ultimately, of their roles in driving perception and behavior. The experimental focus is on all scales required to achieve this understanding - from networks of molecules in dendritic compartments to networks of interacting brain areas. This includes interdisciplinary analyses at the molecular, cellular, multi-cellular, network and behavioral levels, often combined with theoretical approaches.
History
The "Kaiser-Wilhelm-Institut für Hirnforschung" (KWI for Brain Research) was founded in Berlin in 1914, making it one of the oldest institutes of the "Kaiser Wilhelm Society for the Advancement of Science", itself founded in 1911. It was based on the Neurologische Zentralstation (Neurological Center), a private research institute established by Oskar Vogt in 1898 and run together with his wife Cécile Vogt-Mugnier, also an accomplished brain researcher.
From 1901 to 1910, Vogt's cowor
|
https://en.wikipedia.org/wiki/Steven%20Salzberg
|
Steven Lloyd Salzberg (born 1960) is an American computational biologist and computer scientist who is a Bloomberg Distinguished Professor of Biomedical Engineering, Computer Science, and Biostatistics at Johns Hopkins University, where he is also Director of the Center for Computational Biology.
Early life and education
Salzberg was born in 1960 as one of four children to Herman Salzberg, a Distinguished Professor Emeritus of Psychology, and Adele Salzberg, a retired school teacher. Salzberg did his undergraduate studies at Yale University where he received his Bachelor of Arts degree in English in 1980. In 1981 he returned to Yale, and he received his Master of Science and Master of Philosophy degrees in Computer Science in 1982 and 1984, respectively. After several years in a startup company, he enrolled at Harvard University, where he earned a Ph.D. in Computer Science in 1989.
Career
After obtaining his undergraduate degree, he worked for a local power company in South Carolina, where he gained programming experience on IBM mainframes. He also learned COBOL and IBM Assembler. He then joined a Boston-based AI startup upon completion of his master's degree in Computer Science.
After earning his Ph.D., Salzberg joined Johns Hopkins University as an assistant professor in the Department of Computer Science, and was promoted to associate professor in 1997. From 1998 to 2005, he was the head of the Bioinformatics department at The Institute for Genomic Research, one of the world's largest genome sequencing centers. Salzberg then joined the Department of Computer Science at the University of Maryland, College Park, where he was the Horvitz Professor of Computer Science as well as the Director of the Center for Bioinformatics and Computational Biology. In 2011, Salzberg returned to Johns Hopkins University as a professor in the Department of Medicine. From 2014, he was a professor in the Department of Biomedical Engineering in the School of Medicine; the Departme
|
https://en.wikipedia.org/wiki/Storage%20Module%20Device
|
Storage Module Drive (SMD) is a family of storage devices (hard disk drives) that were first shipped by Control Data Corporation in December 1973 as the CDC 9760 40 MB (unformatted) storage module disk drive. The CDC 9762 80 MB variant was announced in June 1974 and the CDC 9764 150 MB and the CDC 9766 300 MB variants were announced in 1975 (all capacities unformatted). A non-removable media variant family of 12, 24 and 48 MB capacity, the MMD, was then announced in 1976. This family's interface, SMD, derived from the earlier Digital RP0x interface, was documented as ANSI Standard X3.91M - 1982, Storage Module Interfaces with Extensions for Enhanced Storage Module Interfaces.
The SMD interface is based upon a definition of two flat interface cables ("A" control and "B" data) which run from the disk drive to a controller and then to a computer. This interface allows data to be transferred at 9.6 Mbit/s. The SMD interface was supported by many 8 inch and 14 inch removable and non-removable disk drives. It was mainly implemented on disk drives used with mainframes and minicomputers and was later itself replaced by SCSI.
Control Data shipped its 100,000th SMD drive in July 1981. By 1983 at least 25 manufacturers had supplied SMD drives, including, Ampex, Century Data Systems, CDC, Fujitsu, Hitachi, Micropolis, Pertec, Priam, NEC and Toshiba.
CDC 976x disk geometry
The CDC 9762 80 MB variant has 5 × 14" platters. The top and bottom platters are guard platters and not used for storage. The top and bottom guard platters are exactly the same size as the data platters, and are usually made from a data platter which had too many errors to be usable as a data platter. The remaining 3 platters give 5 data surfaces and one servo surface for head positioning, being the upper surface of the center platter.
The CDC 9766 300 MB variant has 12 × 14" platters. Again the top and bottom platters are guard platters and not used for storage. The remaining 10 platters give 19 data
|
https://en.wikipedia.org/wiki/Diffusion%20%28acoustics%29
|
Diffusion, in architectural acoustics, is the spreading of sound energy evenly in a given environment. A perfectly diffusive sound space is one in which the reverberation time is the same at any listening position.
Most interior spaces are non-diffusive; the reverberation time is considerably different around the room. At low frequencies, they suffer from prominent resonances called room modes.
Diffusor
Diffusors (or diffusers) are used to treat sound aberrations, such as echoes, in rooms. They are an excellent alternative or complement to sound absorption because they do not remove sound energy, but can be used to effectively reduce distinct echoes and reflections while still leaving a live sounding space. Compared to a reflective surface, which will cause most of the energy to be reflected off at an angle equal to the angle of incidence, a diffusor will cause the sound energy to be radiated in many directions, hence leading to a more diffusive acoustic space. It is also important that a diffusor spreads reflections in time as well as spatially. Diffusors can aid sound diffusion, but this is not why they are used in many cases; they are more often used to remove coloration and echoes.
Diffusors come in many shapes and materials. The birth of modern diffusors was marked by Manfred R. Schroeder's invention of number-theoretic diffusors in the 1970s.
Maximum length sequence diffusors
Maximum length sequence based diffusors are made of strips of material with two different depths. The placement of these strips follows a maximum length sequence (MLS). The width of the strips is smaller than or equal to a quarter of the wavelength of the frequency where the maximum scattering effect is desired. Ideally, small vertical walls are placed between the lower strips, improving the scattering effect in the case of tangential sound incidence. The bandwidth of these devices is rather limited; at one octave above the design frequency, diffusor efficiency drops to that of a flat surface.
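To give a concrete sense of scale, the following minimal Python sketch computes the maximum strip width implied by the quarter-wavelength rule above; the design frequency and speed of sound are illustrative assumptions, not values from any particular product.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees Celsius

def max_strip_width(design_freq_hz: float) -> float:
    """Strip width must not exceed a quarter of the design wavelength."""
    wavelength = SPEED_OF_SOUND / design_freq_hz
    return wavelength / 4.0

f = 1000.0  # Hz, hypothetical design frequency
print(f"Max strip width at {f:.0f} Hz: {100 * max_strip_width(f):.1f} cm")  # ~8.6 cm
Since efficiency falls to that of a flat surface one octave above the design frequency, in practice the strip width is chosen against the highest frequency one wishes to scatter.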
Quadratic-residue dif
|
https://en.wikipedia.org/wiki/NOON%20state
|
In quantum optics, a NOON state or N00N state is a quantum-mechanical many-body entangled state:
$|\psi_{\text{NOON}}\rangle = \frac{|N\rangle_a |0\rangle_b + e^{iN\theta}\,|0\rangle_a |N\rangle_b}{\sqrt{2}},$
which represents a superposition of N particles in mode a with zero particles in mode b, and vice versa. Usually, the particles are photons, but in principle any bosonic field can support NOON states.
Applications
NOON states are an important concept in quantum metrology and quantum sensing for their ability to make precision phase measurements when used in an optical interferometer. For example, consider the observable
$A = |N,0\rangle\langle 0,N| + |0,N\rangle\langle N,0|.$
The expectation value of $A$ for a system in a NOON state switches between +1 and −1 when $\theta$ changes from 0 to $\pi/N$. Moreover, the error in the phase measurement becomes
$\Delta\theta = \frac{1}{N}.$
This is the so-called Heisenberg limit, and gives a quadratic improvement over the standard quantum limit. NOON states are closely related to Schrödinger cat states and GHZ states, and are extremely fragile.
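As a short worked comparison, the standard error-propagation argument behind these two limits runs as follows (a textbook sketch, assuming the observable $A$ above with $\langle A\rangle = \cos N\theta$):
% Error propagation for the NOON observable A, with <A> = cos(N*theta):
\Delta\theta \;=\; \frac{\Delta A}{\left|\partial\langle A\rangle/\partial\theta\right|}
\;=\; \frac{\sqrt{1-\cos^2 N\theta}}{N\,\lvert\sin N\theta\rvert} \;=\; \frac{1}{N},
% whereas N independent single photons give the standard quantum limit 1/sqrt(N).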
Towards experimental realization
There have been several theoretical proposals for creating photonic NOON states. Pieter Kok, Hwang Lee, and Jonathan Dowling proposed the first general method based on post-selection via photodetection. The downside of this method was the exponential scaling of the protocol's success probability. Pryde and White subsequently introduced a simplified method using intensity-symmetric multiport beam splitters, single photon inputs, and either heralded or conditional measurement. Their method, for example, allows heralded production of the N = 4 NOON state without the need for postselection or zero photon detections, and has the same success probability of 3/64 as the more complicated circuit of Kok et al. Cable and Dowling proposed a method that has polynomial scaling in the success probability, which can therefore be called efficient.
Two-photon NOON states, where N = 2, can be created deterministically from two identical photons and a 50:50 beam splitter. This is called the Hong–Ou–Mandel effect in quantum optics. Three- and four-photon NOO
|
https://en.wikipedia.org/wiki/Arylsulfatase
|
Arylsulfatase (EC 3.1.6.1, sulfatase, nitrocatechol sulfatase, phenolsulfatase, phenylsulfatase, p-nitrophenyl sulfatase, arylsulfohydrolase, 4-methylumbelliferyl sulfatase, estrogen sulfatase) is a type of sulfatase enzyme with systematic name aryl-sulfate sulfohydrolase. This enzyme catalyses the following chemical reaction
an aryl sulfate + H2O ⇌ a phenol + sulfate
Types include:
Arylsulfatase A (also known as "cerebroside-sulfatase")
Arylsulfatase B (also known as "N-acetylgalactosamine-4-sulfatase")
Steroid sulfatase (formerly known as "arylsulfatase C")
ARSC2
ARSD
ARSF
ARSG
ARSH
ARSI
ARSJ
ARSK
ARSL (formerly known as "arylsulfatase E", "ARSE")
See also
Aryl
|
https://en.wikipedia.org/wiki/Serodiscordant
|
A serodiscordant relationship, also known as mixed-status, is one where one partner is infected by HIV and the other is not. This contrasts with seroconcordant relationships, in which both partners are of the same HIV status. Serodiscordancy contributes to the spread of HIV/AIDS, particularly in Sub-Saharan nations such as Lesotho.
Serodiscordant couples face numerous issues not faced by seroconcordant couples, including decisions as to what level of sexual activity is comfortable for them, knowing that practicing safer sex reduces but does not eliminate the risk of transmission to the HIV-negative partner. There are also potential psychological issues arising out of taking care of a sick partner, and survivor guilt. Financial strains may also be more accentuated as one partner becomes ill and potentially less able or unable to work.
Research involving serodiscordant couples has offered insights into how the virus is passed and how individuals who are HIV positive may be able to reduce the risk of passing the virus to their partner.
Experts predict that there are thousands of serodiscordant couples in the US who wish to have children, and researchers report a growing stream of calls from these couples wanting reproductive help. The Special Program of Assisted Reproduction was developed in 1996 to help serodiscordant couples conceive safely; however, it is solely designed to help couples in which the male partner is infected. WHO 2013 guidelines for starting assisted reproduction technology now consider all serodiscordant couples for treatment.
See also
Undetectable = Untransmittable
HIV testing
HIV/AIDS
HIV/AIDS in Lesotho
Serology
Serosorting
Serostatus
Sperm washing
Safe sex
|
https://en.wikipedia.org/wiki/Emopamil%20binding%20protein
|
Emopamil binding protein is a protein that in humans is encoded by the EBP gene, located on the X chromosome. The protein binds anti-ischemic drugs such as emopamil with high affinity, which led to its discovery and its name. EBP has a mass of 27.3 kDa and resembles a σ-receptor that resides in the endoplasmic reticulum of various tissues as an integral membrane protein.
Clinical significance
Mutations in EBP cause Conradi–Hünermann syndrome and impair cholesterol biosynthesis. Males affected by EBP mutations are generally not expected to be liveborn (males account for up to only 5% of affected births). Individuals, mostly female, who are liveborn with EBP mutations experience stunted growth, limb reduction and back problems. Later in life, the individual may develop cataracts along with coarse hair and hair loss.
Cloning
Isolation, replication and characterization of the EBP and EBP-like proteins have been performed in yeast and E. coli strains (which lack the EBP protein in nature) to study the high-affinity drug-binding effects.
See also
Emopamil
Cholestenol Delta-isomerase
Sigma-1 receptor
Sigma-2 receptor
|
https://en.wikipedia.org/wiki/Ramanujan%27s%20congruences
|
In mathematics, Ramanujan's congruences are some remarkable congruences for the partition function p(n). The mathematician Srinivasa Ramanujan discovered the congruences
$p(5k+4) \equiv 0 \pmod{5},$
$p(7k+5) \equiv 0 \pmod{7},$
$p(11k+6) \equiv 0 \pmod{11}.$
This means that:
If a number is 4 more than a multiple of 5, i.e. it is in the sequence
4, 9, 14, 19, 24, 29, . . .
then the number of its partitions is a multiple of 5.
If a number is 5 more than a multiple of 7, i.e. it is in the sequence
5, 12, 19, 26, 33, 40, . . .
then the number of its partitions is a multiple of 7.
If a number is 6 more than a multiple of 11, i.e. it is in the sequence
6, 17, 28, 39, 50, 61, . . .
then the number of its partitions is a multiple of 11.
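These statements can be checked numerically. The following minimal Python sketch computes p(n) with Euler's pentagonal-number recurrence (a standard method; the check range of n ≤ 200 is an arbitrary choice) and verifies all three congruences:
def partitions(n_max):
    # Euler's pentagonal-number recurrence:
    # p(n) = sum over k >= 1 of (-1)^(k+1) * [p(n - k(3k-1)/2) + p(n - k(3k+1)/2)]
    p = [1] + [0] * n_max
    for n in range(1, n_max + 1):
        total, k = 0, 1
        while True:
            g1 = k * (3 * k - 1) // 2  # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            if g1 > n:
                break
            sign = 1 if k % 2 == 1 else -1
            total += sign * p[n - g1]
            if g2 <= n:
                total += sign * p[n - g2]
            k += 1
        p[n] = total
    return p

p = partitions(200)
assert all(p[5 * k + 4] % 5 == 0 for k in range(40))    # p(5k+4) divisible by 5
assert all(p[7 * k + 5] % 7 == 0 for k in range(28))    # p(7k+5) divisible by 7
assert all(p[11 * k + 6] % 11 == 0 for k in range(18))  # p(11k+6) divisible by 11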
Background
In his 1919 paper, he proved the first two congruences using the following identities (using q-Pochhammer symbol notation):
$\sum_{k=0}^{\infty} p(5k+4)\,q^k = 5\,\frac{(q^5;q^5)_\infty^5}{(q;q)_\infty^6},$
$\sum_{k=0}^{\infty} p(7k+5)\,q^k = 7\,\frac{(q^7;q^7)_\infty^3}{(q;q)_\infty^4} + 49\,q\,\frac{(q^7;q^7)_\infty^7}{(q;q)_\infty^8}.$
He then stated that "It appears there are no equally simple properties for any moduli involving primes other than these".
After Ramanujan died in 1920, G. H. Hardy extracted proofs of all three congruences from an unpublished manuscript of Ramanujan on p(n) (Ramanujan, 1921). The proof in this manuscript employs the Eisenstein series.
In 1944, Freeman Dyson defined the rank function and conjectured the existence of a crank function for partitions that would provide a combinatorial proof of Ramanujan's congruences modulo 11. Forty years later, George Andrews and Frank Garvan found such a function, and proved the celebrated result that the crank simultaneously "explains" the three Ramanujan congruences modulo 5, 7 and 11.
In the 1960s, A. O. L. Atkin of the University of Illinois at Chicago discovered additional congruences for small prime moduli. For example:
$p(11^3 \cdot 13\,k + 237) \equiv 0 \pmod{13}.$
Extending the results of A. Atkin, Ken Ono in 2000 proved that there are such Ramanujan congruences modulo every integer coprime to 6. For example, his results give
$p(4063467631\,k + 30064597) \equiv 0 \pmod{31}.$
Later Ken Ono conjectured that the elusive crank also satisfies exactly the same types of general congruences. This was proved by his Ph.D. student Karl Mahl
|
https://en.wikipedia.org/wiki/Brownian%20dynamics
|
In physics, Brownian dynamics is a mathematical approach for describing the dynamics of molecular systems in the diffusive regime. It is a simplified version of Langevin dynamics and corresponds to the limit where no average acceleration takes place. This approximation is also known as overdamped Langevin dynamics or as Langevin dynamics without inertia.
Definition
In Brownian dynamics, the following equation of motion is used to describe the dynamics of a stochastic system with coordinates $X = X(t)$:
$\dot{X} = -\frac{D}{k_{\mathrm{B}}T}\,\nabla U(X) + \sqrt{2D}\,R(t)$
where:
$\dot{X}$ is the velocity, the dot being a time derivative
$U(X)$ is the particle interaction potential
$\nabla$ is the gradient operator, such that $-\nabla U(X)$ is the force calculated from the particle interaction potential
$k_{\mathrm{B}}$ is Boltzmann's constant
$T$ is the temperature
$D$ is a diffusion coefficient in units of $\mathrm{length}^2/\mathrm{time}$
$R(t)$ is a white noise term, in units of $\mathrm{time}^{-1/2}$, satisfying $\langle R(t)\rangle = 0$ and $\langle R(t)\,R(t')\rangle = \delta(t - t')$
Derivation
In Langevin dynamics, the equation of motion using the same notation as above is as follows:
$M\ddot{X} = -\nabla U(X) - \zeta\,\dot{X} + \sqrt{2\zeta k_{\mathrm{B}}T}\,R(t)$
where:
$M$ is the mass of the particle
$\ddot{X}$ is the acceleration
$\zeta$ is the friction constant or tensor, in units of $\mathrm{mass}/\mathrm{time}$.
It is often of form $\zeta = \gamma M$, where $\gamma$ is the collision frequency with the solvent, a damping constant in units of $\mathrm{time}^{-1}$.
For spherical particles of radius r in the limit of low Reynolds number, Stokes' law gives $\zeta = 6\pi\eta r$.
The above equation may be rewritten as
$\underbrace{M\ddot{X}}_{\text{inertial force}} + \underbrace{\nabla U(X)}_{\text{potential force}} + \underbrace{\zeta\dot{X}}_{\text{viscous force}} - \underbrace{\sqrt{2\zeta k_{\mathrm{B}}T}\,R(t)}_{\text{random force}} = 0.$
In Brownian dynamics, the inertial force term $M\ddot{X}(t)$ is so much smaller than the other three that it is considered negligible. In this case, the equation is approximately
$0 = -\nabla U(X) - \zeta\dot{X} + \sqrt{2\zeta k_{\mathrm{B}}T}\,R(t).$
For spherical particles of radius $r$ in the limit of low Reynolds number, we can use the Stokes–Einstein relation $D = k_{\mathrm{B}}T/\zeta$. In this case, $\zeta = k_{\mathrm{B}}T/D$, and the equation reads:
$\dot{X}(t) = -\frac{D}{k_{\mathrm{B}}T}\,\nabla U(X) + \sqrt{2D}\,R(t).$
When the magnitude of the friction tensor $\zeta$ increases, the damping effect of the viscous force becomes dominant relative to the inertial force, and the system transitions from the inertial to the diffusive (Brownian) regime. For this reason, Brownian dynamics is also known as overdamped Langevin dynamics or Langevin dynamics without inertia.
Algorithm
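The overdamped equation above is commonly integrated with the first-order Euler–Maruyama scheme, $X(t+\Delta t) = X(t) - \frac{D\,\Delta t}{k_{\mathrm{B}}T}\nabla U(X) + \sqrt{2D\,\Delta t}\,\xi$ with $\xi \sim \mathcal{N}(0,1)$. A minimal one-dimensional Python sketch follows; the harmonic potential and all parameter values are illustrative assumptions:
import numpy as np

kBT, D, dt, k_spring = 1.0, 1.0, 1e-3, 1.0  # illustrative units

def grad_U(x):
    return k_spring * x  # gradient of the harmonic potential U(x) = k x^2 / 2

rng = np.random.default_rng(0)
x, traj = 1.0, []
for _ in range(100_000):
    # Euler-Maruyama step: drift down the potential gradient plus diffusion
    x += -(D * dt / kBT) * grad_U(x) + np.sqrt(2 * D * dt) * rng.standard_normal()
    traj.append(x)

# Sanity check: at equilibrium <x^2> should approach kBT / k_spring = 1.0
print(np.mean(np.square(traj[20_000:])))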
|
https://en.wikipedia.org/wiki/Wasserstein%20metric
|
In mathematics, the Wasserstein distance or Kantorovich–Rubinstein metric is a distance function defined between probability distributions on a given metric space . It is named after Leonid Vaseršteĭn.
Intuitively, if each distribution is viewed as a unit amount of earth (soil) piled on , the metric is the minimum "cost" of turning one pile into the other, which is assumed to be the amount of earth that needs to be moved times the mean distance it has to be moved. This problem was first formalised by Gaspard Monge in 1781. Because of this analogy, the metric is known in computer science as the earth mover's distance.
The name "Wasserstein distance" was coined by R. L. Dobrushin in 1970, after learning of it in the work of Leonid Vaseršteĭn on Markov processes describing large systems of automata (Russian, 1969). However the metric was first defined by Leonid Kantorovich in The Mathematical Method of Production Planning and Organization (Russian original 1939) in the context of optimal transport planning of goods and materials. Some scholars thus encourage use of the terms "Kantorovich metric" and "Kantorovich distance". Most English-language publications use the German spelling "Wasserstein" (attributed to the name "Vaseršteĭn" () being of German origin).
Definition
Let $(M, d)$ be a metric space that is a Radon space. For $p \in [1, +\infty]$, the Wasserstein $p$-distance between two probability measures $\mu$ and $\nu$ on $M$ with finite $p$-moments is
$W_p(\mu,\nu) = \left( \inf_{\gamma \in \Gamma(\mu,\nu)} \mathbf{E}_{(x,y)\sim\gamma}\, d(x,y)^p \right)^{1/p}$
where $\Gamma(\mu,\nu)$ is the set of all couplings of $\mu$ and $\nu$; $W_\infty$ is defined to be $\lim_{p\to+\infty} W_p$ and corresponds to a supremum norm. A coupling $\gamma$ is a joint probability measure on $M \times M$ whose marginals are $\mu$ and $\nu$ on the first and second factors, respectively. That is, for all measurable $A \subseteq M$,
$\gamma(A \times M) = \mu(A), \qquad \gamma(M \times A) = \nu(A).$
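For one-dimensional empirical samples, the $p = 1$ case (the earth mover's distance) is available off the shelf in SciPy; a minimal sketch, with illustrative sample data:
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
u = rng.normal(loc=0.0, scale=1.0, size=5_000)
v = rng.normal(loc=0.5, scale=1.0, size=5_000)

# For two normals differing only in mean, W1 equals the mean shift (~0.5).
print(wasserstein_distance(u, v))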
Intuition and connection to optimal transport
One way to understand the above definition is to consider the optimal transport problem. That is, for a distribution of mass on a space , we wish to transport the mass in such a way that it is transformed into the distribution on the same space; transforming the 'pile of earth' to
|
https://en.wikipedia.org/wiki/Varifold
|
In mathematics, a varifold is, loosely speaking, a measure-theoretic generalization of the concept of a differentiable manifold, by replacing differentiability requirements with those provided by rectifiable sets, while maintaining the general algebraic structure usually seen in differential geometry. Varifolds generalize the idea of a rectifiable current, and are studied in geometric measure theory.
Historical note
Varifolds were first introduced by Laurence Chisholm Young under the name "generalized surfaces". Frederick J. Almgren Jr. slightly modified the definition in his mimeographed notes and coined the name varifold: he wanted to emphasize that these objects are substitutes for ordinary manifolds in problems of the calculus of variations. The modern approach to the theory was based on Almgren's notes and laid down by William K. Allard.
Definition
Given an open subset $\Omega$ of Euclidean space $\mathbb{R}^n$, an m-dimensional varifold on $\Omega$ is defined as a Radon measure on the set
$\Omega \times G(n,m)$
where $G(n,m)$ is the Grassmannian of all m-dimensional linear subspaces of an n-dimensional vector space. The Grassmannian is used to allow the construction of analogs to differential forms as duals to vector fields in the approximate tangent space of the set $\Omega$.
The particular case of a rectifiable varifold is the data of an m-rectifiable set M (which is measurable with respect to the m-dimensional Hausdorff measure), and a density function defined on M, which is a positive function θ, measurable and locally integrable with respect to the m-dimensional Hausdorff measure. It defines a Radon measure V on the Grassmannian bundle of ℝn:
$V(A) = \int_{\{x \in M \,:\, (x,\, \mathrm{Tan}^m(x,M)) \in A\}} \theta(x)\, d\mathcal{H}^m(x)$
where
$\mathcal{H}^m$ is the $m$-dimensional Hausdorff measure
Rectifiable varifolds are weaker objects than locally rectifiable currents: they do not have any orientation. Replacing M with more regular sets, one easily sees that differentiable submanifolds are particular cases of rectifiable varifolds.
Due to the lack of orientation, there is no boundary operator defined on th
|
https://en.wikipedia.org/wiki/Schramm%E2%80%93Loewner%20evolution
|
In probability theory, the Schramm–Loewner evolution with parameter κ, also known as stochastic Loewner evolution (SLEκ), is a family of random planar curves that have been proven to be the scaling limit of a variety of two-dimensional lattice models in statistical mechanics. Given a parameter κ and a domain in the complex plane U, it gives a family of random curves in U, with κ controlling how much the curve turns. There are two main variants of SLE, chordal SLE which gives a family of random curves from two fixed boundary points, and radial SLE, which gives a family of random curves from a fixed boundary point to a fixed interior point. These curves are defined to satisfy conformal invariance and a domain Markov property.
It was discovered by Oded Schramm as a conjectured scaling limit of the planar uniform spanning tree (UST) and the planar loop-erased random walk (LERW) probabilistic processes, and developed by him together with Greg Lawler and Wendelin Werner in a series of joint papers.
Besides UST and LERW, the Schramm–Loewner evolution is conjectured or proven to describe the scaling limit of various stochastic processes in the plane, such as critical percolation, the critical Ising model, the double-dimer model, self-avoiding walks, and other critical statistical mechanics models that exhibit conformal invariance. The SLE curves are the scaling limits of interfaces and other non-self-intersecting random curves in these models. The main idea is that the conformal invariance and a certain Markov property inherent in such stochastic processes together make it possible to encode these planar curves into a one-dimensional Brownian motion running on the boundary of the domain (the driving function in Loewner's differential equation). This way, many important questions about the planar models can be translated into exercises in Itô calculus. Indeed, several mathematically non-rigorous predictions made by physicists using conformal field theory have been proven using this st
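For reference, the encoding mentioned above takes the following standard chordal form (textbook definitions, stated here as a sketch rather than taken from any particular model):
% Chordal Loewner equation in the upper half-plane, with driving function W_t:
\partial_t g_t(z) = \frac{2}{g_t(z) - W_t}, \qquad g_0(z) = z,
% and SLE_kappa corresponds to the choice of a scaled Brownian driving function:
W_t = \sqrt{\kappa}\, B_t .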
|
https://en.wikipedia.org/wiki/Seven%20management%20and%20planning%20tools
|
The seven management and planning tools have their roots in operations research work done after World War II and the Japanese total quality control (TQC) research.
The New seven tools
Affinity Diagram [KJ method]
Affinity diagrams are a special kind of brainstorming tool that organizes large amounts of disorganized data and information into groupings based on natural relationships.
It was created in the 1960s by the Japanese anthropologist Jiro Kawakita. It is also known as KJ diagram, after Jiro Kawakita. An affinity diagram is used when:
You are confronted with many facts or ideas in apparent chaos.
Issues seem too large and complex to grasp.
Interrelationship diagram
Interrelationship diagrams (IDs) display all the interrelated cause-and-effect relationships and factors involved in a complex problem and describe desired outcomes. The process of creating an interrelationship diagram helps a group analyze the natural links between different aspects of a complex situation.
Tree diagram
This tool is used to break down broad categories into finer and finer levels of detail. It can map the levels of detail of the tasks that are required to accomplish a goal or solution. Developing a tree diagram directs concentration from generalities to specifics.
Prioritization matrix
This tool is used to prioritize items and describe them in terms of weighted criteria. It uses a combination of tree and matrix diagramming techniques to do a pair-wise evaluation of items and to narrow down options to the most desired or most effective. Popular applications for the prioritization matrix include return on investment (ROI) or cost–benefit analysis (investment vs. return), time management matrix (urgency vs. importance), etc.
Matrix diagram or quality table
This tool shows the relationship between two or more sets of elements. At each intersection, a relationship is either absent or present. It then gives information about the relationship, such as its strength, the roles
|
https://en.wikipedia.org/wiki/Kv1.1
|
Potassium voltage-gated channel subfamily A member 1, also known as Kv1.1, is a Shaker-related voltage-gated potassium channel that in humans is encoded by the KCNA1 gene. Isaacs syndrome is a result of an autoimmune reaction against the Kv1.1 ion channel.
Genomics
The gene is located on the Watson (plus) strand of the short arm of chromosome 12 (12p13.32). The gene itself is 8,348 bases in length and encodes a protein of 495 amino acids (predicted molecular weight 56.466 kilodaltons).
Alternative names
The recommended name for this protein is potassium voltage-gated channel subfamily A member 1 but a number of alternatives have been used in the literature including HuK1 (human K+ channel I), RBK1 (rubidium potassium channel 1), MBK (mouse brain K+ channel), voltage gated potassium channel HBK1, voltage gated potassium channel subunit Kv1.1, voltage-gated K+ channel HuKI and AEMK (associated with myokymia with periodic ataxia).
Structure
The protein is believed to have six domains (S1-S6) with the loop between S5 and S6 forming the channel pore. This region also has a conserved selectivity filter motif. The functional channel is a homotetramer. The N-terminus of the protein associates with β subunits. These subunits regulate channel inactivation as well as its expression. The C-terminus is associated with a PDZ domain protein involved in channel targeting.
Function
The protein functions as a potassium-selective channel through which potassium ions may pass in accordance with the electrochemical gradient. It plays a role in the repolarisation of membranes.
RNA editing
The pre-mRNA of this protein is subject to RNA editing.
Type
A to I RNA editing is catalyzed by a family of adenosine deaminases acting on RNA (ADARs) that specifically recognize adenosines within double-stranded regions of pre-mRNAs (e.g. the potassium channel RNA editing signal) and deaminate them to inosine. Inosines are recognised as guanosine by the cell's translational machinery. There a
|
https://en.wikipedia.org/wiki/Hypercube%20graph
|
In graph theory, the hypercube graph $Q_n$ is the graph formed from the vertices and edges of an $n$-dimensional hypercube. For instance, the cube graph $Q_3$ is the graph formed by the 8 vertices and 12 edges of a three-dimensional cube.
$Q_n$ has $2^n$ vertices, $2^{n-1} n$ edges, and is a regular graph with $n$ edges touching each vertex.
The hypercube graph $Q_n$ may also be constructed by creating a vertex for each subset of an $n$-element set, with two vertices adjacent when their subsets differ in a single element, or by creating a vertex for each $n$-digit binary number, with two vertices adjacent when their binary representations differ in a single digit. It is the $n$-fold Cartesian product of the two-vertex complete graph $K_2$, and may be decomposed into two copies of $Q_{n-1}$ connected to each other by a perfect matching.
Hypercube graphs should not be confused with cubic graphs, which are graphs that have exactly three edges touching each vertex. The only hypercube graph that is a cubic graph is the cubical graph $Q_3$.
Construction
The hypercube graph $Q_n$ may be constructed from the family of subsets of a set with $n$ elements, by making a vertex for each possible subset and joining two vertices by an edge whenever the corresponding subsets differ in a single element. Equivalently, it may be constructed using $2^n$ vertices labeled with $n$-bit binary numbers and connecting two vertices by an edge whenever the Hamming distance of their labels is one. These two constructions are closely related: a binary number may be interpreted as a set (the set of positions where it has a 1 digit), and two such sets differ in a single element whenever the corresponding two binary numbers have Hamming distance one.
Alternatively, $Q_{n+1}$ may be constructed from the disjoint union of two hypercubes $Q_n$, by adding an edge from each vertex in one copy of $Q_n$ to the corresponding vertex in the other copy. The joining edges form a perfect matching.
The above construction gives a recursive algorithm for constructing the adjacency matri
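A minimal Python sketch of the binary-label construction (labels are n-bit integers, and two vertices are adjacent exactly when their labels differ in one bit; the choice n = 3 is illustrative):
def hypercube_edges(n):
    # Each vertex is an n-bit label; XOR with a single set bit flips one
    # coordinate, giving a neighbour at Hamming distance 1.
    return [(v, v ^ (1 << b))
            for v in range(1 << n)
            for b in range(n)
            if v < v ^ (1 << b)]  # keep each edge once

n = 3
edges = hypercube_edges(n)
assert len(edges) == n * (1 << (n - 1))              # 2^(n-1) * n edges (12 for Q_3)
assert len({v for e in edges for v in e}) == 1 << n  # 2^n vertices (8 for Q_3)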
|
https://en.wikipedia.org/wiki/Pac-Mania
|
Pac-Mania is a cavalier perspective maze game that was developed and released by Namco for arcades in 1987. In the game, the player controls Pac-Man as he must eat all of the dots while avoiding the colored ghosts that chase him in the maze. Eating large flashing "Power Pellets" will allow Pac-Man to eat the ghosts for bonus points, which lasts for a short period of time. A new feature to this game allows Pac-Man to jump over the ghosts to evade capture. It is the ninth title in the Pac-Man video game series and was the last one developed for arcades up until the release of Pac-Man Arrangement in 1996. Development was directed by Pac-Man creator Toru Iwatani. It was licensed to Atari Games for release in North America.
Pac-Mania gained a highly positive critical reception for its uniqueness and gameplay. It was nominated for "Best Coin-Op Conversion of the Year" at the Golden Joystick Awards in 1987, although it lost to Taito's Operation Wolf. Pac-Mania was ported to several home consoles and computers, including the Atari ST, MSX2, Sega Genesis and Nintendo Entertainment System, the last of which was published by Tengen. Several Pac-Man and Namco video game collections also included the game. Ports for the Wii Virtual Console, iOS and mobile phones were also produced.
Gameplay
Pac-Mania is a maze game viewed from an oblique perspective and with a gameplay similar to the franchise's original installment. The player controls Pac-Man, a yellow circular creature that must eat all of the pellets in each stage while avoiding five colored ghosts - Blinky (red), Pinky (pink), Inky (cyan), Clyde (orange) and Sue (purple). Eating large Power Pellets will cause the ghosts to turn blue and flee, allowing Pac-Man to eat them for bonus points and send them to the house in the middle of the stage. Clearing the stage of dots and pellets will allow Pac-Man to move to the next. Mazes scroll both horizontally and vertically, and the left and right edges of some layouts wrap around to each
|
https://en.wikipedia.org/wiki/Crystallography%20and%20NMR%20system
|
CNS, or Crystallography and NMR System, is a software library for computational structural biology. It is an offshoot of X-PLOR and uses much of the same syntax. It is used in the fields of X-ray crystallography and NMR spectroscopy of biological macromolecules.
|
https://en.wikipedia.org/wiki/Loss%20reserving
|
Loss reserving is the calculation of the required reserves for a tranche of insurance business, including outstanding claims reserves.
Typically, the claims reserves represent the money which should be held by the insurer so as to be able to meet all future claims arising from policies currently in force and policies written in the past.
Methods of calculating reserves in general insurance are different from those used in life insurance, pensions and health insurance since general insurance contracts are typically of a much shorter duration. Most general insurance contracts are written for a period of one year, and typically there is only one payment of premium at the start of the contract in exchange for coverage over the year. Reserves are calculated differently from contracts of a longer duration with multiple premium payments since there are no future premiums to consider in this case. The reserves are calculated by forecasting future losses from past losses.
Methods
The most popular methods of claims reserving include the chain-ladder method and the Bornhuetter–Ferguson method.
Another method is frequency-severity approach, used mainly when data is sparse.
The chain-ladder method, also known as the development method, assumes that past experience is an indicator of future experience. Loss development patterns in the past are used to estimate how claim amounts will increase (or decrease) in the future.
The Bornhuetter–Ferguson method uses both past loss development as well as an independently derived prior estimate of ultimate expected losses.
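A minimal Python sketch of the chain-ladder idea on a cumulative run-off triangle follows; the triangle values are illustrative, and the sketch ignores tail factors and other practical adjustments an actuary would make:
import numpy as np

# Cumulative claims triangle: rows are accident years, columns are
# development years; NaN marks development not yet observed.
tri = np.array([
    [100., 150., 175., 180.],
    [110., 168., 196., np.nan],
    [ 95., 142., np.nan, np.nan],
    [120., np.nan, np.nan, np.nan],
])

n = tri.shape[1]
# Volume-weighted development factors f_j = sum(C[:, j+1]) / sum(C[:, j]),
# using only the rows where column j+1 has been observed.
factors = []
for j in range(n - 1):
    mask = ~np.isnan(tri[:, j + 1])
    factors.append(tri[mask, j + 1].sum() / tri[mask, j].sum())

# Project each accident year to ultimate by applying the remaining factors.
for i in range(tri.shape[0]):
    for j in range(n - 1):
        if np.isnan(tri[i, j + 1]):
            tri[i, j + 1] = tri[i, j] * factors[j]

print("development factors:", factors)
print("estimated ultimates:", tri[:, -1])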
Outstanding claims reserves
Outstanding claims reserves in general insurance are a type of technical reserve or accounting provision in the financial statements of an insurer. They seek to quantify the loss liabilities for insurance claims which have been reported but not yet settled (RBNS) or which have been incurred but not yet reported (IBNR). This is a technical reserve of an insurance company, and is es
|
https://en.wikipedia.org/wiki/Weakless%20universe
|
A weakless universe is a hypothetical universe that contains no weak interactions, but is otherwise very similar to our own universe.
In particular, a weakless universe is constructed to have atomic physics and chemistry identical to standard atomic physics and chemistry. The dynamics of a weakless universe includes a period of Big Bang nucleosynthesis, star formation, stars with sufficient fuel to burn for billions of years, stellar nuclear synthesis of heavy elements and also supernovae that distribute the heavy elements into the interstellar medium.
Motivation and anthropics
The strength of the weak interaction is an outstanding problem in modern particle physics. A theory should ideally explain why the weak interaction is 32 orders of magnitude stronger than gravity; this is known as the hierarchy problem. There are various models that address the hierarchy problem in a dynamical and natural way, for example, supersymmetry, technicolor, warped extra dimensions, and so on.
An alternative approach to explaining the hierarchy problem is to invoke the anthropic principle: One assumes that there are many other patches of the universe (or multiverse) in which physics is very different. In particular one can assume that the “landscape” of possible universes contains ones where the weak force has a different strength compared to our own. In such a scenario observers would presumably evolve wherever they can. If the observed strength of the weak force is then vital for the emergence of observers, this would explain why the weak force is indeed observed with this strength. Barr and others argued that if one only allows the electroweak symmetry breaking scale to vary between universes, keeping all other parameters fixed, atomic physics would change in ways that would not allow life as we know it.
Anthropic arguments have recently been boosted by the realization that string theory has many possible solutions, or vacua, called the “string landscape”, and by Steven Weinb
|
https://en.wikipedia.org/wiki/Taping%20knife
|
A taping knife or joint knife is a drywall tool with a wide blade for spreading joint compound, also known as "mud". It can be used to spread mud over nail and screw indents in new drywall applications and is also used when using paper or fiberglass drywall tape to cover seams. Other common uses include patching holes, smoothing wall coverings and creating specialty artistic wall finishes. Common sizes range from 15 cm to 30 cm wide (6 to 12 inches). Spackle knives are a smaller version, used for patching small holes.
A right-angle joint knife allows one to apply joint compound to inside corners where walls meet. The handle is offset to allow clearance for fingers.
|
https://en.wikipedia.org/wiki/Cognitive%20shifting
|
Cognitive shifting is the mental process of consciously redirecting one's attention from one fixation to another. In contrast, if this process happened unconsciously, then it is referred to as task switching. Both are forms of cognitive flexibility.
In the general framework of cognitive therapy and awareness management, cognitive shifting refers to the conscious choice to take charge of one's mental habits—and redirect one's focus of attention in helpful, more successful directions. In the term's specific usage in corporate awareness methodology, cognitive shifting is a performance-oriented technique for refocusing attention in more alert, innovative, charismatic and empathic directions.
Origins in cognitive therapy
In cognitive therapy, as developed by its founder Aaron T. Beck and others, a client is taught to shift his or her cognitive focus from one thought or mental fixation to a more positive, realistic focus—thus the descriptive origins of the term "cognitive shifting". In "third wave" ACT therapy as taught by Steven C. Hayes and his associates in the Acceptance and Commitment Therapy movement, cognitive shifting is employed not only to shift from negative to positive thoughts, but also to shift into a quiet state of mindfulness. Cognitive shifting is also employed quite dominantly in the meditative-health procedures of medical and stress-reduction researchers such as Jon Kabat-Zinn at the University of Massachusetts Medical School.
Gradually over the 1990s and 2000s, cognitive shifting became a common term among therapists, especially on the West Coast, as well as in discussions of mind-management methodology, and began to appear regularly in medical and psychiatric journals.
Examples of usage
In research: The term has become fairly common in psychiatric research, used in the following manner: "Neuropsychological findings in obsessive-compulsive disorder (OCD) have been explained in terms of reduced cognitive shifting ability as a result of lo
|
https://en.wikipedia.org/wiki/HP%20Compaq%20tc1100
|
The HP Compaq TC1100 is a tablet PC sold by Hewlett-Packard that was the follow-up to the Compaq TC1000. The TC1100 had either an Intel Celeron or an Intel Pentium M processor and could be upgraded to 2 gigabytes of memory. The switch from Transmeta Crusoe processors to the Pentium M and the ability to add memory were welcomed after numerous complaints about the poor performance of the TC1000. The TC1100 was the last version from HP in this style of tablet. It was replaced by the HP Compaq TC4200, which featured a more traditional one-piece design.
Design
The TC1100 has a 10.4 inch LCD display and pressure-sensitive pen that shares the same basic design as its predecessor, the TC1000. It has a design often referred to as a hybrid tablet, as it has the properties of both a convertible and slate tablet. All the necessary hardware components are stored within the casing of the display and digitizer. This allows it to work with or without a keyboard attached. With the keyboard attached, it can be used as a laptop, with the display rising by an adjustable hinge behind the keyboard. By rotating the display, the keyboard can fold inside the unit; or the keyboard can be removed entirely. Either method lets the user write on the screen easily, using a virtual keyboard or handwriting-recognition application for occasional character entry. This versatility gave the product a loyal following when there were no similar designs on the market.
HP also made an optional docking station, which connected to the port on the bottom of the tablet and allowed the user to easily charge the tablet while also offering a VGA output, an Ethernet jack, audio input and output jacks, 4 USB 2.0 ports and a laptop-style CD/DVD drive. All of the ports, excluding the USB ports, are shared with the ones on the tablet, with the audio ports having priority over the ones on the right side of the tablet.
Hardware
The range of processors includes Pentium-M 1.0 GHz, 1.1 GHz, and 1.2 GHz.
|
https://en.wikipedia.org/wiki/Quantitative%20computed%20tomography
|
Quantitative computed tomography (QCT) is a medical technique that measures bone mineral density (BMD) using a standard X-ray Computed Tomography (CT) scanner with a calibration standard to convert Hounsfield Units (HU) of the CT image to bone mineral density values. Quantitative CT scans are primarily used to evaluate bone mineral density at the lumbar spine and hip.
In general, solid phantoms placed in a pad under the patient during CT image acquisition are used for calibration. These phantoms contain materials that represent a number of different equivalent bone mineral densities. Usually either calcium hydroxyapatite (CaHAP) or dipotassium phosphate (K2HPO4) is used as the reference standard.
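The calibration step amounts to fitting a line from the measured HU of the phantom inserts to their known equivalent densities, then applying that line to bone voxels. A minimal Python sketch follows; the HU and density values are illustrative assumptions, not from any scanner:
import numpy as np

# Known equivalent BMD of the phantom inserts (mg/cm^3) and the mean HU
# measured for each insert on the same scan. Values are illustrative.
phantom_bmd = np.array([0.0, 50.0, 100.0, 200.0])
phantom_hu = np.array([2.0, 45.0, 91.0, 177.0])

# Least-squares line HU -> BMD: bmd = slope * hu + intercept
slope, intercept = np.polyfit(phantom_hu, phantom_bmd, deg=1)

def hu_to_bmd(hu):
    """Convert a Hounsfield unit measurement to bone mineral density."""
    return slope * hu + intercept

print(hu_to_bmd(120.0))  # BMD estimate for a region of interest at 120 HU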
History
QCT was invented at University of California San Francisco (UCSF) during the 1970s. Douglas Boyd, PhD and Harry Genant, MD used a CT head scanner to do some of the seminal work on QCT.
At the same time, CT imaging technology progressed rapidly and Genant and Boyd worked with one of EMI's first whole body CT systems in the late 1970s and early 1980s to apply the quantitative CT method to the spine, coining the term "QCT." Genant later published several articles on spinal QCT in the early 1980s with Christopher E. Cann, PhD. Today, QCT is being used in hundreds of medical imaging centers around the world, both clinically and as a powerful research tool.
Three-dimensional QCT imaging
Originally, conventional 2D QCT used individual, thick CT slice images through each of multiple vertebrae which involved tilting the CT scanner gantry to align the slice with each vertebra. Today, modern 3D QCT uses the ability of CT scanners to rapidly acquire multiple slices to construct three-dimensional images of the human body. Using 3D imaging substantially reduced image acquisition time, improved reproducibility and enabled QCT bone density analysis of the hip.
Diagnostic use
QCT exams are typically used in the diagnosis and monitoring of osteoporosis.
Lumbar spine
At the
|
https://en.wikipedia.org/wiki/Logic%20probe
|
A logic probe is a low-cost hand-held test probe used for analyzing and troubleshooting the logical states (boolean 0 or 1) of a digital circuit. When many signals need to be observed or recorded simultaneously, a logic analyzer is used instead.
Overview
While most logic probes are powered by the circuit under test, some devices use batteries. They can be used on either TTL (transistor-transistor logic) or CMOS (complementary metal-oxide semiconductor) logic integrated circuit devices, such as 7400-series, 4000 series, and newer logic families that support similar voltages.
Most modern logic probes typically have one or more LEDs on the body of the probe:
an LED to indicate a high (1) logic state.
an LED to indicate a low (0) logic state.
an LED to indicate changing back and forth between low and high states. The pulse-detecting electronics usually has a pulse-stretcher circuit so that even very short pulses become visible on the LED.
A control on the logic probe allows either the capture and storage of a single event or continuous running.
When the logic probe is either connected to an invalid logic level (a fault condition or a tri-stated output) or not connected at all, none of the LEDs light up.
Another control on the logic probe allows selection of either TTL or CMOS family logic. This is required as these families have different thresholds for the logic-high (VIH) and logic-low (VIL) circuit voltages.
Some logic probes emit an audible tone, the behaviour of which varies across models. A model may (1) emit a tone for a high logic state and no tone otherwise, or (2) emit a higher-frequency tone for a high logic state, a lower-frequency tone for a low logic state, and no tone for no connection or tri-state. An oscillating signal causes the probe to alternate between high-state and low-state tones.
History
The logic probe was invented by Gary Gordon in 1968 while he was employed by Hewlett-Packard.
|
https://en.wikipedia.org/wiki/Doubling%20time
|
The doubling time is the time it takes for a population to double in size/value. It is applied to population growth, inflation, resource extraction, consumption of goods, compound interest, the volume of malignant tumours, and many other things that tend to grow over time. When the relative growth rate (not the absolute growth rate) is constant, the quantity undergoes exponential growth and has a constant doubling time or period, which can be calculated directly from the growth rate.
This time can be calculated by dividing the natural logarithm of 2 by the exponent of growth, or approximated by dividing 70 by the percentage growth rate (more roughly but roundly, dividing 72; see the rule of 72 for details and derivations of this formula).
The doubling time is a characteristic unit (a natural unit of scale) for the exponential growth equation, and its converse for exponential decay is the half-life.
As an example, Canada's net population growth was 2.7 percent in the year 2022; dividing 72 by 2.7 gives an approximate doubling time of about 27 years. Thus if that growth rate were to remain constant, Canada's population would double from its 2023 figure of about 39 million to about 78 million by 2050.
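A minimal Python sketch comparing the exact doubling time ln 2 / ln(1 + r) for compound growth with the rule-of-70 and rule-of-72 approximations (the growth rates are arbitrary examples):
import math

def doubling_time_exact(rate_percent):
    """Exact doubling time, in periods, for compound growth at rate_percent."""
    return math.log(2) / math.log(1 + rate_percent / 100)

for r in (1, 2.7, 7, 10):
    print(f"{r:>4}%  exact={doubling_time_exact(r):6.2f}  "
          f"rule70={70 / r:6.2f}  rule72={72 / r:6.2f}")
The rule of 72 is most accurate around 8 percent growth, while 70 fits lower rates better, which is why both survive as rules of thumb.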
History
The notion of doubling time dates to interest on loans in Babylonian mathematics. Clay tablets from circa 2000 BCE include the exercise "Given an interest rate of 1/60 per month (no compounding), compute the doubling time." This yields an annual interest rate of 12/60 = 20%, and hence a doubling time of 100% growth/20% growth per year = 5 years. Further, repaying double the initial amount of a loan, after a fixed time, was common commercial practice of the period: a common Assyrian loan of 1900 BCE consisted of loaning 2 minas of gold, getting back 4 in five years, and an Egyptian proverb of the time was "If wealth is placed where it bears interest, it comes back to you redoubled."
Examination
Examining the doubling time can give a more intuitive sense of the
|
https://en.wikipedia.org/wiki/Minus%20Cube
|
The Minus Cube (aka BLOXBOX by Piet Hein; aka Varikon Box) is a 3D mechanical variant of the n-puzzle, which was manufactured in the Soviet Union. It consists of a bonded transparent plastic box containing seven small cubes, each glued together from two U-shape parts: one white and one coloured. The length of one side of the interior of the box is slightly more than twice the length of the side of a small cube. There is an empty space for one small cube inside the box, and the small cubes are moveable inside the box by tilting the box, causing a cube to fall into the space. The goal of the puzzle is to shuffle the cubes in such a way that on each side of the box, all of the faces of the small cubes have the same color.
There were two types of Minus Cubes manufactured: the so-called "Moscow Minus Cube" (red–white) and "Sverdlovsk Minus Cube" (blue–white), each named after the cities in which they were produced. They differed only in the orientation of one of the small cubes. Because of this difference, there are 12 times as many "solved" arrangements for the Moscow Minus Cube, and thus the Sverdlovsk Minus Cube is 12 times as difficult to solve. However, if one does not confine oneself to these two types of the Minus Cube, there are 48 Minus Cube variants that can be solved.
See also
15 puzzle
Combination puzzle
Mechanical puzzle
|
https://en.wikipedia.org/wiki/Tian%20Gang
|
Tian Gang (; born November 24, 1958) is a Chinese mathematician. He is a professor of mathematics at Peking University and Higgins Professor Emeritus at Princeton University. He is known for contributions to the mathematical fields of Kähler geometry, Gromov-Witten theory, and geometric analysis.
As of 2020, he is the Vice Chairman of the China Democratic League and the President of the Chinese Mathematical Society. From 2017 to 2019 he served as the Vice President of Peking University.
Biography
Tian was born in Nanjing, Jiangsu, China. He qualified in the second college entrance exam after Cultural Revolution in 1978. He graduated from Nanjing University in 1982, and received a master's degree from Peking University in 1984. In 1988, he received a Ph.D. in mathematics from Harvard University, under the supervision of Shing-Tung Yau.
In 1998, he was appointed as a Cheung Kong Scholar professor at Peking University. Later his appointment was changed to a Cheung Kong Scholar chair professorship. He was a professor of mathematics at the Massachusetts Institute of Technology from 1995 to 2006 (holding the chair of Simons Professor of Mathematics from 1996). His employment at Princeton started in 2003, and he was later appointed the Higgins Professor of Mathematics. Starting in 2005, he has been the director of the Beijing International Center for Mathematical Research (BICMR); from 2013 to 2017 he was the Dean of the School of Mathematical Sciences at Peking University. He and John Milnor are Senior Scholars of the Clay Mathematics Institute (CMI). In 2011, Tian became director of the Sino-French Research Program in Mathematics at the Centre national de la recherche scientifique (CNRS) in Paris. In 2010, he became scientific consultant for the International Center for Theoretical Physics in Trieste, Italy.
Tian has served on many committees, including for the Abel Prize and the Leroy P. Steele Prize. He is a member of the editorial boards of many journals, including Advance
|
https://en.wikipedia.org/wiki/Matti%20Jutila
|
Matti Ilmari Jutila (born 1943) is a mathematician and a professor emeritus at the University of Turku. He researches in the field of analytic number theory.
Education and career
Jutila completed a doctorate at the University of Turku in 1970, with a dissertation related to Linnik's constant.
Jutila's work has repeatedly succeeded in lowering the upper bound for Linnik's constant. He is the author of a monograph, Lectures on a method in the theory of exponential sums (1987). He has been a member of the Finnish Academy of Science and Letters since 1982.
|
https://en.wikipedia.org/wiki/Borisova%20Gradina%20TV%20Tower
|
The Borisova Gradina TV Tower, or the Old TV Tower, is a TV tower (including the aerial) in the garden Borisova gradina in Sofia, the capital of Bulgaria. It is known as the tower used for the first Bulgarian National Television broadcasts in 1959.
History
The tower was designed by architect Lyuben Podponev, engineer A. Voynov and technologist Georgi Kopkanov. Construction began in December 1958 and the tower was officially opened on 26 December 1959. The tower has 14 floors and three platforms.
In 1962 the Borisova Gradina TV Tower was the place from which the first Bulgarian VHF radio broadcast (of the Bulgarian National Radio's Programme 1) was realized. Broadcasting of the BNR's Programme 2 began in 1965, and of Programme 3 in 1971. BNT's Kanal 1 transmitter was replaced in January 1972 with a more modern one conforming to the OIRT colour television requirements, and the broadcasting of Efir 2 began in 1975.
Since 1985 the newer Vitosha Mountain TV Tower has been the main facility for broadcasting television and the programs of the Bulgarian National Radio in and around Sofia. The Old TV Tower broadcasts private radio stations as well as DVB-T terrestrial television.
|
https://en.wikipedia.org/wiki/South%20African%20National%20Bioinformatics%20Institute
|
The South African National Bioinformatics Institute (SANBI) is a non-profit organisation in Cape Town, South Africa dedicated to bioinformatics, biotechnology and genomics in health research.
SANBI maintains current collaborations with institutes and laboratories at Harvard University, Oxford University, Cambridge University, Stanford University, the Pasteur Institute, the RIKEN institute, the Allen Institute for Brain Science, the Institut National de la Santé et de la Recherche Médicale and the European Bioinformatics Institute.
SANBI is the South African national node of the European Molecular Biology Network, the regional site for the World Health Organization African centre for training in pathogen bioinformatics, and an affiliate of the Ludwig Institute for Cancer Research.
SANBI is funded by several organisations including the South African Medical Research Council, the National Research Foundation of South Africa, the Claude Leon Foundation, the John E. Fogarty Foundation for International Health at the National Institutes of Health, and the European Commission.
SANBI was founded in 1996 by computational biologist Winston Hide, the founding director, as part of the faculty of Natural Sciences of the University of the Western Cape. The SANBI research team includes faculty in the areas of genetic diversity, gene regulation, cancer, sleeping sickness and the Human Immunodeficiency Virus.
See also
Computational biology
|
https://en.wikipedia.org/wiki/Biodiversity%20action%20plan
|
A biodiversity action plan (BAP) is an internationally recognized program addressing threatened species and habitats and is designed to protect and restore biological systems. The original impetus for these plans derives from the 1992 Convention on Biological Diversity (CBD). As of 2009, 191 countries have ratified the CBD, but only a fraction of these have developed substantive BAP documents.
The principal elements of a BAP typically include: (a) preparing inventories of biological information for selected species or habitats; (b) assessing the conservation status of species within specified ecosystems; (c) creation of targets for conservation and restoration; and (d) establishing budgets, timelines and institutional partnerships for implementing the BAP.
Species plans
A fundamental method of engagement to a BAP is thorough documentation regarding individual species, with emphasis upon the population distribution and conservation status. This task, while fundamental, is highly daunting, since only an estimated ten percent of the world’s species are believed to have been characterized as of 2006, most of these unknowns being fungi, invertebrate animals, micro-organisms and plants. For many bird, mammal and reptile species, information is often available in published literature; however, for fungi, invertebrate animals, micro-organisms and many plants, such information may require considerable local data collection. It is also useful to compile time trends of population estimates in order to understand the dynamics of population variability and vulnerability. In some parts of the world complete species inventories are not realistic; for example, in the Madagascar dry deciduous forests, many species are completely undocumented and much of the region has never even been systematically explored by scientists.
A species plan component of a country’s BAP should ideally entail a thorough description of the range, habitat, behaviour, breeding and interaction with other
|
https://en.wikipedia.org/wiki/B%C3%A9la%20Sz%C5%91kefalvi-Nagy
|
Béla Szőkefalvi-Nagy (29 July 1913, Kolozsvár – 21 December 1998, Szeged) was a Hungarian mathematician. His father, Gyula Szőkefalvi-Nagy was also a famed mathematician. Szőkefalvi-Nagy collaborated with Alfréd Haar and Frigyes Riesz, founders of the Szegedian school of mathematics. He contributed to the theory of Fourier series and approximation theory. His most important achievements were made in functional analysis, especially, in the theory of Hilbert space operators. He was editor-in-chief of the Zentralblatt für Mathematik, the Acta Scientiarum Mathematicarum, and the Analysis Mathematica. He was awarded the Kossuth Prize in 1953, along with his co-author F. Riesz, for his book Leçons d'analyse fonctionnelle. He was awarded the Lomonosov Medal in 1979. The Béla Szőkefalvi-Nagy Medal honoring his memory is awarded yearly by Bolyai Institute.
His books
Béla Szőkefalvi-Nagy: Spektraldarstellung linearer Transformationen des Hilbertschen Raumes.(German) Berlin, 1942. 80 p.; 1967. 82 p.
Frederic Riesz, Béla Szőkefalvi-Nagy: Leçons d'analyse fonctionnelle. (French) 2e éd. Akadémiai Kiado, Budapest, 1953, VIII+455 pp.
Ciprian Foiaş, Béla Szőkefalvi-Nagy: Analyse harmonique des opérateurs de l'espace de Hilbert. (French) Masson et Cie, Paris; Akadémiai Kiadó, Budapest 1967 xi+373 pp.
Béla Szőkefalvi-Nagy, Frederic Riesz: Funkcionálanalízis. Budapest, 1988. 534 p. (English: Functional Analysis (1990). Dover. )
His articles
Diagonalization of matrices over H∞. Acta Scientiarum Mathematicarum. Szeged, 1976
On contractions similar to isometries and Toeplitz operators, with Ciprian Foiaş. Ann. Acad. Scient. Fennicae, 1976.
The function model of a contraction and the space L’/H’, with Ciprian Foiaş. Acta Scientiarum Mathematicarum. Szeged, 1979, 1980.
Toeplitz type operators and hyponormality, with Ciprian Foiaş. Operator theory. Advances and appl., 1983.
Factoring compact operator-valued functions, with coauthors. Acta Scientiarum Mathematicarum. Szeged,
|
https://en.wikipedia.org/wiki/Introduction%20to%20the%20mathematics%20of%20general%20relativity
|
The mathematics of general relativity is complex. In Newton's theories of motion, an object's length and the rate at which time passes remain constant while the object accelerates, meaning that many problems in Newtonian mechanics may be solved by algebra alone. In relativity, however, an object's length and the rate at which time passes both change appreciably as the object's speed approaches the speed of light, meaning that more variables and more complicated mathematics are required to calculate the object's motion. As a result, relativity requires the use of concepts such as vectors, tensors, pseudotensors and curvilinear coordinates.
For an introduction based on the example of particles following circular orbits about a large mass, nonrelativistic and relativistic treatments are given in, respectively, Newtonian motivations for general relativity and Theoretical motivation for general relativity.
Vectors and tensors
Vectors
In mathematics, physics, and engineering, a Euclidean vector (sometimes called a geometric or spatial vector, or – as here – simply a vector) is a geometric object that has both a magnitude (or length) and direction. A vector is what is needed to "carry" the point A to the point B; the Latin word vector means "one who carries". The magnitude of the vector is the distance between the two points, and the direction refers to the direction of displacement from A to B. Many algebraic operations on real numbers such as addition, subtraction, multiplication, and negation have close analogues for vectors, operations which obey the familiar algebraic laws of commutativity, associativity, and distributivity.
Tensors
A tensor extends the concept of a vector to additional directions. A scalar, that is, a simple number without a direction, would be shown on a graph as a point, a zero-dimensional object. A vector, which has a magnitude and direction, would appear on a graph as a line, which is a one-dimensional object. A vector is a first-order tensor, s
|
https://en.wikipedia.org/wiki/Powdered%20eggs
|
A powdered egg is a fully dehydrated egg. Most powdered eggs are made using spray drying in the same way that powdered milk is made. The major advantages of powdered eggs over fresh eggs are the reduced weight per volume of whole egg equivalent and the shelf life. Other advantages include smaller usage of storage space, and lack of need for refrigeration. Powdered eggs can be used without rehydration when baking, and can be rehydrated to make dishes such as scrambled eggs and omelettes.
History
Dehydrated eggs advertisements appeared in the late 1890s in the United States. Powdered eggs appear in literature as a staple of camp cooking at least as early as 1912.
Powdered eggs were used in the United Kingdom during World War II for rationing. Powdered eggs are also known as dried eggs, and colloquially during the period of rationing in the UK, as Ersatz eggs.
The modern method of manufacturing powdered eggs was developed in the 1930s by Albert Grant and Co. of the Mile End Road, London. The cake manufacturer was importing liquid egg from China and one of his staff realised that this was 75% water. An experimental freeze-drying plant was built and tried. Then a factory was set up in Singapore to process Chinese egg. As war approached, Grant transferred his dried egg facility to Argentina. The British Government lifted the patent during the war and many other suppliers came into the market, notably in the United States. Early importers to the United States included Vic Henningsen Sr. and others in the United Kingdom.
Quality
Powdered eggs have a storage life of 5 to 10 years when stored without oxygen in a cool storage environment.
The process of spray-drying eggs so as to make powdered eggs oxidizes the cholesterol, which has been shown to be helpful in reducing aortic atherosclerosis in animal trials.
See also
Food powder
List of dried foods
|
https://en.wikipedia.org/wiki/List%20of%20alpha%20emitting%20materials
|
The following are among the principal radioactive materials known to emit alpha particles.
209Bi, 211Bi, 212Bi, 213Bi
210Po, 211Po, 212Po, 214Po, 215Po, 216Po, 218Po
215At, 217At, 218At
218Rn, 219Rn, 220Rn, 222Rn, 226Rn
221Fr
223Ra, 224Ra, 226Ra
225Ac, 227Ac
227Th, 228Th, 229Th, 230Th, 232Th
231Pa
233U, 234U, 235U, 236U, 238U
237Np
238Pu, 239Pu, 240Pu, 244Pu
241Am
244Cm, 245Cm, 248Cm
249Cf, 252Cf
Alpha emitting
|
https://en.wikipedia.org/wiki/Vaccine-associated%20sarcoma
|
A vaccine-associated sarcoma (VAS) or feline injection-site sarcoma (FISS) is a type of malignant tumor found in cats (and, rarely, dogs and ferrets) which has been linked to certain vaccines. VAS has become a concern for veterinarians and cat owners alike and has resulted in changes in recommended vaccine protocols. These sarcomas have been most commonly associated with rabies and feline leukemia virus vaccines, but other vaccines and injected medications have also been implicated.
History
VAS was first recognized at the University of Pennsylvania School of Veterinary Medicine in 1991. An association between highly aggressive fibrosarcomas and typical vaccine location (between the shoulder blades) was made. Two possible factors for the increase of VAS at this time were the introduction in 1985 of vaccines for rabies and feline leukemia virus (FeLV) that contained aluminum adjuvant, and a law in 1987 requiring rabies vaccination in cats in Pennsylvania. In 1993, a causal relationship between VAS and administration of aluminium adjuvanted rabies and FeLV vaccines was established through epidemiologic methods, and in 1996 the Vaccine-Associated Feline Sarcoma Task Force was formed to address the problem.
In 2003, a study of ferret fibrosarcoma indicated that this species also may develop VAS. Several of the tumors were located in common injection sites and had similar histologic features to VAS in cats. Also in 2003, a study in Italy compared fibrosarcoma in dogs from injection sites and non-injection sites to VAS in cats, and found distinct similarities between the injection site tumors in dogs and VAS in cats. This suggests that VAS may occur in dogs.
Pathology
Inflammation in the subcutis following vaccination is considered to be a risk factor in the development of VAS, and vaccines containing aluminum were found to produce more inflammation. Furthermore, particles of aluminum adjuvant have been discovered in tumor macrophages. In addition, individual ge
|
https://en.wikipedia.org/wiki/Falsing
|
In telecommunications, falsing is a signaling error condition when a signal decoder detects a valid input although the implied protocol function was not intended. This is also known as a false decode. Other forms are referred to as talk-off.
Signal detection
Signal decoders used in communication systems, such as telephony and two-way radio systems, detect communication protocol states by recognizing a variety of electrical, optical, or acoustic conditions. Misinterpretation of those conditions leads to communication errors. Proper detection of signaling is a compromise between acceptable error rates and cost of implementation. The engineering problem is to produce the simplest circuit that works reliably. A decoder generally tries to filter audio input to strip off every audio component except a sought-after, specific tone. In part, a decoder is a narrow bandpass filter. A signal that gets through the narrow filter is rectified into a DC voltage which is used to switch something on or off.
Falsing sometimes occurs on a voice circuit when a human voice hits the exact pitch to which the tone decoder is tuned, a condition called talk-off.
For the tone decoder to work reliably, the audio input level must be in the linear range of the audio stages (undistorted). A 1,500 Hz tone fed into an amplifier that distorts the tone could produce a harmonic at 3,000 Hz, falsely triggering a decoder that is tuned to 3,000 Hz.
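This failure mode can be illustrated with a short sketch (the sample rate, tone frequencies, and distortion coefficient are assumptions chosen for this example, not taken from any particular decoder). The Goertzel algorithm, a standard single-frequency detector, shows how asymmetric (even-order) distortion of a 1,500 Hz tone creates energy at its 3,000 Hz second harmonic:

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    # Goertzel algorithm: spectral power at one target frequency.
    n = len(samples)
    k = round(n * target_hz / sample_rate)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s1 = s2 = 0.0
    for x in samples:
        s1, s2 = x + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

rate, n = 48_000, 4800
clean = [math.sin(2 * math.pi * 1500 * i / rate) for i in range(n)]
# Asymmetric (even-order) distortion adds a component at 2 x 1,500 Hz.
dirty = [x + 0.3 * x * x for x in clean]

print(goertzel_power(clean, rate, 3000))  # ~0: no falsing on a clean tone
print(goertzel_power(dirty, rate, 3000))  # large: could false a 3 kHz decoder
```

In a real decoder the measured power would typically be compared against a threshold, often with hysteresis and minimum-duration logic to reduce falsing further.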
Examples
Examples of decoder falsing include:
a telephone answering machine detects dial pulses from a rotary dial as ringing voltage, with the result that the answering machine answers in response to dialing.
a two-way radio with an enabled CTCSS decoder turns on the receive audio for one or two syllables of a signal with a close-in-frequency (but wrong) CTCSS tone. The person listening to the radio occasionally hears nonsense partial words from the receiver's speaker: "et"... "up"...
a ringy telephone circuit with SF single-frequency signaling and p
|
https://en.wikipedia.org/wiki/W.%20Roy%20Wheeler%20Medallion
|
The W. Roy Wheeler Medallion for excellence in field ornithology was created on the occasion of the centenary of the founding of Bird Observation & Conservation Australia (BOCA) to commemorate Roy Wheeler MBE, FRAOU, (1905-1988), an amateur ornithologist with a long association with the Club, as well as with the Royal Australasian Ornithologists Union. The purpose of the award is to honour worthy individuals who have been outstanding contributors, innovators and leaders in field ornithology in Australia and its territories. Medals may be awarded posthumously and the award does not distinguish between amateur and professional ornithologists. Ten people were recognised at the inaugural awards made on 12 August 2005. Medallion recipients were:
2005 - Nigel Peter Brothers
2005 - Dr Leslie Christidis
2005 - Dr Robert Geoffrey Hewett Green AM
2005 - Laurence Nathan Levy
2005 - Dr Graham Martin Pizzey AM
2005 - Pauline Neura Reilly OAM, FRAOU
2005 - Len Robinson
2005 - Ian Cecil Robert Rowley CFAOU, FRAOU
2005 - Dr Eleanor M. Rowley
2005 - Professor Patricia Vickers-Rich
See also
List of ornithology awards
|
https://en.wikipedia.org/wiki/Nature%20Medicine
|
Nature Medicine is a monthly peer-reviewed medical journal published by Nature Portfolio covering all aspects of medicine. It was established in 1995. The journal seeks to publish research papers that "demonstrate novel insight into disease processes, with direct evidence of the physiological relevance of the results". As with other Nature journals, there is no external editorial board, with editorial decisions being made by an in-house team, although peer review by external expert referees forms a part of the review process. The editor-in-chief is João Monteiro.
According to the Journal Citation Reports, the journal has a 2021 impact factor of 87.241, ranking it 1st out of 296 journals in the category "Biochemistry & Molecular Biology".
Abstracting and indexing
Nature Medicine is abstracted and indexed in:
Science Citation Index Expanded
Web of Science
Scopus
UGC
|
https://en.wikipedia.org/wiki/IEBus
|
IEBus (Inter Equipment Bus) is a communication bus specification "between equipments within a vehicle or a chassis" from Renesas Electronics. It defines the OSI model layer 1 and layer 2 specifications. IEBus is mainly used for car audio and car navigation systems, where it has become a de facto standard in Japan, while SAE J1850 predominates in the United States.
IEBus is also used in some vending machines, for which the major customer is Fuji Electric. Each button on the vending machine has its own IEBus ID, i.e. its own controller.
The detailed specification is disclosed to licensees only, but protocol analyzers are available from some test equipment vendors.
Its modulation method is PWM (pulse-width modulation), originally with a 6.00 MHz base clock, although most automotive customers use 6.291 MHz; the physical layer is a pair of differential signalling wires. The physical layer adopts half-duplex, asynchronous, multi-master communication with carrier-sense multiple access with collision detection (CSMA/CD) for medium access control. It allows for up to fifty units on one bus over a maximum length of 150 meters. The two differential signalling lines are named Bus+ / Bus−, sometimes labeled Data(+) / Data(−).
It is sometimes written as "IE-BUS", "IE-Bus", or "IE Bus", but these are incorrect; the formal name is "IEBus".
IEBus® and Inter Equipment Bus® are registered trademarks of Renesas Electronics Corporation, formerly NEC Electronics Corporation (JPO: Reg. No. 2552418 and 2552419, respectively).
History
In the mid-1980s, the semiconductor unit of NEC Corporation, now Renesas Electronics, began studying the increasing demands of automotive audio systems.
IEBus was introduced as a solution for distributed control systems.
In the late 1980s, several similar specifications, including the Domestic Digital Bus (D2B), the Japanese Home Bus (HBS), and the European Home System (EHS), were proposed by different companies or organizations. These were once discussed as IEC 61030,
b
|
https://en.wikipedia.org/wiki/Phase%20offset%20modulation
|
Phase offset modulation works by overlaying two instances of a periodic waveform on top of each other. (In software synthesis, the waveform is usually generated by using a lookup table.) The two instances of the waveform are kept slightly out of sync with each other, as one is further ahead or further behind in its cycle. The values of both of the waveforms are either multiplied together, or the value of one is subtracted from the other.
This generates an entirely new waveform with a drastically different shape. For example, one sawtooth (ramp) wave subtracted from another will create a pulse wave, with the amount of offset (i.e. the difference between the two waveforms' starting points) dictating the duty cycle. Slowly changing the offset amount creates pulse-width modulation.
Using this technique, pulse-width modulation is not limited to ramp waves; any other waveform can achieve a comparable effect.
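A minimal sketch of the technique follows (the frequency, sample rate, and naive sawtooth are assumptions chosen for illustration): subtracting a phase-shifted copy of a sawtooth from the original yields a two-level pulse wave whose duty cycle equals the offset.

```python
def saw(phase):
    # Naive sawtooth in [-1, 1), taking phase as a fraction of a cycle.
    return 2.0 * (phase % 1.0) - 1.0

def phase_offset_pulse(freq_hz, offset, sample_rate, n_samples):
    # Two instances of the same waveform, kept `offset` cycles apart;
    # their difference is a pulse wave with duty cycle equal to `offset`.
    return [saw(freq_hz * i / sample_rate) - saw(freq_hz * i / sample_rate + offset)
            for i in range(n_samples)]

pulse = phase_offset_pulse(freq_hz=220.0, offset=0.25, sample_rate=48_000, n_samples=512)
print(sorted(set(round(v, 6) for v in pulse)))  # two levels: -0.5 and 1.5
```

Sweeping the `offset` parameter slowly (for example with an LFO) reproduces the pulse-width-modulation effect described above.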
Wave mechanics
|
https://en.wikipedia.org/wiki/Surface%20reconstruction
|
Surface reconstruction refers to the process by which atoms at the surface of a crystal assume a different structure than that of the bulk. Surface reconstructions are important in that they help in the understanding of surface chemistry for various materials, especially in the case where another material is adsorbed onto the surface.
Basic principles
In an ideal infinite crystal, the equilibrium position of each individual atom is determined by the forces exerted by all the other atoms in the crystal, resulting in a periodic structure. If a surface is introduced to the surroundings by terminating the crystal along a given plane, then these forces are altered, changing the equilibrium positions of the remaining atoms. This is most noticeable for the atoms at or near the surface plane, as they now only experience inter-atomic forces from one direction. This imbalance results in the atoms near the surface assuming positions with different spacing and/or symmetry from the bulk atoms, creating a different surface structure. This change in equilibrium positions near the surface can be categorized as either a relaxation or a reconstruction.
Relaxation refers to a change in the position of surface atoms relative to the bulk positions, while the bulk unit cell is preserved at the surface. Often this is a purely normal relaxation: that is, the surface atoms move in a direction normal to the surface plane, usually resulting in a smaller-than-usual inter-layer spacing. This makes intuitive sense, as a surface layer that experiences no forces from the open region can be expected to contract towards the bulk. Most metals experience this type of relaxation. Some surfaces also experience relaxations in the lateral direction as well as the normal, so that the upper layers become shifted relative to layers further in, in order to minimize the positional energy.
Reconstruction refers to a change in the two-dimensional structure of the surface layers, in addition to changes in the
|
https://en.wikipedia.org/wiki/Comparison%20of%20antivirus%20software
|
Legend
The term "on-demand scan" refers to the possibility of performing a manual scan (by the user) on the entire computer/device, while "on-access scan" refers to the ability of a product to automatically scan every file at its creation or subsequent modification.
The term "CloudAV" refers to the ability of a product to automatically perform scans on the cloud.
The term "Email Security" refers to the protection of emails from viruses and malware, while "AntiSpam" refers to the protection from spam, scam and phishing attacks.
The term "Web protection" usually includes protection from: infected and malicious URLs, phishing websites, online identity (privacy) protection and online banking protection. Many antivirus products use "third-party antivirus engine". This means that the antivirus engine is made by another producer; however, the malware signature and/or other parts of the product may (or may not) be done from the owner of the product itself.
Desktop computers and servers
Microsoft Windows
Apple macOS
Linux
Solaris
FreeBSD
OS/2, ArcaOS
Mobile (smartphones and tablets)
Google Android
iOS
Windows Mobile
This list excludes Windows Phone 7 and Windows Phone 8 as they do not support running protection programs.
Symbian
BlackBerry
See also
Antivirus software
Comparison of firewalls
International Computer Security Association
Internet Security
List of computer viruses
Virus Bulletin
|
https://en.wikipedia.org/wiki/Software%20Engineering%20Process%20Group
|
A Software Engineering Process Group (SEPG) is an organization's focal point for software process improvement activities. These individuals perform assessments of organizational capability, develop plans to implement needed improvements, coordinate the implementation of those plans, and measure the effectiveness of these efforts. Successful SEPGs require specialized skills and knowledge of many areas outside traditional software engineering.
Following are ongoing activities of the process group:
Obtains and maintains the support of all levels of management.
Facilitates software process assessments.
Works with line managers whose projects are affected by changes in software engineering practice, providing a broad perspective of the improvement effort and helping them set expectations.
Maintains collaborative working relationships with software engineers, especially to obtain, plan for, and install new practices and technologies.
Arranges for any training or continuing education related to process improvements.
Tracks, monitors, and reports on the status of particular improvement efforts.
Facilitates the creation and maintenance of process definitions, in collaboration with managers and engineering staff.
Maintains a process database.
Provides process consultation to development projects and management.
Sorts of SEPGs
Every SEPG has a different approach and mission. Some of the flavors include:
"Working" SEPGs that actually develop and deploy process as a type of internal consulting team.
"Oversight" SEPGs that oversee the process architecture, approve it, manage changes, and prioritize it (sort of a process CCB)
"Deliberative" SEPGs that debate the process approach and develop strategy for a process architecture and deployment
"Virtual" SEPGs that are made up of representatives from throughout the organization that dedicate a certain amount of time to the effort and are responsible for deploying and training everyone else in the organization
See also
|
https://en.wikipedia.org/wiki/Maurice%20Kraitchik
|
Maurice Borisovich Kraitchik (21 April 1882 – 19 August 1957) was a Belgian mathematician and populariser. His main interests were the theory of numbers and recreational mathematics.
He was born to a Jewish family in Minsk. He wrote several books on number theory during 1922–1930 and after the war, and from 1931 to 1939 edited Sphinx, a periodical devoted to recreational mathematics. During World War II, he emigrated to the United States, where he taught a course at the New School for Social Research in New York City on the general topic of "mathematical recreations."
Kraïtchik was agrégé of the Free University of Brussels, engineer at the Société Financière de Transports et d'Entreprises Industrielles (Sofina), and director of the Institut des Hautes Etudes de Belgique. He died in Brussels.
Kraïtchik is famous for having inspired the two envelopes problem in 1953, with the following puzzle in La mathématique des jeux:
Two people, equally rich, meet to compare the contents of their wallets. Each is ignorant of the contents of the two wallets. The game is as follows: whoever has the least money receives the contents of the wallet of the other (in the case where the amounts are equal, nothing happens). One of the two men can reason: "Suppose that I have the amount A in my wallet. That's the maximum that I could lose. If I win (probability 0.5), the amount that I'll have in my possession at the end of the game will be more than 2A. Therefore the game is favourable to me." The other man can reason in exactly the same way. In fact, by symmetry, the game is fair. Where is the mistake in the reasoning of each man?
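The fairness claimed by the symmetry argument can be checked numerically. The following is a minimal Monte Carlo sketch; the uniform distribution of wallet contents is an assumption made only for the simulation (the flaw in the reasoning is that each man treats his own amount A as fixed while letting the other's amount vary):

```python
import random

def wallet_game(trials=100_000, max_amount=100):
    # Average gain of player A when both wallets are drawn from the
    # same (here: uniform) distribution, per the rules of the puzzle.
    gain = 0
    for _ in range(trials):
        a = random.randint(0, max_amount)
        b = random.randint(0, max_amount)
        if a < b:
            gain += b   # A has less, so A receives B's contents
        elif b < a:
            gain -= a   # B has less, so A hands over his own contents
    return gain / trials

print(wallet_game())  # hovers around 0: the game favours neither player
```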
Among his publications were the following:
Théorie des Nombres, Paris: Gauthier-Villars, 1922
Recherches sur la théorie des nombres, Paris: Gauthier-Villars, 1924
La mathématique des jeux ou Récréations mathématiques, Paris: Vuibert, 1930, 566 pages
Mathematical Recreations, New York: W. W. Norton, 1942 and London: George Allen & Unwin Ltd, 1943, 328 pa
|
https://en.wikipedia.org/wiki/Hysterical%20strength
|
Hysterical strength refers to a display of extreme physical strength by humans, beyond what is believed to be normal, usually occurring when people are in, or perceive themselves to be in, life-or-death situations. It has also been reported during altered states of consciousness, such as trance and alleged possession. Its description is mostly based on anecdotal evidence.
The name refers to hysteria, a nosological category that included bouts of superhuman strength among its possible symptoms; in Europe such strength had also previously been attributed in cases of alleged demonic possession. Charcot imputed to the phase of hysterical attacks called clownism the presence of strength and agility not consistent with the age and sex of the person, which the Catholic ritual of exorcism had earlier attributed to demonic force. Thus, the cause of the phenomenon began at that time to be addressed through the investigation of insanity. During that period in the 19th century, the term hysterical strength could also be found at the intersection of these fields, scientific and religious, for instance appearing in a statement by a physician for the Society for Psychical Research.
It was also described in reports of trance or possession in several other cultures, as for example in the New Testament (Mark 5:4) or in shamanic practices.
Unexpected strength is claimed to occur during excited delirium.
Examples
The most common anecdotal examples based on hearsay are of parents lifting vehicles to rescue their children, and when people are in life-and-death situations. Periods of increased strength are short-lived, usually no longer than a few minutes, and might lead to muscle injuries and exhaustion later. It is not known if there are any reliable examples of this phenomenon.
On 18 March 1915, Corporal Seyit Çabuk lifted bombshells that weighed 275 kilograms during the Gallipoli Campaign.
Tibetan oracles, such as the Nechung Kuten or Sungma Balung, are reported to display sup
|
https://en.wikipedia.org/wiki/Harry%20Frederick%20Recher
|
Emeritus Professor Harry Frederick Recher RZS (NSW) AM (born 27 March 1938, New York City) is an Australian ecologist, ornithologist and advocate for conservation.
Recher grew up in the United States of America. He studied at the State University of New York College of Forestry and received his B.S. in 1959 from Syracuse University. At Stanford University, ecologist Paul Ehrlich supervised his PhD on migratory shorebirds, which was awarded in 1964. Ehrlich became a lifelong friend and mentor to Recher, sharing his commitment to a strong sense of social responsibility in science. Recher held an NIH postdoctoral fellowship at the University of Pennsylvania and Princeton University. In his early career, Recher worked with leading American ecologists Eugene Odum and Robert MacArthur.
He moved to Australia in 1967. From 1968 he worked for 20 years at the Australian Museum as a Principal Research Scientist, focussing on conservation issues and the biology of forest and woodland birds. In 1988 he moved to the University of New England. He was also a member of the National Parks & Wildlife Service (NPWS) Scientific Advisory Committee.
Recher was co-editor and author of three books, A natural legacy: ecology in Australia (1979), Birds of eucalypt forests and woodland: ecology, conservation, management (1985) and Woodlands of Australia, all of which were awarded the Whitley Medal by the Royal Zoological Society of New South Wales. With A Natural Legacy, an early Australian ecology textbook co-edited with Irina Dunn and Dan Lunney and illustrated with David Milledge's hand-drawings of the principles of community ecology and succession, Recher influenced a generation in an era of resurgent environmentalism.
Recher is heralded for his long-term field studies, especially of bird communities. In the 1980s, Recher and his colleagues applied these studies to identify the conservation requirements for native birds and animals in their specific habitats. In 2003 the statutory managem
|
https://en.wikipedia.org/wiki/Target-site%20overlap
|
In a zinc finger protein, certain sequences of amino acid residues are able to recognise and bind to an extended target-site of four or even five nucleotides. When this occurs in a ZFP in which the three-nucleotide subsites are contiguous, one zinc finger interferes with the target-site of the zinc finger adjacent to it, a situation known as target-site overlap. For example, a zinc finger containing arginine at position -1 and aspartic acid at position 2 along its alpha-helix will recognise an extended sequence of four nucleotides of the sequence 5'-NNG(G/T)-3'. The hydrogen bond between Asp2 and the N4 of either a cytosine or adenine base paired to the guanine or thymine, respectively, defines these two nucleotides at the 3' position, defining a sequence that overlaps into the subsite of any zinc finger that may be attached N-terminally.
Target-site overlap limits the modularity of those zinc fingers which exhibit it, by restricting the number of situations to which they can be applied. If some of the zinc fingers are restricted in this way, then a larger repertoire is required to address the situations in which those zinc fingers cannot be used. Target-site overlap may also affect the selection of zinc fingers during phage display, in cases where amino acids on a non-randomised finger, and the bases of its associated subsite, influence the binding of residues on the adjacent finger which contains the randomised residues. Indeed, attempts to derive zinc finger proteins targeting the 5'-(A/T)NN-3' family of sequences by site-directed mutagenesis of finger two of the C7 protein were unsuccessful due to the Asp2 of the third finger of said protein.
The extent to which target-site overlap occurs is largely unknown, with a variety of amino acids having shown involvement in such interactions. When interpreting the zinc finger repertoires presented by investigations using ZFP phage display, it is important to appreciate the effects that the rest of the zinc finger framework
|
https://en.wikipedia.org/wiki/Endoglin
|
Endoglin (ENG) is a type I membrane glycoprotein located on cell surfaces and is part of the TGF beta receptor complex. It is also commonly referred to as CD105, END, FLJ41744, HHT1, ORW and ORW1. It has a crucial role in angiogenesis, therefore, making it an important protein for tumor growth, survival and metastasis of cancer cells to other locations in the body.
Gene and expression
The human endoglin gene is located on human chromosome 9, with the cytogenetic band at 9q34.11. The endoglin glycoprotein is encoded by 39,757 bp and translates into 658 amino acids.
The expression of the endoglin gene is usually low in resting endothelial cells. This, however, changes once neoangiogenesis begins and endothelial cells become active, in places like tumor vessels, inflamed tissues, skin with psoriasis, and vascular injury, and during embryogenesis. Expression in the vascular system begins at about 4 weeks and continues after that. Other cells in which endoglin is expressed include monocytes, especially those transitioning into macrophages, normal smooth muscle cells (low expression), vascular smooth muscle cells (high expression), and kidney and liver tissues undergoing fibrosis.
Structure
The glycoprotein consists of a homodimer of 180 kDa stabilized by intermolecular disulfide bonds. It has a large extracellular domain of about 561 amino acids, a hydrophobic transmembrane domain and a short cytoplasmic tail domain composed of 45 amino acids. The 260 amino acid region closest to the extracellular membrane is referred to as the ZP domain (or, more correctly, ZP module). The outermost extracellular region is termed the orphan domain (or orphan region) and it is the part that binds ligands such as BMP-9.
There are two isoforms of endoglin created by alternative splicing: the long isoform (L-endoglin) and the short isoform (S-endoglin). However, the L-isoform is expressed to a greater extent than the S-isoform. A soluble form of endoglin can be produced
|
https://en.wikipedia.org/wiki/Gibbs%E2%80%93Donnan%20effect
|
The Gibbs–Donnan effect (also known as the Donnan's effect, Donnan law, Donnan equilibrium, or Gibbs–Donnan equilibrium) is a name for the behaviour of charged particles near a semi-permeable membrane that sometimes fail to distribute evenly across the two sides of the membrane. The usual cause is the presence of a different charged substance that is unable to pass through the membrane and thus creates an uneven electrical charge. For example, the large anionic proteins in blood plasma cannot pass through capillary walls. Because small cations are attracted to, but not bound by, the proteins, small anions will cross capillary walls away from the anionic proteins more readily than small cations.
Thus, some ionic species can pass through the barrier while others cannot. The solutions may be gels or colloids as well as solutions of electrolytes, and as such the phase boundary between gels, or a gel and a liquid, can also act as a selective barrier. The electric potential arising between two such solutions is called the Donnan potential.
The effect is named after the American physicist Josiah Willard Gibbs, who proposed it in 1878, and the British chemist Frederick G. Donnan, who studied it experimentally in 1911.
The Donnan equilibrium is prominent in the triphasic model for articular cartilage proposed by Mow and Lai, as well as in electrochemical fuel cells and dialysis.
The Donnan effect is the oncotic pressure attributable to cations (Na+ and K+) attached to dissolved plasma proteins.
Example
The presence of a charged impermeant ion (for example, a protein) on one side of a membrane will result in an asymmetric distribution of permeant charged ions. The Gibbs–Donnan equation at equilibrium states (assuming the permeant ions are Na+ and Cl−): [Na+]1 × [Cl−]1 = [Na+]2 × [Cl−]2. Equivalently, [Na+]1/[Na+]2 = [Cl−]2/[Cl−]1.
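As an illustration (the concentrations and the monovalent impermeant anion are assumptions chosen for the example), the equilibrium distribution can be found numerically by combining the Donnan condition with electroneutrality and conservation of the permeant ions:

```python
def donnan(na_total, cl_total, p, iters=60):
    # Bisection on x = [Cl-] on side 1, using:
    #   electroneutrality side 1: [Na+]1 = [Cl-]1 + p  (p = impermeant anion)
    #   electroneutrality side 2: [Na+]2 = [Cl-]2
    #   conservation (equal volumes assumed): Na and Cl totals are fixed
    #   Donnan condition: [Na+]1 [Cl-]1 = [Na+]2 [Cl-]2
    lo, hi = 0.0, cl_total
    for _ in range(iters):
        x = (lo + hi) / 2.0
        if (x + p) * x > (na_total - x - p) * (cl_total - x):
            hi = x
        else:
            lo = x
    x = (lo + hi) / 2.0
    return {"Na1": x + p, "Cl1": x, "Na2": na_total - x - p, "Cl2": cl_total - x}

# Illustrative numbers only: 10 mM impermeant anion confined to side 1.
print(donnan(na_total=20.0, cl_total=10.0, p=10.0))
# The permeant Cl- accumulates on side 2 and Na+ on side 1.
```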
Double Donnan
Note that Sides 1 and 2 are no longer in osmotic equilibrium (i.e. the total osmolytes on each side are not the same)
In vivo, ion balance does not equilibrate at the proportions that would be predicted
|
https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20for%20Biological%20Cybernetics
|
The Max Planck Institute for Biological Cybernetics is located in Tübingen, Baden-Württemberg, Germany. It is one of 80 institutes in the Max Planck Society (Max Planck Gesellschaft).
The institute studies signal and information processing in the brain. The brain constantly processes a vast amount of sensory and intrinsic information by which behavior is coordinated. How the brain actually achieves these tasks is less well understood, for example, how it perceives, recognizes, and learns new objects. The scientists at the Max Planck Institute for Biological Cybernetics aim to determine which signals and processes are responsible for creating a coherent percept of the environment and for eliciting the appropriate behavior. Scientists in three departments and seven research groups work toward answering fundamental questions about processing in the brain, using different approaches and methods.
Departments
Department for Sensory and Sensorimotor Systems (Zhaoping Li)
Department for High-field Magnetic Resonance (Klaus Scheffler)
Department for Computational Neuroscience (Peter Dayan)
Research groups
Dynamic Cognition Group (Assaf Breska)
Translational Sensory and Circadian Neuroscience (Manuel Spitschan)
Computational Principles of Intelligence (Eric Schulz)
Systems Neuroscience & Neuroengineering (Jennifer Li & Drew Robson)
Former departments
Department for Physiology of Cognitive Processes (Nikos Logothetis)
Department for Human Perception, Cognition and Action (Heinrich H. Bülthoff)
Empirical Inference (Bernhard Schölkopf)
Information Processing in Insects (Werner E. Reichardt)
Structure & Function of Natural Nerve-Net (Valentin von Braitenberg)
External links
Homepage of the Max Planck Institute for Biological Cybernetics
Biological Cybernetics
Biological research institutes
Systems science institutes
Organisations based in Tübingen
Education in Tübingen
Cybernetics
|
https://en.wikipedia.org/wiki/StopBadware
|
StopBadware was an anti-malware nonprofit organization focused on making the Web safer through the prevention, mitigation, and remediation of badware websites. It is the successor to StopBadware.org, a project started in 2006 at the Berkman Center for Internet and Society at Harvard University. It spun off to become a standalone organization, and dropped the ".org" in its name, in January 2010.
Its website stopped working around 2021 because of copyright restrictions.
People
The founders of StopBadware.org were John Palfrey, then Executive Director of the Berkman Center, and Jonathan Zittrain, then at the Oxford Internet Institute. Both are now Professors of Law at Harvard University and faculty co-directors of the Berkman Center.
Board members of StopBadware include Vint Cerf (Chair), Esther Dyson, Philippe Courtot, Alex Eckelberry, Michael Barrett, Brett McDowell, Eric Davis, and Maxim Weinstein, StopBadware's former executive director. John Palfrey, Ari Schwartz, John Morris, Paul Mockapetris, and Mike Shaver formerly served on the Board.
Supporters
StopBadware was funded by corporate and individual donations. Its partners included Google, Mozilla, PayPal, Qualys, Verisign, Verizon, and Yandex.
Google, GFI Software, and NSFocus participated as data providers in the organization's Badware Website Clearinghouse (see below). Previous supporters included AOL, Lenovo, Sun Microsystems, Trend Micro, and MySpace. Consumer Reports WebWatch, a now-defunct part of Consumers Union, served as an unpaid special advisor while StopBadware.org was a project at the Berkman Center.
Activities
StopBadware's focus was on fighting "badware by working to strengthen the entire Web ecosystem." In pursuit of this some of the organization's activities include maintaining a badware website clearinghouse, acting as an independent reviewer of blacklisted sites, website owner and user education, and a "We Stop Badware" program for Web hosts. In June 2012 StopBadware launc
|
https://en.wikipedia.org/wiki/Molecular%20medicine
|
Molecular medicine is a broad field, where physical, chemical, biological, bioinformatics and medical techniques are used to describe molecular structures and mechanisms, identify fundamental molecular and genetic errors of disease, and to develop molecular interventions to correct them. The molecular medicine perspective emphasizes cellular and molecular phenomena and interventions rather than the previous conceptual and observational focus on patients and their organs.
History
In November 1949, with the seminal paper, "Sickle Cell Anemia, a Molecular Disease", in Science magazine, Linus Pauling, Harvey Itano and their collaborators laid the groundwork for establishing the field of molecular medicine. In 1956, Roger J. Williams wrote Biochemical Individuality, a prescient book about genetics, prevention and treatment of disease on a molecular basis, and nutrition which is now variously referred to as individualized medicine and orthomolecular medicine. Another paper in Science by Pauling in 1968, introduced and defined this view of molecular medicine that focuses on natural and nutritional substances used for treatment and prevention.
Published research and progress was slow until the 1970s' "biological revolution" that introduced many new techniques and commercial applications.
Molecular surgery
Some researchers separate molecular surgery as a compartment of molecular medicine.
Education
Molecular medicine is a new scientific discipline in European universities. Combining contemporary medical studies with the field of biochemistry, it offers a bridge between the two subjects. At present only a handful of universities offer the course to undergraduates. With a degree in this discipline, the graduate is able to pursue a career in medical sciences, scientific research, laboratory work, and postgraduate medical degrees.
Subjects
Core subjects are similar to biochemistry courses and typically include gene expression, research methods, proteins, cancer research,
|
https://en.wikipedia.org/wiki/Heliophysics
|
Heliophysics (from the prefix "helio", from Attic Greek hḗlios, meaning Sun, and the noun "physics": the science of matter and energy and their interactions) is the physics of the Sun and its connection with the Solar System. NASA defines heliophysics as "(1) the comprehensive new term for the science of the Sun - Solar System Connection, (2) the exploration, discovery, and understanding of Earth's space environment, and (3) the system science that unites all of the linked phenomena in the region of the cosmos influenced by a star like our Sun."
Heliophysics concentrates on the Sun's effects on Earth and other bodies within the Solar System, as well as the changing conditions in space. It is primarily concerned with the magnetosphere, ionosphere, thermosphere, mesosphere, and upper atmosphere of the Earth and other planets. Heliophysics combines the science of the Sun, corona, heliosphere and geospace, and encompasses a wide variety of astronomical phenomena, including "cosmic rays and particle acceleration, space weather and radiation, dust and magnetic reconnection, nuclear energy generation and internal solar dynamics, solar activity and stellar magnetic fields, aeronomy and space plasmas, magnetic fields and global change", and the interactions of the Solar System with the Milky Way Galaxy.
The term "heliophysics" (Russian: "гелиофизика") was widely used in Russian-language scientific literature. The third edition of the Great Soviet Encyclopedia (1969–1978) defines heliophysics as "[…] a division of astrophysics that studies physics of the Sun". In 1990, the Higher Attestation Commission, responsible for advanced academic degrees in the Soviet Union and later in Russia and the former Soviet Union, established a new specialty, "Heliophysics and physics of the solar system". In English-language scientific literature prior to about 2002, the term heliophysics was used sporadically to describe the study of the "physics of the Sun". As such it was a direct translation from th
|
https://en.wikipedia.org/wiki/Photobioreactor
|
A photobioreactor (PBR) refers to any cultivation system designed for growing photoautotrophic organisms using artificial light sources or solar light to facilitate photosynthesis. PBRs are typically used to cultivate microalgae, cyanobacteria, and some mosses. PBRs can be open systems, such as raceway ponds, which rely upon natural sources of light and carbon dioxide. Closed PBRs are flexible systems that can be controlled to the physiological requirements of the cultured organism, resulting in optimal growth rates and purity levels. PBRs are typically used for the cultivation of bioactive compounds for biofuels, pharmaceuticals, and other industrial uses.
Open systems
The first approach for the controlled production of phototrophic organisms was a natural open pond or artificial raceway pond. Therein, the culture suspension, which contains all necessary nutrients and carbon dioxide, is pumped around in a cycle and directly illuminated by sunlight at the liquid's surface. Raceway ponds are still commonly used in industry due to their low operational cost in comparison to closed PBRs. However, they offer insufficient control of reaction conditions due to their reliance on environmental light supply and carbon dioxide, as well as possible contamination from other microorganisms. The use of open technologies also results in water losses due to evaporation into the atmosphere.
Closed systems
The construction of closed PBRs avoids system-related water losses and minimises contamination. Though closed systems have better productivity compared to open systems due to the advantages mentioned, they still need to be improved to make them suitable for production of low price commodities as cell density remains low due to several limiting factors. All modern photobioreactors have tried to balance between a thin layer of culture suspension, optimized light application, low pumping energy consumption, capital expenditure and microbial purity. However, light attenuation a
|
https://en.wikipedia.org/wiki/Faulhaber%27s%20formula
|
In mathematics, Faulhaber's formula, named after the early 17th century mathematician Johann Faulhaber, expresses the sum of the p-th powers of the first n positive integers
$\sum_{k=1}^{n} k^p = 1^p + 2^p + 3^p + \cdots + n^p$
as a polynomial in n. In modern notation, Faulhaber's formula is
$\sum_{k=1}^{n} k^p = \frac{1}{p+1} \sum_{j=0}^{p} \binom{p+1}{j} B_j \, n^{p+1-j}.$
Here, $\binom{p+1}{j}$ is the binomial coefficient "p + 1 choose j", and the $B_j$ are the Bernoulli numbers with the convention that $B_1 = +\tfrac{1}{2}$.
The result: Faulhaber's formula
Faulhaber's formula concerns expressing the sum of the p-th powers of the first n positive integers
$\sum_{k=1}^{n} k^p = 1^p + 2^p + 3^p + \cdots + n^p$
as a (p + 1)th-degree polynomial function of n.
The first few examples are well known. For p = 0, we have
$\sum_{k=1}^{n} k^0 = n.$
For p = 1, we have the triangular numbers
$\sum_{k=1}^{n} k^1 = \frac{n(n+1)}{2} = \frac{n^2 + n}{2}.$
For p = 2, we have the square pyramidal numbers
$\sum_{k=1}^{n} k^2 = \frac{n(n+1)(2n+1)}{6} = \frac{2n^3 + 3n^2 + n}{6}.$
The coefficients of Faulhaber's formula in its general form involve the Bernoulli numbers Bj. The Bernoulli numbers begin
$B_0 = 1,\quad B_1 = \tfrac{1}{2},\quad B_2 = \tfrac{1}{6},\quad B_3 = 0,\quad B_4 = -\tfrac{1}{30},\quad B_5 = 0,\quad B_6 = \tfrac{1}{42},$
where here we use the convention that $B_1 = +\tfrac{1}{2}$. The Bernoulli numbers have various definitions (see Bernoulli_number#Definitions), such as that they are the coefficients of the exponential generating function
$\frac{t}{1 - e^{-t}} = \frac{t\,e^{t}}{e^{t} - 1} = \sum_{j=0}^{\infty} B_j \frac{t^j}{j!}.$
Then Faulhaber's formula is that
$\sum_{k=1}^{n} k^p = \frac{1}{p+1} \sum_{j=0}^{p} \binom{p+1}{j} B_j \, n^{p+1-j}.$
Here, the Bj are the Bernoulli numbers as above, and $\binom{p+1}{j}$ is the binomial coefficient "p + 1 choose j".
Examples
So, for example, one has for p = 4,
$1^4 + 2^4 + 3^4 + \cdots + n^4 = \frac{1}{5}\left(n^5 + \tfrac{5}{2}n^4 + \tfrac{5}{3}n^3 - \tfrac{1}{6}n\right).$
The first seven examples of Faulhaber's formula are
$\sum_{k=1}^{n} k^0 = \frac{1}{1}\,n$
$\sum_{k=1}^{n} k^1 = \frac{1}{2}\left(n^2 + n\right)$
$\sum_{k=1}^{n} k^2 = \frac{1}{6}\left(2n^3 + 3n^2 + n\right)$
$\sum_{k=1}^{n} k^3 = \frac{1}{4}\left(n^4 + 2n^3 + n^2\right)$
$\sum_{k=1}^{n} k^4 = \frac{1}{30}\left(6n^5 + 15n^4 + 10n^3 - n\right)$
$\sum_{k=1}^{n} k^5 = \frac{1}{12}\left(2n^6 + 6n^5 + 5n^4 - n^2\right)$
$\sum_{k=1}^{n} k^6 = \frac{1}{42}\left(6n^7 + 21n^6 + 21n^5 - 7n^3 + n\right).$
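For readers who want to experiment, here is a minimal sketch (the function names are ours) that evaluates Faulhaber's formula with exact rational arithmetic, generating the Bernoulli numbers with the $B_1 = +\tfrac{1}{2}$ convention from the standard recurrence:

```python
from fractions import Fraction
from math import comb

def bernoulli(p):
    # B_0..B_p with the B_1 = +1/2 convention, from the recurrence
    # sum_{j=0}^{m} C(m+1, j) B_j = m + 1.
    B = [Fraction(1)]
    for m in range(1, p + 1):
        B.append((m + 1 - sum(comb(m + 1, j) * B[j] for j in range(m))) / (m + 1))
    return B

def faulhaber(n, p):
    # Sum of the p-th powers of 1..n via Faulhaber's formula.
    B = bernoulli(p)
    return sum(comb(p + 1, j) * B[j] * n ** (p + 1 - j) for j in range(p + 1)) / (p + 1)

# Cross-check against the naive sum:
assert faulhaber(10, 4) == sum(k ** 4 for k in range(1, 11))  # both give 25333
```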
History
Faulhaber's formula is also called Bernoulli's formula. Faulhaber did not know the properties of the coefficients later discovered by Bernoulli. Rather, he knew at least the first 17 cases, as well as the existence of the Faulhaber polynomials for odd powers described below.
In 1713, Jacob Bernoulli published under the title Summae Potestatum an expression of the sum of the p-th powers of the first n integers as a (p + 1)th-degree polynomial function of n, with coefficients involving numbers $B_j$, now called Bernoulli numbers:
$\sum_{k=1}^{n} k^p = \frac{n^{p+1}}{p+1} + \frac{1}{2} n^p + \sum_{j=2}^{p} \frac{B_j}{j!} \frac{p!}{(p-j+1)!} \, n^{p-j+1}.$
Introducing also the first two Bernoulli numbers (which Bernoulli did not), the previous formula becomes
$\sum_{k=1}^{n} k^p = \frac{1}{p+1} \sum_{j=0}^{p} \binom{p+1}{j} B_j^{+} \, n^{p+1-j},$
using the Bernoulli number of the second kind for which $B_1^{+} = +\tfrac{1}{2}$, or
$\sum_{k=1}^{n} k^p = \frac{1}{p+1} \sum_{j=0}^{p} (-1)^{j} \binom{p+1}{j} B_j^{-} \, n^{p+1-j},$
using the Bernoulli number of the first kind for which $B_1^{-} = -\tfrac{1}{2}$.
|
https://en.wikipedia.org/wiki/Karl%20Rubin
|
Karl Cooper Rubin (born January 27, 1956) is an American mathematician at the University of California, Irvine, as Thorp Professor of Mathematics. Between 1997 and 2006, he was a professor at Stanford, and before that worked at Ohio State University between 1987 and 1999. His research interest is in elliptic curves. He was the first mathematician (1986) to show that some elliptic curves over the rationals have finite Tate–Shafarevich groups. It is widely believed that these groups are always finite.
Education and career
Rubin graduated from Princeton University in 1976, and obtained his Ph.D. from Harvard in 1981. His thesis advisor was Andrew Wiles. He was a Putnam Fellow in 1974, and a Sloan Research Fellow in 1985.
In 1988, Rubin received a National Science Foundation Presidential Young Investigator award, and in 1992 won the American Mathematical Society Cole Prize in number theory. In 2012 he became a fellow of the American Mathematical Society. Rubin's parents were mathematician Robert Joshua Rubin and astronomer Vera Rubin. Rubin is the brother of astronomer and physicist Judith Young.
See also
CEILIDH
Torus-based cryptography
Euler system
Stark conjectures
|
https://en.wikipedia.org/wiki/Helly%27s%20selection%20theorem
|
In mathematics, Helly's selection theorem (also called the Helly selection principle) states that a uniformly bounded sequence of monotone real functions admits a convergent subsequence.
In other words, it is a sequential compactness theorem for the space of uniformly bounded monotone functions.
It is named for the Austrian mathematician Eduard Helly.
A more general version of the theorem asserts compactness of the space BVloc of functions locally of bounded total variation that are uniformly bounded at a point.
The theorem has applications throughout mathematical analysis. In probability theory, the result implies compactness of a tight family of measures.
Statement of the theorem
Let (fn)n ∈ N be a sequence of increasing functions mapping the real line R into itself,
and suppose that it is uniformly bounded: there are a,b ∈ R such that a ≤ fn ≤ b for every n ∈ N.
Then the sequence (fn)n ∈ N admits a pointwise convergent subsequence.
Generalisation to BVloc
Let U be an open subset of the real line and let fn : U → R, n ∈ N, be a sequence of functions. Suppose that
(fn) has uniformly bounded total variation on any W that is compactly embedded in U. That is, for all sets W ⊆ U with compact closure W̄ ⊆ U,
$\sup_{n \in \mathbb{N}} \left\| \frac{\mathrm{d} f_n}{\mathrm{d} t} \right\|_{L^1(W)} < +\infty,$
where the derivative is taken in the sense of tempered distributions;
and (fn) is uniformly bounded at a point. That is, for some t ∈ U, { fn(t) | n ∈ N } ⊆ R is a bounded set.
Then there exists a subsequence fnk, k ∈ N, of fn and a function f : U → R, locally of bounded variation, such that
fnk converges to f pointwise;
and fnk converges to f locally in L1 (see locally integrable function), i.e., for all W compactly embedded in U,
$\lim_{k \to \infty} \int_{W} \left| f_{n_k}(x) - f(x) \right| \, \mathrm{d}x = 0;$
and, for W compactly embedded in U,
$\left\| \frac{\mathrm{d} f}{\mathrm{d} t} \right\|_{L^1(W)} \leq \liminf_{k \to \infty} \left\| \frac{\mathrm{d} f_{n_k}}{\mathrm{d} t} \right\|_{L^1(W)}.$
Further generalizations
There are many generalizations and refinements of Helly's theorem. The following theorem, for BV functions taking values in Banach spaces, is due to Barbu and Precupanu:
Let X be a reflexive, separable Banach space and let E be a closed, convex subset of X. Le
|
https://en.wikipedia.org/wiki/International%20Bryozoology%20Association
|
The International Bryozoology Association (IBA) is a professional association with international membership specialising in research of the phylum Bryozoa.
History
The International Bryozoology Association was founded in May 1965 in Stockholm, Sweden. The first conference was held in August 1968 in Milan, Italy.
Since then the IBA's conferences have been held every three years in a different city:
1st Conference: 1968, Milan, Italy (Proceedings published 1968)
2nd Conference: 1971, Durham, UK (Proceedings published 1973)
3rd Conference: 1974, Lyon, France (Proceedings published 1975)
4th Conference: 1977, Woods Hole, MA, US (Proceedings published 1979)
5th Conference: 1980, Durham, UK (Proceedings published 1981)
6th Conference: 1983, Vienna, Austria (Proceedings published 1985)
7th Conference: 1986, Bellingham, Washington, US (Proceedings published 1987)
8th Conference: 1989, Paris, France (Proceedings published 1991)
9th Conference: 1992, Swansea, UK (Proceedings published 1994)
10th Conference: 1995, Wellington, New Zealand (Proceedings published 1996)
11th Conference: 1998, Panama (Proceedings published 2000)
12th Conference: 2001, Dublin, Ireland (Proceedings published 2002)
13th Conference: 2004, Concepción, Chile (Proceedings published 2005)
14th Conference: 2007, Boone, NC, US (Proceedings published 2008)
15th Conference: 2010, Kiel, Germany (Proceedings published 2012)
16th Conference: 2013, Catania, Italy (Proceedings published 2014)
17th Conference: 2016, Melbourne, Australia
18th Conference: 2019, Liberec, Czech Republic
19th Conference: 2022, Dublin, Ireland
There are approximately 250 registered members of the IBA from across the world.
|
https://en.wikipedia.org/wiki/Keyword-driven%20testing
|
Keyword-driven testing, also known as action word based testing (not to be confused with action driven testing), is a software testing methodology suitable for both manual and automated testing. This method separates the documentation of test cases (including both the data and the functionality to use) from the prescription of the way the test cases are executed. As a result, it separates the test creation process into two distinct stages: a design and development stage, and an execution stage. The design substage covers the requirement analysis and assessment and the data analysis, definition, and population.
Overview
This methodology uses keywords (or action words) to symbolize a functionality to be tested, such as Enter Client. The keyword Enter Client is defined as the set of actions that must be executed to enter a new client in the database. Its keyword documentation would contain:
the starting state of the system under test (SUT)
the window or menu to start from
the keys or mouse clicks to get to the correct data entry window
the names of the fields to find and which arguments to enter
the actions to perform in case additional dialogs pop up (like confirmations)
the button to click to submit
an assertion about what the state of the SUT should be after completion of the actions
Keyword-driven testing syntax lists test cases (data and action words) using a table format (see example below). The first column (column A) holds the keyword, Enter Client, which is the functionality being tested. Then the remaining columns, B-E, contain the data needed to execute the keyword: Name, Address, Postcode and City.
To enter another client, the tester would create another row in the table with Enter Client as the keyword and the new client's data in the following columns. There is no need to relist all the actions included.
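A minimal sketch of how such a table can drive execution follows; the keyword implementation, field names, and client data are hypothetical stand-ins for real UI-automation actions:

```python
def enter_client(name, address, postcode, city):
    # Stand-in for the real action word: in practice this would open
    # the data-entry window, fill each field, submit, and assert the
    # resulting state of the system under test.
    print(f"Entered client: {name}, {address}, {postcode} {city}")

# The keyword table maps each action word to its implementation.
KEYWORDS = {"Enter Client": enter_client}

# Column A is the keyword; columns B-E are Name, Address, Postcode, City.
test_table = [
    ("Enter Client", "Jane Doe", "1 High Street", "AB1 2CD", "London"),
    ("Enter Client", "John Roe", "2 Low Road", "EF3 4GH", "Leeds"),
]

for keyword, *data in test_table:
    KEYWORDS[keyword](*data)
```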
Within such a framework, test cases are designed by:
Indicating the high-level steps needed to interact with the application and the system in order to perform
|
https://en.wikipedia.org/wiki/Live%20crown
|
The live crown is the top part of a tree, the part that has green leaves (as opposed to the bare trunk, bare branches, and dead leaves). The ratio of the size of a tree's live crown to its total height is used in estimating its health and its level of competition with neighboring trees.
Trees
Biology terminology
Sustainable forest management
|
https://en.wikipedia.org/wiki/Indian%20Computing%20Olympiad
|
The Indian Computing Olympiad is an annual computer programming competition that selects four participants to represent India at the International Olympiad in Informatics. ICO is conducted by the Indian Association for Research in Computing Science. The competition is held in three stages. For the first stage, students may compete in the Zonal Computing Olympiad (a programming contest), or the Zonal Informatics Olympiad (a paper-based algorithmic test). The following two rounds are the Indian National Olympiad in Informatics and the International Olympiad in Informatics Training Camp.
Stages of competition
Students first attempt the Zonal Informatics Olympiad (ZIO), which is a written paper. Most of the questions can be solved with the use of algorithmic techniques, although logic is usually sufficient. Alternatively, students can attempt the Zonal Computing Olympiad (ZCO), an online programming contest. In 2017, students can attempt both the ZIO and ZCO.
The second round of competition is the Indian National Olympiad in Informatics (INOI), a programming competition round. Students are expected to solve two algorithmic problems in 3 hours in C++. Questions in this round are similar to those in the International Olympiad in Informatics.
Based on the results of these competitions, about 30 students are selected for the International Olympiad in Informatics Training Camp, at which students are selected and trained to represent India at the International Olympiad in Informatics. The training camp is usually held at The International School, Bangalore. In 2017, the training camp was held at Chennai Mathematical Institute, Chennai.
Incentives
The top students in the Indian Computing Olympiad receive several incentives for admission to undergraduate institutions in both India and abroad. A list of some of the incentives in India is as follows:
Amrita University offers a fully funded BTech Honors CSE program for students who perform well in ICO.
Chennai Mathematical
|
https://en.wikipedia.org/wiki/Steven%20Zucker
|
Steven Mark Zucker (12 September 1949 – 13 September 2019) was an American mathematician who introduced the Zucker conjecture, proved in different ways by Eduard Looijenga (1988) and by Leslie Saper and Mark Stern (1990).
Zucker completed his Ph.D. in 1974 at Princeton University under the supervision of Spencer Bloch. His work with David A. Cox led to the creation of the Cox–Zucker machine, an algorithm for determining if a given set of sections provides a basis (up to torsion) for the Mordell–Weil group of an elliptic surface E → S, where S is isomorphic to the projective line.
He was part of the mathematics faculty at the Johns Hopkins University. In 2012, he became a fellow of the American Mathematical Society.
Bibliography
Saper, Leslie; Stern, Mark: L²-cohomology of arithmetic varieties, Annals of Mathematics (2) 132 (1990), no. 1, 1–69.
|
https://en.wikipedia.org/wiki/Sack%E2%80%93Barabas%20syndrome
|
Sack–Barabas syndrome is an older name for the medical condition vascular Ehlers–Danlos syndrome (vEDS). It affects the body's blood vessels and organs, making them prone to rupture.
Signs and symptoms
Patients with Sack–Barabas syndrome have thin, fragile skin, especially in the chest and abdomen, that bruises easily; hands and feet may have an aged appearance. Skin is soft but not overly stretchy.
Facial features are often distinctive, including protruding eyes, a thin nose and lips, sunken cheeks, and a small chin.
Other signs of the disorder include hypermobility of joints, tearing of tendons and muscles, painfully swollen veins in the legs, lung collapse, and slow wound healing following injury or surgery. Infants with the condition may be born with hip dislocations and clubfeet.
Unpredictable ruptures of arteries and organs are serious complications of SBS. Ruptured arteries can cause internal bleeding, stroke, or shock, the most common cause of death in patients with this disorder.
Rupture of the intestine is seen in 25 to 30 percent of affected individuals and tearing of the uterus during pregnancy affects 2 to 3 percent of women. Although these symptoms are rare in childhood, more than 80 percent of patients experience severe complications by the age of 40. Teenage boys are at particularly high risk of arterial rupture, which is often fatal.
Causes
Sack–Barabas is caused by mutations in the COL3A1 gene.
About half of all cases are inherited from a parent who has the condition. The condition is inherited in an autosomal dominant pattern, which means only one copy of the altered gene is necessary to cause the disorder.
The other half of cases occurs in patients whose families have no history of the disorder. These sporadic cases are caused by new mutations in one copy of the COL3A1 gene.
The protein encoded by the COL3A1 gene is used to assemble larger type III collagen molecules, found mostly in skin, blood vessels, and internal organs.
When the structure or pr
|