https://en.wikipedia.org/wiki/List%20of%20foodborne%20illness%20outbreaks
This is a list of foodborne illness outbreaks. A foodborne illness may be from an infectious disease, heavy metals, chemical contamination, or from natural toxins, such as those found in poisonous mushrooms. Deadliest List of foodborne illness outbreaks by death toll Canada 2008 Canada listeriosis outbreak China 2008 Chinese milk scandal Germany 2011 Escherichia coli O104:H4 outbreak Japan Minamata disease Niigata Minamata disease 1996 Japan E. coli O157:H7 Spain 1981 Toxic oil syndrome United Kingdom 2005 outbreak of E. coli O157 in South Wales 1996 outbreak of E. coli O157 in Lanarkshire, Scotland Loch Maree Hotel botulism poisoning United States In 1999, an estimated 5,000 deaths, 325,000 hospitalizations, and 76 million illnesses were caused by foodborne illnesses within the US. Illness outbreaks lead to food recalls. See also List of foodborne illness outbreaks by death toll 1984 Rajneeshee bioterror attack Eosinophilia–myalgia syndrome List of food contamination incidents List of medicine contamination incidents
https://en.wikipedia.org/wiki/Pierre%20Rosenstiehl
Pierre Rosenstiehl (5 December 1933 – 28 October 2020) was a French mathematician recognized for his work in graph theory, planar graphs, and graph drawing. The Fraysseix–Rosenstiehl planarity criterion is at the origin of the left-right planarity algorithm implemented in the Pigale software, which is considered the fastest implemented planarity testing algorithm. Rosenstiehl was directeur d'études at the École des Hautes Études en Sciences Sociales in Paris before his retirement. He was a founding co-editor in chief of the European Journal of Combinatorics. In 1992, Rosenstiehl, Giuseppe Di Battista, Peter Eades and Roberto Tamassia organized at Marino (Italy) a meeting devoted to graph drawing, which initiated a long series of international conferences, the International Symposia on Graph Drawing. He was a member of the French literary group Oulipo from 1992. He was married to the French author and illustrator Agnès Rosenstiehl.
https://en.wikipedia.org/wiki/Arctic%20haze
Arctic haze is the phenomenon of a visible reddish-brown springtime haze in the atmosphere at high latitudes in the Arctic due to anthropogenic air pollution. A major distinguishing factor of Arctic haze is the ability of its chemical ingredients to persist in the atmosphere for significantly longer than other pollutants. Due to limited amounts of snow, rain, or turbulent air to displace pollutants from the polar air mass in spring, Arctic haze can linger for more than a month in the northern atmosphere. History Arctic haze was first noticed in 1750 when the Industrial Revolution began. Explorers and whalers could not figure out where the foggy layer was coming from; "poo-jok" was the term the Inuit used for it. Another early clue was recorded roughly a century ago in the notes of the Norwegian explorer Fridtjof Nansen, who found dark stains on the ice after trekking through the Arctic. The term "Arctic haze" was coined in 1956 by J. Murray Mitchell, a US Air Force officer stationed in Alaska, to describe an unusual reduction in visibility observed by North American weather reconnaissance planes. From his investigations, Mitchell thought the haze had come from industrial areas in Europe and China; he went on to become an eminent climatologist. The haze is seasonal, reaching a peak in late winter and spring. When an aircraft is within a layer of Arctic haze, pilots report that horizontal visibility can drop to one tenth that of a normally clear sky. At that time it was unknown whether the haze was natural or formed by pollutants. In 1972, Glenn Edmond Shaw attributed this smog to transboundary anthropogenic pollution, whereby the Arctic is the recipient of contaminants whose sources are thousands of miles away. Further research continues with the aim of understanding the impact of this pollution on global warming. Origin of pollutants Coal-burning in northern mid-latitudes contributes aerosols containing about 90% sulfur and the remainder car
https://en.wikipedia.org/wiki/F%C3%A1inne
A fáinne (pl. Fáinní but often Fáinnes in English) is the name of a pin badge worn to show fluency in, or a willingness to speak, the Irish language. The three modern versions of the pin as relaunched in 2014 by Conradh na Gaeilge are the Fáinne Óir (gold circle), Fáinne Mór Óir (large gold circle – 9ct) and Fáinne Airgid (silver circle). In other contexts, fáinne simply means "ring" or "circle" and is also used to give such terms as fáinne pósta (wedding ring), fáinne an lae (daybreak), Tiarna na bhFáinní (The Lord of the Rings), and fáinne cluaise (earring). An Fáinne Úr An Fáinne Úr ('úr' meaning 'new') is the modernised rendition of the Fáinne, having been updated in 2014 by Conradh na Gaeilge. There are three versions presently available from <www.cnag.ie/fainne>, none requiring test or certification: Fáinne Óir (Gold Fáinne) – for fluent speakers; Fáinne Mór Óir (literally, "Large Gold Fáinne") – traditional larger, old style solid 9ct Gold (Colour), the style worn by Liam Neeson in his film portrayal of Michael Collins; Fáinne Airgid (Silver Fáinne) – for speakers with a basic working knowledge of the language. An Fáinne (The Original Organisation) Two Irish language organisations, An Fáinne (est. 1916) ("The Ring" or "The Circle" in Irish) and the Society of Gaelic Writers (est. 1911), were founded by Piaras Béaslaí (1881–1965). They were intended to work together to a certain extent, the former promoting the language and awarding those fluent in speaking it a Fáinne Óir (Gold Ring) lapel pin, and the latter promoting and creating a pool of quality literary works in the language. All the personnel actively involved in promoting the concept of An Fáinne were associated with Conradh na Gaeilge, and from an early time, An Fáinne used the Dublin postal address of 25 Cearnóg Pharnell / Parnell Square, the then HQ of Conradh na Gaeilge, though the organisations were officially separate, at least at first. The effectiveness of the organisation was
https://en.wikipedia.org/wiki/Business%20Roundtable
The Business Roundtable (BRT) is a nonprofit lobbyist association based in Washington, D.C., whose members are chief executive officers of major United States companies. Unlike the U.S. Chamber of Commerce, whose members are entire businesses, BRT members are exclusively CEOs. The BRT lobbies for public policy that is favorable to business interests, such as lowering corporate taxes in the United States and internationally, as well as international trade policy, like NAFTA. In 2019, BRT redefined the purpose of a corporation in terms of stakeholder capitalism, putting the interests of employees, customers, suppliers and communities on par with those of shareholders. As of 2021, BRT board members included chair Doug McMillon of Walmart; BRT president and CEO Joshua Bolten, a former White House Chief of Staff; Mary Barra of General Motors; Tim Cook of Apple; and Chuck Robbins of Cisco. History On October 13, 1972, the March Group, co-founded by Alcoa chairman John D. Harper and General Electric CEO Fred Borch, the Construction Users Anti-Inflation Roundtable, founded by retired U.S. Steel CEO Roger Blough, and the Labor Law Study Group (LLSG) merged to form the Business Roundtable. The March Group consisted of chief executive officers who met informally to consider public policy issues; the Construction Users Anti-Inflation Roundtable was devoted to containing construction costs; and the Labor Law Study Committee was largely made up of labor relations executives of major companies. Harper was the newly founded group's first president, followed by Thomas Murphy of General Motors, Irving Shapiro of DuPont, then Clifford Garvin of Exxon. In 2010, The Washington Post characterized the group as President Barack Obama's "closest ally in the business community." On August 19, 2019, BRT redefined its decades-old definition of the purpose of a corporation, replacing its bedrock principle that shareholder interests must be placed above all else, as defined in 1970 b
https://en.wikipedia.org/wiki/Citizendium
Citizendium ("the citizens' compendium of everything") is an English-language wiki-based free online encyclopedia launched by Larry Sanger, co-founder of Nupedia and Wikipedia. It was first announced in September 2006 as a fork of the English Wikipedia, but instead launched in March 2007 with an emphasis on original content. The project's aim was to improve on the Wikipedia model by providing increased reliability. It planned to achieve this by requiring virtually all contributors to use their real names, by strictly moderating the project for unprofessional behavior, by providing "gentle expert oversight" of everyday contributors, and through "approved articles" which have undergone a form of peer-review by topic experts with credentials. Active contributors increased through the first quarter of 2008 and then declined; by 27 October 2011, the site had fewer than 100 active members. The last managing editor was Anthony Sebastian, until the office was vacated in 2016. The site had 17,956 "live" and 6,322 "lemma" articles (lemmas are undeveloped articles which contain little more than a definition). Founder viewpoints Sanger said in a 17 October 2006 press release that Citizendium "will soon attempt to unseat Wikipedia as the go-to destination for general information online". In August 2007, he captioned its pages: "The world needs a more credible free encyclopedia." The project began its pilot phase in October and November 2006. On 18 January 2007, a change of plans was announced. Sanger announced on the CZ (Citizendium) mailing list that only articles marked "CZ Live", those which have been or will soon be worked on by Citizendium contributors, would remain on the site, and all other articles forked from Wikipedia would be deleted. Not all Citizendium contributors were supportive of this change, but Sanger emphasized that this deletion was "an experiment" and a new set of Wikipedia articles could be uploaded if the experiment were deemed unsuccessful. Planning
https://en.wikipedia.org/wiki/Hengzhi%20chip
The Hengzhi chip (联想"恒智"安全芯片) is a microcontroller that can store secured information, designed by the government of the People's Republic of China and manufactured in China. Its functionality is intended to be similar to that offered by a Trusted Platform Module but, unlike the TPM, it does not follow Trusted Computing Group specifications. Lenovo sells PCs with the Hengzhi security chip installed. The chip could be a development of the IBM ESS (Embedded Security Subsystem) chip, which was a public-key smart card placed directly on the motherboard's system management bus. As of September 2006, no public specifications for the chip are available. See also Trusted Computing Trusted Platform Module
https://en.wikipedia.org/wiki/Hudson%20Foods%20Company
Hudson Foods Company of Rogers, Arkansas, was a beef processor that was involved in what was then the largest recall of food in the United States. The plant was in Columbus, Nebraska. The company recalled over 25 million pounds of ground beef. Tyson Foods bought Hudson Foods out in the 1990s. See also Pilgrim's Pride poultry recall Topps Meat Company second largest beef recall
https://en.wikipedia.org/wiki/Leftover%20hash%20lemma
The leftover hash lemma is a lemma in cryptography first stated by Russell Impagliazzo, Leonid Levin, and Michael Luby. Imagine that you have a secret key X that has n uniform random bits, and you would like to use this secret key to encrypt a message. Unfortunately, you were a bit careless with the key, and know that an adversary was able to learn the values of some t < n bits of that key, but you do not know which t bits. Can you still use your key, or do you have to throw it away and choose a new key? The leftover hash lemma tells us that we can produce a key of about n − t bits, over which the adversary has almost no knowledge. Since the adversary knows all but n − t bits, this is almost optimal. More precisely, the leftover hash lemma tells us that we can extract a length asymptotic to H∞(X) (the min-entropy of X) bits from a random variable X that are almost uniformly distributed. In other words, an adversary who has some partial knowledge about X will have almost no knowledge about the extracted value. That is why this is also called privacy amplification (see privacy amplification section in the article Quantum key distribution). Randomness extractors achieve the same result, but use (normally) less randomness. Let X be a random variable over a set 𝒳 and let m > 0. Let h : 𝒮 × 𝒳 → {0, 1}^m be a 2-universal hash function. If m ≤ H∞(X) − 2 log(1/ε), then for S uniform over 𝒮 and independent of X, we have δ((h(S, X), S), (U, S)) ≤ ε, where U is uniform over {0, 1}^m and independent of S. Here H∞(X) = −log max_x Pr[X = x] is the min-entropy of X, which measures the amount of randomness X has. The min-entropy is always less than or equal to the Shannon entropy. Note that max_x Pr[X = x] is the probability of correctly guessing X. (The best guess is to guess the most probable value.) Therefore, the min-entropy measures how difficult it is to guess X. δ(X, Y) = (1/2) Σ_v |Pr[X = v] − Pr[Y = v]| is the statistical distance between X and Y. See also Universal hashing Min-entropy Rényi entropy Information-theoretic security
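A minimal Python sketch of this kind of extraction (my own illustration, not taken from the article or from the Impagliazzo–Levin–Luby paper): it hashes a partially leaked n-bit key down to m output bits with the standard universal family h_{a,b}(x) = ((a·x + b) mod p) mod 2^m, where p is a prime larger than 2^n. All function and parameter names are assumptions made for the example.

```python
# Hypothetical sketch of privacy amplification with a 2-universal hash family.
import secrets

def is_probable_prime(n, rounds=40):
    """Miller-Rabin primality test (probabilistic)."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % small == 0:
            return n == small
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def next_prime(n):
    while not is_probable_prime(n):
        n += 1
    return n

def extract(key: int, n: int, m: int, seed=None):
    """Hash an n-bit key down to m output bits using a public random seed (a, b)."""
    p = next_prime(2 ** n)                 # prime modulus just above the key space
    if seed is None:
        a = secrets.randbelow(p - 1) + 1   # seed chosen independently of the key
        b = secrets.randbelow(p)
        seed = (a, b)
    a, b = seed
    return ((a * key + b) % p) % (2 ** m), seed

# Example: a 128-bit key of which some bits may have leaked; choosing m well
# below the key's remaining min-entropy keeps the output close to uniform.
key = secrets.randbits(128)
out, seed = extract(key, n=128, m=64)
print(f"extracted 64-bit value: {out:016x}")
```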
https://en.wikipedia.org/wiki/Matrox%20Graphics%20eXpansion%20Modules
Matrox Graphics eXpansion Module (GXM) supports the use of multiple monitors over a single video source by splitting the output of a video source, providing an enlarged workspace or gaming environment. GXM is not a graphics card itself, and in fact requires a fairly powerful graphics card for playing games on multiple monitors. While most modern graphics cards have support for dual monitors and can expand a desktop across three screens, 3D games are generally limited to a single monitor. The GXM uses the standard Extended Display Identification Data (EDID) structure to communicate its capabilities to the graphics card just as a monitor does. However, the GXM's resolution includes all pixels in the three monitors. If the TripleHead2Go were hooked up to three monitors with 800×600 resolution, the TripleHead2Go would report itself as a single monitor with 2400×600 resolution. The graphics card then sends out a 2400×600 signal which the TripleHead2Go divides and distributes to the appropriate monitors. TripleHead2Go The TripleHead2Go supports 3 displays at the output, and has a maximum resolution of 3840×1024@60 Hz (1280×1024 each). The low 60 Hz refresh rate is used in all resolutions except 3072×768, which makes it unsuitable for CRT monitors (however, the severity of flickering perceived by viewers depends on other factors). TripleHead2Go Digital Edition The TripleHead2Go Digital Edition supports DVI-I output. The maximum resolution was originally 3840×1024@60 Hz (1280×1024 each), but on August 5, 2008 Matrox announced support for new widescreen resolutions, with a maximum resolution of 5040×1050@57 Hz (1680×1050 each). DualHead2Go It is similar to TripleHead2Go, except it supports only 2 displays. When setting output to 1024×768 per display, the refresh rate can be set up to 85 Hz. It has a maximum resolution of 2560×1024@60 Hz (1280×1024 each). It was later renamed the DualHead2Go Analog Edition. DualHead2Go Digital Edition Compared to the original DualHead2Go, this
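The combined-mode arithmetic described above can be illustrated with a short Python sketch (an illustration of the idea only, not Matrox code; the function name is hypothetical):

```python
# How a combined EDID mode maps back to per-monitor regions: three 800x600
# displays are reported to the graphics card as one 2400x600 display, and
# each physical output is driven with a horizontal slice of that signal.
def split_mode(total_width: int, total_height: int, monitors: int):
    """Return the (x_offset, width, height) slice sent to each monitor."""
    width_each = total_width // monitors
    return [(i * width_each, width_each, total_height) for i in range(monitors)]

# The 2400x600 example from the text: three adjacent 800-pixel-wide slices.
print(split_mode(2400, 600, 3))   # [(0, 800, 600), (800, 800, 600), (1600, 800, 600)]
```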
https://en.wikipedia.org/wiki/Green%E2%80%93Tao%20theorem
In number theory, the Green–Tao theorem, proved by Ben Green and Terence Tao in 2004, states that the sequence of prime numbers contains arbitrarily long arithmetic progressions. In other words, for every natural number k, there exist arithmetic progressions of primes with k terms. The proof is an extension of Szemerédi's theorem. The problem can be traced back to investigations of Lagrange and Waring from around 1770. Statement Let π(N) denote the number of primes less than or equal to N. If A is a subset of the prime numbers such that lim sup (N → ∞) |A ∩ [1, N]| / π(N) > 0, then for all positive integers k, the set A contains infinitely many arithmetic progressions of length k. In particular, the entire set of prime numbers contains arbitrarily long arithmetic progressions. In their later work on the generalized Hardy–Littlewood conjecture, Green and Tao stated and conditionally proved the asymptotic formula for the number of k-tuples of primes in arithmetic progression; the constant appearing in that formula is an explicit product of local densities taken over the primes (a singular series). The result was made unconditional by Green–Tao and Green–Tao–Ziegler. Overview of the proof Green and Tao's proof has three main components: Szemerédi's theorem, which asserts that subsets of the integers with positive upper density have arbitrarily long arithmetic progressions. It does not a priori apply to the primes because the primes have density zero in the integers. A transference principle that extends Szemerédi's theorem to subsets of the integers which are pseudorandom in a suitable sense. Such a result is now called a relative Szemerédi theorem. A pseudorandom subset of the integers containing the primes as a dense subset. To construct this set, Green and Tao used ideas from Goldston, Pintz, and Yıldırım's work on prime gaps. Once the pseudorandomness of the set is established, the transference principle may be applied, completing the proof. Numerous simplifications to the argument in the original paper have been found; Conlon, Fox, and Zhao provide a modern exposition of the proof. Numerical work The proof
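The statement is easy to explore numerically. The following Python sketch (an illustration only, unrelated to the proof or to the article's numerical-work section) searches for k-term arithmetic progressions of primes below a bound; the names are my own.

```python
def primes_below(limit):
    """Simple sieve of Eratosthenes returning all primes less than limit."""
    sieve = bytearray([1]) * limit
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(sieve[i*i::i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

def prime_arithmetic_progressions(k, limit):
    """Yield (start, step) pairs whose k-term progression lies entirely in the primes below limit."""
    primes = primes_below(limit)
    prime_set = set(primes)
    for a in primes:
        # The last term a + (k-1)*d must stay below the bound.
        max_d = (limit - 1 - a) // (k - 1) if k > 1 else 0
        for d in range(1, max_d + 1):
            if all(a + i * d in prime_set for i in range(1, k)):
                yield a, d

# Example: the first few 5-term progressions of primes below 200,
# e.g. 5, 11, 17, 23, 29 (start 5, common difference 6).
for start, step in list(prime_arithmetic_progressions(5, 200))[:3]:
    print([start + i * step for i in range(5)])
```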
https://en.wikipedia.org/wiki/CALDIC
CALDIC (the California Digital Computer) was an electronic digital computer built with the assistance of the Office of Naval Research at the University of California, Berkeley between 1951 and 1955 to assist and enhance research being conducted at the university with a platform for high-speed computing. CALDIC was designed to be low-cost and simple to operate by the standards of the time; in a pre-1965 context most computers had no interactive user I/O and no human-readable printed output, and a "computer" was a device built from thousands of vacuum tubes, taking up several large buildings' worth of electrical equipment. A "low-cost computer" in 1951–55 is therefore, by modern standards, either a failure or simply one costing less than an entire mass-manufacturing plant, not something economically accessible to consumers. The application funding was driven by the need to investigate how such a machine could be installed in a battle cruiser. "Easy to understand" likewise meant easy to understand for someone familiar with the mechanical targeting computers and calculators of the day; hence there was no human-readable user interface. It was a serial decimal machine with a 10,000-word magnetic drum memory. (As CALDIC's decimal words were 10 digits each, the magnetic memory could store about 400,000 bits.) It contained 1,300 vacuum tubes, 1,000 crystal diodes, 100 magnetic elements (for the recording heads), and 12 relays (in the power supply). It was capable of speeds of 50 iterations per second. CALDIC was a stored-program computer with a six-digit instruction format (two digits for the opcode and four digits for the memory address). The computer was initially planned by Paul Morton, Leland Cunningham, and Dick Lehmer; the latter two had been involved with the ENIAC at the University of Pennsylvania, and Lehmer had given one of the Moore School Lectures. Morton oversaw the design and con
https://en.wikipedia.org/wiki/Prefix%20order
In mathematics, especially order theory, a prefix ordered set generalizes the intuitive concept of a tree by introducing the possibility of continuous progress and continuous branching. Natural prefix orders often occur when considering dynamical systems as a set of functions from time (a totally-ordered set) to some phase space. In this case, the elements of the set are usually referred to as executions of the system. The name prefix order stems from the prefix order on words, which is a special kind of substring relation and, because of its discrete character, a tree. Formal definition A prefix order is a binary relation "≤" over a set P which is antisymmetric, transitive, reflexive, and downward total, i.e., for all a, b, and c in P, we have that: a ≤ a (reflexivity); if a ≤ b and b ≤ a then a = b (antisymmetry); if a ≤ b and b ≤ c then a ≤ c (transitivity); if a ≤ c and b ≤ c then a ≤ b or b ≤ a (downward totality). Functions between prefix orders While between partial orders it is usual to consider order-preserving functions, the most important type of functions between prefix orders are so-called history preserving functions. Given a prefix ordered set P, a history of a point p∈P is the (by definition totally ordered) set p− = {q | q ≤ p}. A function f: P → Q between prefix orders P and Q is then history preserving if and only if for every p∈P we find f(p−) = f(p)−. Similarly, a future of a point p∈P is the (prefix ordered) set p+ = {q | p ≤ q} and f is future preserving if for all p∈P we find f(p+) = f(p)+. Every history preserving function and every future preserving function is also order preserving, but not vice versa. In the theory of dynamical systems, history preserving maps capture the intuition that the behavior in one system is a refinement of the behavior in another. Furthermore, functions that are history and future preserving surjections capture the notion of bisimulation between systems, and thus the intuition that a given refinement is
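A small Python sketch of the definitions above, using the prefix order on words mentioned as the motivating special case (the example and names are mine, not from the article):

```python
# The prefix order on words, the "history" of a point, and a history-preserving map.
def leq(a: str, b: str) -> bool:
    """a <= b in the prefix order iff a is an initial segment of b."""
    return b.startswith(a)

def history(p: str, universe: set[str]) -> set[str]:
    """p- = {q in universe | q <= p}; by downward totality this set is totally ordered."""
    return {q for q in universe if leq(q, p)}

universe = {"", "a", "ab", "abc", "abd"}

# Downward totality: two prefixes of the same word are always comparable.
h = history("abc", universe)
assert all(leq(x, y) or leq(y, x) for x in h for y in h)

# f drops the last letter (and fixes the empty word); its image is the shortened universe.
f = lambda w: w[:-1]
image = {f(w) for w in universe}

# History preservation at the point "abc": f(p-) = f(p)-.
assert {f(q) for q in history("abc", universe)} == history(f("abc"), image)
print("prefix-order checks passed")
```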
https://en.wikipedia.org/wiki/Inverted%20bell
The inverted bell is a metaphorical name for a geometric shape that resembles a bell upside-down. By context In architecture, the term is applied to describe the shape of the capitals of Corinthian columns. The inverted bell is used in shape classification in pottery, often featured in archaeology as well as in modern times. In statistics, a bimodal distribution is sometimes called an inverted bell curve.
https://en.wikipedia.org/wiki/Minnesota%20Internet%20Users%20Essential%20Tool
Minnesota Internet Users Essential Tool (Minuet) is an integrated Internet package for DOS operating systems on IBM-compatible PCs. Background Minuet was created at the University of Minnesota, in the early days of the World Wide Web (1994–1996). At that time, Internet software for the PC was immature — the only programs available were NCSA Telnet and NCSA FTP. Both are glitchy, hard to configure, and TTY-oriented. The microcomputer support department at the university decided to come up with something better. Their design goals were: Runnable on any PC with at least 384 KiB of RAM, even an original 4.77 MHz PC. GUI interface Would run under DOS; not requiring Windows Easy to use Little or no configuration needed Multi-tasking The result was "Minuet". Minuet was quite successful at its time, being used at many colleges and institutions. Its usage peaked around 1996, going down as Windows 95 and its free e-mail reader and web browser proliferated. Implementation The program was written in Turbo Pascal, using the Turbo Vision GUI. This base is a good match for the PCs of that time. Turbo Vision in its early incarnations uses the 80×25 character text mode, meaning very speedy screen updates, even on slow PCs. Later Minuet versions - including the last one 1.0 Beta 18A - also support graphical modes up to 1600 × 1200 pixels (UXGA) while displaying up to 16.7 million colors, depending on the capabilities and VESA compatibility of the hardware used. A homebrew multi-tasking kernel allows users to have several Minuet windows active at the same time. An FTP session could be transferring files, while in another window, the user could be composing an e-mail. All the parts of Minuet use multi-tasking, the user does not have to wait for a slow operation to complete. Features Email Email in Minuet resembles most standard email programs — From:, To:, cc:, Bcc:, and Message body fields. Attachments use the BinHex and UUCP encoding schemes, which predated MIME
https://en.wikipedia.org/wiki/Decrement%20table
Decrement tables, also called life table methods, are used to calculate the probability of certain events. Birth control Life table methods are often used to study birth control effectiveness. In this role, they are an alternative to the Pearl Index. As used in birth control studies, a decrement table calculates a separate effectiveness rate for each month of the study, as well as for a standard period of time (usually 12 months). Use of life table methods eliminates time-related biases (i.e. the most fertile couples getting pregnant and dropping out of the study early, and couples becoming more skilled at using the method as time goes on), and in this way is superior to the Pearl Index. Two kinds of decrement tables are used to evaluate birth control methods. Multiple-decrement (or competing) tables report net effectiveness rates. These are useful for comparing competing reasons for couples dropping out of a study. Single-decrement (or noncompeting) tables report gross effectiveness rates, which can be used to accurately compare one study to another. See also Survival analysis Footnotes Birth control Actuarial science
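As a rough illustration of the single-decrement calculation described above, the following Python sketch chains monthly failure probabilities into a cumulative 12-month rate; the enrollment and pregnancy counts are invented purely for the example.

```python
# Hypothetical single-decrement life table for a birth-control study: for each
# month, compute the conditional probability of pregnancy among couples still
# at risk, then chain these into a cumulative 12-month rate.
months = range(1, 13)
# Couples still enrolled and not pregnant at the start of each month (invented data).
at_risk     = [1000, 960, 930, 905, 880, 860, 838, 820, 804, 790, 777, 765]
pregnancies = [   4,   3,   3,   2,   2,   2,   2,   1,   1,   1,   1,   1]

surviving = 1.0  # probability of reaching a given month without pregnancy
for m, n, preg in zip(months, at_risk, pregnancies):
    q = preg / n              # monthly probability of failure (pregnancy)
    surviving *= (1 - q)      # chain the monthly survival probabilities
    print(f"month {m:2d}: failure rate {q:.4f}, cumulative pregnancy rate {1 - surviving:.4f}")

print(f"12-month life-table pregnancy rate: {100 * (1 - surviving):.2f}%")
```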
https://en.wikipedia.org/wiki/Nav1.4
{{DISPLAYTITLE:Nav1.4}} Sodium channel protein type 4 subunit alpha, also known as Nav1.4, is a voltage-gated sodium channel that in humans is encoded by the SCN4A gene. Mutations in SCN4A are associated with hypokalemic periodic paralysis, hyperkalemic periodic paralysis, paramyotonia congenita, and potassium-aggravated myotonia. Function Voltage-gated sodium channels are transmembrane glycoprotein complexes composed of a large alpha subunit with 24 transmembrane domains and one or more regulatory beta subunits. They are responsible for the generation and propagation of action potentials in neurons and muscle. This gene encodes one member of the sodium channel alpha subunit gene family. It is expressed in skeletal muscle, and mutations in this gene have been linked to several myotonia and periodic paralysis disorders. Clinical significance Periodic paralysis In hypokalemic periodic paralysis, arginine residues making up the voltage sensor of Nav1.4 are mutated. The voltage sensor comprises the S4 alpha helix of each of the four transmembrane domains (I-IV) of the protein, and contains basic residues that only allow entry of the positive sodium ions at appropriate membrane voltages by blocking or opening the channel pore. In patients with these mutations, the channel has reduced excitability and signals from the central nervous system are unable to depolarise muscle. As a result, the muscle cannot contract efficiently, causing paralysis. The condition is hypokalemic because a low extracellular potassium ion concentration will cause the muscle to repolarise to the resting potential more quickly, so even if calcium conductance does occur it cannot be sustained. It becomes more difficult to reach the calcium threshold at which the muscle can contract, and even if this is reached then the muscle is more likely to relax. Because of this, the severity would be reduced if potassium ion concentrations are kept high. In hyperkalemic periodic
https://en.wikipedia.org/wiki/SCN5A
Sodium channel protein type 5 subunit alpha, also known as NaV1.5, is an integral membrane protein and tetrodotoxin-resistant voltage-gated sodium channel subunit. NaV1.5 is found primarily in cardiac muscle, where it mediates the fast influx of Na+-ions (INa) across the cell membrane, resulting in the fast depolarization phase of the cardiac action potential. As such, it plays a major role in impulse propagation through the heart. A vast number of cardiac diseases are associated with mutations in NaV1.5 (see the paragraph on genetics). SCN5A is the gene that encodes the cardiac sodium channel NaV1.5. Gene structure SCN5A is a highly conserved gene located on human chromosome 3, where it spans more than 100 kb. The gene consists of 28 exons, of which exon 1 and in part exon 2 form the 5' untranslated region (5'UTR) and exon 28 the 3' untranslated region (3'UTR) of the RNA. SCN5A is part of a family of 10 genes that encode different types of sodium channels, i.e. brain-type (NaV1.1, NaV1.2, NaV1.3, NaV1.6), neuronal channels (NaV1.7, NaV1.8 and NaV1.9), skeletal muscle channels (NaV1.4) and the cardiac sodium channel NaV1.5. Expression pattern SCN5A is mainly expressed in the heart, where expression is abundant in working myocardium and conduction tissue. In contrast, expression is low in the sinoatrial node and atrioventricular node. Within the heart, a transmural expression gradient from subendocardium to subepicardium is present, with higher expression of SCN5A in the endocardium as compared to the epicardium. SCN5A is also expressed in the gastrointestinal tract. Splice variants More than 10 different splice isoforms have been described for SCN5A, of which several harbour different functional properties. In the heart, two isoforms are mainly expressed (ratio 1:2), of which the least predominant one contains an extra glutamine at position 1077 (1077Q). Moreover, different isoforms are expressed during fetal and adult life, differing in the inclusion of an alternati
https://en.wikipedia.org/wiki/Biotechnology%20in%20pharmaceutical%20manufacturing
Biotechnology is the use of living organisms to develop useful products. Biotechnology is often used in pharmaceutical manufacturing. Notable examples include the use of bacteria to produce things such as insulin or human growth hormone. Other examples include the use of transgenic pigs to create hemoglobin for use in humans. Human Insulin Amongst the earliest uses of biotechnology in pharmaceutical manufacturing is the use of recombinant DNA technology to modify Escherichia coli bacteria to produce human insulin, which was performed at Genentech in 1978. Prior to the development of this technique, insulin was extracted from the pancreas glands of cattle, pigs, and other farm animals. While generally efficacious in the treatment of diabetes, animal-derived insulin is not identical to human insulin and may therefore produce allergic reactions. Genentech researchers produced artificial genes for each of the two protein chains that comprise the insulin molecule. The artificial genes were "then inserted... into plasmids... among a group of genes that" are activated by lactose. Thus, the insulin-producing genes were also activated by lactose. The recombinant plasmids were inserted into Escherichia coli bacteria, which were "induced to produce 100,000 molecules of either chain A or chain B human insulin." The two protein chains were then combined to produce insulin molecules. Human growth hormone Prior to the use of recombinant DNA technology to modify bacteria to produce human growth hormone, the hormone was manufactured by extraction from the pituitary glands of cadavers, as animal growth hormones have no therapeutic value in humans. Production of a single year's supply of human growth hormone required up to fifty pituitary glands, creating significant shortages of the hormone. In 1979, scientists at Genentech produced human growth hormone by inserting DNA coding for human growth hormone into a plasmid that was implanted in Escherichia coli bacter
https://en.wikipedia.org/wiki/Group%20communication%20system
The term Group Communication System (GCS) refers to a software platform that implements some form of group communication. Examples of group communication systems include IS-IS, Spread Toolkit, Appia framework, QuickSilver, and IBM's group services component. Message queue systems are somewhat similar. Group communication systems commonly provide specific guarantees about the total ordering of messages, for example that if the sender of a message receives it back from the GCS, then the message is certain to have been delivered to all other nodes in the system. This property is useful when constructing data replication systems. Inter-process communication
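Real group communication systems implement such ordering guarantees with elaborate membership and failure-handling protocols; the following Python sketch is only a toy illustration of one common approach (a central sequencer stamping every message), not the design of any of the systems named above.

```python
# Toy sequencer-based totally ordered broadcast.
from dataclasses import dataclass, field

@dataclass
class Member:
    name: str
    delivered: list = field(default_factory=list)
    def deliver(self, seq: int, sender: str, payload: str) -> None:
        # Every member receives messages already stamped with a global
        # sequence number, so all delivery logs end up identical.
        self.delivered.append((seq, sender, payload))

@dataclass
class Sequencer:
    members: list
    next_seq: int = 0
    def broadcast(self, sender: str, payload: str) -> None:
        seq = self.next_seq
        self.next_seq += 1
        for m in self.members:           # includes the sender itself
            m.deliver(seq, sender, payload)

group = [Member("a"), Member("b"), Member("c")]
seqr = Sequencer(group)
seqr.broadcast("a", "x := 1")
seqr.broadcast("b", "x := 2")

# Total order: every member saw the same messages in the same order, and the
# sender of "x := 1" got it back, so it was delivered to all other members.
assert group[0].delivered == group[1].delivered == group[2].delivered
print(group[0].delivered)
```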
https://en.wikipedia.org/wiki/Electroantennography
Electroantennography or EAG is a technique for measuring the average output of an insect antenna to its brain for a given odor. It is commonly used in electrophysiology while studying the function of the olfactory pathway in insects. The technique was invented in 1957 by German biologist Dietrich Schneider and shares similarities with electro-olfactography. Method Electroantennography is usually performed either by removing an antenna from the animal and inserting two chlorided silver wires for contact onto the two ends and amplifying the voltage between them while applying an odor puff to see a deflection as in the figure, or by leaving the animal intact and inserting a ground wire (silver/silver chloride) or a glass electrode filled with a buffer solution to some part of the body, usually inserted into an eye, and another to the tip of the antenna. A large bore glass electrode can also be placed directly over the tip of the antenna, such as in Drosophila melanogaster (fruit fly) antenna recordings. The latter method is useful if one is doing an experiment on the animal as a whole while doing the antennogram. The technique is widely applied in screening of insect pheromones by examining the responses to fractions of a compound mixture separated using chromatography. Usually, the wire inserted into the antenna is a thin silver wire that is chlorided in bleach. This is an older practice. Commonly, tungsten wires that have been chemically sharpened are inserted into a single neuron in the antenna. Further detailed examination of the odor response at the olfactory sensory level can be done by sensilla recording.
https://en.wikipedia.org/wiki/Cav1.2
{{DISPLAYTITLE:Cav1.2}} Calcium channel, voltage-dependent, L type, alpha 1C subunit (also known as Cav1.2) is a protein that in humans is encoded by the CACNA1C gene. Cav1.2 is a subunit of the L-type voltage-dependent calcium channel. Structure and function This gene encodes an alpha-1 subunit of a voltage-dependent calcium channel. Calcium channels mediate the influx of calcium ions (Ca2+) into the cell upon membrane depolarization (see membrane potential and calcium in biology). The alpha-1 subunit consists of 24 transmembrane segments and forms the pore through which ions pass into the cell. The calcium channel consists of a complex of alpha-1, alpha-2/delta and beta subunits in a 1:1:1 ratio. The S3-S4 linkers of Cav1.2 determine the gating phenotype and modulated gating kinetics of the channel. Cav1.2 is widely expressed in smooth muscle, pancreatic cells, fibroblasts, and neurons. However, it is particularly important and well known for its expression in the heart, where it mediates L-type currents that cause calcium-induced calcium release from the ER stores via ryanodine receptors. It is activated at about −30 mV and helps define the shape of the action potential in cardiac and smooth muscle. The protein encoded by this gene binds to and is inhibited by dihydropyridine. In the arteries of the brain, high levels of calcium in mitochondria elevate the activity of nuclear factor kappa B (NF-κB) and transcription of CACNA1C, and functional Cav1.2 expression increases. Cav1.2 also regulates levels of osteoprotegerin. CaV1.2 is inhibited by the action of STIM1. Regulation The activity of CaV1.2 channels is tightly regulated by the Ca2+ signals they produce. An increase in intracellular Ca2+ concentration is implicated in Cav1.2 facilitation, a form of positive feedback called Ca2+-dependent facilitation that amplifies Ca2+ influx. In addition, increases in intracellular Ca2+ concentration have been implicated in the opposite effect, Ca2+-dependent inactivation. These
https://en.wikipedia.org/wiki/Kir2.1
{{DISPLAYTITLE:Kir2.1}} The Kir2.1 inward-rectifier potassium channel is a lipid-gated ion channel encoded by the KCNJ2 gene. Clinical significance A defect in this gene is associated with Andersen-Tawil syndrome. A mutation in the KCNJ2 gene has also been shown to cause short QT syndrome. In research In neurogenetics, Kir2.1 is used in Drosophila research to inhibit neurons, as overexpression of this channel will hyperpolarize cells. In optogenetics, a trafficking sequence from Kir2.1 has been added to halorhodopsin to improve its membrane localization. The resulting protein, eNpHR3.0, is used in optogenetic research to inhibit neurons with light. Expression of the Kir2.1 gene in human HEK293 cells induces a transient outward current, creating a steady membrane potential close to the reversal potential of potassium. Interactions Kir2.1 has been shown to interact with: DLG4, Interleukin 16, and TRAK2
https://en.wikipedia.org/wiki/Cadalene
Cadalene or cadalin (4-isopropyl-1,6-dimethylnaphthalene) is a polycyclic aromatic hydrocarbon with the chemical formula C15H18 and a cadinane skeleton. It is derived from generic sesquiterpenes and is ubiquitous in the essential oils of many higher plants. Cadalene, together with retene, simonellite and ip-iHMN, is a biomarker of higher plants, which makes it useful for paleobotanic analysis of rock sediments. The ratio of retene to cadalene in sediments can reveal the relative contribution of the conifer family Pinaceae to the biosphere.
https://en.wikipedia.org/wiki/Simonellite
Simonellite (1,1-dimethyl-1,2,3,4-tetrahydro-7-isopropyl phenanthrene) is a polycyclic aromatic hydrocarbon with a chemical formula C19H24. It is similar to retene. Simonellite occurs naturally as an organic mineral derived from diterpenes present in conifer resins. It is named after its discoverer, Vittorio Simonelli (1860–1929), an Italian geologist. It forms colorless to white orthorhombic crystals. It occurs in Fognano, Tuscany, Italy. Simonellite, together with cadalene, retene and ip-iHMN, is a biomarker of higher plants, which makes it useful for paleobotanic analysis of rock sediments. See also Fichtelite Retene
https://en.wikipedia.org/wiki/Cappella%20Sansevero
The Cappella Sansevero (also known as the Cappella Sansevero de' Sangri or Pietatella) is a chapel located on Via Francesco de Sanctis 19, just northwest of the church of San Domenico Maggiore, in the historic center of Naples, Italy. The chapel is more properly named the Chapel of Santa Maria della Pietà. It contains works of Rococo art by some of the leading Italian artists of the 18th century. History Its origin dates to 1590 when John Francesco di Sangro, Duke of Torremaggiore, after recovering from a serious illness, had a private chapel built in what were then the gardens of the nearby Sansevero family residence, the Palazzo Sansevero. The building was converted into a family burial chapel by Alessandro di Sangro in 1613 (as inscribed on the marble plinth over the entrance to the chapel). Definitive form was given to the chapel by Raimondo di Sangro, Prince of Sansevero, who also included Masonic symbols in its reconstruction. Until 1888 a passageway connected the Sansevero palace with the chapel. The chapel received its alternative name of Pietatella from a painting of the Virgin Mary (La Pietà), spotted there by an unjustly arrested prisoner, as reported in the book Napoli Sacra by Cesare d'Engenio Caracciolo in 1623. When the chapel was constructed it was originally dedicated to Santa Maria della Pietà, after the painting. Works of art The chapel houses almost thirty works of art, among which are three particular sculptures of note. These marble statues are emblematic of the love of decoration in the Rococo period and their depiction of translucent veils and a fisherman's net represent remarkable artistic achievement. The Veiled Truth (Pudicizia, also called Modesty or Chastity) was completed by Antonio Corradini in 1752 as a tomb monument dedicated to Cecilia Gaetani dell'Aquila d'Aragona, mother of Raimondo. The 1753 Christ Veiled under a Shroud (also called Veiled Christ), by Giuseppe Sanmartino, shows the influence of the veiled Modesty. The Rel
https://en.wikipedia.org/wiki/Glycosynthase
The term glycosynthase refers to a class of proteins that have been engineered to catalyze the formation of a glycosidic bond. Glycosynthases are derived from glycosidase enzymes, which catalyze the hydrolysis of glycosidic bonds. They were traditionally formed from retaining glycosidases by mutating the active-site nucleophilic amino acid (usually an aspartate or glutamate) to a small non-nucleophilic amino acid (usually alanine or glycine). More modern approaches use directed evolution to screen for amino acid substitutions that enhance glycosynthase activity. The first glycosynthase Two discoveries led to the development of glycosynthase enzymes. The first was that a change of the active-site nucleophile of a glycosidase from a carboxylate to another amino acid resulted in a properly folded protein that had no hydrolase activity. The second discovery was that some glycosidase enzymes were able to catalyze the hydrolysis of glycosyl fluorides that had the incorrect anomeric configuration. The enzymes underwent a transglycosidation reaction to form a disaccharide, which was then a substrate for hydrolase activity. The first reported glycosynthase was a mutant of the Agrobacterium sp. β-glucosidase / galactosidase in which the nucleophile glutamate 358 was mutated to an alanine by site-directed mutagenesis. When incubated with α-glycosyl fluorides and an acceptor sugar it was found to catalyze the transglycosidation reaction without any hydrolysis. This glycosynthase was used to synthesize a series of di- and trisaccharide products with yields between 64% and 92%. Reaction mechanism The mechanism of a glycosynthase is similar to the hydrolysis reaction of retaining glycosidases except that no covalent enzyme intermediate is formed. Mutation of the active-site nucleophile to a non-nucleophilic amino acid prevents the formation of a covalent intermediate. An activated glycosyl donor with a good anomeric leaving group (often a fluorine) is required. The leavin
https://en.wikipedia.org/wiki/Glycoside%20hydrolase
In biochemistry, glycoside hydrolases (also called glycosidases or glycosyl hydrolases) are a class of enzymes which catalyze the hydrolysis of glycosidic bonds in complex sugars. They are extremely common enzymes, with roles in nature including degradation of biomass such as cellulose (cellulase), hemicellulose, and starch (amylase), in anti-bacterial defense strategies (e.g., lysozyme), in pathogenesis mechanisms (e.g., viral neuraminidases) and in normal cellular function (e.g., trimming mannosidases involved in N-linked glycoprotein biosynthesis). Together with glycosyltransferases, glycosidases form the major catalytic machinery for the synthesis and breakage of glycosidic bonds. Occurrence and importance Glycoside hydrolases are found in essentially all domains of life. In prokaryotes, they are found both as intracellular and extracellular enzymes that are largely involved in nutrient acquisition. One of the important occurrences of glycoside hydrolases in bacteria is the enzyme beta-galactosidase (LacZ), which is involved in regulation of expression of the lac operon in E. coli. In higher organisms glycoside hydrolases are found within the endoplasmic reticulum and Golgi apparatus where they are involved in processing of N-linked glycoproteins, and in the lysosome as enzymes involved in the degradation of carbohydrate structures. Deficiency in specific lysosomal glycoside hydrolases can lead to a range of lysosomal storage disorders that result in developmental problems or death. Glycoside hydrolases are found in the intestinal tract and in saliva where they degrade complex carbohydrates such as lactose, starch, sucrose and trehalose. In the gut they are found as glycosylphosphatidylinositol (GPI)-anchored enzymes on epithelial cells. The enzyme lactase is required for degradation of the milk sugar lactose and is present at high levels in infants, but in most populations will decrease after weaning or during infancy, potentially leading to lactose intolerance in adul
https://en.wikipedia.org/wiki/Posterior%20talofibular%20ligament
The posterior talofibular ligament is a ligament that connects the fibula to the talus bone. It runs almost horizontally from the malleolar fossa of the lateral malleolus of the fibula to the lateral tubercle on the posterior surface of the talus. This insertion lies immediately lateral to the groove for the tendon of the flexor hallucis longus.
https://en.wikipedia.org/wiki/Anterior%20talofibular%20ligament
The anterior talofibular ligament is a ligament in the ankle. It runs from the anterior margin of the fibular malleolus anteromedially to insert at the lateral aspect of the talus at the talar neck, in front of its lateral articular facet. It is one of the lateral ligaments of the ankle and prevents the foot from sliding forward in relation to the shin. It is the most commonly injured ligament in a sprained ankle—from an inversion injury—and will allow a positive anterior drawer test of the ankle if completely torn. See also Sprained ankle Posterior talofibular ligament
https://en.wikipedia.org/wiki/Olfactory%20trigone
The olfactory trigone is a small triangular area in front of the anterior perforated substance. Its apex, directed forward, occupies the posterior part of the olfactory sulcus, and is brought into view by throwing back the olfactory tract. It is part of the olfactory pathway.
https://en.wikipedia.org/wiki/Cubebol
Cubebol is a natural sesquiterpene alcohol first identified in cubeb oil. It is also found in basil. It was patented as a cooling agent in 2001 by Firmenich, an international flavor company. The taste of cubebol is cooling and refreshing. The patent describes application of cubebol as a refreshing agent in various products, ranging from chewing gum to sorbets, drinks, toothpaste, and gelatin-based confectioneries.
https://en.wikipedia.org/wiki/Hardy%27s%20inequality
Hardy's inequality is an inequality in mathematics, named after G. H. Hardy. It states that if a1, a2, a3, ... is a sequence of non-negative real numbers, then for every real number p > 1 one has Σ_{n≥1} ((a1 + a2 + ⋯ + an)/n)^p ≤ (p/(p − 1))^p Σ_{n≥1} an^p. If the right-hand side is finite, equality holds if and only if an = 0 for all n. An integral version of Hardy's inequality states the following: if f is a measurable function with non-negative values, then ∫_0^∞ ((1/x) ∫_0^x f(t) dt)^p dx ≤ (p/(p − 1))^p ∫_0^∞ f(x)^p dx. If the right-hand side is finite, equality holds if and only if f(x) = 0 almost everywhere. Hardy's inequality was first published and proved (at least the discrete version with a worse constant) in 1920 in a note by Hardy. The original formulation was in an integral form slightly different from the above. General one-dimensional version The general weighted one dimensional version reads as follows: If , then If , then Multidimensional version In the multidimensional case, Hardy's inequality can be extended to L^p-spaces, taking the form ‖u/|x|‖_{L^p(R^n)} ≤ (p/(n − p)) ‖∇u‖_{L^p(R^n)}, where 1 ≤ p < n, and where the constant p/(n − p) is known to be sharp. Fractional Hardy inequality If and , , there exists a constant such that for every satisfying , one has Proof of the inequality Integral version A change of variables gives ∫_0^∞ ((1/x) ∫_0^x f(t) dt)^p dx = ∫_0^∞ (∫_0^1 f(sx) ds)^p dx, which is less than or equal to (∫_0^1 (∫_0^∞ f(sx)^p dx)^{1/p} ds)^p by Minkowski's integral inequality. Finally, by another change of variables, the last expression equals (∫_0^1 s^{−1/p} ds)^p ∫_0^∞ f(x)^p dx = (p/(p − 1))^p ∫_0^∞ f(x)^p dx. Discrete version: from the continuous version Assuming the right-hand side to be finite, we must have an → 0 as n → ∞. Hence, for any positive integer j, there are only finitely many terms bigger than 2^{−j}. This allows us to construct a decreasing sequence b1 ≥ b2 ≥ ⋯ containing the same positive terms as the original sequence (but possibly no zero terms). Since a1 + a2 + ⋯ + an ≤ b1 + b2 + ⋯ + bn for every n, it suffices to show the inequality for the new sequence. This follows directly from the integral form, defining f(x) = bn if n − 1 < x < n and f(x) = 0 otherwise. Indeed, one has ∫_0^n f(t) dt = b1 + ⋯ + bn and, for n − 1 < x < n, there holds (1/x) ∫_0^x f(t) dt ≥ (b1 + ⋯ + bn)/n (the last inequality is equivalent to (n − x)(b1 + ⋯ + b_{n−1}) ≥ (n − x)(n − 1) bn, which is true as the new sequence is decreasing) and thus Σ_n ((b1 + ⋯ + bn)/n)^p ≤ ∫_0^∞ ((1/x) ∫_0^x f(t) dt)^p dx. Discrete version: Direct proof Let and let be
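A quick numerical sanity check of the discrete inequality (illustrative only, not a proof) in Python, for p = 2 and the sample sequence a_n = 1/n²:

```python
# Verify numerically that the sum of ((a_1+...+a_n)/n)^p does not exceed
# (p/(p-1))^p times the sum of a_n^p for a sample non-negative sequence.
p = 2
a = [1.0 / (n * n) for n in range(1, 2001)]   # a_n = 1/n^2

partial_sums = []
running = 0.0
for term in a:
    running += term
    partial_sums.append(running)

lhs = sum((s / n) ** p for n, s in enumerate(partial_sums, start=1))
rhs = (p / (p - 1)) ** p * sum(x ** p for x in a)
print(f"LHS = {lhs:.6f}  <=  RHS = {rhs:.6f}: {lhs <= rhs}")
```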
https://en.wikipedia.org/wiki/Token%20Ring
Token Ring is a physical and data link layer computer networking technology used to build local area networks. It was introduced by IBM in 1984, and standardized in 1989 as IEEE 802.5. It uses a special three-byte frame called a token that is passed around a logical ring of workstations or servers. This token passing is a channel access method providing fair access for all stations, and eliminating the collisions of contention-based access methods. Token Ring was a successful technology, particularly in corporate environments, but was gradually eclipsed by the later versions of Ethernet. Gigabit Token Ring was standardized in 2001, but development has since stopped. History A wide range of different local area network technologies were developed in the early 1970s, of which one, the Cambridge Ring, had demonstrated the potential of a token passing ring topology, and many teams worldwide began working on their own implementations. At the IBM Zurich Research Laboratory, Werner Bux and Hans Müller, in particular, worked on the design and development of IBM's Token Ring technology, while early work at MIT led to the Proteon 10 Mbit/s ProNet-10 Token Ring network in 1981, the same year that workstation vendor Apollo Computer introduced their proprietary 12 Mbit/s Apollo Token Ring (ATR) network running over 75-ohm RG-6U coaxial cabling. Proteon later evolved a 16 Mbit/s version that ran on unshielded twisted pair cable. 1985 IBM launch IBM launched their own proprietary Token Ring product on October 15, 1985. It ran at 4 Mbit/s, and attachment was possible from IBM PCs, midrange computers and mainframes. It used a convenient star-wired physical topology and ran over shielded twisted-pair cabling. Shortly thereafter it became the basis for the IEEE 802.5 standard. During this time, IBM argued that Token Ring LANs were superior to Ethernet, especially under load, but these claims were debated. In 1988 the faster 16 Mbit/s Token Ring was standardized by the 802.5 worki
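The token-passing access method can be illustrated with a toy Python simulation (my own simplification; it ignores the real IEEE 802.5 frame format, priority bits, and ring maintenance):

```python
# Stations arranged in a logical ring may transmit only while holding the
# token, which removes contention between stations.
from collections import deque

stations = deque(["A", "B", "C", "D"])          # logical ring order
pending = {"A": ["frame-1"], "B": [], "C": ["frame-2", "frame-3"], "D": []}

for _ in range(8):                               # circulate the token 8 hops
    holder = stations[0]
    if pending[holder]:
        frame = pending[holder].pop(0)
        # In a real ring the frame travels around and is stripped by the
        # sender; here we just report the uncontended transmission.
        print(f"{holder} holds the token and transmits {frame}")
    else:
        print(f"{holder} holds the token, nothing to send")
    stations.rotate(-1)                          # pass the token to the next station
```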
https://en.wikipedia.org/wiki/Photosynthetic%20picoplankton
Photosynthetic picoplankton or picophytoplankton is the fraction of the phytoplankton performing photosynthesis composed of cells between 0.2 and 2 µm in size (picoplankton). It is especially important in the central oligotrophic regions of the world oceans that have very low concentration of nutrients. History 1952: Description of the first truly picoplanktonic species, Chromulina pusilla, by Butcher. This species was renamed in 1960 to Micromonas pusilla and a few studies have found it to be abundant in temperate oceanic waters, although very little such quantification data exists for eukaryotic picophytoplankton. 1979: Discovery of marine Synechococcus by Waterbury and confirmation with electron microscopy by Johnson and Sieburth. 1982: The same Johnson and Sieburth demonstrate the importance of small eukaryotes by electron microscopy. 1983: W. K. W. Li and colleagues, including Trevor Platt, show that a large fraction of marine primary production is due to organisms smaller than 2 µm. 1986: Discovery of "prochlorophytes" by Chisholm and Olson in the Sargasso Sea, named in 1992 as Prochlorococcus marinus. 1994: Discovery in the Thau lagoon in France of the smallest photosynthetic eukaryote known to date, Ostreococcus tauri, by Courties. 2001: Through sequencing of the ribosomal RNA gene extracted from marine samples, several European teams discover that eukaryotic picoplankton are highly diverse. This finding followed on the first discovery of such eukaryotic diversity in 1998 by Rappe and colleagues at Oregon State University, who were the first to apply rRNA sequencing to eukaryotic plankton in the open ocean, where they discovered sequences that seemed distant from known phytoplankton. The cells containing DNA matching one of these novel sequences were recently visualized and further analyzed using specific probes and found to be broadly distributed. Methods of study Because of its very small size, picoplankton is difficult to study by classic methods
https://en.wikipedia.org/wiki/Heterotrophic%20picoplankton
Heterotrophic picoplankton is the fraction of plankton composed of cells between 0.2 and 2 μm that do not perform photosynthesis. They form an important component of many biogeochemical cycles. Cells can be either: prokaryotes Archaea form a major part of the picoplankton in the Antarctic and are abundant in other regions of the ocean. Archaea have also been found in freshwater picoplankton, but do not appear to be so abundant in these environments. eukaryotes Cell structure Nucleic acid content in cells Heterotrophic picoplankton can be divided into two broad categories: high nucleic acid (HNA) content cells and low nucleic acid (LNA) content cells. Nucleic acids are large biomolecules that store and express genomic information. HNA picoplankton dominate in waters that are eutrophic to mesotrophic while LNA picoplankton dominate in stratified oligotrophic environments. The proportion of HNA picoplankton to LNA picoplankton is a defining characteristic of bacterioplankton communities. Addition of glyphosate, a common herbicide that causes increased levels of phosphorus when introduced to aquatic systems, causes an increase in the ratio of HNA to LNA bacteria. Nucleic acids are a costly compound for cells to synthesize and the increased bioavailable phosphorus in the system likely allows HNA bacteria to rapidly synthesize more nucleic acids and divide. HNA bacterioplankton are larger and more active than LNA picoplankton. HNA cells also have higher specific metabolic and growth rates, likely allowing these types of bacterioplankton to better utilize and exploit sudden increases in nutrients within the water column. The relative abundance of HNA to LNA cells is related to overall system productivity, specifically chlorophyll concentration, though other factors likely also contribute to bacterioplankton distribution. Biogeochemical cycling Dissolved organic matter Heterotrophic picoplankton play a critical role in nutrient and carbon recycling in ecologic
https://en.wikipedia.org/wiki/Door%20Door
Door Door is a single-screen puzzle-platform game developed by Enix and published in Japan in 1983. Originally released for the NEC PC-8801, it was ported to other platforms, including the Family Computer. Controlling a small character named Chun, the player is tasked with completing each stage by trapping different kinds of aliens behind sliding doors. Chun can jump over the aliens and climb ladders, and must also avoid obstacles such as large nails and bombs. Door Door was designed and programmed by Koichi Nakamura, known as one of the creators of Dragon Quest. The game was the runner-up in the Enix-sponsored "First Game and Hobby Program Contest" in 1982, winning the Outstanding Program Award with a prize of 500,000 yen. Enix was given the rights to the game and ported it to several Japanese home computers. Chun, the name of the protagonist, was a nickname given to Nakamura by one of his friends. Door Door was a critical and commercial success; the PC-8801 port alone sold 200,000 copies, and the game is considered a classic title for the Famicom. Gameplay Players control Chun, a small, egg-shaped creature outfitted with a baseball cap. Chun is relentlessly pursued by a quartet of aliens traveling along deterministic paths. The most predictable aliens, Namegon and Amechan, follow Chun by the most direct path possible; Invekun deviates and follows roundabout paths using ladders; and Otapyon shadows Chun's jumps. The player's objective is to trap the aliens behind sliding doors positioned throughout each level, a course composed of platforms conjoined by assorted ladders. To trap the aliens, players approach a door from the side its handle is on, open it by running across it, lure the advancing villains inside, and shut the door before they escape. Trapped doors cannot be opened again. Chun can jump to avoid the aliens, who can kill him on touch. Bombs and nails, which sometimes appear on the screen, are also lethal. When the player dies (provided they have continues) th
https://en.wikipedia.org/wiki/Thin%20layers%20%28oceanography%29
Thin layers are concentrated aggregations of phytoplankton and zooplankton in coastal and offshore waters that are vertically compressed to thicknesses ranging from several centimeters up to a few meters and are horizontally extensive, sometimes for kilometers. Generally, thin layers have three basic criteria: 1) they must be horizontally and temporally persistent; 2) they must not exceed a critical threshold of vertical thickness; and 3) they must exceed a critical threshold of maximum concentration. The precise values for critical thresholds of thin layers has been debated for a long time due to the vast diversity of plankton, instrumentation, and environmental conditions. Thin layers have distinct biological, chemical, optical, and acoustical signatures which are difficult to measure with traditional sampling techniques such as nets and bottles. However, there has been a surge in studies of thin layers within the past two decades due to major advances in technology and instrumentation. Phytoplankton are often measured by optical instruments that can detect fluorescence such as LIDAR, and zooplankton are often measured by acoustic instruments that can detect acoustic backscattering such as ABS. These extraordinary concentrations of plankton have important implications for many aspects of marine ecology (e.g., phytoplankton growth dynamics, zooplankton grazing, behaviour, environmental effects, harmful algal blooms), as well as for ocean optics and acoustics. Zooplankton thin layers are often found slightly under phytoplankton layers because many feed on them. Thin layers occur in a wide variety of ocean environments, including estuaries, coastal shelves, fjords, bays, and the open ocean, and they are often associated with some form of vertical structure in the water column, such as pycnoclines, and in zones of reduced flow. Criteria Persistence Thin layers persist from hours to weeks while other small-scale patches of plankton exist for minutes. The presence o
https://en.wikipedia.org/wiki/Anonymous%20function
In computer programming, an anonymous function (function literal, lambda abstraction, lambda function, lambda expression or block) is a function definition that is not bound to an identifier. Anonymous functions are often arguments being passed to higher-order functions or used for constructing the result of a higher-order function that needs to return a function. If the function is only used once, or a limited number of times, an anonymous function may be syntactically lighter than using a named function. Anonymous functions are ubiquitous in functional programming languages and other languages with first-class functions, where they fulfil the same role for the function type as literals do for other data types. Anonymous functions originate in the work of Alonzo Church in his invention of the lambda calculus, in which all functions are anonymous, in 1936, before electronic computers. In several programming languages, anonymous functions are introduced using the keyword lambda, and anonymous functions are often referred to as lambdas or lambda abstractions. Anonymous functions have been a feature of programming languages since Lisp in 1958, and a growing number of modern programming languages support anonymous functions. Names The names "lambda abstraction", "lambda function", and "lambda expression" refer to the notation of function abstraction in lambda calculus, where the usual function f(x) = M would be written (λx.M), where M is an expression that uses x. Compare to the Python syntax of lambda x: M. The name "arrow function" refers to the mathematical "maps to" symbol, x ↦ M. Compare to the JavaScript syntax of x => M. Uses Anonymous functions can be used for containing functionality that need not be named and possibly for short-term use. Some notable examples include closures and currying. The use of anonymous functions is a matter of style. Using them is never the only way to solve a problem; each anonymous function could instead be defined as a named function and called by nam
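To make the notation above concrete, here is a brief Python sketch (illustrative only; the variable and function names are not from the article) showing an anonymous function passed to a higher-order function and returned as the result of one:

```python
# A named function and its anonymous equivalent.
def square_named(x):
    return x * x

square = lambda x: x * x          # the lambda expression itself carries no name

# Passed as an argument to a higher-order function:
doubled = list(map(lambda n: 2 * n, [1, 2, 3]))   # [2, 4, 6]

# Returned as the result of a higher-order function (a simple closure):
def make_adder(n):
    return lambda x: x + n

add_five = make_adder(5)
print(add_five(3))                # 8
```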
https://en.wikipedia.org/wiki/AlgaeBase
AlgaeBase is a global species database of information on all groups of algae, both marine and freshwater, as well as sea-grass. History AlgaeBase began in March 1996, founded by Michael Guiry. By 2005, the database contained about 65,000 names. In 2013, AlgaeBase and the Flanders Marine Institute (VLIZ) signed an end-user license agreement regarding the Electronic Intellectual Property of AlgaeBase. This allows the World Register of Marine Species (WoRMS) to include taxonomic names of algae in WoRMS, thereby allowing WoRMS, as part of the Aphia database, to make its overview of all described marine species more complete. Synchronisation of the AlgaeBase data with Aphia and WoRMS was undertaken manually until March 2015, but this was very time-consuming, so an online application was developed to semi-automate the synchronisation, launching in 2015 in conjunction with Michael Guiry and the chief programmer of AlgaeBase, Pier Kuipers. After a long phase of further development and testing, the AlgaeBase harvester tool was implemented by the WoRMS data management team in early 2019. Since then, newly-added species in AlgaeBase are added to Aphia and, if marine, to WoRMS as well. Description The database is hosted at the National University of Ireland's Ryan Institute, in Galway. It includes about all types of algae, as well as one group of flowering plants, the sea-grasses. Information about each species' taxonomy, nomenclature and distribution is included, and the algae covered include terrestrial as well as marine and freshwater species, such as seaweeds, phytoplankton, and freshwater algae. marine species have the best coverage, and sea-grass is included, although it is a flowering plant rather than an alga. As of 2014 there were nearly 17,000 images, and the database was being used by 2,000–3,000 individual visitors each day. As of 2023, there were about 170,000 species and infraspecies in AlgaeBase. Support and funding The compilation of the data was funde
https://en.wikipedia.org/wiki/American%20Megafauna
American Megafauna is a board game on the topic of evolution designed by Phil Eklund, and published by Sierra Madre Games in 1997. While the game is not an attempt to be a simulation, a variety of genuine evolutionary factors are incorporated in the game, ranging from Milankovich cycles to dentition. The game can be played in a solitaire mode as well as multi-player. It has subsequently gone out of print. In late 2011, Bios: Megafauna was released by Sierra Madre Games, essentially as a replacement for American Megafauna. This is a more streamlined version of American Megafauna, using some of the same design concepts, and being much more accessible. Playing time was also reduced, yet this new release retains its strong solitaire mode, as well as accommodating 2-4 players. It is a semi-scientific simulation in the same vein as Sierra Madre Games' popular 2010 release, High Frontier. The game begins after the Permian-Triassic extinction event of 250 million years ago, which killed off 95% of all living species on Earth. Players can choose to begin the game as one of 4 Archetype species that start as herbivores, but can mutate into predators. Both mammals and dinosaurs are represented. There is a huge (almost endless) variety of species that can evolve via DNA chains, and the random tile map board setup is constantly changing due to certain random events (e.g., Erosion, Greenhouse and icehouse Earth, and Milankovich cycles), Catastrophes (e.g., an Impact event or a solar flare), and the addition of new Biome tiles during play. The map itself (represented by tiles that are turned over and revealed) consists of a wide variety of land, sea, mountain, and other Biomes in four different latitudes. The goal of the game is to adapt your DNA, speciate, and survive amidst both environmental and living animal competition (represented by other players and wandering Immigrant animals that encroach upon the continent). Play time varies between 60–180 minutes depending
https://en.wikipedia.org/wiki/Lax%20equivalence%20theorem
In numerical analysis, the Lax equivalence theorem is a fundamental theorem in the analysis of finite difference methods for the numerical solution of partial differential equations. It states that for a consistent finite difference method for a well-posed linear initial value problem, the method is convergent if and only if it is stable. The importance of the theorem is that while the convergence of the solution of the finite difference method to the solution of the partial differential equation is what is desired, it is ordinarily difficult to establish because the numerical method is defined by a recurrence relation while the differential equation involves a differentiable function. However, consistency—the requirement that the finite difference method approximates the correct partial differential equation—is straightforward to verify, and stability is typically much easier to show than convergence (and would be needed in any event to show that round-off error will not destroy the computation). Hence convergence is usually shown via the Lax equivalence theorem. Stability in this context means that a matrix norm of the matrix used in the iteration is at most unity, called (practical) Lax–Richtmyer stability. Often a von Neumann stability analysis is substituted for convenience, although von Neumann stability only implies Lax–Richtmyer stability in certain cases. This theorem is due to Peter Lax. It is sometimes called the Lax–Richtmyer theorem, after Peter Lax and Robert D. Richtmyer.
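The roles of stability and consistency can be illustrated with a standard textbook example that is not drawn from this article: the forward-time centred-space (FTCS) scheme for the heat equation u_t = a u_xx has von Neumann amplification factor g(ξ) = 1 − 4r sin²(ξΔx/2) with r = aΔt/Δx², so the scheme is stable (and hence, being consistent, convergent by the theorem) only when r ≤ 1/2. A minimal Python sketch of that check:

```python
import numpy as np

# Von Neumann amplification factor of the FTCS scheme for u_t = a*u_xx:
# substituting u_j^n = g**n * exp(i*xi*j*dx) into the scheme gives
#   g(xi) = 1 - 4*r*sin^2(xi*dx/2),   r = a*dt/dx**2,
# and Lax-Richtmyer stability requires |g| <= 1 for all modes, i.e. r <= 1/2.
def max_amplification(r, n_modes=1000):
    half_phases = np.linspace(0.0, np.pi, n_modes)   # xi*dx/2 over one period
    g = 1.0 - 4.0 * r * np.sin(half_phases) ** 2
    return np.abs(g).max()

print(max_amplification(0.4))   # 1.0        -> stable choice of time step
print(max_amplification(0.6))   # about 1.4  -> unstable choice of time step
```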
https://en.wikipedia.org/wiki/Air%20data%20inertial%20reference%20unit
An Air Data Inertial Reference Unit (ADIRU) is a key component of the integrated Air Data Inertial Reference System (ADIRS), which supplies air data (airspeed, angle of attack and altitude) and inertial reference (position and attitude) information to the pilots' electronic flight instrument system displays as well as other systems on the aircraft such as the engines, autopilot, aircraft flight control system and landing gear systems. An ADIRU acts as a single, fault tolerant source of navigational data for both pilots of an aircraft. It may be complemented by a secondary attitude air data reference unit (SAARU), as in the Boeing 777 design. This device is used on various military aircraft as well as civilian airliners starting with the Airbus A320 and Boeing 777. Description An ADIRS consists of up to three fault tolerant ADIRUs located in the aircraft electronic rack, an associated control and display unit (CDU) in the cockpit and remotely mounted air data modules (ADMs). The No 3 ADIRU is a redundant unit that may be selected to supply data to either the commander's or the co-pilot's displays in the event of a partial or complete failure of either the No 1 or No 2 ADIRU. There is no cross-channel redundancy between the Nos 1 and 2 ADIRUs, as No 3 ADIRU is the only alternate source of air and inertial reference data. An inertial reference (IR) fault in ADIRU No 1 or 2 will cause a loss of attitude and navigation information on their associated primary flight display (PFD) and navigation display (ND) screens. An air data reference (ADR) fault will cause the loss of airspeed and altitude information on the affected display. In either case the information can only be restored by selecting the No 3 ADIRU. Each ADIRU comprises an ADR and an inertial reference (IR) component. Air data reference The air data reference (ADR) component of an ADIRU provides airspeed, Mach number, angle of attack, temperature and barometric altitude data. Ram air pressure and static pre
https://en.wikipedia.org/wiki/International%20Mathematical%20Olympiad%20selection%20process
This article describes the selection process, by country, for entrance into the International Mathematical Olympiad. The International Mathematical Olympiad (IMO) is an annual mathematics olympiad for students younger than 20 who have not started at university. Each year, participating countries send at most 6 students. The selection process varies between countries, but typically involves several rounds of competition, each progressively more difficult, after which the number of candidates is repeatedly reduced until the final 6 are chosen. Many countries also run training events for IMO potentials, with the aim of improving performance as well as assisting with team selection. IMO Selection process by country Argentina In Argentina, the Olimpíada Matemática Argentina is organized each year by Fundación Olimpíada Matemática Argentina. All students that took and passed the National Finals (fifth and last round of the competition) exams, usually held in November; and were born before July 1 21 years ago, are allowed to take two new written tests to be selected for IMO, usually in May. From the results of that tests, six titular students and a number of substitutes are selected to represent Argentina at the International Mathematical Olympiad. Australia In Australia, selection into the IMO team is determined by the Australian Mathematics Trust and is based on the results from four exams: The Australian Mathematics Olympiad The Asian Pacific Mathematics Olympiad two IMO selection exams The Australian Mathematics Olympiad (AMO) is held annually in the second week of February. It is composed of two four-hour papers held over two consecutive days. There are four questions in each exam for a total of eight questions. Entry is by invitation only with approximately 100 candidates per year. A month after the AMO, the Asian Pacific Mathematics Olympiad is held (APMO) and the top 25 from the AMO are invited to sit the exam. It is a four and a half hour exam with fi
https://en.wikipedia.org/wiki/Clenshaw%E2%80%93Curtis%20quadrature
Clenshaw–Curtis quadrature and Fejér quadrature are methods for numerical integration, or "quadrature", that are based on an expansion of the integrand in terms of Chebyshev polynomials. Equivalently, they employ a change of variables and use a discrete cosine transform (DCT) approximation for the cosine series. Besides having fast-converging accuracy comparable to Gaussian quadrature rules, Clenshaw–Curtis quadrature naturally leads to nested quadrature rules (where different accuracy orders share points), which is important for both adaptive quadrature and multidimensional quadrature (cubature). Briefly, the function to be integrated is evaluated at the extrema or roots of a Chebyshev polynomial and these values are used to construct a polynomial approximation for the function. This polynomial is then integrated exactly. In practice, the integration weights for the value of the function at each node are precomputed, and this computation can be performed in O(N log N) time by means of fast Fourier transform-related algorithms for the DCT. General method A simple way of understanding the algorithm is to realize that Clenshaw–Curtis quadrature (proposed by those authors in 1960) amounts to integrating a function f(x) via a change of variable x = cos θ. The algorithm is normally expressed for integration of a function f(x) over the interval [−1,1] (any other interval can be obtained by appropriate rescaling). For this integral, we can write: ∫_{−1}^{1} f(x) dx = ∫_{0}^{π} f(cos θ) sin θ dθ. That is, we have transformed the problem from integrating f(x) to one of integrating f(cos θ) sin θ. This can be performed if we know the cosine series for f(cos θ): f(cos θ) = a_0/2 + Σ_{k=1}^{∞} a_k cos(kθ), in which case the integral becomes: ∫_{0}^{π} f(cos θ) sin θ dθ = a_0 + Σ_{k=1}^{∞} 2a_{2k}/(1 − (2k)²). Of course, in order to calculate the cosine series coefficients a_k one must again perform a numeric integration, so at first this may not seem to have simplified the problem. Unlike computation of arbitrary integrals, however, Fourier-series integrations for periodic functions (like f(cos θ), which is periodic by construction), up to the Nyquist frequency k = N, are accurately computed by the equally spaced and
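As a hedged illustration of the rule just described (the function name and structure are my own, not from the article), the following Python sketch samples f at the Chebyshev extreme points, estimates the cosine-series coefficients by direct summation for clarity (an FFT-based DCT would bring the cost down to O(n log n)), and applies the weights 2/(1 − k²) to the even coefficients:

```python
import numpy as np

def clenshaw_curtis(f, n):
    """Approximate the integral of f over [-1, 1] from samples at the n+1
    Chebyshev extreme points x_j = cos(pi*j/n)."""
    theta = np.pi * np.arange(n + 1) / n
    fx = f(np.cos(theta))                       # samples f(cos(theta_j))
    # Trapezoidal-rule estimate of a_k = (2/pi) * integral over [0, pi] of f(cos t)*cos(k t)
    a = np.empty(n + 1)
    for k in range(n + 1):
        terms = fx * np.cos(k * theta)
        a[k] = (2.0 / n) * (terms[0] / 2 + terms[1:-1].sum() + terms[-1] / 2)
    # Integral = a_0 + sum over even k >= 2 of 2*a_k / (1 - k^2);
    # the endpoint coefficient a_n gets half weight in the classical rule.
    total = a[0]
    for k in range(2, n + 1, 2):
        w = 2.0 / (1.0 - k * k)
        if k == n:
            w /= 2.0
        total += w * a[k]
    return total

print(clenshaw_curtis(lambda x: x**2, 2))       # 0.666... (exact value: 2/3)
```

With f(x) = x² the sketch already returns the exact value 2/3 at n = 2; for smooth integrands the error decreases rapidly as n grows.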
https://en.wikipedia.org/wiki/Spectral%20Genomics
Spectral Genomics, Inc. was a technology spin-off company from Baylor College of Medicine, selling aCGH microarrays and related software. History The company was founded in February 2000 by BCM Technologies. Spectral licensed technology invented by its founders Alan Bradley, Ph.D., and Wei-wen Cai, Ph.D. The company raised $3.0 million in the first financing round in August 2001. In March 2004 the company raised an additional $9.4 million in its second financing round. In March 2005, GE Healthcare became the exclusive distributor for Spectral Genomics's products outside of North America. Spectral Genomics was acquired by PerkinElmer in May 2006, ending GE's distribution agreement. External links Corporate website
https://en.wikipedia.org/wiki/Swan%20band
Swan bands are a characteristic of the spectra of carbon stars, comets and of burning hydrocarbon fuels. They are named for the Scottish physicist William Swan, who first studied the spectral analysis of radical diatomic carbon (C2) in 1856. Swan bands consist of several sequences of vibrational bands scattered throughout the visible spectrum. See also Spectroscopy
https://en.wikipedia.org/wiki/ER%20oxidoreductin
ER oxidoreductin 1 (Ero1) is an oxidoreductase enzyme that catalyses the formation and isomerization of protein disulfide bonds in the endoplasmic reticulum (ER) of eukaryotes. ER Oxidoreductin 1 (Ero1) is a conserved, luminal, glycoprotein that is tightly associated with the ER membrane, and is essential for the oxidation of protein dithiols. Since disulfide bond formation is an oxidative process, the major pathway of its catalysis has evolved to utilise oxidoreductases, which become reduced during the thiol-disulfide exchange reactions that oxidise the cysteine thiol groups of nascent polypeptides. Ero1 is required for the introduction of oxidising equivalents into the ER and their direct transfer to protein disulfide isomerase (PDI), thereby ensuring the correct folding and assembly of proteins that contain disulfide bonds in their native state. Ero1 exists in two isoforms: Ero1-α and Ero1-β. Ero1-α is mainly induced by hypoxia (HIF-1), whereas Ero1-β is mainly induced by the unfolded protein response (UPR). During endoplasmic reticulum stress (such as occurs in beta cells of the pancreas or in macrophages causing atherosclerosis), CHOP can induce activation of Ero1, causing calcium release from the endoplasmic reticulum into the cytoplasm, resulting in apoptosis. Homologues of the Saccharomyces cerevisiae Ero1 proteins have been found in all eukaryotic organisms examined, and contain seven cysteine residues that are absolutely conserved, including three that form the sequence Cys–X–X–Cys–X–X–Cys (where X can be any residue). The mechanism of thiol–disulfide exchange between oxidoreductases The mechanism of thiol–disulfide exchange between oxidoreductases is understood to begin with the nucleophilic attack on the sulfur atoms of a disulfide bond in the oxidised partner, by a thiolate anion derived from a reactive cysteine in a reduced partner. This generates mixed disulfide intermediates, and is followed by a second, this time intramolecular, nucleophilic
https://en.wikipedia.org/wiki/Mark%20Barr
James Mark McGinnis Barr (May 18, 1871December 15, 1950) was an electrical engineer, physicist, inventor, and polymath known for proposing the standard notation for the golden ratio. Born in America, but with English citizenship, Barr lived in both London and New York City at different times of his life. Though remembered primarily for his contributions to abstract mathematics, Barr put much of his efforts over the years into the design of machines, including calculating machines. He won a gold medal at the 1900 Paris Exposition Universelle for an extremely accurate engraving machine. Life Barr was born in Pennsylvania, the son of Charles B. Barr and Ann M'Ginnis. He was educated in London, then worked for the Westinghouse Electric Company in Pittsburgh from 1887 to 1890. He started there as a draughtsman before becoming a laboratory assistant, and later an erection engineer. For two years in the early 1890s, he worked in New York City at the journal Electrical World as an assistant editor, at the same time studying chemistry at the New York City College of Technology, and by 1900, he had worked with both Nikola Tesla and Mihajlo Pupin in New York. However, he was known among acquaintances for his low opinion of Thomas Edison. Returning to London in 1892, he studied physics and electrical engineering at the City and Guilds of London Technical College for three years. From 1896 to 1900, he worked for Linotype in England, and from 1900 to 1904, he worked as a technical advisor to Trevor Williams in London. Beginning in 1902, he was elected to the Small Screw Gauge Committee of the British Association for the Advancement of Science. The committee was set up to put into practice the system of British Association screw threads, which had been settled on but not implemented in 1884. More broadly, it was tasked with considering "the whole question of standardisation of engineering materials, tools, and machinery". In January 1916, Barr was given charge of a school for
https://en.wikipedia.org/wiki/Flag%20of%20Hezbollah
The flag of Hezbollah is the flag of the Shi'a political and military organization Hezbollah. The flag depicts a stylized representation of the Arabic words حزب الله (ḥizbu-llāh, meaning "Party of God") in Kufic script. The first letter of "Allah" reaches up to grasp a stylized assault rifle. The flag incorporates several other symbols, namely a globe, a book, a sword, and a seven-leafed branch. The text above the logo reads فإن حزب الله هم الغالبون (fa-inna ḥizba llāhi humu al-ġālibūna) and means "Then surely the party of God are they that shall be triumphant" (Quran 5:56), which is a reference to the name of the party. Underneath the logo are the words المقاومة الإسلامية في لبنان (al-muqāwamah al-islāmīyah fī lubnān), meaning "The Islamic Resistance in Lebanon". Symbolism Colours Yellow is the predominant colour on the flag and forms the basis for its field. Whilst there is no reason given for this choice of colour, vexillologist Tim Marshall has theorised that it may have been chosen to signify Hezbollah's willingness to fight for the sake of Allah and the Shi'a faith. The reasoning behind the usage of the colour green in the flag is more obvious. Green in Islamic symbolism is inextricably linked to the Islamic prophet Muhammad and the colour is mentioned numerous times within the Quran. As a traditionally Islamic colour, the usage of green on the flag is an overtly political and explicitly religious statement. Sometimes the text on the flag appears in red lettering. Red in this case is used to symbolise the blood shed by those who fell for the cause of the organization. Imagery The symbolism of the fist grasping an assault rifle, which combines elements of the H&K G3 battle rifle and the AK-47, is twofold. The rifle is simultaneously symbolic of the party's left-wing ideological roots, sharing in the iconography of similar armed socialist movements. The rifle also represents Hezbollah's belief in an armed struggle against oppressive forces. However, opponents of Hezbollah interpret the inclusion o
https://en.wikipedia.org/wiki/Adenophostin
Adenophostin A is a potent agonist of the inositol trisphosphate (IP3) receptor, and is much more potent than IP3 itself. The IP3 receptor (IP3R) is a ligand-gated intracellular Ca2+ release channel that plays a central role in modulating the cytoplasmic free Ca2+ concentration (Ca2+i). Adenophostin A is structurally different from IP3 but can elicit distinct calcium signals in cells.
https://en.wikipedia.org/wiki/Monolithic%20application
In software engineering, a monolithic application is a single unified software application which is self-contained and independent from other applications, but typically lacks flexibility. There are advantages and disadvantages of building applications in a monolithic style of software architecture, depending on requirements. Alternative styles to monolithic applications include multitier architectures, distributed computing and microservices. The design philosophy is that the application is responsible not just for a particular task, but can perform every step needed to complete a particular function. Some personal finance applications are monolithic in the sense that they help the user carry out a complete task, end to end, and are private data silos rather than parts of a larger system of applications that work together. Some word processors are monolithic applications. These applications are sometimes associated with mainframe computers. In software engineering, a monolithic application describes a software application that is designed as a single service. Multiple services can be desirable in certain scenarios as it can facilitate maintenance by allowing repair or replacement of parts of the application without requiring wholesale replacement. Modularity is achieved to various extents by different modular programming approaches. Code-based modularity allows developers to reuse and repair parts of the application, but development tools are required to perform these maintenance functions (e.g. the application may need to be recompiled). Object-based modularity provides the application as a collection of separate executable files that may be independently maintained and replaced without redeploying the entire application (e.g. Microsoft's Dynamic-link library (DLL); Sun/UNIX shared object files). Some object messaging capabilities allow object-based applications to be distributed across multiple computers (e.g. Microsoft's Component Object Model (COM)). Service
https://en.wikipedia.org/wiki/Adjusted%20Peak%20Performance
Adjusted Peak Performance (APP) is a metric introduced by the U.S. Department of Commerce's Bureau of Industry and Security (BIS) to more accurately predict the suitability of a computing system to complex computational problems, specifically those used in simulating nuclear weapons. This is used to determine the export limitations placed on certain computer systems under the Export Administration Regulations 15 CFR. Further details can be found in the document "Practitioner's Guide To Adjusted Peak Performance". The (simplified) algorithm used to calculate APP consists of the following steps: Determine how many 64 bit (or better) floating point operations every processor in the system can perform per clock cycle (best case). This is FPO(i). Determine the clock frequency of every processor. This is F(i). Choose the weighting factor for each processor: 0.9 for vector processors and 0.3 for non-vector processors. This is W(i). Calculate the APP for the system as follows: APP = FPO(1) * F(1) * W(1) + ... + FPO(n) * F(n) * W(n). The metric was introduced in April 2006 to replace the Composite Theoretical Performance (CTP) metric which was introduced in 1993. APP was itself replaced in November 2007 when the BIS amended 15 CFR to include the December 2006 Wassenaar Arrangement Plenary Agreement Implementation's new metric - Gigaflops (GFLOPS), one billion floating point operations per second, or TeraFLOPS, one trillion floating point operations per second. The unit of measurement is Weighted TeraFLOPS (WT) to specify Adjusted Peak Performance (APP). The weighting factor is 0.3 for non-vector processors and 0.9 for vector processors. For example, a PowerPC 750 running at 800 MHz would be rated at 0.00024 WT due to being able to execute one floating point instruction per cycle and not having a vector unit. Note that only 64 bit (or wider) floating point instructions count. Notes: Processors without 64 bit (or better) floating point support have an FPO of zero
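The weighted sum described above is easy to express in code. A small Python sketch (the data format and function name are mine, not from the regulation) that reproduces the PowerPC 750 figure quoted in the text:

```python
def adjusted_peak_performance(processors):
    """Return APP in Weighted TeraFLOPS (WT).

    processors: iterable of (fpo, freq_hz, is_vector) tuples, where fpo is the
    number of 64-bit (or wider) floating-point operations per clock cycle."""
    total_flops = 0.0
    for fpo, freq_hz, is_vector in processors:
        weight = 0.9 if is_vector else 0.3     # vector vs non-vector weighting factor
        total_flops += fpo * freq_hz * weight
    return total_flops / 1e12                  # convert weighted FLOPS to Weighted TeraFLOPS

# The PowerPC 750 example from the text: one FP op per cycle at 800 MHz, no vector unit.
print(adjusted_peak_performance([(1, 800e6, False)]))   # 0.00024 WT
```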
https://en.wikipedia.org/wiki/Quantum%20cloning
Quantum cloning is a process that takes an arbitrary, unknown quantum state and makes an exact copy without altering the original state in any way. Quantum cloning is forbidden by the laws of quantum mechanics as shown by the no cloning theorem, which states that there is no operation for cloning any arbitrary state perfectly. In Dirac notation, the process of quantum cloning is described by: U |ψ⟩_A |e⟩_B = |ψ⟩_A |ψ⟩_B, where U is the actual cloning operation, |ψ⟩_A is the state to be cloned, and |e⟩_B is the initial state of the copy. Though perfect quantum cloning is not possible, it is possible to perform imperfect cloning, where the copies have a non-unit (i.e. non-perfect) fidelity. The possibility of approximate quantum cloning was first addressed by Buzek and Hillery, and theoretical bounds were derived on the fidelity of cloned quantum states. One of the applications of quantum cloning is to analyse the security of quantum key distribution protocols. Teleportation, nuclear magnetic resonance, quantum amplification, and superior phase conjugation are examples of some methods utilized to realize a quantum cloning machine. Ion trapping techniques have been applied to cloning quantum states of ions. Types of Quantum Cloning Machines It may be possible to clone a quantum state to arbitrary accuracy in the presence of closed timelike curves. Universal Quantum Cloning Universal quantum cloning (UQC) implies that the quality of the output (cloned state) is not dependent on the input, thus the process is "universal" to any input state. The output state produced is governed by the Hamiltonian of the system. One of the first cloning machines, a 1 to 2 UQC machine, was proposed in 1996 by Buzek and Hillery. As the name implies, the machine produces two identical copies of a single input qubit with a fidelity of 5/6 when comparing only one output qubit, and global fidelity of 2/3 when comparing both qubits. This idea was expanded to more general cases such as an arbitrary number of inputs and copies,
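The usual one-line argument behind the no-cloning theorem mentioned above can be sketched as follows (a standard textbook derivation, not quoted from the article): if a single unitary U cloned two arbitrary states, unitarity would force their overlap to equal its own square.

```latex
% Suppose a single unitary U clones two arbitrary states:
%   U|\psi\rangle|e\rangle = |\psi\rangle|\psi\rangle,
%   U|\phi\rangle|e\rangle = |\phi\rangle|\phi\rangle.
% Since U preserves inner products,
\[
  \langle\psi|\phi\rangle
  = \bigl(\langle\psi|\langle e|\bigr)\,U^{\dagger}U\,\bigl(|\phi\rangle|e\rangle\bigr)
  = \langle\psi|\phi\rangle^{2}
  \;\Longrightarrow\;
  \langle\psi|\phi\rangle \in \{0,\,1\},
\]
% so only identical or mutually orthogonal states could be cloned by the same
% machine, contradicting the assumption of an arbitrary input state.
```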
https://en.wikipedia.org/wiki/Models%20of%20DNA%20evolution
A number of different Markov models of DNA sequence evolution have been proposed. These substitution models differ in terms of the parameters used to describe the rates at which one nucleotide replaces another during evolution. These models are frequently used in molecular phylogenetic analyses. In particular, they are used during the calculation of likelihood of a tree (in Bayesian and maximum likelihood approaches to tree estimation) and they are used to estimate the evolutionary distance between sequences from the observed differences between the sequences. Introduction These models are phenomenological descriptions of the evolution of DNA as a string of four discrete states. These Markov models do not explicitly depict the mechanism of mutation nor the action of natural selection. Rather they describe the relative rates of different changes. For example, mutational biases and purifying selection favoring conservative changes are probably both responsible for the relatively high rate of transitions compared to transversions in evolving sequences. However, the Kimura (K80) model described below only attempts to capture the effect of both forces in a parameter that reflects the relative rate of transitions to transversions. Evolutionary analyses of sequences are conducted on a wide variety of time scales. Thus, it is convenient to express these models in terms of the instantaneous rates of change between different states (the Q matrices below). If we are given a starting (ancestral) state at one position, the model's Q matrix and a branch length expressing the expected number of changes to have occurred since the ancestor, then we can derive the probability of the descendant sequence having each of the four states. The mathematical details of this transformation from rate-matrix to probability matrix are described in the mathematics of substitution models section of the substitution model page. By expressing models in terms of the instantaneous rates of cha
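As a hedged sketch of the rate-matrix-to-probability transformation described above, the snippet below builds the simplest such model (Jukes-Cantor, JC69, in which all substitutions are equally likely) and exponentiates it; the scaling convention and variable names are choices of this sketch rather than anything prescribed by the article:

```python
import numpy as np
from scipy.linalg import expm

# Jukes-Cantor (JC69) instantaneous rate matrix Q: every substitution is equally
# likely.  Rows/columns are ordered A, C, G, T; each row sums to zero, and the
# matrix is scaled so that one unit of branch length equals one expected
# substitution per site.
mu = 4.0 / 3.0
Q = (mu / 4.0) * (np.ones((4, 4)) - 4.0 * np.eye(4))

# Probability matrix for a branch of length t: P(t) = exp(Q*t), where
# P[i, j] is the probability that ancestral state i is observed as state j after t.
t = 0.1
P = expm(Q * t)
print(P.sum(axis=1))   # each row sums to 1, as required for probabilities
```

Richer models such as K80 or GTR differ only in how Q is parameterised; the exponentiation step from rate matrix to probability matrix is the same.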
https://en.wikipedia.org/wiki/Homology%20modeling
Homology modeling, also known as comparative modeling of protein, refers to constructing an atomic-resolution model of the "target" protein from its amino acid sequence and an experimental three-dimensional structure of a related homologous protein (the "template"). Homology modeling relies on the identification of one or more known protein structures likely to resemble the structure of the query sequence, and on the production of an alignment that maps residues in the query sequence to residues in the template sequence. It has been seen that protein structures are more conserved than protein sequences amongst homologues, but sequences falling below a 20% sequence identity can have very different structure. Evolutionarily related proteins have similar sequences and naturally occurring homologous proteins have similar protein structure. It has been shown that three-dimensional protein structure is evolutionarily more conserved than would be expected on the basis of sequence conservation alone. The sequence alignment and template structure are then used to produce a structural model of the target. Because protein structures are more conserved than DNA sequences, detectable levels of sequence similarity usually imply significant structural similarity. The quality of the homology model is dependent on the quality of the sequence alignment and template structure. The approach can be complicated by the presence of alignment gaps (commonly called indels) that indicate a structural region present in the target but not in the template, and by structure gaps in the template that arise from poor resolution in the experimental procedure (usually X-ray crystallography) used to solve the structure. Model quality declines with decreasing sequence identity; a typical model has ~1–2 Å root mean square deviation between the matched Cα atoms at 70% sequence identity but only 2–4 Å agreement at 25% sequence identity. However, the errors are significantly higher in the loop reg
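The RMSD figures quoted above are root-mean-square deviations over matched Cα atoms. A minimal Python sketch of that measure (assuming the two coordinate sets are already superimposed and matched one-to-one; the function name is illustrative):

```python
import numpy as np

def ca_rmsd(model_ca, reference_ca):
    """Root-mean-square deviation between two (N, 3) arrays of matched,
    already-superimposed C-alpha coordinates (result in the input units,
    e.g. angstroms)."""
    diff = np.asarray(model_ca, float) - np.asarray(reference_ca, float)
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))
```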
https://en.wikipedia.org/wiki/High%20endothelial%20venules
High endothelial venules (HEV) are specialized post-capillary venules characterized by plump endothelial cells as opposed to the usual flatter endothelial cells found in regular venules. HEVs enable lymphocytes circulating in the blood to directly enter a lymph node (by crossing through the HEV). In humans, HEVs are found in all secondary lymphoid organs (with the exception of spleen, where blood exits through open arterioles and enters the red pulp), including hundreds of lymph nodes dispersed in the body, tonsils and adenoids in the pharynx, Peyer's patches (PIs) in the small intestine, appendix, and small aggregates of lymphoid tissue in the stomach and large intestine. In contrast to the endothelial cells from other vessels, the high endothelial cells of HEVs have a distinctive appearance, consisting of a cuboidal morphology and with various receptors to interact with leukocytes (express specialized ligands for lymphocytes and are able to support high levels of lymphocyte extravasation). HEVs enable naïve lymphocytes to move in and out of the lymph nodes from the circulatory system. HEV cells express addressins, which are specific adhesion molecules that attach to the L-selectins on lymphocytes and anchor them to the HEV wall in preparation for crossing the endothelium. The endothelial cells of HEVs have a 'plump' appearance different from the flat morphology of endothelial cells that line other vessels, and are therefore called high endothelial cells by reference to their thickness. Another characteristic of HEVs, revealed by light-microscopic examination, is the presence of a large number of lymphocytes within their walls. This illustrates the function of HEVs in lymphocyte recruitment and explains why these vessels were implicated in lymphocyte traffic from the time of their initial description. The need for HEV In order to have an adaptive immune response occur, T cells need to be activated. T cells become activated by recognising foreign antigens bound
https://en.wikipedia.org/wiki/Stratified%20columnar%20epithelium
Stratified columnar epithelium is a rare type of epithelial tissue composed of column-shaped cells arranged in multiple layers. It is found in the conjunctiva, pharynx, anus, and male urethra. It also occurs in embryo. Location Stratified columnar epithelia are found in a variety of locations, including: parts of the conjunctiva of the eye parts of the pharynx anus male urethra and vas deferens excretory duct of mammary gland and major salivary glands Embryology Stratified columnar epithelium is initially present in parts of the gastrointestinal tract in utero, before being replaced with other types of epithelium. For example, by 8 weeks, it covers the lining of the stomach. By 17 weeks, it is replaced by simple columnar epithelium. This is also found in the fetal esophagus. Function The cells function in secretion and protection. See also Pseudostratified columnar epithelium
https://en.wikipedia.org/wiki/Acinus
An acinus (; : acini; adjective, acinar or acinous) refers to any cluster of cells that resembles a many-lobed "berry," such as a raspberry (acinus is Latin for "berry"). The berry-shaped termination of an exocrine gland, where the secretion is produced, is acinar in form, as is the alveolar sac containing multiple alveoli in the lungs. Exocrine glands Acinar exocrine glands are found in many organs, including: the stomach the sebaceous gland of the scalp the salivary glands of the tongue the liver the lacrimal glands the mammary glands the pancreas the bulbourethral (Cowper's) glands The thyroid follicles can also be considered of acinar formation but in this case the follicles, being part of an endocrine gland, act as a hormonal deposit rather than to facilitate secretion. Mucous acini usually stain pale, while serous acini usually stain dark. Lungs The end of the terminal bronchioles in the lungs mark the beginning of a pulmonary acinus that includes the respiratory bronchioles, alveolar ducts, alveolar sacs, and alveoli. See also Alveolar gland Intercalated duct
https://en.wikipedia.org/wiki/Double%20grave%20accent
The double grave accent () is a diacritic used in scholarly discussions of the Serbo-Croatian and sometimes Slovene languages. It is also used in the International Phonetic Alphabet. In Serbo-Croatian and Slovenian, double grave accent is used to indicate a short falling tone, though in discussion of Slovenian, a single grave accent is also often used for this purpose. The double grave accent is found in both Latin and Cyrillic; however, it is not used in the everyday orthography of any language, only in discussions of phonology. In the International Phonetic Alphabet, the double grave accent is used to indicate extra-low tone. The letters a e i o r u and their Cyrillic equivalents а е и о р у can all be found with the double grave accent. Unicode provides precomposed characters for the uppercase and the lowercase Latin letters but not the Cyrillic letters. The Cyrillic letters can be formed using the combining character for the double grave, which is located at U+030F. The combining character can also be used with IPA vowel symbols, if necessary. Letters with double grave Unicode See also Grave accent Double acute accent Inverted breve Izhitsa, a Cyrillic letter with a form that visually resembles a double grave accent IPA
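The Unicode behaviour described above is easy to verify in Python (a small illustrative sketch; it simply combines U+030F with base letters and checks the canonical decomposition of a precomposed Latin letter):

```python
import unicodedata

# Cyrillic letters have no precomposed double-grave forms, so the combining
# character U+030F is appended to the base letter:
combined = "\u0430\u030f"                      # Cyrillic small a + combining double grave
print(unicodedata.name("\u030f"))              # COMBINING DOUBLE GRAVE ACCENT

# Latin letters do have precomposed code points, e.g. U+0201:
precomposed = "\u0201"                         # LATIN SMALL LETTER A WITH DOUBLE GRAVE
print(unicodedata.normalize("NFD", precomposed) == "a\u030f")   # True
```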
https://en.wikipedia.org/wiki/List%20of%20South%20Korean%20flags
This is a list of flags used in South Korea, from 1945 to the present. National flags National government flags Military flags Political flags Flags of subdivisions Provincial-level division flags Flags of other cities Historical flags Historical flags of other cities North Korean provincial flags As the South Korean government claims the territory of North Korea as its own, provincial flags also exist for the North Korean provinces that are claimed by South Korea. The following are flags of the five Korean provinces located entirely north of the Military Demarcation Line as according to the South Korean government, as it formally claims to be the sole legitimate government of the entire Korean Peninsula. See also Flag of South Korea List of Korean flags Emblem of South Korea
https://en.wikipedia.org/wiki/National%20Botanical%20Research%20Institute
The National Botanical Research Institute (NBRI) is a research institute of the Council of Scientific and Industrial Research (CSIR) located in Lucknow, Uttar Pradesh, India. It is engaged in the field of taxonomy and modern biology. History Originally conceptualised and set up as the National Botanic Gardens (NBG) by Professor Kailas Nath Kaul on behalf of the State Government of Uttar Pradesh, it was taken over by the CSIR in 1953. Dr Triloki Nath Khoshoo joined in 1964 as the Assistant Director, shortly afterwards becoming the Director. Initially engaged in research work in the classical botanical disciplines, the NBG went on laying an increasing emphasis in keeping with the national needs and priorities in the field of plant sciences, on its applied and developmental research activities. Due to the untiring efforts of Dr Khoshoo, the institute rose to the stature of being the National Botanical Research Institute in 1978, reflecting the correct nature and extent of its aims and objectives, functions and R & D activities. Sikandar Bagh is a famous and historic pleasure garden, located in the grounds of the Institute. Achievements NBRI developed a new variety of bougainvillea, named Los Banos Variegata-Jayanthi. In a move to fight against whiteflies National Botanical Research Institute (NBRI) Lucknow has developed a pest resistant variety of cotton. A group of innovators developed first indigenous transgenic cotton variety expressing bt protein. South Africa National Botanical Research Institute (NBRI) is also the state botanical research institute of South Africa.
https://en.wikipedia.org/wiki/Central%20Institute%20of%20Medicinal%20and%20Aromatic%20Plants
Central Institute of Medicinal and Aromatic Plants, popularly known as CIMAP, is a frontier plant research laboratory of Council of Scientific and Industrial Research (CSIR). Established originally as Central Indian Medicinal Plants Organisation (CIMPO) in 1959, CIMAP is steering multidisciplinary high quality research in biological and chemical sciences and extending technologies and services to the farmers and entrepreneurs of medicinal and aromatic plants (MAPs) with its research headquarter at Lucknow and Research Centres at Bangalore, Hyderabad, Pantnagar and Purara. CIMAP Research Centres are aptly situated in different agro-climatic zones of the country to facilitate multi-location field trials and research. A little more than 50 years since its inception, today, CIMAP has extended its wings overseas with scientific collaboration agreements with Malaysia. CSIR-CIMAP has signed two agreements to promote bilateral cooperation between India and Malaysia in research, development and commercialization of MAP related technologies. CIMAP's contribution to the Indian economy through its MAPs research is well known. Mint varieties released and agro-packages developed and popularised by CIMAP has made India the global leader in mints and related industrial products. CIMAP has released several varieties of the MAPs, their complete agro-technology and post harvest packages which have revolutionised MAPs cultivation and business scenario of the country. Recognizing the urgent need for stimulating research on medicinal plants in the country and for coordinating and consolidating some work already done by organizations like the Indian council of Agricultural Research, Indian Council of Medical Research, Tropical School of Medicine of Culcutta and various States Governments and Individual workers, the Council Scientific and Industrial Research approved in 1957 the establishment of the Central Indian Medicinal Plants Organization (CIMPO) with the following objectives. ‘To co-
https://en.wikipedia.org/wiki/Rake%20%28cellular%20automaton%29
A rake, in the lexicon of cellular automata, is a type of puffer train, which is an automaton that leaves behind a trail of debris. In the case of a rake, however, the debris left behind is a stream of spaceships, which are automata that "travel" by looping through a short series of iterations and end up in a new location after each cycle returns to the original configuration. In Conway's Game of Life, the discovery of rakes was one of the key components needed to form the breeder, the first known pattern in Life in which the number of live cells exhibits quadratic growth. A breeder is formed by arranging several rakes so that the gliders—the smallest possible spaceships—they generate interact to form a sequence of glider guns, patterns which emit gliders. The emitted gliders fill a growing triangle of the plane of the game. More generally, when a rake exists for a cellular automaton rule (a mathematical function defining the next iteration to be derived from a particular configuration of live and dead cells), one can often construct puffers which leave trails of many other kinds of objects, by colliding the streams of spaceships emitted by multiple rakes moving in parallel. As David Bell writes: The first rake to be discovered, in the early 1970s, was the "space rake", which moves with speed c/2 (or one unit every two steps), emitting a glider every twenty steps. For Life, rakes are now known that move orthogonally with speeds c/2, c/3, c/4, c/5, 2c/5, 2c/7, c/10 and 17c/45, and diagonally with speeds c/4 and c/12, with many different periods. Rakes are also known for some other cellular automata, including Highlife, Day & Night, and Seeds. Gotts (1980) shows that the space rake in Life can be formed by a "standard collision sequence" in which a single glider interacts with a widely separated set of 3-cell initial seeds (blinkers and blocks). As a consequence, he finds lower bounds on the probability that these patterns form in any sufficiently sparse and suf
https://en.wikipedia.org/wiki/Reflector%20%28cellular%20automaton%29
In cellular automata such as Conway's Game of Life, a reflector is a pattern that can interact with a spaceship to change its direction of motion, without damage to the reflector pattern. In Life, many oscillators can reflect the glider; there also exist stable reflectors composed of still life patterns that, when they interact with a glider, reflect the glider and return to their stable state. External links New stable 180-degree glider reflector, Game of Life News, May 30, 2009 New stable 90-degree glider reflector, Game of Life News, May 29, 2013 Cellular automaton patterns
https://en.wikipedia.org/wiki/Schreier%20conjecture
In finite group theory, the Schreier conjecture asserts that the outer automorphism group of every finite simple group is solvable. It was proposed by Otto Schreier in 1926, and is now known to be true as a result of the classification of finite simple groups, but no simpler proof is known.
https://en.wikipedia.org/wiki/Netocracy
Netocracy was a term invented by the editorial board of the American technology magazine Wired in the early 1990s. A portmanteau of Internet and aristocracy, netocracy refers to a perceived global upper-class that bases its power on a technological advantage and networking skills, in comparison to what is portrayed as a bourgeoisie of a gradually diminishing importance. The concept was later picked up and redefined by Alexander Bard and Jan Söderqvist for their book Netocracy — The New Power Elite and Life After Capitalism (originally published in Swedish in 2000 as Nätokraterna –t boken om det elektroniska klassamhället, published in English by Reuters/Pearsall UK in 2002). The netocracy concept has been compared with Richard Florida's concept of the creative class. Bard and Söderqvist have also defined an underclass in opposition to the netocracy, which they refer to as the consumtariat. The consumtariat Alexander Bard describes a new underclass called the consumtariat, a portmanteau of consumer and proletariat, whose main activity is consumption, regulated from above. It is kept occupied with private problems, its desires provoked with the use of advertisements and its active participation is limited to things like product choice, product customization, engaging with interactive products and life-style choice. Cyberdeutocracy Similar to netocracy, is the concept of cyberdeutocracy. Karl W. Deutsch in his book The Nerves of Government: Models of Political Communication and Control hypothesized about "information elites, controlling means of mass communication and, accordingly, power institutions, the functioning of which is based on the use of information in their activities." Thus Deutsch introduced the concept of deutocracy, combining the words 'Deutsch' and 'autocracy' to get the new term. Cyberdeutocracy combines 'deutocracy' with the prefix 'cyber-' and is defined as a political regime based on the control by the political and corporate elites of the info
https://en.wikipedia.org/wiki/Rokhlin%27s%20theorem
In 4-dimensional topology, a branch of mathematics, Rokhlin's theorem states that if a smooth, orientable, closed 4-manifold M has a spin structure (or, equivalently, the second Stiefel–Whitney class w₂(M) vanishes), then the signature of its intersection form, a quadratic form on the second cohomology group H²(M), is divisible by 16. The theorem is named for Vladimir Rokhlin, who proved it in 1952. Examples The intersection form on M is unimodular on H²(M; Z) by Poincaré duality, and the vanishing of w₂(M) implies that the intersection form is even. By a theorem of Cahit Arf, any even unimodular lattice has signature divisible by 8, so Rokhlin's theorem forces one extra factor of 2 to divide the signature. A K3 surface is compact, 4 dimensional, its class w₂ vanishes, and the signature is −16, so 16 is the best possible number in Rokhlin's theorem. A complex surface in CP³ of degree d is spin if and only if d is even. It has signature (4 − d²)d/3, which can be seen from Friedrich Hirzebruch's signature theorem. The case d = 4 gives back the last example of a K3 surface. Michael Freedman's E8 manifold is a simply connected compact topological manifold with vanishing w₂ and intersection form E8 of signature 8. Rokhlin's theorem implies that this manifold has no smooth structure. This manifold shows that Rokhlin's theorem fails for the set of merely topological (rather than smooth) manifolds. If the manifold M is simply connected (or more generally if the first homology group has no 2-torsion), then the vanishing of w₂(M) is equivalent to the intersection form being even. This is not true in general: an Enriques surface is a compact smooth 4 manifold and has even intersection form II1,9 of signature −8 (not divisible by 16), but the class w₂ does not vanish and is represented by a torsion element in the second cohomology group. Proofs Rokhlin's theorem can be deduced from the fact that the third stable homotopy group of spheres is cyclic of order 24; this is Rokhlin's original approach. It can also be deduced from the Atiyah
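As a check on the signature formula quoted above, the Hirzebruch signature theorem computation for a smooth degree-d surface S in CP³ can be written out explicitly (a standard calculation sketched here under the usual conventions, not text taken from the article):

```latex
% For a smooth degree-d surface S in CP^3, adjunction gives c_1(S) = (4-d)H
% with H^2 = d on S, and c_2(S) = \chi(S) = d^3 - 4d^2 + 6d.  The signature
% theorem \sigma = (c_1^2 - 2c_2)/3 then yields
\[
  \sigma(S) \;=\; \frac{c_1^2 - 2c_2}{3}
            \;=\; \frac{(4-d)^2 d \,-\, 2\,(d^3 - 4d^2 + 6d)}{3}
            \;=\; \frac{(4 - d^2)\,d}{3},
\]
% which equals -16 for the quartic (d = 4), i.e. the K3 surface.
```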
https://en.wikipedia.org/wiki/Speech%E2%80%93language%20pathology
Speech–language pathology (or speech and language pathology) is a field of healthcare expertise practiced globally. Speech–language pathology (SLP) specializes in the evaluation, diagnosis, treatment, and prevention of communication disorders (speech and language impairments), cognitive-communication disorders, voice disorders, pragmatic disorders, social communication difficulties and swallowing disorders across the lifespan. It is an independent profession considered an "allied health profession" by professional bodies like the American Speech-Language-Hearing Association (ASHA) and Speech Pathology Australia. Allied health professions include audiology, optometry, occupational therapy, rehabilitation psychology, physical therapy and others. The field of speech-language pathology is practiced by a clinician known as a speech-language pathologist (SLP) or a speech and language therapist (SALT), and sometimes as a speech therapist. An SLP is a university-trained individual who provides professional services in the areas of communication and swallowing. SLPs also play an important role in the screening, diagnosis and treatment of autism spectrum disorder, often in collaboration with pediatricians and psychologists. SLPs specialize in the evaluation, diagnosis, treatment, and prevention of communication disorders (speech and language impairments), cognitive-communication disorders, voice disorders, and swallowing disorders. SLPs also play an important role in the diagnosis and treatment of autism spectrum disorder (often in a team with pediatricians and psychologists). History Early history In the 18th century, speech problems were viewed as symptoms of disease. Speech therapy was therefore provided to treat the diseases using a medical framework. Jean-Marc Itard, one of the known physicians from this era, practiced the medical model. In his 1817 writings he theorized that stuttering was a result of a problem of the nerves that contro
https://en.wikipedia.org/wiki/Dodecahedral%20conjecture
The dodecahedral conjecture in geometry is intimately related to sphere packing. László Fejes Tóth, a 20th-century Hungarian geometer, considered the Voronoi decomposition of any given packing of unit spheres. He conjectured in 1943 that the minimal volume of any cell in the resulting Voronoi decomposition was at least as large as the volume of a regular dodecahedron circumscribed to a unit sphere. Thomas Callister Hales and Sean McLaughlin proved the conjecture in 1998, following the same strategy that led Hales to his proof of the Kepler conjecture. The proofs rely on extensive computations. McLaughlin was awarded the 1999 Morgan Prize for his contribution to this proof.
https://en.wikipedia.org/wiki/Memory%20module
In computing, a memory module or RAM (random-access memory) stick is a printed circuit board on which memory integrated circuits are mounted. Memory modules permit easy installation and replacement in electronic systems, especially computers such as personal computers, workstations, and servers. The first memory modules were proprietary designs that were specific to a model of computer from a specific manufacturer. Later, memory modules were standardized by organizations such as JEDEC and could be used in any system designed to use them. Types of memory module include: TransFlash Memory Module SIMM, a single in-line memory module DIMM, dual in-line memory module Rambus memory modules are a subset of DIMMs, but are normally referred to as RIMMs SO-DIMM, small outline DIMM, a smaller version of the DIMM, used in laptops Compression Attached Memory Module, thinner than SO-DIMM Distinguishing characteristics of computer memory modules include voltage, capacity, speed (i.e., bit rate), and form factor. For economic reasons, the large (main) memories found in personal computers, workstations, and non-handheld game-consoles (such as PlayStation and Xbox) normally consist of dynamic RAM (DRAM). Other parts of the computer, such as cache memories normally use static RAM (SRAM). Small amounts of SRAM are sometimes used in the same package as DRAM. However, since SRAM has high leakage power and low density, die-stacked DRAM has recently been used for designing multi-megabyte sized processor caches. Physically, most DRAM is packaged in black epoxy resin. General DRAM formats Dynamic random access memory is produced as integrated circuits (ICs) bonded and mounted into plastic packages with metal pins for connection to control signals and buses. In early use individual DRAM ICs were usually either installed directly to the motherboard or on ISA expansion cards; later they were assembled into multi-chip plug-in modules (DIMMs, SIMMs, etc.). Some standard module types ar
https://en.wikipedia.org/wiki/SYBYL%20line%20notation
The SYBYL line notation or SLN is a specification for unambiguously describing the structure of chemical molecules using short ASCII strings. SLN differs from SMILES in several significant ways. SLN can specify molecules, molecular queries, and reactions in a single line notation whereas SMILES handles these through language extensions. SLN has support for relative stereochemistry, it can distinguish mixtures of enantiomers from pure molecules with pure but unresolved stereochemistry. In SMILES aromaticity is considered to be a property of both atoms and bonds whereas in SLN it is a property of bonds. Description Like SMILES, SLN is a linear language that describes molecules. This provides a lot of similarity with SMILES despite SLN's many differences from SMILES, and as a result this description will heavily compare SLN to SMILES and its extensions. Attributes Attributes, bracketed strings with additional data like [key1=value1, key2...], is a core feature of SLN. Attributes can be applied to atoms and bonds. Attributes not defined officially are available to users for private extensions. When searching for molecules, comparison operators such as fcharge>-0.125 can be used in place of the usual equal sign. A ! preceding a key/value group inverts the result of the comparison. Entire molecules or reactions can too have attributes. The square brackets are changed to a pair of <> signs. Atoms Anything that starts with an uppercase letter identifies an atom in SLN. Hydrogens are not automatically added, but the single bonds with hydrogen can be abbreviated for organic compounds, resulting in CH4 instead of C(H)(H)(H)H for methane. The author argues that explicit hydrogens allow for more robust parsing. Attributes defined for atoms include I= for isotope mass number, charge= for formal charge, fcharge for partial charge, s= for stereochemistry, and spin= for radicals (s, d, t respectively for singlet, doublet, triplet). A formal charge of charge=2 can be abbrevi
https://en.wikipedia.org/wiki/Astroparticle%20physics
Astroparticle physics, also called particle astrophysics, is a branch of particle physics that studies elementary particles of astronomical origin and their relation to astrophysics and cosmology. It is a relatively new field of research emerging at the intersection of particle physics, astronomy, astrophysics, detector physics, relativity, solid state physics, and cosmology. Partly motivated by the discovery of neutrino oscillation, the field has undergone rapid development, both theoretically and experimentally, since the early 2000s. History The field of astroparticle physics is evolved out of optical astronomy. With the growth of detector technology came the more mature astrophysics, which involved multiple physics subtopics, such as mechanics, electrodynamics, thermodynamics, plasma physics, nuclear physics, relativity, and particle physics. Particle physicists found astrophysics necessary due to difficulty in producing particles with comparable energy to those found in space. For example, the cosmic ray spectrum contains particles with energies as high as 1020 eV, where a proton–proton collision at the Large Hadron Collider occurs at an energy of ~1012 eV. The field can be said to have begun in 1910, when a German physicist named Theodor Wulf measured the ionization in the air, an indicator of gamma radiation, at the bottom and top of the Eiffel Tower. He found that there was far more ionization at the top than what was expected if only terrestrial sources were attributed for this radiation. The Austrian physicist Victor Francis Hess hypothesized that some of the ionization was caused by radiation from the sky. In order to defend this hypothesis, Hess designed instruments capable of operating at high altitudes and performed observations on ionization up to an altitude of 5.3 km. From 1911 to 1913, Hess made ten flights to meticulously measure ionization levels. Through prior calculations, he did not expect there to be any ionization above an altitude of 50
https://en.wikipedia.org/wiki/Lumbar%20enlargement
The lumbar enlargement (or lumbosacral enlargement) is a widened area of the spinal cord that gives attachment to the nerves which supply the lower limbs. It commences at about the level of T11, ends at L2, and reaches a maximum circumference of about 33 mm. Inferior to the lumbar enlargement is the conus medullaris. An analogous region for the upper limbs exists at the cervical enlargement.
https://en.wikipedia.org/wiki/Cervical%20enlargement
The cervical enlargement corresponds with the attachments of the large nerves which supply the upper limbs. Located just above the brachial plexus, it extends from about the fifth cervical to the first thoracic vertebra, its maximum circumference (about 38 mm) being on a level with the attachment of the sixth pair of cervical nerves. The cervical region is enlarged because of the increased neural input from, and output to, the upper limbs. An analogous region in the lower limbs occurs at the lumbar enlargement.
https://en.wikipedia.org/wiki/Nudity%20and%20sexuality
Nudity is one of the physiological characteristics of humans, who alone among primates evolved to be effectively hairless. Human sexuality includes the physiological, psychological, and social aspects of sexual feelings and behaviors. In many societies, a strong link between nudity and sexuality is taken for granted. Other societies maintain their traditional practices of being completely or partially naked in social as well as private situations, such as going to a beach or spa. The meaning of nudity and sexuality remains ambivalent, often leading to cultural misunderstandings and psychological problems. Sexual response to social nudity The link between the nude body and a sexual response is reflected in the legal prohibition of indecent exposure in the majority of societies. Worldwide, some societies recognize certain places and activities that, although public, are appropriate for partial or complete nudity. These include societies that maintain traditional norms regarding nudity, as well as modern societies that have large numbers of people who have adopted naturism in recreational activities. Naturists typically adopt a number of behaviors, such as refraining from touch, in order to avoid sexual responses while participating in nude activities. Many nude beaches serve as an example of this. Naturism and sex Some naturists do not maintain this non-sexual atmosphere. In a 1991 article in Off Our Backs, Nina Silver presents an account of mainstream sexual culture's intrusion into some American naturist groups. Nudist resorts may attract misogynists or pedophiles who are not always dealt with properly, and some resorts may cater to "swingers" or have sexually provocative events to generate revenue or attract members. Breasts and sexuality In many societies, the breast continues to be associated with both nurturing babies and sexuality. The "topfreedom" movement promotes equal rights for women to be naked above the waist in public under the same circumstan
https://en.wikipedia.org/wiki/Recombination-activating%20gene
The recombination-activating genes (RAGs) encode parts of a protein complex that plays important roles in the rearrangement and recombination of the genes encoding immunoglobulin and T cell receptor molecules. There are two recombination-activating genes, RAG1 and RAG2, whose cellular expression is restricted to lymphocytes during their developmental stages. The enzymes encoded by these genes, RAG-1 and RAG-2, are essential to the generation of mature B cells and T cells, two types of lymphocyte that are crucial components of the adaptive immune system. Function In the vertebrate immune system, each antibody is customized to attack one particular antigen (foreign proteins and carbohydrates) without attacking the body itself. The human genome has at most 30,000 genes, and yet it generates millions of different antibodies, allowing it to respond to invasion from millions of different antigens. The immune system generates this diversity of antibodies by shuffling, cutting and recombining a few hundred genes (the VDJ genes) to create millions of permutations, in a process called V(D)J recombination. RAG-1 and RAG-2 are proteins that act at the ends of the VDJ gene segments to separate, shuffle, and rejoin them. This shuffling takes place inside B cells and T cells during their maturation. RAG enzymes work as a multi-subunit complex to induce cleavage of a single double-stranded DNA (dsDNA) molecule between the antigen receptor coding segment and a flanking recombination signal sequence (RSS). They do this in two steps. They initially introduce a 'nick' in the 5' (upstream) end of the RSS heptamer (a conserved region of 7 nucleotides) that is adjacent to the coding sequence, leaving behind a specific biochemical structure on this region of DNA: a 3'-hydroxyl (OH) group at the coding end and a 5'-phosphate (PO4) group at the RSS end. The next step couples these chemical groups, binding the OH-group (on the coding end) to the PO4-group (that is sitting between t
https://en.wikipedia.org/wiki/Test%20Anything%20Protocol
The Test Anything Protocol (TAP) is a protocol to allow communication between unit tests and a test harness. It allows individual tests (TAP producers) to communicate test results to the testing harness in a language-agnostic way. Originally developed for unit testing of the Perl interpreter in 1987, producers and parsers are now available for many development platforms. History TAP was created for the first version of the Perl programming language (released in 1987), as part of Perl's core test harness (t/TEST). The Test::Harness module was written by Tim Bunce and Andreas König to allow Perl module authors to take advantage of TAP. It became the de facto standard for Perl testing. Development of TAP, including standardization of the protocol, writing of test producers and consumers, and evangelizing the language, is coordinated at the TestAnything website. As a protocol which is agnostic of programming language, TAP unit testing libraries expanded beyond their Perl roots and have been developed for various languages and systems such as PostgreSQL, MySQL, JavaScript and other implementations listed on the project site. A TAP C library is included as part of the FreeBSD Unix distribution and is used in the system's regression test suite. Specification A formal specification for this protocol exists in the TAP::Spec::Parser and TAP::Parser::Grammar modules. The behavior of the Test::Harness module is the de facto TAP standard implementation, along with a writeup of the specification on https://testanything.org. A project to produce an IETF standard for TAP was initiated in August 2008, at YAPC::Europe 2008. Usage examples Here's an example of TAP's general format:
1..48
ok 1 Description # Directive
# Diagnostic
....
ok 47 Description
ok 48 Description
For example, a test file's output might look like:
1..4
ok 1 - Input file opened
not ok 2 - First line of the input valid.
    More output from test 2. There can be an arbitrary number of lines for any outp
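As a sketch of how a producer emits this format, the snippet below runs a few checks and prints TAP to standard output, following the plan/ok/not ok lines shown above. The checks themselves are made-up examples, and this is not one of the existing TAP libraries.

    def run_tap(tests):
        """Minimal TAP producer: print a plan line, then one result line per test."""
        print(f"1..{len(tests)}")
        for number, (description, func) in enumerate(tests, start=1):
            try:
                ok = bool(func())
            except Exception as exc:          # a crashing test counts as a failure
                ok = False
                print(f"# {type(exc).__name__}: {exc}")   # diagnostic line
            status = "ok" if ok else "not ok"
            print(f"{status} {number} - {description}")

    # Hypothetical checks used only to demonstrate the output format.
    tests = [
        ("addition works",         lambda: 1 + 1 == 2),
        ("string upper-cases",     lambda: "tap".upper() == "TAP"),
        ("division by zero fails", lambda: 1 / 0),
    ]
    run_tap(tests)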
https://en.wikipedia.org/wiki/Inferior%20costal%20facet
The inferior costal facet (or inferior costal fovea) is a site where a rib forms a joint with the inferior aspect of the body of a thoracic vertebra. The facets are named for their location on the vertebral body, not the rib. The inferior costal facet is located on the inferior aspect of the vertebral body, but has a superior location on the rib. Similarly, the superior costal facet is superior on the vertebral body but is inferior on the rib.
https://en.wikipedia.org/wiki/Superior%20costal%20facet
The superior costal facet (or superior costal fovea) is a site where a rib forms a joint with the top of a vertebra. Ribs connect to the thoracic vertebrae at two main points, the inferior and superior costal facets, which lie on two adjacent vertebrae. For a given rib, the superior costal facet is located on the lower vertebra of the pair and the inferior costal facet on the upper one. Although these terms can be confusing, the costal facets are named for their position on the vertebral body itself, not for the part of the rib that they articulate with. Such paired facets apply only to ribs 2–9; ribs 1, 10, 11, and 12 each articulate with a single thoracic vertebra rather than between two of them.
https://en.wikipedia.org/wiki/Transverse%20costal%20facet
The transverse costal facet (or transverse costal fovea) is one of the costal facets, a site where a rib forms a joint with the transverse process of a thoracic vertebra.
https://en.wikipedia.org/wiki/Arthur%20P.%20Dempster
Arthur Pentland Dempster (born 1929) is a Professor Emeritus in the Harvard University Department of Statistics. He was one of four faculty members when the department was founded in 1957. Biography Dempster received his B.A. in mathematics and physics (1952) and M.A. in mathematics (1953), both from the University of Toronto. He obtained his Ph.D. in mathematical statistics from Princeton University in 1956. His thesis, titled The two-sample multivariate problem in the degenerate case, was written under the supervision of John Tukey. Academic works Among his contributions to statistics are the Dempster–Shafer theory and the expectation-maximization (EM) algorithm. Honors and awards Dempster was a Putnam Fellow in 1951. He was elected as an American Statistical Association Fellow in 1964, an Institute of Mathematical Statistics Fellow in 1963, and an American Academy of Arts and Sciences Fellow in 1997.
https://en.wikipedia.org/wiki/Basidiobolomycosis
Basidiobolomycosis is a fungal disease caused by Basidiobolus ranarum. It may appear as one or more painless firm nodules in the skin which become purplish, with an edge that appears to be slowly growing outwards. A serious but less common type affects the stomach and intestine, which usually presents with abdominal pain, fever and a mass. B. ranarum can be found in soil and decaying vegetables and has been isolated from insects, some reptiles, amphibians, and mammals. The disease results from direct entry of the fungus through broken skin, such as an insect bite or trauma, or from eating contaminated food. It generally affects otherwise healthy people. Diagnosis is by medical imaging, biopsy, microscopy, culture and histopathology. Treatment usually involves amphotericin B and surgery. Although B. ranarum is found around the world, the disease basidiobolomycosis is generally reported in tropical and subtropical areas of Africa, South America, Asia and the Southwestern United States. It is rare. The first case in a human was reported from Indonesia in 1956 as a skin infection. Signs and symptoms Basidiobolomycosis may appear as a firm nodule in the skin which becomes purplish, with an edge that appears to be slowly growing outwards. It is generally painless but may feel itchy or burning. There can be one lesion or several, usually on the arms or legs of children. Pus may be present if a bacterial infection also occurs. The infection can spread to nearby structures such as muscles, bones and lymph nodes. A serious but less common type affects the stomach and intestine, which usually presents with abdominal pain, fever and a mass. Lymphoedema may occur. Mechanism Basidiobolomycosis is a type of entomophthoromycosis, the other being conidiobolomycosis, and is caused by Basidiobolus ranarum, a fungus belonging to the order Entomophthorales. B. ranarum has been found in soil and decaying vegetables and has been isolated from insects, some reptiles, amphibians, and mammals. The disease
https://en.wikipedia.org/wiki/Glossary%20of%20shapes%20with%20metaphorical%20names
Many shapes have metaphorical names, i.e., their names are metaphors: these shapes are named after a common object that has that shape. For example, "U-shape" is a shape that resembles the letter U, a bell-shaped curve has the shape of the vertical cross-section of a bell, etc. These terms may variously refer to objects, their cross sections or projections.
Types of shapes
Some of these names are "classical terms", i.e., words of Latin or Ancient Greek etymology. Others are English language constructs (although the base words may have non-English etymology). In some disciplines, where shapes of subjects in question are a very important consideration, the shape naming may be quite elaborate, see, e.g., the taxonomy of shapes of plant leaves in botany.
Astroid
Aquiline, shaped like an eagle's beak (as in a Roman nose)
Bell-shaped curve
Biconic shape, a shape in a way opposite to the hourglass: it is based on two oppositely oriented cones or truncated cones with their bases joined; the cones are not necessarily the same
Bowtie shape, in two dimensions
Atmospheric reentry apparatus
Centerbody of an inlet cone in ramjets
Bow shape
Bow curve
Bullet Nose, an open-ended hourglass
Butterfly curve (algebraic)
Cocked hat curve, also known as Bicorn
Cone (from the Greek word for «pine cone»)
Doughnut shape
Egg-shaped, see "Oval", below
Geoid (from Greek Ge (γη) for "Earth"), the term specifically introduced to denote the approximation of the shape of the Earth, which is approximately spherical, but not exactly so
Heart shape, long used for its varied symbolism
Horseshoe-shaped, resembling a horseshoe; cf. horseshoe (disambiguation). In botany, also called lecotropal (see below)
Hourglass shape or hourglass figure, the one that resembles an hourglass; a nearly symmetric shape wide at its ends and narrow in the middle; some flat shapes may alternatively be compared to the figure eight or hourglass
Dog bone shape, an hourglass with rounded ends
Hourglass co
https://en.wikipedia.org/wiki/Math%20Rescue
Math Rescue is a 1992 educational platform game created by Karen Crowther of Redwood Games and published by Apogee Software. Its early pre-release title was "Number Rescue". Released in October 1992 for the MS-DOS platform, it is a loose successor to the earlier game Word Rescue, whose game engine was used to power the new game with minor changes. Math Rescue was initially released as shareware but later achieved a retail release. It was followed by Math Rescue Plus. There were plans for a sequel called "Gruzzle Puzzles", but it was never started. The registered version of Math Rescue remains available for purchase. It was also released on Steam in 2015 with support for Windows and Mac OS. The game also allows the player to use a pair of stereoscopic vision glasses. Gameplay Math Rescue revolves around solving math problems. Boxes float in the air. Touching a box transports the player inside it, where they solve a math problem to get points. Math Rescue includes three episodes, called Visit Volcanoes and Ice Caves, Follow the Gruzzles into Space, and See Candy Land. Only the first episode is distributed as shareware, while the others are available commercially. Each episode contains 15 levels. The game also has a learning mode that permanently enables word problems in the game. Plot The world's numbers are stolen by the Gruzzles, which causes problems. The player character spots a Gruzzle removing numbers from the front of a house. Benny the Butterfly comes to the player character's aid and they begin a quest to stop the Gruzzles and retrieve the stolen numbers. It turns out that an alien race called the Glixerians from the planet Glixer II have sent their pets, the Gruzzles, to steal Earth's numbers so that the humans will become the Glixerians' new pets. It is revealed that Benny was once a Gruzzle who took refuge on Earth. Benny recognises a Gruzzle named Zorja as his cousin and convinces her that the Gruzzles can be free on Earth away fr
https://en.wikipedia.org/wiki/Generalized%20inverse
In mathematics, and in particular, algebra, a generalized inverse (or g-inverse) of an element x is an element y that has some properties of an inverse element but not necessarily all of them. The purpose of constructing a generalized inverse of a matrix is to obtain a matrix that can serve as an inverse in some sense for a wider class of matrices than invertible matrices. Generalized inverses can be defined in any mathematical structure that involves associative multiplication, that is, in a semigroup. This article describes generalized inverses of a matrix A: a matrix G is a generalized inverse of a matrix A if AGA = A. A generalized inverse exists for an arbitrary matrix, and when a matrix has a regular inverse, this inverse is its unique generalized inverse. Motivation Consider the linear system Ax = y, where A is an n × m matrix and y lies in R(A), the column space of A. If A is nonsingular (which implies n = m) then x = A⁻¹y will be the solution of the system. Note that, if A is nonsingular, then AA⁻¹A = A. Now suppose A is rectangular (n ≠ m), or square and singular. Then we need a right candidate G of order m × n such that for all y in R(A), AGy = y. That is, x = Gy is a solution of the linear system Ax = y. Equivalently, we need a matrix G of order m × n such that AGA = A. Hence we can define the generalized inverse as follows: given an n × m matrix A, an m × n matrix G is said to be a generalized inverse of A if AGA = A. The matrix A⁻¹ has been termed a regular inverse of A by some authors. Types Important types of generalized inverse include:
One-sided inverse (right inverse or left inverse). Right inverse: if the matrix A has dimensions n × m and rank n, then there exists an m × n matrix B called the right inverse of A such that AB = I_n, where I_n is the n × n identity matrix. Left inverse: if the matrix A has dimensions n × m and rank m, then there exists an m × n matrix B called the left inverse of A such that BA = I_m, where I_m is the m × m identity matrix.
Bott–Duffin inverse
Drazin inverse
Moore–Penrose inverse
Some generalized inverses are defined and classified based on the Penrose conditions: (1) AGA = A, (2) GAG = G, (3) (AG)* = AG, (4) (GA)* = GA, where * denotes conjugate transpose
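As a quick numerical illustration of the defining property AGA = A, the sketch below uses NumPy's Moore–Penrose pseudoinverse (one particular generalized inverse) on a rectangular matrix; the matrix values are arbitrary.

    import numpy as np

    # Arbitrary 3x2 (rectangular, hence non-invertible) matrix.
    A = np.array([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 6.0]])

    G = np.linalg.pinv(A)              # Moore-Penrose pseudoinverse, a 2x3 generalized inverse

    print(np.allclose(A @ G @ A, A))   # True: AGA = A (first Penrose condition)
    print(np.allclose(G @ A @ G, G))   # True: GAG = G (second Penrose condition)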
https://en.wikipedia.org/wiki/Vestigial%20response
A vestigial response or vestigial reflex in a species is a response that has lost its original function. In humans, vestigial responses include ear perking, goose bumps and the hypnic jerk. In humans Ear perking Some people have slight protrusions on the outer ear (also known as the auricle), towards the top of the auricle; this feature is known as Darwin's tubercle. The accepted scientific explanation is that tubercles of the auricle in humans are vestigial structures testifying to our evolutionary past: a throwback to the pointed ears of many mammals and one more vestigial trace of human evolutionary history. A much later observation concerns the automatic ear-perking response seen, for example, in dogs when startled by a sudden noise. This response, though faint, fleeting and hardly discernible in humans, nonetheless clearly manifests itself, and it activates even before a human becomes consciously aware that a startling, unexpected or unknown sound has been "heard". That this vestigial response occurs before conscious awareness helps explain why the function of ear perking evolved in animals: the mechanism gives a split-second advantage to a startled animal, possibly an animal being stalked and hunted, by gathering and focusing more audible information that is fed into the brain and analyzed even before the animal actually becomes aware of the sound. This fraction-of-a-second advantage, which could spell the difference between life and death, would explain the evolutionary selection for this response. Goose
https://en.wikipedia.org/wiki/Quantum%20game%20theory
Quantum game theory is an extension of classical game theory to the quantum domain. It differs from classical game theory in three primary ways: superposed initial states, quantum entanglement of initial states, and superposition of strategies to be used on the initial states. This theory is based on the physics of information, much like quantum computing. History In 1999, David A. Meyer, a professor in the mathematics department at the University of California, San Diego, first published Quantum Strategies, which details a quantum version of the classical game of matching pennies. In the quantum version, players are allowed access to quantum signals through the phenomenon of quantum entanglement. Superposed initial states The information transfer that occurs during a game can be viewed as a physical process. In the simplest case of a classical game between two players with two strategies each, both players can use a bit (a '0' or a '1') to convey their choice of strategy. A popular example of such a game is the prisoners' dilemma, where each of the convicts can either cooperate or defect: withholding knowledge or revealing that the other committed the crime. In the quantum version of the game, the bit is replaced by the qubit, which is a quantum superposition of two or more base states. In the case of a two-strategy game this can be physically implemented by the use of an entity like the electron, which has a superposed spin state, with the base states being +1/2 (plus half) and −1/2 (minus half). Each of the spin states can be used to represent each of the two strategies available to the players. When a measurement is made on the electron, it collapses to one of the base states, thus conveying the strategy used by the player. Entangled initial states The set of qubits which are initially provided to each of the players (to be used to convey their choice of strategy) may be entangled. For instance, an entangled pair of qubits implies that an operation
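The sketch below illustrates the superposition idea with a coin-flip (penny) game along the lines Meyer discussed: one player is restricted to the classical moves (leave the coin or flip it), while the other may apply an arbitrary unitary such as the Hadamard gate. The state vectors and the choice of |0> = heads are assumptions made for the illustration.

    import numpy as np

    # Basis: |0> = heads, |1> = tails (an assumed encoding).
    I = np.eye(2)                                                # classical "do nothing"
    X = np.array([[0, 1], [1, 0]], dtype=float)                  # classical "flip the coin"
    H = np.array([[1, 1], [1, -1]], dtype=float) / np.sqrt(2)    # quantum move (Hadamard)

    start = np.array([1.0, 0.0])                                 # coin starts as heads

    for classical_move in (I, X):                                # classical player picks either move
        final = H @ classical_move @ (H @ start)                 # quantum, then classical, then quantum move
        prob_heads = abs(final[0]) ** 2
        print(f"probability of heads: {prob_heads:.2f}")
    # Both cases print 1.00: the quantum player wins regardless of the classical choice,
    # because the classical moves leave the equal superposition unchanged.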
https://en.wikipedia.org/wiki/Ideally%20hard%20superconductor
An ideally hard superconductor is a type II superconductor material with an infinite pinning force. In an external magnetic field it behaves like an ideal diamagnet if the field is switched on while the material is already in the superconducting state, the so-called "zero field cooled" (ZFC) regime. In the field-cooled (FC) regime, the ideally hard superconductor perfectly screens changes in the magnetic field rather than the magnetic field itself. Its magnetization behavior can be described by Bean's critical state model. The ideally hard superconductor is a good approximation for the melt-textured high-temperature superconductors (HTSC) used in large-scale HTSC applications such as flywheels, HTSC bearings, HTSC motors, etc. See also Frozen mirror image method Bean's critical state model
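As an illustration of the critical-state behaviour mentioned above, the sketch below computes the flux-density profile inside a superconducting slab according to Bean's model, in which the field decays linearly from the surface with a slope set by the critical current density. The particular numbers (applied field, Jc, depth range) are arbitrary illustrative values, not measured data.

    import numpy as np

    MU_0 = 4e-7 * np.pi          # vacuum permeability (H/m)

    def bean_profile(B_applied, j_c, depth):
        """Flux density B(x) inside a slab surface region in Bean's critical state model."""
        return np.maximum(B_applied - MU_0 * j_c * depth, 0.0)

    x = np.linspace(0.0, 2e-3, 5)                              # depth from the surface, up to 2 mm
    profile = bean_profile(B_applied=0.5, j_c=1e8, depth=x)    # 0.5 T applied, Jc = 1e8 A/m^2 (assumed)
    for xi, Bi in zip(x, profile):
        print(f"x = {xi*1e3:4.1f} mm   B = {Bi:.3f} T")
    # The field falls linearly and vanishes at the penetration depth B_applied / (mu_0 * Jc).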
https://en.wikipedia.org/wiki/Electrical%20system%20of%20the%20International%20Space%20Station
The electrical system of the International Space Station is a critical resource for the International Space Station (ISS) because it allows the crew to live comfortably, to safely operate the station, and to perform scientific experiments. The ISS electrical system uses solar cells to directly convert sunlight to electricity. Large numbers of cells are assembled in arrays to produce high power levels. This method of harnessing solar power is called photovoltaics. The process of collecting sunlight, converting it to electricity, and managing and distributing this electricity builds up excess heat that can damage spacecraft equipment. This heat must be eliminated for reliable operation of the space station in orbit. The ISS power system uses radiators to dissipate the heat away from the spacecraft. The radiators are shaded from sunlight and aligned toward the cold void of deep space. Solar array wing Each ISS solar array wing (often abbreviated "SAW") consists of two retractable "blankets" of solar cells with a mast between them. Each wing is the largest ever deployed in space, weighing over 2,400 pounds and using nearly 33,000 solar cells, each measuring 8 cm square, with 4,100 diodes. When fully extended, each is in length and wide. Each SAW is capable of generating nearly 31 kilowatts (kW) of direct current power. When retracted, each wing folds into a solar array blanket box just high and in length. Altogether, the eight solar array wings can generate about 240 kilowatts in direct sunlight, or about 84 to 120 kilowatts average power (cycling between sunlight and shade). The solar arrays normally track the Sun, with the "alpha gimbal" used as the primary rotation to follow the Sun as the space station moves around the Earth, and the "beta gimbal" used to adjust for the angle of the space station's orbit to the ecliptic. Several different tracking modes are used in operations, ranging from full Sun-tracking, to the drag-reduction mode (night glider and S
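The quoted figures can be related with a little arithmetic: eight wings at roughly 31 kW each give about 248 kW, matching the "about 240 kilowatts" peak, and the 84 to 120 kW orbit-average corresponds to roughly a third to a half of that peak. The short sketch below just reproduces that calculation; the closing comment about the causes of the gap is an interpretation, not a figure from the text.

    # Relating the quoted peak and average power figures (numbers taken from the text).
    WINGS = 8
    PEAK_PER_WING_KW = 31                      # ~31 kW per solar array wing
    peak_total = WINGS * PEAK_PER_WING_KW      # ~248 kW, consistent with "about 240 kilowatts"

    for average_kw in (84, 120):               # quoted orbit-average range
        ratio = average_kw / peak_total
        print(f"average {average_kw} kW is {ratio:.0%} of the ~{peak_total} kW peak")
    # The ratio sits well below the sunlit fraction of each orbit, reflecting eclipse periods
    # plus battery charging and pointing losses (an interpretation, not stated in the text).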
https://en.wikipedia.org/wiki/Theorem%20of%20the%20cube
In mathematics, the theorem of the cube is a condition for a line bundle over a product of three complete varieties to be trivial. It was a principle discovered, in the context of linear equivalence, by the Italian school of algebraic geometry. The final version of the theorem of the cube was first published by , who credited it to André Weil. A discussion of the history has been given by . A treatment by means of sheaf cohomology, and description in terms of the Picard functor, was given by . Statement The theorem states that for any complete varieties U, V and W over an algebraically closed field, and given points u, v and w on them, any invertible sheaf L which has a trivial restriction to each of U × V × {w}, U × {v} × W, and {u} × V × W, is itself trivial. (Mumford p. 55; the result there is slightly stronger, in that one of the varieties need not be complete and can be replaced by a connected scheme.) Special cases On a ringed space X, an invertible sheaf L is trivial if isomorphic to OX, as an OX-module. If the base X is a complex manifold, then an invertible sheaf is (the sheaf of sections of) a holomorphic line bundle, and trivial means holomorphically equivalent to a trivial bundle, not just topologically equivalent. Restatement using biextensions Weil's result has been restated in terms of biextensions, a concept now generally used in the duality theory of abelian varieties. Theorem of the square The theorem of the square is a corollary (also due to Weil) applying to an abelian variety A. One version of it states that the function φL taking x ∈ A to Tx*L ⊗ L⁻¹ is a group homomorphism from A to Pic(A) (where Tx is translation by x on line bundles).
https://en.wikipedia.org/wiki/Absolute%20irreducibility
In mathematics, a multivariate polynomial defined over the rational numbers is absolutely irreducible if it is irreducible over the complex field. For example, x² + y² − 1 is absolutely irreducible, but while x² + y² is irreducible over the integers and the reals, it is reducible over the complex numbers as x² + y² = (x + iy)(x − iy) and thus not absolutely irreducible. More generally, a polynomial defined over a field K is absolutely irreducible if it is irreducible over every algebraic extension of K, and an affine algebraic set defined by equations with coefficients in a field K is absolutely irreducible if it is not the union of two algebraic sets defined by equations in an algebraically closed extension of K. In other words, an absolutely irreducible algebraic set is a synonym of an algebraic variety, which emphasizes that the coefficients of the defining equations may not belong to an algebraically closed field. Absolutely irreducible is also applied, with the same meaning, to linear representations of algebraic groups. In all cases, being absolutely irreducible is the same as being irreducible over the algebraic closure of the ground field. Examples A univariate polynomial of degree greater than or equal to 2 is never absolutely irreducible, due to the fundamental theorem of algebra. The irreducible two-dimensional representation of the symmetric group S3 of order 6, originally defined over the field of rational numbers, is absolutely irreducible. The representation of the circle group by rotations in the plane is irreducible (over the field of real numbers), but is not absolutely irreducible. After extending the field to complex numbers, it splits into two irreducible components. This is to be expected, since the circle group is commutative and it is known that all irreducible representations of commutative groups over an algebraically closed field are one-dimensional. The real algebraic variety defined by the equation x² + y² = 1 is absolutely irreducible. It is the ordinary circle over the reals a
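A computer algebra system makes the distinction easy to check. The SymPy sketch below factors the two example polynomials over the rationals and then over the Gaussian rationals Q(i); it is only a verification aid for the statements above, and the exact ordering of the printed factors may differ between SymPy versions.

    from sympy import symbols, factor

    x, y = symbols("x y")

    print(factor(x**2 + y**2))                     # stays x**2 + y**2: irreducible over the rationals
    print(factor(x**2 + y**2, gaussian=True))      # splits as (x - I*y)*(x + I*y) once i is allowed
    print(factor(x**2 + y**2 - 1, gaussian=True))  # stays irreducible: absolutely irreducible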
https://en.wikipedia.org/wiki/Station%20biologique%20de%20Roscoff
The Station biologique de Roscoff (SBR) is a French marine biology and oceanography research and teaching center. Founded by Henri de Lacaze-Duthiers (1821–1901) in 1872, it is currently affiliated with Sorbonne University (SU) and the Centre National de la Recherche Scientifique (CNRS). Overview The Station biologique is situated in Roscoff on the northern coast of Brittany (France), about 60 km east of Brest. Its location offers access to an exceptional variety of biotopes, most of which are accessible at low tide. These biotopes support a large variety of both plant (700) and animal (3000) marine species. Founded in 1872 by Professor Henri de Lacaze-Duthiers (then Zoology Chair at the Sorbonne), the SBR became, in March 1985, Internal School 937 of the Pierre and Marie Curie University (UPMC). In November 1985, the SBR was given the status of Oceanographic Observatory by the Institut National des Sciences de l'Univers et de l'Environnement (National Institute for the Cosmological and Environmental Sciences; INSU). The SBR has also been, since January 2001, a Research Federation within the Life Sciences Department of the CNRS. The personnel of the SBR, which includes about 200 permanent staff, consists of scientists, teaching scientists, technicians, postdoctoral fellows, PhD students and administrative staff. These personnel are organized into various research groups within research units that are recognised by the Life Sciences Department of the CNRS (the current research units have the following codes: FR 2424, UMR 8227, UMR 7144, UMI 3614 and USR 3151). The various research groups work on a wide range of topics, ranging from investigation of the fine structure and function of biological macromolecules to global oceanic studies. Genomic approaches constitute an important part of many of the research programmes, notably via the European Network of Excellence "Marine Genomics", which is coordinated by the SBR. With the accommodation fac
https://en.wikipedia.org/wiki/Dual%20abelian%20variety
In mathematics, a dual abelian variety can be defined from an abelian variety A, defined over a field K. Definition To an abelian variety A over a field k, one associates a dual abelian variety Av (over the same field), which is the solution to the following moduli problem. A family of degree 0 line bundles parametrized by a k-variety T is defined to be a line bundle L on A×T such that for all t in T, the restriction of L to A×{t} is a degree 0 line bundle, and the restriction of L to {0}×T is a trivial line bundle (here 0 is the identity of A). Then there is a variety Av and a line bundle P on A×Av, called the Poincaré bundle, which is a family of degree 0 line bundles parametrized by Av in the sense of the above definition. Moreover, this family is universal, that is, to any family L parametrized by T is associated a unique morphism f: T → Av so that L is isomorphic to the pullback of P along the morphism 1A×f: A×T → A×Av. Applying this to the case when T is a point, we see that the points of Av correspond to line bundles of degree 0 on A, so there is a natural group operation on Av given by tensor product of line bundles, which makes it into an abelian variety. In the language of representable functors one can state the above result as follows. The contravariant functor, which associates to each k-variety T the set of families of degree 0 line bundles parametrised by T and to each k-morphism f: T′ → T the mapping induced by the pullback with f, is representable. The universal element representing this functor is the pair (Av, P). This association is a duality in the sense that there is a natural isomorphism between the double dual Avv and A (defined via the Poincaré bundle) and that it is contravariant functorial, i.e. it associates to all morphisms f: A → B dual morphisms fv: Bv → Av in a compatible way. The n-torsion of an abelian variety and the n-torsion of its dual are dual to each other when n is coprime to the characteristic of the base. In general, for all n, the
https://en.wikipedia.org/wiki/E-professional
E-professional or "eprofessional" or even "eProfessional" is a term used in Europe to describe a professional whose work relies on concepts of remote work: working at a distance using information technology and communications technology, as well as online collaboration (i.e. virtual teams, mass collaboration, massively distributed collaboration, online communities of practice such as the open source community, and open innovation principles). The concept of the e-professional, strongly related to the concept of remote work, extends the traditional concept of the professional to include any type of expert or knowledge worker making intensive use of ICT (Information and Communications Technology) environments and tools in their working practices. An e-professional is a member of at least one community of practice which confers the title of professional, and can be either a freelancer or an employed worker. An e-professional does not work in isolation but actively collaborates with other e-professionals within virtual workspaces called collaborative working environments (CWE).
https://en.wikipedia.org/wiki/Lycoperdon%20umbrinum
Lycoperdon umbrinum, commonly known as the umber-brown puffball, is a type of puffball mushroom in the genus Lycoperdon. It is found in China, Europe, Africa, and North America. Description This species has a fruit body that is shaped like a top, with a short, partly buried stipe. It is tall and broad. It is approximately the size of a golf ball but may grow to be as big as a tennis ball. Lycoperdon umbrinum is very similar to Lycoperdon molle; they are so similar that scientists used to refer to them by the same name. On closer inspection, the spines on L. umbrinum are sparser, and the spores of L. molle are much larger with a greater density of spines. The spores of L. umbrinum are spherical, either smooth or ornamented with spines, and appear olive yellow in KOH. The fruit body is initially pale brown, then reddish to blackish brown, and the outer wall has slender, persistent spines up to 1 mm long. Spores are roughly spherical, 3.5–5.5 µm in diameter, with fine warts and a pedicel that is 0.5–15 µm long. It is uncommon and found mostly in coniferous woods on sandy soils. The species is considered edible, but careful identification is needed before eating because it can be confused with toxic earthballs or deadly Amanita species. Ecology and habitat This fungus is saprophytic, commonly growing in forests and under conifers. It has also been seen growing in poor-quality soil in hardwood and conifer areas. It has been observed growing by itself, dispersed, or many together. The fruiting period is from June through September. Unlike agarics, which have gills that hold spores, these puffballs become dry when conditions are right and burst to release their spores. Upon rupturing, they can release trillions of spores. Lycoperdon umbrinum also likely has a mycorrhizal relationship with Pinus patula: one study investigated this relationship and found these species were often growing near each other. Additionally, the
https://en.wikipedia.org/wiki/Pinning%20force
Pinning force is a force acting on a pinned object from a pinning center. In solid state physics, this most often refers to vortex pinning, the pinning of magnetic vortices (magnetic flux quanta, Abrikosov vortices) by different kinds of defects in a type II superconductor. Important quantities are the individual maximal pinning force, which defines the depinning of a single vortex, and an average pinning force, which defines the depinning of correlated vortex structures and can be associated with the critical current density (the maximal density of non-dissipative current). The interaction of the correlated vortex lattice with the system of pinning centers forms the magnetic phase diagram of the vortex matter in superconductors. This phase diagram is especially rich for high temperature superconductors (HTSC), where thermo-activation processes are essential. In the context of grain growth, the pinning mechanism is based on the fact that the amount of grain boundary area is reduced when a particle is located on a grain boundary. It is also assumed that particles are spherical and the particle–matrix interface is incoherent. When a moving grain boundary meets a particle at an angle β, the particle exerts a pinning force on the grain boundary equal to F = πrγ sin(2β), with r the particle radius and γ the energy per unit of grain boundary area.
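A short sketch of the particle-on-boundary expression above: it evaluates F = πrγ·sin(2β) over a range of contact angles and confirms that the maximum restraining force, πrγ, occurs at β = 45°. The particle radius and boundary energy used are arbitrary illustrative values, and the formula is the classical Zener-type expression implied by the text rather than a quote from it.

    import numpy as np

    r = 50e-9        # particle radius: 50 nm (assumed)
    gamma = 0.5      # grain boundary energy: 0.5 J/m^2 (assumed)

    beta = np.radians(np.arange(0, 91, 15))          # contact angle, 0..90 degrees
    force = np.pi * r * gamma * np.sin(2 * beta)     # pinning force per particle

    for angle, f in zip(np.degrees(beta), force):
        print(f"beta = {angle:4.0f} deg   F = {f:.2e} N")

    print(f"maximum pinning force pi*r*gamma = {np.pi * r * gamma:.2e} N (at beta = 45 deg)")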
https://en.wikipedia.org/wiki/Negative-bias%20temperature%20instability
Negative-bias temperature instability (NBTI) is a key reliability issue in MOSFETs, a type of transistor aging. NBTI manifests as an increase in the threshold voltage and consequent decrease in drain current and transconductance of a MOSFET. The degradation is often approximated by a power-law dependence on time. It is of immediate concern in p-channel MOS devices (pMOS), since they almost always operate with negative gate-to-source voltage; however, the very same mechanism also affects nMOS transistors when biased in the accumulation regime, i.e. with a negative bias applied to the gate. More specifically, over time positive charges become trapped at the oxide-semiconductor boundary underneath the gate of a MOSFET. These positive charges partially cancel the negative gate voltage without contributing to conduction through the channel as electron holes in the semiconductor are supposed to. When the gate voltage is removed, the trapped charges dissipate over a time scale of milliseconds to hours. The problem has become more acute as transistors have shrunk, as there is less averaging of the effect over a large gate area. Thus, different transistors experience different amounts of NBTI, defeating standard circuit design techniques for tolerating manufacturing variability which depend on the close matching of adjacent transistors. NBTI has become significant for portable electronics because it interacts badly with two common power-saving techniques: reduced operating voltages and clock gating. With lower operating voltages, the NBTI-induced threshold voltage change is a larger fraction of the logic voltage and disrupts operations. When a clock is gated off, transistors stop switching and NBTI effects accumulate much more rapidly. When the clock is re-enabled, the transistor thresholds have changed and the circuit may not operate. Some low-power designs switch to a low-frequency clock rather than stopping completely in order to mitigate NBTI effects. Physics
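The power-law time dependence mentioned above is often written as a threshold-voltage shift of the form delta_Vth ≈ A·t^n. The sketch below evaluates such a model over several stress times; the prefactor and exponent are illustrative assumptions, not measured values for any particular process.

    # Illustrative NBTI degradation model: delta_Vth = A * t**n (power law in stress time).
    A = 2.0e-3     # assumed prefactor, volts at t = 1 s
    n = 0.2        # assumed power-law exponent (reported values are typically of this order)

    for t_seconds in (1, 3600, 86400, 3.15e7):        # 1 s, 1 hour, 1 day, ~1 year
        delta_vth = A * t_seconds ** n
        print(f"t = {t_seconds:>10.0f} s   delta_Vth ~ {delta_vth*1000:.1f} mV")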
https://en.wikipedia.org/wiki/Leabra
Leabra stands for local, error-driven and associative, biologically realistic algorithm. It is a model of learning which is a balance between Hebbian and error-driven learning with other network-derived characteristics. This model is used to mathematically predict outcomes based on inputs and previous learning influences. This model is heavily influenced by and contributes to neural network designs and models. This algorithm is the default algorithm in emergent (the successor of PDP++) when making a new project, and is extensively used in various simulations. Hebbian learning is performed using the conditional principal components analysis (CPCA) algorithm with a correction factor for sparse expected activity levels. Error-driven learning is performed using GeneRec, which is a generalization of the recirculation algorithm, and approximates Almeida–Pineda recurrent backpropagation. The symmetric, midpoint version of GeneRec is used, which is equivalent to the contrastive Hebbian learning algorithm (CHL). See O'Reilly (1996; Neural Computation) for more details. The activation function is a point-neuron approximation with both discrete spiking and continuous rate-code output. Layer or unit-group level inhibition can be computed directly using a k-winners-take-all (KWTA) function, producing sparse distributed representations. The net input is computed as an average, not a sum, over connections, based on normalized, sigmoidally transformed weight values, which are subject to scaling on a connection-group level to alter relative contributions. Automatic scaling is performed to compensate for differences in expected activity level in the different projections. Documentation about this algorithm can be found in the book "Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain", published by MIT Press, and in the Emergent documentation. Overview of the Leabra algorithm The pseudocode for Leabra is given here, showing exactly how th
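As a rough illustration of two of the ingredients named above, the sketch below implements a simple k-winners-take-all activation step and a CPCA-style Hebbian weight update (delta_w = lrate·y·(x − w)). It is a schematic reading of the description in this excerpt, not the actual emergent/Leabra implementation, and it omits the point-neuron activation function, GeneRec and the correction factors.

    import numpy as np

    def kwta(net_input, k):
        """Crude k-winners-take-all: keep the k most active units, silence the rest."""
        activity = np.zeros_like(net_input)
        winners = np.argsort(net_input)[-k:]
        activity[winners] = net_input[winners]      # real KWTA sets a shared inhibition level instead
        return activity

    def cpca_update(weights, x, y, lrate=0.01):
        """CPCA-style Hebbian update: dw_ij = lrate * y_i * (x_j - w_ij)."""
        return weights + lrate * y[:, None] * (x[None, :] - weights)

    rng = np.random.default_rng(0)
    x = rng.random(10)                      # input activations
    W = rng.random((5, 10)) * 0.1           # weights from 10 inputs to 5 hidden units

    net = W @ x                             # net input (a sum here; Leabra uses a normalized average)
    y = kwta(net, k=2)                      # sparse hidden activity
    W = cpca_update(W, x, y)                # one Hebbian learning step
    print(np.round(y, 3))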
https://en.wikipedia.org/wiki/Dose%20fractionation
Dose fractionation effects are utilised in the treatment of cancer with radiation therapy. When the total dose of radiation is divided into several smaller doses over a period of several days, there are fewer toxic effects on healthy cells. This maximizes the effect of radiation on the cancer and minimizes the negative side effects. A typical fractionation scheme divides the dose into 30 units delivered every weekday over six weeks. Background Experiments in radiation biology have found that as the absorbed dose of radiation increases, the number of cells which survive decreases. They have also found that if the radiation is fractionated into smaller doses, with one or more rest periods in between, fewer cells die. This is because of self-repair mechanisms which repair the damage to DNA and other biomolecules such as proteins. These mechanisms can be overexpressed in cancer cells, so caution should be used in extrapolating results from a cancer cell line to healthy cells if the cancer cell line is known to be resistant to cytotoxic drugs such as cisplatin. The DNA self-repair processes in some organisms are exceptionally good; for instance, the bacterium Deinococcus radiodurans can tolerate a 15,000 Gy (1.5 MRad) dose. In a cell survival curve, dose is plotted against surviving fraction; such curves can be drawn for a hypothetical group of cells with and without a rest time for the cells to recover, the cells otherwise being treated identically apart from the recovery period partway through the irradiation. The human body contains many types of cells, and a human can be killed by the loss of a single type of cell in a vital organ. For many short-term radiation deaths due to what is commonly known as radiation sickness (3 to 30 days after exposure), it is the loss of bone marrow cells (which produce blood cells), and the loss of other cells in the wall of the intestines, that is fatal. Radiation fractionation as cancer treatment Fractionatio
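One common way to quantify the sparing effect of fractionation, not named in this excerpt and therefore used here as an assumed model, is the linear-quadratic cell survival curve: S = exp(−(αD + βD²)) for a single dose D, and S = exp(−n(αd + βd²)) for n well-separated fractions of size d. The sketch below compares a single large dose with the same total dose split into fractions; the α and β values are illustrative.

    import math

    alpha, beta = 0.3, 0.03     # illustrative tissue parameters, Gy^-1 and Gy^-2 (alpha/beta = 10 Gy)

    def surviving_fraction(total_dose, fractions):
        """Linear-quadratic model with full repair assumed between fractions."""
        d = total_dose / fractions
        return math.exp(-fractions * (alpha * d + beta * d * d))

    total = 60.0                 # total dose in Gy, e.g. the "30 units over six weeks" scheme
    for n in (1, 5, 30):
        print(f"{n:2d} fraction(s): surviving fraction = {surviving_fraction(total, n):.2e}")
    # Splitting the same total dose into more fractions leaves more cells alive,
    # which is the sparing effect that fractionation exploits for healthy tissue.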