https://en.wikipedia.org/wiki/Ethyl%20methylphenylglycidate
Ethyl methylphenylglycidate, commonly known as strawberry aldehyde, is an organic compound used in the flavor industry in artificial fruit flavors, in particular strawberry. Uses Because of its pleasant taste and aroma, ethyl methylphenylglycidate finds use in the fragrance industry, in artificial flavors, and in cosmetics. Its end applications include perfumes, soaps, beauty care products, detergents, pharmaceuticals, baked goods, candies, ice cream, and others. Chemistry Despite its common name, ethyl methylphenylglycidate contains ester and epoxide functional groups but no aldehyde. It is a colourless liquid that is insoluble in water. Ethyl methylphenylglycidate is usually prepared by the condensation of acetophenone and the ethyl ester of monochloroacetic acid in the presence of a base, in a reaction known as the Darzens condensation. Safety Long-term, high-dose studies in rats have demonstrated that ethyl methylphenylglycidate has no significant adverse health effects and is not carcinogenic. The US Food and Drug Administration has classified ethyl methylphenylglycidate as generally recognized as safe (GRAS). See also List of strawberry topics
https://en.wikipedia.org/wiki/Mark%20Kryder
Mark Howard Kryder (born October 7, 1943 in Portland, Oregon) was Seagate Corp.'s senior vice president of research and chief technology officer. Kryder holds a Bachelor of Science degree in electrical engineering from Stanford University and a Ph.D. in electrical engineering and physics from the California Institute of Technology. Kryder was elected a member of the National Academy of Engineering in 1994 for contributions to the understanding of magnetic domain behavior and for leadership in information storage research. He is known for "Kryder's law", an observation from the mid-2000s about the increasing capacity of magnetic hard drives. Kryder's law projection A 2005 Scientific American article, titled "Kryder's Law", described Kryder's observation that magnetic disk areal storage density was then increasing at a rate exceeding Moore's Law. The pace was then much faster than the two-year doubling time of semiconductor chip density posited by Moore's law. In 2005, commodity drive density of 110 Gbit/in2 (170 Mbit/mm2) had been reached, up from 100 Mbit/in2 (155 Kbit/mm2) circa 1990. This does not extrapolate back to the initial 2 kilobit/in2 (3.1 bit/mm2) drives introduced in 1956, as growth rates surged during the latter 15-year period. In 2009, Kryder projected that if hard drives were to continue to progress at their then-current pace of about 40% per year, then in 2020 a two-platter, 2.5-inch disk drive would store approximately 40 terabytes (TB) and cost about $40. The validity of the Kryder's law projection of 2009 was questioned halfway into the forecast period, and some called the actual rate of areal density progress the "Kryder rate". As of 2014, the observed Kryder rate had fallen well short of the 2009 forecast of 40% per year. A single 2.5-inch platter stored around 0.3 terabytes in 2009 and this reached 0.6 terabytes in 2014. The Kryder rate over the five years ending in 2014 was around 15% per year. To reach 20 terabytes by 2020, starting in
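The growth figures quoted above are compound annual rates; a minimal Python sketch (capacities and years taken from the article, function names my own) makes the arithmetic explicit:

import math

def annual_growth_rate(start_capacity, end_capacity, years):
    # Compound annual growth rate implied by a capacity change over `years` years.
    return (end_capacity / start_capacity) ** (1.0 / years) - 1.0

def doubling_time(rate):
    # Years needed to double capacity at a given compound annual rate.
    return math.log(2) / math.log(1 + rate)

# Observed: a single 2.5-inch platter went from about 0.3 TB (2009) to about 0.6 TB (2014).
observed = annual_growth_rate(0.3, 0.6, 5)
print(f"Observed Kryder rate 2009-2014: {observed:.1%} per year")          # ~14.9%, i.e. about 15%
print(f"Doubling time at 40% per year: {doubling_time(0.40):.1f} years")   # ~2.1 years
print(f"Doubling time at the observed rate: {doubling_time(observed):.1f} years")  # ~5.0 years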
https://en.wikipedia.org/wiki/Modified%20Chee%27s%20medium
In cellular biology and microbiology, modified Chee's medium (otherwise known as "MCM") is used to cultivate cells and bacteria. It uses various additives (fatty acids, albumins and selenium) to facilitate cellular and bacterial growth.
https://en.wikipedia.org/wiki/Dental%20arch
The dental arches are the two arches (crescent arrangements) of teeth, one on each jaw, that together constitute the dentition. In humans and many other species, the superior (maxillary or upper) dental arch is a little larger than the inferior (mandibular or lower) arch, so that in the normal condition the teeth in the maxilla (upper jaw) slightly overlap those of the mandible (lower jaw) both in front and at the sides. The way that the jaws, and thus the dental arches, approach each other when the mouth closes, which is called the occlusion, determines the occlusal relationship of opposing teeth, and it is subject to malocclusion (such as crossbite) if facial or dental development was imperfect. Because the upper central incisors are wider than the lower ones, the other teeth in the upper arch are arrayed somewhat distally, and the two sets do not quite correspond to each other when the mouth is closed: thus the upper canine tooth rests partly on the lower canine and partly on the lower first premolar, and the cusps of the upper molar teeth lie behind the corresponding cusps of the lower molar teeth. The two series, however, end at nearly the same point behind; this is mainly because the molars in the upper arch are the smaller. Since there are a standard number of teeth in humans, the size of the dental arch is of vital importance in determining how the teeth are positioned when they appear. While the arch can expand as a child grows, a small arch will force the teeth to grow close together. This can result in overlapping and improperly positioned teeth. Teeth may tilt at an awkward angle, putting pressure on gums when food is being chewed. This can ultimately lead to compromised gums or infections. Dentists replace missing, damaged, and severely decayed teeth by fixed or removable prostheses to restore or improve mastication function. There is a fundamental question in any treatment plan, namely, the desirable/mandatory length of an occlusal table. There h
https://en.wikipedia.org/wiki/Vizing%27s%20theorem
In graph theory, Vizing's theorem states that every simple undirected graph may be edge colored using a number of colors that is at most one larger than the maximum degree Δ of the graph. At least Δ colors are always necessary, so the undirected graphs may be partitioned into two classes: "class one" graphs for which Δ colors suffice, and "class two" graphs for which Δ + 1 colors are necessary. A more general version of Vizing's theorem states that every undirected multigraph without loops can be colored with at most Δ + μ colors, where μ is the multiplicity of the multigraph. The theorem is named for Vadim G. Vizing who published it in 1964. Discovery The theorem discovered by Russian mathematician Vadim G. Vizing was published in 1964 when Vizing was working in Novosibirsk and became known as Vizing's theorem. Indian mathematician R. P. Gupta independently discovered the theorem, while undertaking his doctorate (1965–1967). Examples When Δ = 1, the graph must itself be a matching, with no two edges adjacent, and its edge chromatic number is one. That is, all graphs with Δ = 1 are of class one. When Δ = 2, the graph must be a disjoint union of paths and cycles. If all cycles are even, they can be 2-edge-colored by alternating the two colors around each cycle. However, if there exists at least one odd cycle, then no 2-edge-coloring is possible. That is, a graph with Δ = 2 is of class one if and only if it is bipartite. Proof This proof is inspired by . Let G = (V, E) be a simple undirected graph. We proceed by induction on m, the number of edges. If the graph is empty, the theorem trivially holds. Let m > 0 and suppose a proper (Δ + 1)-edge-coloring exists for all G − xy where xy ∈ E. We say that color α ∈ {1, ..., Δ + 1} is missing in x ∈ V with respect to a proper (Δ + 1)-edge-coloring c if c(xy) ≠ α for all y ∈ N(x). Also, let the α/β-path from x denote the unique maximal path starting in x with an α-colored edge and alternating the colors of edges (the second edge has color β, the third edge has color α and so on); its length can be 0. Note that if c is a proper (Δ + 1)-edge-coloring of G then e
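The Δ = 2 example above can be checked directly by brute force on small cycles. The following Python sketch is illustrative only and practical just for tiny graphs; the helper names are my own:

from itertools import product

def is_proper_edge_coloring(edges, coloring):
    # Proper means no two edges sharing a vertex receive the same color.
    for (e1, c1) in zip(edges, coloring):
        for (e2, c2) in zip(edges, coloring):
            if e1 != e2 and c1 == c2 and set(e1) & set(e2):
                return False
    return True

def edge_chromatic_number(edges):
    # Smallest k admitting a proper k-edge-coloring, found exhaustively.
    k = 1
    while True:
        if any(is_proper_edge_coloring(edges, col) for col in product(range(k), repeat=len(edges))):
            return k
        k += 1

def cycle_edges(n):
    return [(i, (i + 1) % n) for i in range(n)]

print(edge_chromatic_number(cycle_edges(6)))  # 2 -> even cycle, class one
print(edge_chromatic_number(cycle_edges(5)))  # 3 -> odd cycle, class two (here Delta = 2, so Delta + 1 = 3)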
https://en.wikipedia.org/wiki/Tom%20Maibaum
Thomas Stephen Edward Maibaum, Fellow of the Royal Society of Arts (FRSA), is a computer scientist. Maibaum has a Bachelor of Science (B.Sc.) undergraduate degree in pure mathematics from the University of Toronto, Canada (1970), and a Doctor of Philosophy (Ph.D.) in computer science from Queen Mary and Royal Holloway Colleges, University of London, England (1974). Maibaum has held academic posts at Imperial College, London, King's College London (UK) and McMaster University (Canada). His research interests have concentrated on the theory of specification, together with its application in different contexts, in the general area of software engineering. From 1996 to 2005, he was involved with developing international standards in programming and informatics, as a member of the International Federation for Information Processing (IFIP) Working Group 2.1 on Algorithmic Languages and Calculi, which specified, maintains, and supports the programming languages ALGOL 60 and ALGOL 68. He is a Fellow of the Institution of Engineering and Technology and the Royal Society of Arts.
https://en.wikipedia.org/wiki/Digital%20credential
Digital credentials are the digital equivalent of paper-based credentials. Just as a paper-based credential could be a passport, a driver's license, a membership certificate or some kind of ticket to obtain some service, such as a cinema ticket or a public transport ticket, a digital credential is a proof of qualification, competence, or clearance that is attached to a person. Also, digital credentials prove something about their owner. Both types of credentials may contain personal information such as the person's name, birthplace, birthdate, and/or biometric information such as a picture or a finger print. Because of the still evolving, and sometimes conflicting, terminologies used in the fields of computer science, computer security, and cryptography, the term "digital credential" is used quite confusingly in these fields. Sometimes passwords or other means of authentication are referred to as credentials. In operating system design, credentials are the properties of a process (such as its effective UID) that is used for determining its access rights. On other occasions, certificates and associated key material such as those stored in PKCS#12 and PKCS#15 are referred to as credentials. Digital badges are a form of digital credential that indicate an accomplishment, skill, quality or interest. Digital badges can be earned in a variety of learning environments. Digital cash Money, in general, is not regarded as a form of qualification that is inherently linked to a specific individual, as the value of token money is perceived to reside independently. However, the emergence of digital assets, such as digital cash, has introduced a new set of challenges due to their susceptibility to replication. Consequently, digital cash protocols have been developed with additional measures to mitigate the issue of double spending, wherein a coin is used for multiple transactions. Credentials, on the other hand, serve as tangible evidence of an individual's qualifications or a
https://en.wikipedia.org/wiki/Front%20Line%20%28video%20game%29
Front Line is a military-themed run and gun video game released by Taito for arcades in November 1982. It was one of the first overhead run and gun games, a precursor to many similarly-themed games of the mid-to-late 1980s. Front Line is controlled with a joystick, a single button, and a rotary dial that can be pushed in like a button. The single button is used to throw grenades and to enter and exit tanks, while the rotary dial aims and fires the player's gun. The game was created by Tetsuya Sasaki. It was a commercial success in Japan, where it was the seventh highest-grossing arcade game of 1982. However, it received a mixed critical and commercial reception in Western markets, with praise for its originality but criticism for its difficulty. The game's overhead run and gun formula preceded Capcom's Commando (1985) by several years. The SNK shooters TNK III (1985) and Ikari Warriors (1986) follow conventions established by Front Line, including the vertically scrolling levels, entering/exiting tanks, and not dying when an occupied tank is destroyed. Gameplay Playing as a lone soldier, the player's ultimate objective is to lob a hand grenade into the enemy's fort, first by fighting off infantry units and then battling tanks before finally reaching the opponent's compound. The player begins with two weapons: a pistol and grenades, with no ammo limit. Once the player has advanced far enough into enemy territory, there is a "tank warfare" stage in which the player can hijack a tank to fight off other enemy tanks. There are two types of tanks available: a light tank armed with a machine gun and a heavy tank armed with a cannon. The light tank is more nimble, but can be easily destroyed by the enemy. The heavy tank is slower, but can sustain one hit from a light tank; a second hit from a light tank will destroy it. A single shot from a heavy tank will destroy either type of tank. If a partially damaged tank is evacuated, the player can jump back in and resume its normal oper
https://en.wikipedia.org/wiki/Dember%20effect
In physics, the Dember effect is when the electron current from a cathode subjected to both illumination and a simultaneous electron bombardment is greater than the sum of the photoelectric current and the secondary emission current. History Discovered by Harry Dember (1882–1943) in 1925, this effect is due to the sum of the excitations of an electron by two means: photonic illumination and electron bombardment (i.e. the sum of the two excitations extracts the electron). In Dember’s initial study, he referred only to metals; however, more complex materials have been analyzed since then. Photoelectric effect The photoelectric effect due to the illumination of the metallic surface extracts electrons (if the energy of the photon is greater than the work function) and excites the electrons which the photons don’t have the energy to extract. In a similar process, the electron bombardment of the metal both extracts and excites electrons inside the metal. If the illumination is held constant and the electron bombardment is increased, the supplementary current is observed to pass through a maximum of roughly 150 times the photoelectric current. On the other hand, if the electron bombardment is held constant and the intensity of the illumination is increased, the supplementary current tends to saturate. This is due to the usage in the photoelectric effect of all the electrons excited (sufficiently) by the primary electrons of the bombardment. See also Anomalous photovoltaic effect Photo-Dember
https://en.wikipedia.org/wiki/Microbial%20fuel%20cell
A microbial fuel cell (MFC) is a type of bioelectrochemical fuel cell system, also known as a micro fuel cell, that generates electric current by diverting electrons produced from the microbial oxidation of reduced compounds (also known as fuel or electron donor) on the anode to oxidized compounds such as oxygen (also known as oxidizing agent or electron acceptor) on the cathode through an external electrical circuit. MFCs produce electricity by using the electrons derived from biochemical reactions catalyzed by bacteria. MFCs can be grouped into two general categories: mediated and unmediated. The first MFCs, demonstrated in the early 20th century, used a mediator: a chemical that transfers electrons from the bacteria in the cell to the anode. Unmediated MFCs emerged in the 1970s; in this type of MFC the bacteria typically have electrochemically active redox proteins such as cytochromes on their outer membrane that can transfer electrons directly to the anode. In the 21st century MFCs have started to find commercial use in wastewater treatment. History The idea of using microbes to produce electricity was conceived in the early twentieth century. Michael Cressé Potter initiated the subject in 1911. Potter managed to generate electricity from Saccharomyces cerevisiae, but the work received little coverage. In 1931, Barnett Cohen created microbial half fuel cells that, when connected in series, were capable of producing over 35 volts with only a current of 2 milliamps. A study by DelDuca et al. used hydrogen produced by the fermentation of glucose by Clostridium butyricum as the reactant at the anode of a hydrogen and air fuel cell. Though the cell functioned, it was unreliable owing to the unstable nature of hydrogen production by the micro-organisms. This issue was resolved by Suzuki et al. in 1976, who produced a successful MFC design a year later. In the late 1970s, little was understood about how microbial fuel cells functioned. The concept was studied by Robin M
https://en.wikipedia.org/wiki/Law%20of%20attraction%20%28New%20Thought%29
The law of attraction is the New Thought spiritual belief that positive or negative thoughts bring positive or negative experiences into a person's life. The belief is based on the idea that people and their thoughts are made from "pure energy" and that like energy can attract like energy, thereby allowing people to improve their health, wealth, or personal relationships. There is no empirical scientific evidence supporting the law of attraction, and it is widely considered to be pseudoscience. Advocates generally combine cognitive reframing techniques with affirmations and creative visualization to replace limiting or self-destructive ("negative") thoughts with more empowered, adaptive ("positive") thoughts. A key component of the philosophy is the idea that in order to effectively change one's negative thinking patterns, one must also "feel" (through creative visualization) that the desired changes have already occurred. This combination of positive thought and positive emotion is believed to allow one to attract positive experiences and opportunities by achieving resonance with the proposed energetic law. While some supporters of the law of attraction refer to scientific theories and use them as arguments in favor of it, it has no demonstrable scientific basis. A number of researchers have criticized the misuse of scientific concepts by its proponents. History The New Thought movement grew out of the teachings of Phineas Quimby in the early 19th century. Early in his life, Quimby was diagnosed with tuberculosis. Early 19th century medicine had no reliable cure for tuberculosis. Quimby took to horse riding and noted that intense excitement temporarily relieved him from his affliction. This method for relieving his pain and seemingly subsequent recovery prompted Phineas to pursue a study of "Mind over Body". Although he never used the words "Law of Attraction", he explained this in a statement that captured the concept in the field of health: In 1855, the ter
https://en.wikipedia.org/wiki/Daylighting%20%28streams%29
Daylighting is the opening up and restoration of a previously buried watercourse, one which had at some point been diverted below ground. Typically, the rationale behind returning the riparian environment of a stream, wash, or river to a more natural above-ground state is to reduce runoff, create habitat for species in need of it, or improve an area's aesthetics. In the United Kingdom, the practice is also known as deculverting. In addition to its use in urban design and planning, the term also refers to the public process of advancing such projects. According to the Planning and Development Department of the City of Berkeley, "A general consensus has developed that protecting and restoring natural creeks' functions is achievable over time in an urban environment while recognizing the importance of property rights." Systems Natural drainage systems Natural drainage systems help manage stormwater by infiltrating and slowing the flow of stormwater, filtering and bioremediating pollutants by soils and plants, reducing impervious surfaces, using porous paving, increasing vegetation, and improving related pedestrian amenities. Natural features—open, vegetated swales, stormwater cascades, and small wetland ponds—mimic the functions of nature lost to urbanization. At the heart are plants, trees, and the deep, healthy soils that support them. All three combine to form a "living infrastructure" that, unlike pipes and vaults, increases in functional value over time. Some efforts to blend urban development with natural systems use innovative drainage design and landscaping instead of traditional curbs and gutters, pipes and vaults. One such demonstration project in the Pipers Creek watershed reduced imperviousness by more than 18 percent. The project built bioswales, landscape elements intended to remove silt and pollution from surface runoff water, and planted 100 evergreen trees and 1,100 shrubs. From 2001 to 2003, the project reduced the volume of stormwater leaving the st
https://en.wikipedia.org/wiki/Xybernaut
Xybernaut Corporation was a maker of wearable mobile computing hardware, software, and services. Its products included the Atigo tablet PC, Poma wearable computer, and the MA-V wearable computer. The company was headquartered in Fairfax, Virginia, until 2006, when it moved to Chantilly, Virginia. Although its first wearable computer, the Poma, created an initial stir when introduced in 2002, the slowness and disconcerting appearance of the product would land it on "worst tech fail" lists by the turn of the decade. Although surviving a bankruptcy, by 2017 Xybernaut had collapsed under financial scandal and regulatory and criminal strictures. Early history The company was founded in 1990 as Computer Products & Services Incorporated (CPSI) by Edward G. Newman. In 1994, Newman's brother, Steven A. Newman, became the president of the company. The company had its initial public offering in 1996 under the new name Xybernaut. It subsequently posted 33 consecutive quarterly losses, despite repeated promises by the Newmans that profitability was right around the corner. In mid-1998, former Virginia governor George Allen joined the company's board of directors. He remained on the board until December 2000, resigning after he was elected a U.S. Senator the month before. In 1998 and 1999, McGuire Woods LLP, the law firm of which Allen was a partner, billed $315,925 to Xybernaut for legal work. Allen was granted 110,000 options of company stock that, at their peak, were worth $1.5 million, but he never exercised those options, which expired 90 days after he left the board. In September 1999, the company's board dismissed its accounting firm, PricewaterhouseCoopers, which had issued a report with a "going concern" paragraph that questioned the company’s financial health. This was just one of many signs that the Newman brothers discouraged transparency in company accounting practices. Fraud charges and bankruptcy In
https://en.wikipedia.org/wiki/Fat%20substitute
A fat substitute is a food product with the same functions, stability, physical, and chemical characteristics as regular fat, with fewer calories per gram than fat. They are utilized in the production of low fat and low calorie foods. Background Fat is present in most foods. It provides a unique texture, flavor, and aroma to the food it is found in. While fat is essential to life, it can be detrimental to health when consumed in excess of physiological requirements. High fat diets increase risk of heart disease, weight gain, and some cancers. High blood cholesterol is more prevalent in those that consume diets high in saturated fats, and it increases risk for coronary heart disease in those individuals. The use of fat substitutes in food products allows for maintenance of the food’s original quality characteristics without the associated risks of fat consumption. In the absence of energy-dense fat molecules, products utilizing fat substitutes are generally lower in calories than their full-fat counterparts. Applications Fat substitutes can be divided into four categories based on the food component from which they are derived, as shown in Figure 1. Figure 1: Categories of fat substitutes based on composition. Like fat itself, such compounds have a variety of functions in food products. Table adapted from the American Dietetic Association’s 2005 report on fat replacers. Potential benefits Consumption of fat substitutes can assist in lowering total overall fat and calorie intake from foods. This has positive implications for those looking to reduce either one of these, especially when in a disease state associated with high fat diets. While fat substitution alone can reduce the percentage of kilocalories ingested from dietary fat, it may not reduce an individual’s total energy intake (in terms of kilocalories) unless the rest of the diet is of high quality and low energy density. Safety Few concerns have been raised about the safety of fat substitutes. Carragee
https://en.wikipedia.org/wiki/Problem%20of%20future%20contingents
Future contingent propositions (or simply, future contingents) are statements about states of affairs in the future that are contingent: neither necessarily true nor necessarily false. The problem of future contingents seems to have been first discussed by Aristotle in chapter 9 of his On Interpretation (De Interpretatione), using the famous sea-battle example. Roughly a generation later, Diodorus Cronus from the Megarian school of philosophy stated a version of the problem in his notorious master argument. The problem was later discussed by Leibniz. The problem can be expressed as follows. Suppose that a sea-battle will not be fought tomorrow. Then it was also true yesterday (and the week before, and last year) that it will not be fought, since any true statement about what will be the case in the future was also true in the past. But all past truths are now necessary truths; therefore it is now necessarily true in the past, prior and up to the original statement "A sea battle will not be fought tomorrow", that the battle will not be fought, and thus the statement that it will be fought is necessarily false. Therefore, it is not possible that the battle will be fought. In general, if something will not be the case, it is not possible for it to be the case. "For a man may predict an event ten thousand years beforehand, and another may predict the reverse; that which was truly predicted at the moment in the past will of necessity take place in the fullness of time" (De Int. 18b35). This conflicts with the idea of our own free choice: that we have the power to determine or control the course of events in the future, which seems impossible if what happens, or does not happen, is necessarily going to happen, or not happen. As Aristotle says, if so there would be no need "to deliberate or to take trouble, on the supposition that if we should adopt a certain course, a certain result would follow, while, if we did not, the result would not follow". Aristotle's solution
https://en.wikipedia.org/wiki/Axilrod%E2%80%93Teller%20potential
The Axilrod–Teller potential in molecular physics is a three-body potential that results from a third-order perturbation correction to the attractive London dispersion interactions (instantaneous induced dipole–induced dipole): E_{ijk} = E_{0}\,\frac{1 + 3\cos\gamma_{i}\cos\gamma_{j}\cos\gamma_{k}}{\left(r_{ij}\, r_{jk}\, r_{ik}\right)^{3}}, where rij is the distance between atoms i and j, and γi is the angle between the vectors rij and rik. The coefficient E0 is positive and of the order Vα³, where V is the ionization energy and α is the mean atomic polarizability; the exact value of E0 depends on the magnitudes of the dipole matrix elements and on the energies of the orbitals.
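As a concrete illustration of the expression above, here is a small Python sketch that evaluates the triple-dipole term for three atomic positions. The function name and the placeholder value of E0 are assumptions for illustration; in practice E0 would be estimated from the polarizability and ionization energy as described.

import numpy as np

def axilrod_teller_energy(r_i, r_j, r_k, E0=1.0):
    # E0 is a placeholder coefficient here, not a fitted physical value.
    r_i, r_j, r_k = map(np.asarray, (r_i, r_j, r_k))
    rij = np.linalg.norm(r_i - r_j)
    rjk = np.linalg.norm(r_j - r_k)
    rik = np.linalg.norm(r_i - r_k)
    # Interior angles of the triangle formed by the three atoms, via the law of cosines.
    cos_i = (rij**2 + rik**2 - rjk**2) / (2 * rij * rik)   # angle at atom i
    cos_j = (rij**2 + rjk**2 - rik**2) / (2 * rij * rjk)   # angle at atom j
    cos_k = (rik**2 + rjk**2 - rij**2) / (2 * rik * rjk)   # angle at atom k
    return E0 * (1 + 3 * cos_i * cos_j * cos_k) / (rij * rjk * rik)**3

# Equilateral triangle with unit sides: all angles are 60 degrees, so the value is 1.375 * E0.
print(axilrod_teller_energy([0, 0, 0], [1, 0, 0], [0.5, 3**0.5 / 2, 0]))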
https://en.wikipedia.org/wiki/Whitney%20extension%20theorem
In mathematics, in particular in mathematical analysis, the Whitney extension theorem is a partial converse to Taylor's theorem. Roughly speaking, the theorem asserts that if A is a closed subset of a Euclidean space, then it is possible to extend a given function of A in such a way as to have prescribed derivatives at the points of A. It is a result of Hassler Whitney. Statement A precise statement of the theorem requires careful consideration of what it means to prescribe the derivative of a function on a closed set. One difficulty, for instance, is that closed subsets of Euclidean space in general lack a differentiable structure. The starting point, then, is an examination of the statement of Taylor's theorem. Given a real-valued Cm function f(x) on Rn, Taylor's theorem asserts that for each a, x, y ∈ Rn, there is a function Rα(x,y) approaching 0 uniformly as x,y → a such that f(x) = \sum_{|\alpha| \le m} \frac{D^{\alpha} f(y)}{\alpha!} (x - y)^{\alpha} + \sum_{|\alpha| \le m} R_{\alpha}(x, y) \frac{(x - y)^{\alpha}}{\alpha!} \quad (1), where the sum is over multi-indices α. Let fα = Dαf for each multi-index α. Differentiating (1) with respect to x, and possibly replacing R as needed, yields f_{\alpha}(x) = \sum_{|\beta| \le m - |\alpha|} \frac{f_{\alpha + \beta}(y)}{\beta!} (x - y)^{\beta} + R_{\alpha}(x, y) \quad (2), where Rα is o(|x − y|^{m−|α|}) uniformly as x,y → a. Note that (2) may be regarded as purely a compatibility condition between the functions fα which must be satisfied in order for these functions to be the coefficients of the Taylor series of the function f. It is this insight which facilitates the following statement: Theorem. Suppose that fα are a collection of functions on a closed subset A of Rn for all multi-indices α with |α| ≤ m satisfying the compatibility condition (2) at all points x, y, and a of A. Then there exists a function F(x) of class Cm such that: F = f0 on A. DαF = fα on A. F is real-analytic at every point of Rn − A. Proofs are given in the original paper of Whitney and in several later treatments. Extension in a half space A sharpening of the Whitney extension theorem was proved for the special case of a half space. A smooth function on a half space Rn,+ of points where xn ≥ 0 is a smooth function f on the interior xn > 0 for which the
https://en.wikipedia.org/wiki/Method%20of%20moments%20%28probability%20theory%29
In probability theory, the method of moments is a way of proving convergence in distribution by proving convergence of a sequence of moment sequences. Suppose X is a random variable and that all of the moments E(X^k) exist. Further suppose the probability distribution of X is completely determined by its moments, i.e., there is no other probability distribution with the same sequence of moments (cf. the problem of moments). If \lim_{n \to \infty} E(X_n^k) = E(X^k) for all values of k, then the sequence {Xn} converges to X in distribution. The method of moments was introduced by Pafnuty Chebyshev for proving the central limit theorem; Chebyshev cited earlier contributions by Irénée-Jules Bienaymé. More recently, it has been applied by Eugene Wigner to prove Wigner's semicircle law, and has since found numerous applications in the theory of random matrices. Notes Moment (mathematics)
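As a rough numerical companion (not part of the article, and not a proof), the sketch below compares the first few moments of standardized sums of i.i.d. uniform variables with the moments of the standard normal distribution, in the spirit of the moment method applied to the central limit theorem; the sample sizes and the choice of uniform summands are arbitrary assumptions made here for illustration.

import numpy as np

rng = np.random.default_rng(0)

def standardized_sum_moments(n_terms, k_max, n_samples=200_000):
    # Empirical moments E[Z^k] of a standardized sum of n_terms uniform(-0.5, 0.5) variables.
    u = rng.uniform(-0.5, 0.5, size=(n_samples, n_terms))   # each term has mean 0, variance 1/12
    z = u.sum(axis=1) / np.sqrt(n_terms / 12.0)             # standardize to unit variance
    return [float(np.mean(z**k)) for k in range(1, k_max + 1)]

def normal_moments(k_max):
    # Standard normal moments: 0 for odd k, (k-1)!! for even k.
    return [0.0 if k % 2 else float(np.prod(np.arange(k - 1, 0, -2))) for k in range(1, k_max + 1)]

print(standardized_sum_moments(n_terms=30, k_max=6))   # approximately [0, 1, 0, 3, 0, 15]
print(normal_moments(6))                               # [0.0, 1.0, 0.0, 3.0, 0.0, 15.0]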
https://en.wikipedia.org/wiki/List%20of%20Java%20APIs
There are two types of Java programming language application programming interfaces (APIs): The official core Java API, contained in Android (Google), Java SE (OpenJDK and Oracle), and MicroEJ. These packages (java.* packages) are the core Java language packages, meaning that programmers using the Java language had to use them in order to make any worthwhile use of the Java language. Optional APIs that can be downloaded separately. The specifications of these APIs are defined by many different organizations in the world (Alljoyn, OSGi, Eclipse, JCP, E-S-R, etc.). The following is a partial list of application programming interfaces (APIs) for Java. APIs Following is a very incomplete list, as the number of APIs available for the Java platform is overwhelming. Rich client platforms Eclipse Rich Client Platform (RCP) NetBeans Platform Office-compliant libraries Apache POI JXL - for Microsoft Excel JExcel - for Microsoft Excel Compression LZMA SDK, the Java implementation of the SDK used by the popular 7-Zip file archive software JSON Jackson (API) Game engines Slick jMonkey Engine JPCT Engine LWJGL Real-time libraries Real-time Java is a catch-all term for a combination of technologies that allows programmers to write programs that meet the demands of real-time systems in the Java programming language. Java's sophisticated memory management, native support for threading and concurrency, type safety, and relative simplicity have created a demand for its use in many domains. Its capabilities have been enhanced to support real-time computational needs: Java supports a strict priority-based threading model. Because Java threads support priorities, Java locking mechanisms support priority inversion avoidance techniques, such as priority inheritance or the priority ceiling protocol. To overcome typical real-time difficulties, the Java Community introduced a specification for real-time Java, JSR001. A number of implementations
https://en.wikipedia.org/wiki/SCVP
The Server-based Certificate Validation Protocol (SCVP) is an Internet protocol for determining the path between an X.509 digital certificate and a trusted root (Delegated Path Discovery) and the validation of that path (Delegated Path Validation) according to a particular validation policy. Overview When a relying party receives a digital certificate and needs to decide whether to trust the certificate, it first needs to determine whether the certificate can be linked to a trusted certificate. This process may involve chaining the certificate back through several issuers, such as the following case: Equifax Secure eBusiness CA-1 ACME Co Certificate Authority Joe User Currently, the creation of this chain of certificates is performed by the application receiving the signed message. The process is termed "path discovery" and the resulting chain is called a "certification path". Many Windows applications, such as Outlook, use Cryptographic Application Programming Interface (CAPI) for path discovery. CAPI is capable of building certification paths using any certificates that are installed in Windows certificate stores or provided by the relying party application. The Equifax CA certificate, for example, comes installed in Windows as a trusted certificate. If CAPI knows about the ACME Co CA certificate or if it is included in a signed email and made available to CAPI by Outlook, CAPI can create the certification path above. However, if CAPI cannot find the ACME Co CA certificate, it has no way to verify that Joe User is trusted. SCVP provides us with a standards-based client-server protocol for solving this problem using Delegated Path Discovery, or DPD. When using DPD, a relying party asks a server for a certification path that meets its needs. The SCVP client's request contains the certificate that it is attempting to trust and a set of trusted certificates. The SCVP server's response contains a set of certificates making up a valid path betwee
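The path-discovery idea described above can be illustrated with a deliberately simplified sketch. This is not the SCVP wire format or any real CAPI/SCVP API; certificates are reduced to a toy subject-to-issuer mapping, and the names simply reuse the example chain from the text.

# Conceptual illustration of path discovery only -- not the SCVP protocol or any real API.
issuer_of = {
    "Joe User": "ACME Co Certificate Authority",
    "ACME Co Certificate Authority": "Equifax Secure eBusiness CA-1",
}
trusted_roots = {"Equifax Secure eBusiness CA-1"}

def build_path(subject):
    # Chain a certificate back to a trusted root, as a path-discovery service would.
    path = [subject]
    while path[-1] not in trusted_roots:
        issuer = issuer_of.get(path[-1])
        if issuer is None:
            raise ValueError(f"cannot build a certification path for {subject!r}")
        path.append(issuer)
    return path

print(build_path("Joe User"))
# ['Joe User', 'ACME Co Certificate Authority', 'Equifax Secure eBusiness CA-1']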
https://en.wikipedia.org/wiki/Ka/Ks%20ratio
In genetics, the Ka/Ks ratio, also known as ω or dN/dS ratio, is used to estimate the balance between neutral mutations, purifying selection and beneficial mutations acting on a set of homologous protein-coding genes. It is calculated as the ratio of the number of nonsynonymous substitutions per nonsynonymous site (Ka), in a given period of time, to the number of synonymous substitutions per synonymous site (Ks), in the same period. The latter are assumed to be neutral, so that the ratio indicates the net balance between deleterious and beneficial mutations. Values of Ka/Ks significantly above 1 are unlikely to occur without at least some of the mutations being advantageous. If beneficial mutations are assumed to make little contribution, then Ka/Ks estimates the degree of evolutionary constraint. Context Selection acts on variation in phenotypes, which are often the result of mutations in protein-coding genes. The genetic code is written in DNA sequences as codons, groups of three nucleotides. Each codon represents a single amino acid in a protein chain. However, there are more codons (64) than amino acids found in proteins (20), so many codons are effectively synonyms. For example, the DNA codons TTT and TTC both code for the amino acid Phenylalanine, so a change from the third T to C makes no difference to the resulting protein. On the other hand, the codon GAG codes for Glutamic acid while the codon GTG codes for Valine, so a change from the middle A to T does change the resulting protein, for better or (more likely) worse, so the change is not synonymous. These changes are illustrated in the tables below. The Ka/Ks ratio measures the relative rates of synonymous and nonsynonymous substitutions at a particular site. Methods Methods for estimating Ka and Ks use a sequence alignment of two or more nucleotide sequences of homologous genes that code for proteins (rather than being genetic switches, controlling development or the rate
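The codon examples above can be made concrete with a toy classifier. The snippet below is only an illustration of synonymous versus nonsynonymous substitutions; its codon table covers just the codons mentioned in the text, and a real Ka/Ks estimate additionally requires counting synonymous and nonsynonymous sites and correcting for multiple substitutions.

# Toy check of the codon examples above: TTT -> TTC is synonymous, GAG -> GTG is not.
CODON_TABLE = {"TTT": "Phe", "TTC": "Phe", "GAG": "Glu", "GTG": "Val"}  # only the codons discussed

def substitution_type(codon_a, codon_b):
    # A substitution is synonymous if both codons encode the same amino acid.
    return "synonymous" if CODON_TABLE[codon_a] == CODON_TABLE[codon_b] else "nonsynonymous"

print("TTT -> TTC:", substitution_type("TTT", "TTC"))  # synonymous
print("GAG -> GTG:", substitution_type("GAG", "GTG"))  # nonsynonymous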
https://en.wikipedia.org/wiki/Biomagnetism
Biomagnetism is the phenomenon of magnetic fields produced by living organisms; it is a subset of bioelectromagnetism. In contrast, organisms' use of magnetism in navigation is magnetoception and the study of the magnetic fields' effects on organisms is magnetobiology. (The word biomagnetism has also been used loosely to include magnetobiology, further encompassing almost any combination of the words magnetism, cosmology, and biology, such as "magnetoastrobiology".) The origin of the word biomagnetism is unclear, but seems to have appeared several hundred years ago, linked to the expression "animal magnetism". The present scientific definition took form in the 1970s, when an increasing number of researchers began to measure the magnetic fields produced by the human body. The first valid measurement was actually made in 1963, but the field of research began to expand only after a low-noise technique was developed in 1970. Today the community of biomagnetic researchers does not have a formal organization, but international conferences are held every two years, with about 600 attendees. Most conference activity centers on the MEG (magnetoencephalogram), the measurement of the magnetic field of the brain. Prominent researchers David Cohen John Wikswo Samuel Williamson See also Bioelectrochemistry Human magnetism Magnetite Magnetocardiography Magnetoception - sensing of magnetic fields by organisms Magnetoelectrochemistry Magnetoencephalography Magnetogastrography Magnetomyography SQUID Notes Further reading Williamson SH, Romani GL, Kaufman L, Modena I, editors. Biomagnetism: An Interdisciplinary Approach. 1983. NATO ASI series. New York: Plenum Press. Cohen, D. Boston and the history of biomagnetism. Neurology and Clinical Neurophysiology 2004; 30: 1. History of Biomagnetism Bioelectromagnetics Magnetism
https://en.wikipedia.org/wiki/Kirigami
Kirigami is a variation of origami, the Japanese art of folding paper. In kirigami, the paper is cut as well as being folded, resulting in a three-dimensional design that stands away from the page. Kirigami typically does not use glue. Overview In the United States, the term kirigami was coined by Florence Temko from the Japanese kiru, "to cut", and kami, "paper", in the title of her 1962 book, Kirigami, the Creative Art of Paper cutting. The book achieved enough success that the word kirigami was accepted as the Western name for the art of paper cutting. Typically, kirigami starts with a folded base, which is then unfolded; cuts are then opened and flattened to make the finished design. Simple kirigami are usually symmetrical, such as snowflakes, pentagrams, or orchid blossoms. A difference between kirigami and the art of "full base", or 180-degree opening structures, is that kirigami is made out of a single piece of paper that has then been cut. Notable artists A renowned artist (born 1924) known for his colourful works, which have also been published as a book. Nahoko Kojima (born 1981), a professional contemporary Japanese kirigami artist, who pioneered sculptural, three-dimensional kirigami. See also History of origami Origamic architecture Paper cutting Paper model
https://en.wikipedia.org/wiki/Soil%20thermal%20properties
The thermal properties of soil are a component of soil physics that has found important uses in engineering, climatology and agriculture. These properties influence how energy is partitioned in the soil profile. While related to soil temperature, it is more accurately associated with the transfer of energy (mostly in the form of heat) throughout the soil, by radiation, conduction and convection. The main soil thermal properties are Volumetric heat capacity, SI Units: J∙m−3∙K−1 Thermal conductivity, SI Units: W∙m−1∙K−1 Thermal diffusivity, SI Units: m2∙s−1 Measurement It is hard to say something general about the soil thermal properties at a certain location because these are in a constant state of flux from diurnal and seasonal variations. Apart from the basic soil composition, which is constant at one location, soil thermal properties are strongly influenced by the soil volumetric water content, volume fraction of solids and volume fraction of air. Air is a poor thermal conductor and reduces the effectiveness of the solid and liquid phases to conduct heat. While the solid phase has the highest conductivity it is the variability of soil moisture that largely determines thermal conductivity. As such, soil moisture properties and soil thermal properties are very closely linked and are often measured and reported together. Temperature variations are most extreme at the surface of the soil and these variations are transferred to sub surface layers but at reduced rates as depth increases. Additionally, there is a time delay as to when maximum and minimum temperatures are achieved at increasing soil depth (sometimes referred to as thermal lag). One possible way of assessing soil thermal properties is the analysis of soil temperature variations versus depth, using Fourier's law, Q = −λ dT/dz, where Q is heat flux or rate of heat transfer per unit area (J·m−2∙s−1 or W·m−2), λ is thermal conductivity (W·m−1∙K−1), and dT/dz is the gradient of temperature (change in temperature with depth), K·m−1. The mo
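A minimal sketch of the Fourier's law calculation described above, with illustrative placeholder values for the conductivity and temperatures:

def heat_flux(thermal_conductivity, t_upper, t_lower, dz):
    # Q = -lambda * dT/dz, with depth z increasing downward; result in W per m^2.
    dT_dz = (t_lower - t_upper) / dz          # temperature gradient, K per m
    return -thermal_conductivity * dT_dz

# e.g. 25 C at the surface, 20 C at 0.10 m depth, conductivity ~1.0 W m^-1 K^-1 (placeholder values)
print(heat_flux(1.0, t_upper=25.0, t_lower=20.0, dz=0.10))  # +50 W m^-2: heat flows downward into the soil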
https://en.wikipedia.org/wiki/Teva%20Pharmaceuticals
Teva Pharmaceutical Industries Ltd. (also known as Teva Pharmaceuticals) is an Israeli multinational pharmaceutical company with headquarters in Tel Aviv, Israel. It specializes primarily in generic drugs, but other business interests include active pharmaceutical ingredients and, to a lesser extent, proprietary pharmaceuticals. Teva Pharmaceuticals was the largest generic drug manufacturer until it was briefly surpassed by US-based Pfizer. Teva regained its market leader position once Pfizer spun off its generic drug division in a merger with Mylan, forming the new company Viatris at the end of 2020. Overall, Teva is the 18th largest pharmaceutical company in the world. Teva's facilities are located in Israel, North America, Europe, Australia, and South America. Teva shares are listed on the Tel Aviv Stock Exchange. The company is a member of the Pharmaceutical Research and Manufacturers of America (PhRMA). History Salomon, Levin, and Elstein Teva's earliest predecessor was SLE, Ltd., a wholesale drug business founded in 1901 in Jerusalem, which at that time was part of the Ottoman Empire's Mutasarrifate of Jerusalem. SLE Ltd. took its name from the initials of its three cofounders: Chaim Salomon, Moshe Levin and Yitschak Elstein, and used camels to make deliveries. During the 1930s, new immigrants from Europe founded several pharmaceutical companies including Teva (טבע, "nature" in Hebrew) and Zori (צרי, "medicine" in biblical Hebrew). In the 1930s, Salomon, Levin, and Elstein Ltd. also founded Assia (אסיא, "medicine, cure" in Aramaic), a pharmaceutical company. Teva Pharmaceutical Industries 1935–1949 Teva Pharmaceutical Industries took its present form through the efforts of Günther Friedländer and his aunt Else Kober on May 1, 1935. The original registration was under the name Teva Middle East Pharmaceutical & Chemical Works Co. Ltd. in Jerusalem, then part of Mandatory Palestine. Friedländer was a German pharmacist, botanist and pharmacognosist, who im
https://en.wikipedia.org/wiki/Polycarpic
Polycarpic plants are those that flower and set seeds many times before dying. Terms of identical meaning are pleonanthic and iteroparous. Polycarpic plants are able to reproduce multiple times because at least some portion of their meristems can maintain a vegetative state in some fashion, so that they may reproduce again. This type of reproduction seems to be best suited for plants that have a fair amount of security in their environment, as they do continuously reproduce. Generally, in reference to life-history theory, plants will sacrifice their ability in one regard to improve themselves in another regard, so polycarpic plants that strive towards continued reproduction might focus less on their growth. However, these aspects may not necessarily be directly correlated, and some plants, notably invasive species, do not follow this general trend and actually show a fairly long lifespan with frequent reproduction. To an extent, there does seem to be an importance to the balance of these two traits, as one study noted that plants with a very short lifespan, as well as plants with a very long lifespan and little reproductive success, were not found among the nearly 400 plants included in the study. Due to their reduced development, it has been noted that polycarpic plants have less energy to devote to reproduction than monocarpic plants throughout their lifetimes. In addition, as its lifespan increases, a plant is also subject to more inconveniences due to its age, and thus might focus more on adapting to them, leaving the plant less energy to spend on reproduction. One trend that has been noticed throughout some studies is that shorter lifespans generally affect how quickly plants increasingly expend their energy towards reproduction. However, the specific structure of polycarpic strategies depends on the specific plant, and polycarpic plants do not seem to share a uniform pattern of how energy is expended on reproduction.
https://en.wikipedia.org/wiki/Bear%20JJ1
Bear JJ1 (2004 – 26 June 2006) was a brown bear whose travels and exploits in Austria and Germany in the first half of 2006 drew international attention. JJ1, also known as Bruno in the German press (some newspapers also gave the bear different names, such as Beppo or Petzi), is believed to have been the first brown bear on German soil in 170 years. Origin JJ1 was originally part of an EU-funded €1 million conservation project in Italy, but had walked across to Austria and into Germany. A spokesman said that there had been "co-ordination" between Italy, Austria and Slovenia to ensure the bear's welfare but apparently Germany had not been informed. The Life Ursus reintroduction project of the Italian province of Trento had introduced 10 Slovenian bears in the region, monitoring them. JJ1 was the first son of Jurka and Joze (thus the name JJ1); his younger brother JJ3 also showed an aggressive character, wandered into Switzerland in 2008, and was killed there. Because of this second problem the mother Jurka was put in captivity in Italy, despite protests by environmentalists; park authorities maintained that 50% of the incidents involving bears had been caused by Jurka or her descendants. In April 2023, his sister JJ4 killed a 26-year-old jogger in Trentino province of northern Italy. In the summer of 2020, she had already attacked and injured a man and his son on Monte Peller and should have been killed; but animal rights activists had prevented that. Overview Previously, the last sighting of a bear in what is now Germany was recorded in 1838 when hunters shot a bear in Bavaria. Initially heralded as a welcome visitor and a symbol of the success of endangered species reintroduction programs, JJ1's dietary preferences for sheep, chickens, and beehives led government officials to believe that he could become a threat to humans, and they ordered that he be shot or captured. Public objection to the order resulted in its revision, and the German government tried to
https://en.wikipedia.org/wiki/Suture%20%28geology%29
In structural geology, a suture is a joining together along a major fault zone, of separate terranes, tectonic units that have different plate tectonic, metamorphic and paleogeographic histories. The suture is often represented on the surface by an orogen or mountain range. Overview In plate tectonics, sutures are the remains of subduction zones, and the terranes that are joined together are interpreted as fragments of different palaeocontinents or tectonic plates. Outcrops of sutures can vary in width from a few hundred meters to a couple of kilometers. They can be networks of mylonitic shear zones or brittle fault zones, but are usually both. Sutures are usually associated with igneous intrusions and tectonic lenses with varying kinds of lithologies from plutonic rocks to ophiolitic fragments. An example from Great Britain is the Iapetus Suture which, though now concealed beneath younger rocks, has been determined by geophysical means to run along a line roughly parallel with the Anglo-Scottish border and represents the joint between the former continent of Laurentia to the north and the former micro-continent of Avalonia to the south. Avalonia is in fact a plain which dips steeply northwestwards through the crust, underthrusting Laurentia. Paleontological use When used in paleontology, suture can also refer to fossil exoskeletons, as in the suture line, a division on a trilobite between the free cheek and the fixed cheek; this suture line allowed the trilobite to perform ecdysis (the shedding of its skin).
https://en.wikipedia.org/wiki/Decision%20tree%20pruning
Pruning is a data compression technique in machine learning and search algorithms that reduces the size of decision trees by removing sections of the tree that are non-critical and redundant to classify instances. Pruning reduces the complexity of the final classifier, and hence improves predictive accuracy by the reduction of overfitting. One of the questions that arises in a decision tree algorithm is the optimal size of the final tree. A tree that is too large risks overfitting the training data and poorly generalizing to new samples. A small tree might not capture important structural information about the sample space. However, it is hard to tell when a tree algorithm should stop because it is impossible to tell if the addition of a single extra node will dramatically decrease error. This problem is known as the horizon effect. A common strategy is to grow the tree until each node contains a small number of instances, then use pruning to remove nodes that do not provide additional information. Pruning should reduce the size of a learning tree without reducing predictive accuracy as measured by a cross-validation set. There are many techniques for tree pruning that differ in the measurement that is used to optimize performance. Techniques Pruning processes can be divided into two types (pre- and post-pruning). Pre-pruning procedures prevent a complete induction of the training set by applying a stopping criterion in the induction algorithm (e.g. maximum tree depth or information gain(Attr) > minGain). Pre-pruning methods are considered to be more efficient because they do not induce an entire set, but rather trees remain small from the start. Pre-pruning methods share a common problem, the horizon effect. This is to be understood as the undesired premature termination of the induction by the stopping criterion. Post-pruning (or just pruning) is the most common way of simplifying trees. Here, nodes and subtrees are replaced with leaves to reduce complexity.
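One way to see the pre-/post-pruning distinction in practice is sketched below using scikit-learn, which is an assumption made here (the article does not prescribe any library): pre-pruning corresponds to stopping criteria such as a maximum depth or minimum leaf size, and post-pruning to cost-complexity pruning of a fully grown tree. For brevity the pruning strength is chosen on the held-out set; cross-validation would normally be used.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pre-pruning: stop growing early via stopping criteria (maximum depth, minimum leaf size).
pre = DecisionTreeClassifier(max_depth=4, min_samples_leaf=5, random_state=0).fit(X_train, y_train)

# Post-pruning: grow fully, then prune using the cost-complexity parameter alpha.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_train, y_train)
post = max(
    (DecisionTreeClassifier(ccp_alpha=a, random_state=0).fit(X_train, y_train) for a in path.ccp_alphas),
    key=lambda tree: tree.score(X_test, y_test),   # shortcut: alpha chosen on the held-out set
)

print("pre-pruned accuracy: ", pre.score(X_test, y_test))
print("post-pruned accuracy:", post.score(X_test, y_test))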
https://en.wikipedia.org/wiki/Histone%20code
The histone code is a hypothesis that the transcription of genetic information encoded in DNA is in part regulated by chemical modifications (known as histone marks) to histone proteins, primarily on their unstructured ends. Together with similar modifications such as DNA methylation it is part of the epigenetic code. Histones associate with DNA to form nucleosomes, which themselves bundle to form chromatin fibers, which in turn make up the more familiar chromosome. Histones are globular proteins with a flexible N-terminus (taken to be the tail) that protrudes from the nucleosome. Many of the histone tail modifications correlate very well to chromatin structure and both histone modification state and chromatin structure correlate well to gene expression levels. The critical concept of the histone code hypothesis is that the histone modifications serve to recruit other proteins by specific recognition of the modified histone via protein domains specialized for such purposes, rather than through simply stabilizing or destabilizing the interaction between histone and the underlying DNA. These recruited proteins then act to alter chromatin structure actively or to promote transcription. For details of gene expression regulation by histone modifications see table below. The hypothesis The hypothesis is that chromatin-DNA interactions are guided by combinations of histone modifications. While it is accepted that modifications (such as methylation, acetylation, ADP-ribosylation, ubiquitination, citrullination, SUMO-ylation and phosphorylation) to histone tails alter chromatin structure, a complete understanding of the precise mechanisms by which these alterations to histone tails influence DNA-histone interactions remains elusive. However, some specific examples have been worked out in detail. For example, phosphorylation of serine residues 10 and 28 on histone H3 is a marker for chromosomal condensation. Similarly, the combination of phosphorylation of serine residu
https://en.wikipedia.org/wiki/Tsung
Tsung (formerly known as idx-Tsunami) is a stress testing tool written in the Erlang language and distributed under the GPL license. It can currently stress test HTTP, WebDAV, LDAP, MySQL, PostgreSQL, SOAP and XMPP servers. Tsung can simulate hundreds of simultaneous users on a single system. It can also function in a clustered environment. Features Features include: Several IP addresses can be used on a single machine using the underlying OS's IP Aliasing. OS monitoring (CPU, memory, and network traffic) using SNMP, munin-node agents or Erlang agents on remote servers. Different types of users can be simulated. Dynamic sessions can be described in XML (to retrieve, at runtime, an ID from the server output and use it later in the session). Simulated user thinktimes and the arrival rate can be randomized via probability distribution. HTML reports can be generated during the load to view response time measurements, server CPU, and other statistics.
https://en.wikipedia.org/wiki/Shadow%20%28OS/2%29
In the graphical Workplace Shell (WPS) of the OS/2 operating system, a shadow is an object that represents another object. A shadow is a stand-in for any other object on the desktop, such as a document, an application, a folder, a hard disk, a network share or removable medium, or a printer. A target object can have an arbitrary number of shadows. When double-clicked, the desktop acts the same way as if the original object had been double-clicked. The shadow's context menu is the same as the target object's context menu, with the addition of an "Original" sub-menu, that allows the location of, and explicit operation upon, the original object. A shadow is a dynamic reference to an object. The original may be moved to another place in the file system, without breaking the link. The WPS updates shadows of objects whenever the original target objects are renamed or moved. To do this, it requests notification from the operating system of all file rename operations. (Thus if a target filesystem object is renamed when the WPS is not running, the link between the shadow and the target object is broken.) Similarities to and differences from other mechanisms Shadows are similar in operation to aliases in Mac OS, although there are some differences: Shadows in the WPS are not filesystem objects, as aliases are. They are derived from the WPAbstract class, and thus their backing storage is the user INI file, not a file in the file system. Thus shadows are invisible to applications that do not use the WPS API. The WPS has no mechanism for re-connecting shadows when the link between them and the target object has been broken. (Although where the link has been broken because target objects are temporarily inaccessible, restarting the WPS after the target becomes accessible once more often restores the link.) Shadows are different from symbolic links and shortcuts because they are not filesystem objects, and because shadows are dynamically updated as target objects ar
https://en.wikipedia.org/wiki/List%20of%20military%20clothing%20camouflage%20patterns
This is a list of military clothing camouflage patterns used for battledress. Military camouflage is the use of camouflage by armed forces to protect personnel and equipment from observation by enemy forces. Textile patterns for uniforms have multiple functions, including camouflage, identifying friend from foe, and esprit de corps. The list is organized by pattern; only patterned textiles are shown. It includes current and past issue patterns, with dates; users may include a wide range of military bodies. Patterns See also Military uniform Snow camouflage#Military usage
https://en.wikipedia.org/wiki/Dynamic%20program%20analysis
Dynamic program analysis is analysis of computer software that involves executing the program in question (as opposed to static program analysis, which does not). Dynamic program analysis includes familiar techniques from software engineering such as unit testing, debugging, and measuring code coverage, but also includes lesser-known techniques like program slicing and invariant inference. Dynamic program analysis is widely applied in security in the form of runtime memory error detection, fuzzing, dynamic symbolic execution, and taint tracking. For dynamic program analysis to be effective, the target program must be executed with sufficient test inputs to cover almost all possible outputs. Use of software testing measures such as code coverage helps increase the chance that an adequate slice of the program's set of possible behaviors has been observed. Also, care must be taken to minimize the effect that instrumentation has on the execution (including temporal properties) of the target program. Dynamic analysis is in contrast to static program analysis. Unit tests, integration tests, system tests and acceptance tests use dynamic testing. Types of dynamic analysis Code coverage Computing the code coverage according to a test suite or a workload is a standard dynamic analysis technique. Gcov is the GNU source code coverage program. VB Watch injects dynamic analysis code into Visual Basic programs to monitor code coverage, call stack, execution trace, instantiated objects and variables. Dynamic testing Dynamic testing involves executing a program on a set of test cases. Memory error detection AddressSanitizer: Memory error detection for Linux, macOS, Windows, and more. Part of LLVM. BoundsChecker: Memory error detection for Windows based applications. Part of Micro Focus DevPartner. Dmalloc: Library for checking memory allocation and leaks. Software must be recompiled, and all files must include the special C header file dmalloc.h. Intel Inspector: Dy
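As a concrete illustration of the instrumentation idea discussed above, the following minimal sketch (a toy, not any of the tools named in the article) uses Python's built-in sys.settrace hook to record which source lines execute while a target function runs — a simplified version of the line-coverage measurement performed by tools such as Gcov. The function names here are illustrative.

```python
import sys

def trace_lines(func, *args, **kwargs):
    """Run func and return (result, set of (filename, line) pairs executed)."""
    executed = set()

    def tracer(frame, event, arg):
        if event == "line":
            executed.add((frame.f_code.co_filename, frame.f_lineno))
        return tracer  # keep tracing nested frames as well

    sys.settrace(tracer)
    try:
        result = func(*args, **kwargs)
    finally:
        sys.settrace(None)  # always remove the instrumentation hook
    return result, executed

def absolute(x):
    if x < 0:
        return -x
    return x

# Only one branch of 'absolute' runs, so only that branch's lines are reported:
# this is exactly the kind of behavioral observation dynamic analysis relies on.
value, covered = trace_lines(absolute, 5)
print(value, sorted(line for _, line in covered))
```

Note that, as the article cautions, the tracing hook itself perturbs execution time, so temporal properties measured this way must be interpreted with care.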
https://en.wikipedia.org/wiki/Topological%20entropy
In mathematics, the topological entropy of a topological dynamical system is a nonnegative extended real number that is a measure of the complexity of the system. Topological entropy was first introduced in 1965 by Adler, Konheim and McAndrew. Their definition was modelled after the definition of the Kolmogorov–Sinai, or metric entropy. Later, Dinaburg and Rufus Bowen gave a different, weaker definition reminiscent of the Hausdorff dimension. The second definition clarified the meaning of the topological entropy: for a system given by an iterated function, the topological entropy represents the exponential growth rate of the number of distinguishable orbits of the iterates. An important variational principle relates the notions of topological and measure-theoretic entropy. Definition A topological dynamical system consists of a Hausdorff topological space X (usually assumed to be compact) and a continuous self-map f. Its topological entropy is a nonnegative extended real number that can be defined in various ways, which are known to be equivalent. Definition of Adler, Konheim, and McAndrew Let X be a compact Hausdorff topological space. For any finite open cover C of X, let H(C) be the logarithm (usually to base 2) of the smallest number of elements of C that cover X. For two covers C and D, let be their (minimal) common refinement, which consists of all the non-empty intersections of a set from C with a set from D, and similarly for multiple covers. For any continuous map f: X → X, the following limit exists: Then the topological entropy of f, denoted h(f), is defined to be the supremum of H(f,C) over all possible finite covers C of X. Interpretation The parts of C may be viewed as symbols that (partially) describe the position of a point x in X: all points x ∈ Ci are assigned the symbol Ci . Imagine that the position of x is (imperfectly) measured by a certain device and that each part of C corresponds to one possible outcome of the measurement. then
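The displayed formulas in the definition above did not survive extraction; for reference, the Adler–Konheim–McAndrew definition is usually stated as follows (with X compact Hausdorff, f continuous, and N(C) the minimal cardinality of a subcover of C).

```latex
% Entropy of a finite open cover, and the common refinement of two covers
H(C) = \log N(C), \qquad
C \vee D = \{\, U \cap V \neq \emptyset : U \in C,\ V \in D \,\}.

% Entropy of f relative to C (this is the limit asserted to exist in the text)
H(f, C) = \lim_{n \to \infty} \frac{1}{n}\,
          H\!\left( C \vee f^{-1}C \vee \cdots \vee f^{-(n-1)}C \right).

% Topological entropy: supremum over all finite open covers C of X
h(f) = \sup_{C} H(f, C).
```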
https://en.wikipedia.org/wiki/Ordinal%20date
An ordinal date is a calendar date typically consisting of a year and an ordinal number, ranging between 1 and 366 (starting on January 1), representing the multiples of a day, called day of the year or ordinal day number (also known as ordinal day or day number). The two parts of the date can be formatted as "YYYY-DDD" to comply with the ISO 8601 ordinal date format. The year may sometimes be omitted, if it is implied by the context; the day may be generalized from integers to include a decimal part representing a fraction of a day. Nomenclature Ordinal date is the preferred name for what was formerly called the "Julian date", a name still seen in old programming languages and spreadsheet software. The older name is deprecated because it is easily confused with the earlier dating system called the 'Julian day number', which was in prior use and which remains ubiquitous in astronomical and some historical calculations. Calculation Computation of the ordinal day within a year is part of calculating the ordinal day throughout the years from a reference date, such as the Julian date. It is also part of calculating the day of the week, though for this purpose modulo 7 simplifications can be made. In the following text, several algorithms for calculating the ordinal day are presented. The inputs taken are integers y, m, and d, for the year, month, and day numbers of the Gregorian or Julian calendar date. Trivial methods The most trivial method of calculating the ordinal day involves counting up all days that have elapsed per the definition: Let O be 0. For each month from 1 to m − 1, add the length of that month to O, taking care of leap years according to the calendar used. Add d to O. Similarly trivial is the use of a lookup table, such as the one referenced. Zeller-like The table of month lengths can be replaced following the method of encoding the month-length variation in Zeller's congruence. As in Zeller's congruence, the month m is changed to m + 12 if m ≤ 2. It can be shown (see below) that for a mont
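A minimal sketch of the "trivial method" described above — summing the lengths of the preceding months and adding the day of the month — might look like the following for the Gregorian calendar only (the function names are illustrative, not from any standard library).

```python
# Day of year = days in all preceding months + day of month.
MONTH_LENGTHS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def is_gregorian_leap(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def ordinal_day(year: int, month: int, day: int) -> int:
    total = sum(MONTH_LENGTHS[:month - 1]) + day
    # February was counted as 28 days above; add the leap day if applicable.
    if month > 2 and is_gregorian_leap(year):
        total += 1
    return total

assert ordinal_day(2020, 1, 1) == 1
assert ordinal_day(2020, 12, 31) == 366   # 2020 is a leap year
assert ordinal_day(2021, 12, 31) == 365
```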
https://en.wikipedia.org/wiki/Tioguanine
Tioguanine, also known as thioguanine or 6-thioguanine (6-TG) is a medication used to treat acute myeloid leukemia (AML), acute lymphocytic leukemia (ALL), and chronic myeloid leukemia (CML). Long-term use is not recommended. It is given by mouth. Common side effects include bone marrow suppression, liver problems and inflammation of the mouth. It is recommended that liver enzymes be checked weekly when on the medication. People with a genetic deficiency in thiopurine S-methyltransferase are at higher risk of side effects. Avoiding pregnancy when on the medication is recommended for both males and females. Tioguanine is in the antimetabolite family of medications. It is a purine analogue of guanine and works by disrupting DNA and RNA. Tioguanine was developed between 1949 and 1951. It is on the World Health Organization's List of Essential Medicines. Medical uses Acute leukemias in both adults and children Chronic myelogenous leukemia Inflammatory bowel disease, especially ulcerative colitis Psoriasis Colorectal cancer in mice resistant to immunotherapy Side effects Leukopenia and neutropenia Thrombocytopenia Anemia Anorexia Nausea and vomiting Hepatotoxicity: this manifests as: Hepatic veno-occlusive disease The major concern that has inhibited the use of thioguanine has been veno-occlusive disease (VOD) and its histological precursor nodular regenerative hyperplasia (NRH). The incidence of NRH with thioguanine was reported as between 33 and 76%. The risk of ensuing VOD is serious and frequently irreversible so this side effect has been a major concern. However, recent evidence using an animal model for thioguanine-induced NRH/VOD has shown that, contrary to previous assumptions, NRH/VOD is dose dependent and the mechanism for this has been demonstrated. This has been confirmed in human trials, where thioguanine has proven to be safe but efficacious for coeliac disease when used at doses below those commonly prescribed. This has led to a revival of interest
https://en.wikipedia.org/wiki/Enzyme%20inhibitor
An enzyme inhibitor is a molecule that binds to an enzyme and blocks its activity. Enzymes are proteins that speed up chemical reactions necessary for life, in which substrate molecules are converted into products. An enzyme facilitates a specific chemical reaction by binding the substrate to its active site, a specialized area on the enzyme that accelerates the most difficult step of the reaction. An enzyme inhibitor stops ("inhibits") this process, either by binding to the enzyme's active site (thus preventing the substrate itself from binding) or by binding to another site on the enzyme such that the enzyme's catalysis of the reaction is blocked. Enzyme inhibitors may bind reversibly or irreversibly. Irreversible inhibitors form a chemical bond with the enzyme such that the enzyme is inhibited until the chemical bond is broken. By contrast, reversible inhibitors bind non-covalently and may spontaneously leave the enzyme, allowing the enzyme to resume its function. Reversible inhibitors produce different types of inhibition depending on whether they bind to the enzyme, the enzyme-substrate complex, or both. Enzyme inhibitors play an important role in all cells, since they are generally specific to one enzyme each and serve to control that enzyme's activity. For example, enzymes in a metabolic pathway may be inhibited by molecules produced later in the pathway, thus curtailing the production of molecules that are no longer needed. This type of negative feedback is an important way to maintain balance in a cell. Enzyme inhibitors also control essential enzymes such as proteases or nucleases that, if left unchecked, may damage a cell. Many poisons produced by animals or plants are enzyme inhibitors that block the activity of crucial enzymes in prey or predators. Many drug molecules are enzyme inhibitors that inhibit an aberrant human enzyme or an enzyme critical for the survival of a pathogen such as a virus, bacterium or parasite. Examples include methotrexate (u
https://en.wikipedia.org/wiki/Phycologia%20Australica
Phycologia Australica, written by William Henry Harvey, is one of the most important works on phycology of the 19th century. (Phycology is the study of algae.) The work, published in five separate volumes between 1858 and 1863, is the result of Harvey's extensive collecting along the Australian shores during a three-year sabbatical. By the time Harvey set foot in Western Australia, he had already established himself as a leading phycologist, having published several large works on algae from the British Isles, North America and the Southern Ocean (Nereis Australica). The fact that Harvey travelled the globe on several occasions and himself collected the seaweeds he later described set him apart from most of his contemporaries, who relied for the most part on specimens collected by others. In addition, Harvey's zest for work was such that he sometimes pressed over 700 specimens in a single day; these were distributed to his colleagues as sets of Australian algae. Upon his return to Trinity College in Dublin, Harvey embarked on a mission: the illustration and description of over 300 species of Australian algae, which earned him the title "father of Australian Phycology". The dedications and specific epithets of the species commemorate his friend George Clifton, of Fremantle, who assisted Harvey as a collector.
https://en.wikipedia.org/wiki/Mental%20environment
The mental environment refers to the sum of all societal influences upon mental health. The term is often used in a context critical of the mental environment in industrialized societies. It is argued that just as industrial societies produce physical toxins and pollutants which harm humans' physical health, they also produce psychological toxins (e.g. television, excessive noise, violent marketing tactics, Internet addiction, social media) that cause psychological damage. This poor mental environment may help explain why rates of mental illness are reportedly higher in industrial societies; these higher rates may also have roots in poor educational environments and in mechanised, routinised ways of life. Magico-religious beliefs are an important contribution of such communal settings, and delusions of this kind, rooted in childhood, are often hard to remove entirely from a person's life. The idea has its roots in evolutionary psychology, as the deleterious consequences of a poor mental environment can be explained by the mismatch between the mental environment humans evolved to exist within and the one they exist within today. "We live in both a mental and physical environment. We can influence the mental environment around us, but to a far greater extent we are influenced by the mental environment. The mental environment contains forces that affect our thinking and emotions and that can dominate our personal minds." - Marshall Vian Summers See also Effects of climate change on human health Environmental psychology Healthy building Socioeconomic status and mental health
https://en.wikipedia.org/wiki/K%C5%91nig%27s%20theorem%20%28graph%20theory%29
In the mathematical area of graph theory, Kőnig's theorem, proved by , describes an equivalence between the maximum matching problem and the minimum vertex cover problem in bipartite graphs. It was discovered independently, also in 1931, by Jenő Egerváry in the more general case of weighted graphs. Setting A vertex cover in a graph is a set of vertices that includes at least one endpoint of every edge, and a vertex cover is minimum if no other vertex cover has fewer vertices. A matching in a graph is a set of edges no two of which share an endpoint, and a matching is maximum if no other matching has more edges. It is obvious from the definition that any vertex-cover set must be at least as large as any matching set (since for every edge in the matching, at least one vertex is needed in the cover). In particular, the minimum vertex cover set is at least as large as the maximum matching set. Kőnig's theorem states that, in any bipartite graph, the minimum vertex cover set and the maximum matching set have in fact the same size. Statement of the theorem In any bipartite graph, the number of edges in a maximum matching equals the number of vertices in a minimum vertex cover. Example The bipartite graph shown in the above illustration has 14 vertices; a matching with six edges is shown in blue, and a vertex cover with six vertices is shown in red. There can be no smaller vertex cover, because any vertex cover has to include at least one endpoint of each matched edge (as well as of every other edge), so this is a minimum vertex cover. Similarly, there can be no larger matching, because any matched edge has to include at least one endpoint in the vertex cover, so this is a maximum matching. Kőnig's theorem states that the equality between the sizes of the matching and the cover (in this example, both numbers are six) applies more generally to any bipartite graph. Proofs Constructive proof The following proof provides a way of constructing a minimum vertex cover
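To make the statement concrete, the small self-contained sketch below computes a maximum matching in a bipartite graph by augmenting paths; by Kőnig's theorem its size also equals the size of a minimum vertex cover. This is an illustration of the theorem, not the constructive proof sketched in the article, and all names in the code are illustrative.

```python
def max_bipartite_matching(adj):
    """adj: dict mapping each left vertex to an iterable of right vertices.
    Returns a dict {right_vertex: matched_left_vertex}."""
    match = {}  # right vertex -> left vertex currently matched to it

    def try_augment(u, visited):
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or its current partner can be re-matched elsewhere.
            if v not in match or try_augment(match[v], visited):
                match[v] = u
                return True
        return False

    for u in adj:
        try_augment(u, set())
    return match

# Edges: a-x, a-y, b-x, c-x.  A maximum matching has 2 edges, so by Kőnig's
# theorem a minimum vertex cover has 2 vertices (for example {x, a}).
graph = {"a": ["x", "y"], "b": ["x"], "c": ["x"]}
print(len(max_bipartite_matching(graph)))  # 2
```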
https://en.wikipedia.org/wiki/Coefficient%20diagram%20method
In control theory, the coefficient diagram method (CDM) is an algebraic approach applied to a polynomial loop in the parameter space, where a special diagram called a "coefficient diagram" is used as the vehicle to carry the necessary information, and as the criterion of good design. The performance of the closed loop system is monitored by the coefficient diagram. The most considerable advantages of CDM can be listed as follows: The design procedure is easily understandable, systematic and useful. Therefore, the coefficients of the CDM controller polynomials can be determined more easily than those of the PID or other types of controller. This creates the possibility of an easy realisation for a new designer to control any kind of system. There are explicit relations between the performance parameters specified before the design and the coefficients of the controller polynomials as described in. For this reason, the designer can easily realize many control systems having different performance properties for a given control problem in a wide range of freedom. The development of different tuning methods is required for time delay processes of different properties in PID control. But it is sufficient to use the single design procedure in the CDM technique. This is an outstanding advantage. It is particularly hard to design robust controllers realizing the desired performance properties for unstable, integrating and oscillatory processes having poles near the imaginary axis. It has been reported that successful designs can be achieved even in these cases by using CDM. It is theoretically proven that CDM design is equivalent to LQ design with proper state augmentation. Thus, CDM can be considered an ‘‘improved LQG’’, because the order of the controller is smaller and weight selection rules are also given. It is usually required that the controller for a given plant should be designed under some practical limitations. The controller is desired to be of minimum de
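As a concrete illustration of the quantities plotted on the coefficient diagram, the sketch below computes the stability indices and equivalent time constant commonly used in CDM from the coefficients of a closed-loop characteristic polynomial. The example polynomial and the function name are illustrative only.

```python
def cdm_indices(coeffs):
    """coeffs: characteristic-polynomial coefficients [a0, a1, ..., an],
    lowest order first.  Returns (stability indices, equivalent time constant)."""
    a = coeffs
    n = len(a) - 1
    # Stability index gamma_i = a_i^2 / (a_{i+1} * a_{i-1}),  i = 1 .. n-1
    gammas = [a[i] ** 2 / (a[i + 1] * a[i - 1]) for i in range(1, n)]
    tau = a[1] / a[0]  # equivalent time constant
    return gammas, tau

# Illustrative third-order polynomial 2s^3 + 10s^2 + 25s + 25.
gammas, tau = cdm_indices([25, 25, 10, 2])
print(gammas, tau)  # [2.5, 2.0] 1.0 -- the values often recommended in the CDM literature
```

Monitoring how these indices deviate from their target values is, in essence, what "reading" the coefficient diagram amounts to.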
https://en.wikipedia.org/wiki/Xylan
Xylan (CAS number: 9014-63-5) is a type of hemicellulose, a polysaccharide consisting mainly of xylose residues. It is found in plants, in the secondary cell walls of dicots and all cell walls of grasses. Xylan is the third most abundant biopolymer on Earth, after cellulose and chitin. Composition Xylans are polysaccharides made up of β-1,4-linked xylose (a pentose sugar) residues with side branches of α-arabinofuranose and/or α-glucuronic acids. On the basis of substituent groups, xylan can be categorized into three classes: i) glucuronoxylan (GX), ii) neutral arabinoxylan (AX), and iii) glucuronoarabinoxylan (GAX). In some cases, xylans contribute to cross-linking of cellulose microfibrils and lignin through ferulic acid residues. Occurrence Plant cell structure Xylans play an important role in the integrity of the plant cell wall and increase cell wall recalcitrance to enzymatic digestion; thus, they help plants to defend against herbivores and pathogens (biotic stress). Xylan also plays a significant role in plant growth and development. Typically, xylan content in hardwoods is 10–35%, whereas it is 10–15% in softwoods. The main xylan component in hardwoods is O-acetyl-4-O-methylglucuronoxylan, whereas arabino-4-O-methylglucuronoxylans are a major component in softwoods. In general, softwood xylans differ from hardwood xylans by the lack of acetyl groups and the presence of arabinose units linked by α-(1,3)-glycosidic bonds to the xylan backbone. Algae Some macrophytic green algae contain xylan (specifically homoxylan), especially those within the Codium and Bryopsis genera, where it replaces cellulose in the cell wall matrix. Similarly, it replaces the inner fibrillar cell-wall layer of cellulose in some red algae. Food science The quality of cereal flours and the hardness of dough are affected by their xylan content, thus playing a significant role in the bread industry. The main constituent of xylan can be converted into xylitol (a xylose derivative), which
https://en.wikipedia.org/wiki/Stable%20manifold
In mathematics, and in particular the study of dynamical systems, the idea of stable and unstable sets or stable and unstable manifolds give a formal mathematical definition to the general notions embodied in the idea of an attractor or repellor. In the case of hyperbolic dynamics, the corresponding notion is that of the hyperbolic set. Physical example The gravitational tidal forces acting on the rings of Saturn provide an easy-to-visualize physical example. The tidal forces flatten the ring into the equatorial plane, even as they stretch it out in the radial direction. Imagining the rings to be sand or gravel particles ("dust") in orbit around Saturn, the tidal forces are such that any perturbations that push particles above or below the equatorial plane results in that particle feeling a restoring force, pushing it back into the plane. Particles effectively oscillate in a harmonic well, damped by collisions. The stable direction is perpendicular to the ring. The unstable direction is along any radius, where forces stretch and pull particles apart. Two particles that start very near each other in phase space will experience radial forces causing them to diverge, radially. These forces have a positive Lyapunov exponent; the trajectories lie on a hyperbolic manifold, and the movement of particles is essentially chaotic, wandering through the rings. The center manifold is tangential to the rings, with particles experiencing neither compression nor stretching. This allows second-order gravitational forces to dominate, and so particles can be entrained by moons or moonlets in the rings, phase locking to them. The gravitational forces of the moons effectively provide a regularly repeating small kick, each time around the orbit, akin to a kicked rotor, such as found in a phase-locked loop. The discrete-time motion of particles in the ring can be approximated by the Poincaré map. The map effectively provides the transfer matrix of the system. The eigenvector associate
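For reference (the article's formal definition is not reproduced in the excerpt above), the stable and unstable sets of a fixed point p of a map f on a metric space (X, d) are usually defined as below, with the unstable set stated via the inverse map when f is invertible; this is the sense in which the ring example has a stable direction perpendicular to the ring plane and an unstable radial direction.

```latex
W^{s}(f, p) = \{\, q \in X : d\!\left(f^{n}(q),\, f^{n}(p)\right) \to 0 \ \text{as}\ n \to \infty \,\},
\qquad
W^{u}(f, p) = \{\, q \in X : d\!\left(f^{-n}(q),\, f^{-n}(p)\right) \to 0 \ \text{as}\ n \to \infty \,\}.
```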
https://en.wikipedia.org/wiki/Cheating%20%28biology%29
Cheating is a term used in behavioral ecology and ethology to describe behavior whereby organisms receive a benefit at the cost of other organisms. Cheating is common in many mutualistic and altruistic relationships. A cheater is an individual who does not cooperate (or cooperates less than their fair share) but can potentially gain the benefit from others cooperating. Cheaters are also those who selfishly use common resources to maximize their individual fitness at the expense of a group. Natural selection favors cheating, but there are mechanisms to regulate it. The stress gradient hypothesis states that facilitation, cooperation or mutualism should be more common in stressful environments, while cheating, competition or parasitism are common in benign environments (i.e. nutrient excess). Theoretical models Organisms communicate and cooperate to perform a wide range of behaviors. Mutualism, or mutually beneficial interactions between species, is common in ecological systems. These interactions can be thought of as "biological markets" in which species offer partners goods that are relatively inexpensive for them to produce and receive goods that are more expensive or even impossible for them to produce. However, these systems provide opportunities for exploitation by individuals that can obtain resources while providing nothing in return. Exploiters can take on several forms: individuals outside a mutualistic relationship who obtain a commodity in a way that confers no benefit to either mutualist, individuals who receive benefits from a partner but have lost the ability to give any in return, or individuals who have the option of behaving mutualistically towards their partners but choose not to do so. Cheaters, who do not cooperate but benefit from others who do, gain a competitive edge. In an evolutionary context, this competitive edge refers to a greater ability to survive or to reproduce. If individuals who cheat are able to gain survivorship and reprod
https://en.wikipedia.org/wiki/Amyoplasia
Amyoplasia is a condition characterized by a generalized lack of muscular development and growth in the newborn, with contracture and deformity affecting at least two joints. It is the most common form of arthrogryposis. It is characterized by the four limbs being involved, and by the replacement of skeletal muscle by dense fibrous and adipose tissue. Studies of amyoplasia have revealed muscle-tissue findings similar to those seen in conditions of various causes, including sacral agenesis and amyotrophic lateral sclerosis. Amyoplasia may therefore represent an intermediate common pathway rather than the primary cause of the contractures. Signs and symptoms Amyoplasia results when a fetus is unable to move sufficiently in the womb. Mothers of children with the disorder often report that their baby was abnormally still during the pregnancy. The lack of movement in utero (also known as fetal akinesia) allows extra connective tissue to form around the joints and, therefore, the joints become fixed. This extra connective tissue replaces muscle tissue, leading to weakness and giving a wasting appearance to the muscles. Additionally, due to the lack of fetal movement, the tendons that connect the muscles to bone are not able to stretch to their normal length and this contributes to the lack of joint mobility as well. Causes There is no single factor that is consistently found in the prenatal history of individuals affected with amyoplasia and, in some cases, there is no known cause of the disorder. Amyoplasia is a sporadic condition that occurs due to lack of fetal movement in the womb. There is no specific gene that is known to cause the disorder. It is thought to be multifactorial, meaning that numerous genes and environmental factors play a role in its development. The recurrence risk is minimal for siblings or children of affected individuals. There have been no reports of recurrent cases of amyoplasia in a family. The fetal akinesia in amyoplasia is thought to b
https://en.wikipedia.org/wiki/Weitzenb%C3%B6ck%20identity
In mathematics, in particular in differential geometry, mathematical physics, and representation theory a Weitzenböck identity, named after Roland Weitzenböck, expresses a relationship between two second-order elliptic operators on a manifold with the same principal symbol. Usually Weitzenböck formulae are implemented for G-invariant self-adjoint operators between vector bundles associated to some principal G-bundle, although the precise conditions under which such a formula exists are difficult to formulate. This article focuses on three examples of Weitzenböck identities: from Riemannian geometry, spin geometry, and complex analysis. Riemannian geometry In Riemannian geometry there are two notions of the Laplacian on differential forms over an oriented compact Riemannian manifold M. The first definition uses the divergence operator δ defined as the formal adjoint of the de Rham operator d: where α is any p-form and β is any ()-form, and is the metric induced on the bundle of ()-forms. The usual form Laplacian is then given by On the other hand, the Levi-Civita connection supplies a differential operator where ΩpM is the bundle of p-forms. The Bochner Laplacian is given by where is the adjoint of . This is also known as the connection or rough Laplacian. The Weitzenböck formula then asserts that where A is a linear operator of order zero involving only the curvature. The precise form of A is given, up to an overall sign depending on curvature conventions, by where R is the Riemann curvature tensor, Ric is the Ricci tensor, is the map that takes the wedge product of a 1-form and p-form and gives a (p+1)-form, is the universal derivation inverse to θ on 1-forms. Spin geometry If M is an oriented spin manifold with Dirac operator ð, then one may form the spin Laplacian Δ = ð2 on the spin bundle. On the other hand, the Levi-Civita connection extends to the spin bundle to yield a differential operator As in the case of Riemannian manifolds, let .
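The displayed formulas in the Riemannian-geometry paragraph above were lost in extraction; the standard statements they refer to are as follows (with the degrees as usually given).

```latex
% delta is the formal adjoint of d: for any p-form alpha and (p+1)-form beta
\langle d\alpha, \beta \rangle = \langle \alpha, \delta\beta \rangle .

% Form (Hodge) Laplacian, the Levi-Civita connection on p-forms,
% and the Bochner (connection, or rough) Laplacian
\Delta = d\delta + \delta d, \qquad
\nabla : \Omega^{p}M \to \Omega^{p}M \otimes T^{*}M, \qquad
\Delta' = \nabla^{*}\nabla .

% Weitzenböck formula: the two Laplacians differ by a zeroth-order curvature term
\Delta = \nabla^{*}\nabla + A .
```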
https://en.wikipedia.org/wiki/Windows%20NT%20processor%20scheduling
Windows NT processor scheduling refers to the process by which Windows NT determines which job (task) should be run on the computer processor at which time. Without scheduling, the processor would give attention to jobs based on when they arrived in the queue, which is usually not optimal. As part of scheduling, the system assigns a priority level to each process running on the machine. When two processes request service at the same time, the processor performs the job of the one with the higher priority. There are six named priority levels: Realtime, High, Above Normal, Normal, Below Normal, and Low. Each named level corresponds to a range of numeric priority values. Applications start at a base priority level of eight. The system dynamically adjusts the priority level to give all applications access to the processor. Priority levels 0–15 are used by dynamic applications, while priority levels 16–31 are reserved for real-time applications. Affinity In a multiprocessing environment with more than one logical processor (i.e. multiple cores or hyperthreading), more than one task can be running at the same time. However, a process or a thread can be set to run on only a subset of the available logical processors. The Windows Task Manager utility offers a user interface for this at the process level.
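A minimal sketch of adjusting a process's priority class and processor affinity programmatically, assuming the third-party psutil package (the Windows-specific priority constant used below is only defined on Windows builds of psutil); this mirrors what Task Manager exposes interactively.

```python
import psutil

proc = psutil.Process()  # the current process

# Raise the priority class (maps onto the named NT priority levels).
proc.nice(psutil.HIGH_PRIORITY_CLASS)

# Restrict the process to logical processors 0 and 1 (processor affinity).
proc.cpu_affinity([0, 1])

print(proc.nice(), proc.cpu_affinity())
```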
https://en.wikipedia.org/wiki/Behavioral%20enrichment
Behavioral enrichment is an animal husbandry principle that seeks to enhance the quality of captive animal care by identifying and providing the environmental stimuli necessary for optimal psychological and physiological well-being. Enrichment can either be active or passive, depending on whether it requires direct contact between the animal and the enrichment. A variety of enrichment techniques are used to create desired outcomes similar to an animal's individual and species' history. Each of the techniques used is intended to stimulate the animal's senses similarly to how they would be activated in the wild. Provided enrichment may be seen in the form of auditory, olfactory, habitat factors, food, research projects, training, and objects. Purpose Environmental enrichment can improve the overall welfare of animals in captivity and create a habitat similar to what they would experience in their wild environment. It aims to maintain an animal's physical and psychological health by increasing the range or number of species-specific behaviors, increasing positive interaction with the captive environment, preventing or reducing the frequency of abnormal behaviors, such as stereotypies, and increasing the individual's ability to cope with the challenges of captivity. Stereotypies are seen in captive animals due to stress and boredom. This includes pacing, self-harm, over-grooming, head-weaving, etc. Environmental enrichment can be offered to any animal in captivity, including: Animals in zoos and related facilities Animals in sanctuaries Animals in shelters and adoption centers Animals used for research Animals used for companionship, e.g. dogs, cats, rabbits, etc. Environmental enrichment can be beneficial to a wide range of vertebrates and invertebrates such as land mammals, marine mammals, and amphibians. In the United States, specific regulations (Animal Welfare Act of 1966) must be followed for enrichment plans in order to guarantee, regulate, and provide ap
https://en.wikipedia.org/wiki/Vampire%20by%20Night
The Vampire by Night (Nina Price) is a fictional character that appears in comic books published by Marvel Comics. She is the niece to Jack Russell and has the ability to shapeshift into either a werewolf or a vampiress between dusk and dawn. Publication history She first appeared in Amazing Fantasy #10 (September 2005) and was created by writer Jeff Parker and artist Federica Manfredi. Fictional character biography Through her mother's bloodline, Nina became part of a long family curse. This curse had originated with Nina's 18th-century ancestor Grigori, who had been tainted by the Darkhold, a grimoire of black magic, and was subsequently bitten by a werewolf who served Dracula. Because of this, all the descendants of Grigori were cursed to transform into werewolves upon their 18th birthday. Nina's mother, Lissa Russell, was a victim of the curse until an altercation with a sorcerer named Doctor Glitternight altered her standing and she was freed from becoming a werewolf. Lissa then married a wealthy business tycoon named Mr. Price and gave birth to Nina. Although her mother was able to alter the curse's path against her, the curse continued its course to Nina. At some point Nina was attacked and bitten by a vampire, who attempted to turn her into one of the undead. This attack altered the curse for Nina somewhat and caused her to exhibit traits of both werewolves and vampires. By day Nina Price seems to be a normal human, but once the sun goes down, she turns into a vampire. Due to the werewolf curse, Nina also transformed into a white wolf during the full moon, and she used her father's money and status to reserve a zoo or park in which she would cage herself until the full moon was over and she was no longer an animal (so she would not hurt innocents). However, Nina had no problem using her supernatural abilities to harm criminals, as she found them acceptable targets for her vampiric thirsts, feeding on rapists and thieves once the sun went down. Falling in
https://en.wikipedia.org/wiki/Nuclear%20astrophysics
Nuclear astrophysics is an interdisciplinary part of both nuclear physics and astrophysics, involving close collaboration among researchers in various subfields of each of these fields. This includes, notably, nuclear reactions and their rates as they occur in cosmic environments, and modeling of astrophysical objects where these nuclear reactions may occur, but also considerations of cosmic evolution of isotopic and elemental composition (often called chemical evolution). Constraints from observations involve multiple messengers, all across the electromagnetic spectrum (nuclear gamma-rays, X-rays, optical, and radio/sub-mm astronomy), as well as isotopic measurements of solar-system materials such as meteorites and their stardust inclusions, cosmic rays, material deposits on Earth and Moon). Nuclear physics experiments address stability (i.e., lifetimes and masses) for atomic nuclei well beyond the regime of stable nuclides into the realm of radioactive/unstable nuclei, almost to the limits of bound nuclei (the drip lines), and under high density (up to neutron star matter) and high temperature (plasma temperatures up to ). Theories and simulations are essential parts herein, as cosmic nuclear reaction environments cannot be realized, but at best partially approximated by experiments. In general terms, nuclear astrophysics aims to understand the origin of the chemical elements and isotopes, and the role of nuclear energy generation, in cosmic sources such as stars, supernovae, novae, and violent binary-star interactions. History In the 1940s, geologist Hans Suess speculated that the regularity that was observed in the abundances of elements may be related to structural properties of the atomic nucleus. These considerations were seeded by the discovery of radioactivity by Becquerel in 1896 as an aside of advances in chemistry which aimed at production of gold. This remarkable possibility for transformation of matter created much excitement among physicists for t
https://en.wikipedia.org/wiki/WMBQ-CD
WMBQ-CD (channel 46) is a class A television station in New York City, affiliated with First Nations Experience. Owned by The WNET Group, it is sister to the city's two PBS member stations, Newark, New Jersey–licensed WNET (channel 13) and Garden City, New York–licensed WLIW (channel 21), as well as WNDT-CD (channel 14). Under a channel sharing arrangement, WMBQ-CD shares transmitter facilities with WLIW at One World Trade Center. Despite WMBQ-CD legally holding a low-power class A license, it transmits using WLIW's full-power spectrum. This ensures complete reception across the New York City television market. History As W22BM and WLBX-LP A construction permit for UHF channel 22 in Cranford, New Jersey, was granted to Craig Fox with alpha-numeric call-sign W22BM on February 11, 1993; it signed on in March 1997. The call-sign was changed to WLBX-LP on April 24, 1998. WLBX-LP was previously an affiliate of The Box until that network's acquisition by Viacom in 2001; the station then carried MTV2 like many other former Box stations. On September 11, 2001, WLBX-LP aired footage from CNN and TechTV. As WMBQ The call sign was changed to WMBQ-CA in 2004. In 2006, Renard Communications Corp. (the Craig Fox-controlled company that by then held the license) began transitioning to a new studio and transmitter they were constructing in Manhattan. Due to this change, WMBQ-CA was displaced from channel 22 to channel 46, and the city of license was changed from Cranford to New York City. On August 17, 2007, Renard Communications Corp. announced that would sell its three stations to Equity Media Holdings for $8 million. However, the transaction had a closing deadline set for June 1, 2008, and either party could cancel the sale if it were not completed by then. The sale had not been consummated by June 19 of that year, as the company was making budget cuts elsewhere; and later that year, Equity Media Holdings entered Chapter 11 bankruptcy. On January 3, 2008, WMBQ-CA went dark,
https://en.wikipedia.org/wiki/Physiologia%20Plantarum
Physiologia Plantarum is a peer-reviewed scientific journal published by Wiley-Blackwell on behalf of the Scandinavian Plant Physiology Society. The journal publishes papers on all aspects of all organizational levels of experimental plant biology ranging from biophysics, biochemistry, molecular and cell biology to ecophysiology. According to the Journal Citation Reports, the journal has a 2021 impact factor of 5.081, ranking it 33rd out of 235 journals in the category "Plant Sciences".
https://en.wikipedia.org/wiki/%CE%92-Pinene
β-Pinene is a monoterpene, an organic compound found in plants. It is one of the two isomers of pinene, the other being α-pinene. It is a colorless liquid that is soluble in alcohol but not in water. It has a woody-green pine-like smell. It is one of the most abundant compounds released by forest trees. When it is oxidized in air, allylic products of the pinocarveol and myrtenol families prevail. Sources Many plants from many botanical families contain the compound, including: Cuminum cyminum Humulus lupulus Pinus pinaster Clausena anisata Cannabis sativa Uses
https://en.wikipedia.org/wiki/Inhabited%20set
In mathematics, a set is inhabited if there exists an element . In classical mathematics, the property of being inhabited is equivalent to being non-empty. However, this equivalence is not valid in constructive or intuitionistic logic, and so this separate terminology is mostly used in the set theory of constructive mathematics. Definition In the formal language of first-order logic, set has the property of being if Related definitions A set has the property of being if , or equivalently . Here stands for the negation . A set is if it is not empty, that is, if , or equivalently . Theorems Modus ponens implies , and taking any a false proposition for establishes that is always valid. Hence, any inhabited set is provably also non-empty. Discussion In constructive mathematics, the double-negation elimination principle is not automatically valid. In particular, an existence statement is generally stronger than its double-negated form. The latter merely expresses that the existence cannot be ruled out, in the strong sense that it cannot consistently be negated. In a constructive reading, in order for to hold for some formula , it is necessary for a specific value of satisfying to be constructed or known. Likewise, the negation of a universal quantified statement is in general weaker than an existential quantification of a negated statement. In turn, a set may be proven to be non-empty without one being able to prove it is inhabited. Examples Sets such as or are inhabited, as e.g. witnessed by . The set is empty and thus not inhabited. Naturally, the example section thus focuses on non-empty sets that are not provably inhabited. It is easy to give examples for any simple set theoretical property, because logical statements can always be expressed as set theoretical ones, using an axiom of separation. For example, with a subset defined as , the proposition may always equivalently be stated as . The double-negated existence claim of an entity wit
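The first-order formulas in the definitions above were lost in extraction; for a set A, the standard statements are the following.

```latex
% A is inhabited:
\exists x\, (x \in A)

% A is empty, and the equivalent negative form:
\forall x\, (x \notin A) \quad\Longleftrightarrow\quad \neg \exists x\, (x \in A)

% A is non-empty (i.e. not empty):
\neg \forall x\, (x \notin A) \quad\Longleftrightarrow\quad \neg\neg \exists x\, (x \in A)

% Since P \to \neg\neg P holds intuitionistically, every inhabited set is non-empty;
% the converse direction requires double-negation elimination.
```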
https://en.wikipedia.org/wiki/Neuroconstructivism
Neuroconstructivism is a theory that states that phylogenetic developmental processes such as gene–gene interaction, gene–environment interaction and, crucially, ontogeny all play a vital role in how the brain progressively sculpts itself and how it gradually becomes specialized over developmental time. Supporters of neuroconstructivism, such as Annette Karmiloff-Smith, argue against innate modularity of mind, the notion that a brain is composed of innate neural structures or modules which have distinct evolutionarily established functions. Instead, emphasis is put on innate domain relevant biases. These biases are understood as aiding learning and directing attention. Module-like structures are therefore the product of both experience and these innate biases. Neuroconstructivism can therefore be seen as a bridge between Jerry Fodor's psychological nativism and Jean Piaget's theory of cognitive development. Development vs. innate modularity Neuroconstructivism has arisen as a direct rebuttal against psychologists who argue for an innate modularity of the brain. Modularity of the brain would require a pre-specified pattern of synaptic connectivity within the cortical microcircuitry of a specific neural system. Instead, Annette Karmiloff-Smith has suggested that the microconnectivity of the brain emerges from the gradual process of ontogenetic development. Proponents of the modular theory might have been misled by the seemingly normal performances of individuals who exhibit a learning disability on tests. While it may appear that cognitive functioning may be impaired in only specified areas, this may be a functional flaw in the test. Many standardized tasks used to assess the extent of damage within the brain do not measure underlying causes, instead only showing the static end-state of complex processes. An alternative explanation to account for these normal test scores would be the ability of the individual to compensate using other brain regions that are not nor
https://en.wikipedia.org/wiki/Pair%20distribution%20function
The pair distribution function describes the distribution of distances between pairs of particles contained within a given volume. Mathematically, if a and b are two particles, the pair distribution function of b with respect to a, denoted by is the probability of finding the particle b at distance from a, with a taken as the origin of coordinates. Overview The pair distribution function is used to describe the distribution of objects within a medium (for example, oranges in a crate or nitrogen molecules in a gas cylinder). If the medium is homogeneous (i.e. every spatial location has identical properties), then there is an equal probability density for finding an object at any position : , where is the volume of the container. On the other hand, the likelihood of finding pairs of objects at given positions (i.e. the two-body probability density) is not uniform. For example, pairs of hard balls must be separated by at least the diameter of a ball. The pair distribution function is obtained by scaling the two-body probability density function by the total number of objects and the size of the container: . In the common case where the number of objects in the container is large, this simplifies to give: . Simple models and general properties The simplest possible pair distribution function assumes that all object locations are mutually independent, giving: , where is the separation between a pair of objects. However, this is inaccurate in the case of hard objects as discussed above, because it does not account for the minimum separation required between objects. The hole-correction (HC) approximation provides a better model: where is the diameter of one of the objects. Although the HC approximation gives a reasonable description of sparsely packed objects, it breaks down for dense packing. This may be illustrated by considering a box completely filled by identical hard balls so that each ball touches its neighbours. In this case, every pair of bal
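For reference, the simple models described above (uniform one-body density, mutually independent positions, and the hole-correction approximation for hard objects of diameter D) correspond to the following expressions.

```latex
% Homogeneous one-body probability density in a container of volume V
p(\mathbf{r}) = \frac{1}{V}

% Mutually independent positions (large-N limit)
g(r) = 1

% Hole-correction (HC) approximation for hard objects of diameter D
g(r) =
\begin{cases}
0, & r < D \\
1, & r \geq D
\end{cases}
```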
https://en.wikipedia.org/wiki/Alstr%C3%B6m%20syndrome
Alström syndrome (AS), also called Alström–Hallgren syndrome, is a very rare autosomal recessive genetic disorder characterised by childhood obesity and multiple organ dysfunction. Symptoms include early-onset type 2 diabetes, cone-rod dystrophy resulting in blindness, sensorineural hearing loss and dilated cardiomyopathy. Endocrine disorders typically also occur, such as hypergonadotrophic hypogonadism and hypothyroidism, as well as acanthosis nigricans resulting from hyperinsulinemia. Developmental delay is seen in almost half of people with Alström syndrome. It is caused by mutations in the gene ALMS1, which is involved in the formation of cellular cilia, making Alström syndrome a ciliopathy. At least 239 disease-causing mutations in ALMS1 have been described . Alström syndrome is sometimes confused with Bardet–Biedl syndrome, another ciliopathy which has similar symptoms, but Bardet–Biedl syndrome tends to have later onset in its symptoms, includes polydactyly and is caused by mutations in BBS genes. There is no cure for Alström syndrome. Treatments target the individual symptoms and can include diet, corrective lenses, hearing aids, medications for diabetes and heart issues and dialysis and transplantation in the case of kidney or liver failure. Prognosis varies depending on the specific combination of symptoms, but individuals with Alström syndrome rarely live beyond 50. At least 900 cases have been reported. Prevalence is fewer than 1 in 1,000,000 individuals in the general population, but the disorder is much more common in Acadians, both in Nova Scotia and Louisiana. It was first described by Swedish psychiatrist Carl-Henry Alström and his three associates, B. Hallgren, I. B. Nilsson and H. Asander, in 1959. Signs and symptoms Symptoms for Alström syndrome generally appear during infancy with great variability in age. Some of the symptoms include: Heart failure (dilated cardiomyopathy) in over 60% of cases, usually within the first few weeks after birt
https://en.wikipedia.org/wiki/Parachlamydiaceae
Parachlamydiaceae is a family of bacteria in the order Chlamydiales. Species in this family have a Chlamydia–like cycle of replication and their ribosomal RNA genes are 80–90% identical to ribosomal genes in the Chlamydiaceae. The Parachlamydiaceae naturally infect amoebae and can be grown in cultured Vero cells. The Parachlamydiaceae are not recognized by monoclonal antibodies that detect Chlamydiaceae lipopolysaccharide. Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI) Unassigned species: "Ca. Mesochlamydia elodeae" Corsaro et al. 2012 "Ca. Metachlamydia lacustris" Corsaro et al. 2010 Isolated Endosymbionts include: Hall's coccus P9 UV-7 endosymbiont of Acanthamoeba sp. TUME1 endosymbiont of Acanthamoeba sp. UWC22 endosymbiont of Acanthamoeba sp. UWE1 Uncultured lineages include: Neochlamydia turtle type 1 environmental Neochlamydia corvenA4 cvC15 cvC7 cvE5 Parachlamydia acanthamoebae has variable Gram staining characteristics and is mesophilic. Trophozoites of Acanthamoeba hosting these strains were isolated from asymptomatic women in Germany and also in an outbreak of humidifier fever (‘Hall’s coccus’) in Vermont USA. Four patients from Nova Scotia whose sera recognized Hall's coccus did not show serological cross-reaction with antigens from the Chlamydiaceae. Notes Metachlamydia lacustris and Protochlamydia species were found at the National Center for Biotechnology Information (NCBI) but have no standing with the Bacteriological Code (1990 and subsequent Revision) as detailed by List of Prokaryotic names with Standing in Nomenclature (LPSN) as a result of the following reasons: • No pure culture isolated or available for prokaryotes. • Not validly published because the effective publication only documents deposit of the type strain in a single recognized culture collection. • Not approved and published by the International
https://en.wikipedia.org/wiki/Auxiliary-field%20Monte%20Carlo
Auxiliary-field Monte Carlo is a method that allows the calculation, by use of Monte Carlo techniques, of averages of operators in many-body quantum mechanical (Blankenbecler 1981, Ceperley 1977) or classical problems (Baeurle 2004, Baeurle 2003, Baeurle 2002a). Reweighting procedure and numerical sign problem The distinctive ingredient of "auxiliary-field Monte Carlo" is the fact that the interactions are decoupled by means of the application of the Hubbard–Stratonovich transformation, which permits the reformulation of many-body theory in terms of a scalar auxiliary-field representation. This reduces the many-body problem to the calculation of a sum or integral over all possible auxiliary-field configurations. In this sense, there is a trade-off: instead of dealing with one very complicated many-body problem, one faces the calculation of an infinite number of simple external-field problems. It is here, as in other related methods, that Monte Carlo enters the game in the guise of importance sampling: the large sum over auxiliary-field configurations is performed by sampling over the most important ones, with a certain probability. In classical statistical physics, this probability is usually given by the (positive semi-definite) Boltzmann factor. Similar factors arise also in quantum field theories; however, these can have indefinite sign (especially in the case of Fermions) or even be complex-valued, which precludes their direct interpretation as probabilities. In these cases, one has to resort to a reweighting procedure (i.e., interpret the absolute value as probability and multiply the sign or phase to the observable) to get a strictly positive reference distribution suitable for Monte Carlo sampling. However, it is well known that, in specific parameter ranges of the model under consideration, the oscillatory nature of the weight function can lead to a bad statistical convergence of the numerical integration procedure. The problem is known as the numerical si
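The reweighting procedure mentioned above is usually written as follows: observables are averaged with respect to the absolute value of the weight over auxiliary-field configurations σ, and the sign (or phase) of the weight is carried along as part of the observable.

```latex
\langle O \rangle
= \frac{\sum_{\sigma} w(\sigma)\, O(\sigma)}{\sum_{\sigma} w(\sigma)}
= \frac{\big\langle O\, \operatorname{sgn}(w) \big\rangle_{|w|}}
       {\big\langle \operatorname{sgn}(w) \big\rangle_{|w|}},
\qquad
\langle \cdot \rangle_{|w|} \ \text{denotes sampling with}\ P(\sigma) \propto |w(\sigma)| .
```

When the average sign in the denominator becomes exponentially small, the statistical error of the ratio blows up, which is the numerical sign problem described in the text.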
https://en.wikipedia.org/wiki/EnergyGuide
The EnergyGuide provides consumers in the United States information about the energy consumption, efficiency, and operating costs of appliances and consumer products. Clothes washers, dishwashers, refrigerators, freezers, televisions, water heaters, window air conditioners, mini split air conditioners, central air conditioners, furnaces, boilers, heat pumps, and other electronic appliances are all required to have EnergyGuide labels. The label must show the model number, the size, key features, and display largely a graph showing the annual operating cost in range with similar models, and the estimated yearly energy cost. Appliance energy labeling was mandated by the Energy Policy and Conservation Act of 1975, which directed the Federal Trade Commission to "develop and administer a mandatory energy labeling program covering major appliances, equipment, and lighting." The first appliance labeling rule was established in 1979 and all products were required to carry the label starting in 1980. Energy Star is a similar labeling program, but requires more stringent efficiency standards for an appliance to become qualified, and is not a required program, but rather a voluntary one. See also EnerGuide – A similar label used in Canada which also includes a whole-house evaluation European Union energy label – A similar label used in European Union Energy rating label – A similar label in Australia & New Zealand
https://en.wikipedia.org/wiki/Distributed%20design%20patterns
In software engineering, a distributed design pattern is a design pattern focused on distributed computing problems. Classification Distributed design patterns can be divided into several groups: Distributed communication patterns Security and reliability patterns Event driven patterns Examples MapReduce Bulk synchronous parallel Remote Session See also Software engineering List of software engineering topics
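As a toy illustration of the MapReduce pattern listed above (a single-process sketch, not a distributed implementation), the classic word-count example can be written as a map phase, a shuffle that groups intermediate keys, and a reduce phase.

```python
from collections import defaultdict

def map_phase(documents):
    # Emit (key, value) pairs independently for each input record.
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def shuffle(pairs):
    # Group all values by key, as a framework would between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Combine each key's values into a final result.
    return {key: sum(values) for key, values in groups.items()}

docs = ["the quick brown fox", "the lazy dog", "the fox"]
print(reduce_phase(shuffle(map_phase(docs))))
# {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```

In a real distributed setting the map and reduce phases run on many machines and the shuffle moves data across the network; the pattern itself is just this division of work.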
https://en.wikipedia.org/wiki/Central%20Bureau
The Central Bureau was one of two Allied signals intelligence (SIGINT) organisations in the South West Pacific area (SWPA) during World War II. Central Bureau was attached to the headquarters of the Supreme Commander, Southwest Pacific Area, General Douglas MacArthur. The role of the Bureau was to research and decrypt intercepted Imperial Japanese Army (land and air) traffic and work in close co-operation with other SIGINT centers in the United States, United Kingdom and India. Air activities included both army and navy air forces, as there was no independent Japanese air force. The other unit was the joint Royal Australian Navy/United States Navy Fleet Radio Unit, Melbourne (FRUMEL), which reported directly to CINCPAC (Admiral Chester Nimitz) in Hawaii and the Chief of Naval Operations (Admiral Ernest King) in Washington, D.C. Central Bureau is the precursor to the Defense Signals Bureau, which after a number of name changes is (from 2013) called the Australian Signals Directorate. Structure Central Bureau comprised: administrative personnel supply personnel cryptographic personnel cryptanalytic personnel interpreters translators a field section which included the intercept and communications personnel History Origins Beginning in January 1942, U.S. Navy stations in Hawaii (Station HYPO), Cavite/Corregidor (Station CAST) and OP-20-G (Station NEGAT, at Washington) began issuing formal intelligence decrypts far in advance of the U.S. Army or Central Bureau. General MacArthur had his own signals intelligence unit, Station 6, while he was in command in the Philippines before the start of the war, and was not fully dependent on the U.S. Navy for that type of information. However, most of the signals intelligence he received was from the U.S. Navy's Station CAST, originally at Cavite in the Manila Navy Yard, and evacuated to Corregidor Island after Japanese successes. Prior to the war, it had to be sent by courier, which caused some delay and annoyance. Ge
https://en.wikipedia.org/wiki/Waddlia
Waddlia is a genus of bacteria in its own family, Waddliaceae. Species in this genus have a Chlamydia-like cycle of replication and their ribosomal RNA genes are 80–90% identical to ribosomal genes in the Chlamydiaceae. The type species is Waddlia chondrophila strain WSU 86-1044T, which was isolated from the tissues of a first-trimester aborted bovine fetus. Isolated in 1986, this species was originally characterized as a Rickettsia. DNA sequencing of the ribosomal genes corrected the characterization. Another W. chondrophila strain, 2032/99, was found along with Neospora caninum in a septic stillborn calf. Waddlia chondrophila may be linked to miscarriages in pregnant women. A study found Waddlia chondrophila present in the placenta and vagina of 32 women, 10 of whom had miscarriages. It is hypothesized that the bacterium grows in placental cells, damaging the placenta. The species Waddlia malaysiensis G817 has been proposed. W. malaysiensis was identified in the urine of Malaysian fruit bats (Eonycteris spelaea). See also List of bacterial orders List of bacteria genera
https://en.wikipedia.org/wiki/Texas%20Instruments%20DaVinci
The Texas Instruments DaVinci is a family of system on a chip processors that are primarily used in embedded video and vision applications. Many processors in the family combine a DSP core based on the TMS320 C6000 VLIW DSP family and an ARM CPU core into a single system on chip. By using both a general-purpose processor and a DSP, the control and media portions can both be executed by separate processors. Later chips in the family included DSP-only and ARM-only processors. All the later chips integrate several accelerators to offload commodity application specific processing from the processor cores to dedicated accelerators. Most notable among these are HDVICP, an H.264, SVC and MPEG-4 compression and decompression engine, ISP, an accelerator engine with methods for enhancing video primarily input from camera sensors and an OSD engine for display acceleration. Some of the newest processors also integrate a vision coprocessor in the SoC. History DaVinci processors were introduced at a time when embedded processors with homogeneous processor cores were widely used. These processors were based either on cores that could do signal processing optimally, like DSPs or GPUs or based on cores that could do general purpose processing optimally, like, powerPC, ARM, and StrongARM. By using both a general-purpose processor and a DSP on a single chip, the control and media portions can both be executed by processors that excel at their respective tasks. By providing a bundled offering with system and application software, evaluation modules and debug tools based on Code Composer Studio, TI DaVinci processors were intended to win over a broader set of customers looking to add video feature to their electronic products. TI announced its first DaVinci branded video processors, the DM6443 and DM6446, on 5 December 2005. A year later, TI followed up with DSP only versions of the chips in the family, called DM643x (DM6431, DM6433, DM6435, DM6437). On January 15, 2007, TI announc
https://en.wikipedia.org/wiki/SGPIO
Serial general purpose input/output (SGPIO) is a four-signal (or four-wire) bus used between a host bus adapter (HBA) and a backplane. Of the four signals, three are driven by the HBA and one by the backplane. Typically, the HBA is a storage controller located inside a server, desktop, rack or workstation computer that interfaces with hard disk drives or solid state drives to store and retrieve data. It is considered an extension of the general-purpose input/output (GPIO) concept. The SGPIO specification is maintained by the Small Form Factor Committee in the SFF-8485 standard. The International Blinking Pattern Interpretation indicates how SGPIO signals are interpreted into blinking light-emitting diodes (LEDs) on disk arrays and storage back-planes. History SGPIO was developed as an engineering collaboration between American Megatrends Inc, at the time makers of back-planes, and LSI-Logic in 2004. SGPIO was later published by the SFF committee as specification SFF-8485. Host bus adapters The SGPIO signal consists of 4 electrical signals; it typically originates from a host bus adapter (HBA). iPass connectors (Usually SFF-8087 or SFF-8484) carry both SAS/SATA electrical connections between the HBA and the hard drives as well as the 4 SGPIO signals. Backplanes with SGPIO bus interface A backplane is a circuit board with connectors and power circuitry into which hard drives are attached; they can have multiple slots, each of which can be populated with a hard drive. Typically the back-plane is equipped with LEDs which by their color and activity, indicate the slot's status; typically, a slot's LED will emit a particular color or blink pattern to indicate its current status. SGPIO interpretation and LED blinking patterns Although many hardware vendors define their own proprietary LED blinking pattern, the common standard for SGPIO interpretation and LED blinking pattern can be found in the IBPI specification. On back-planes, vendors use typically 2 or 3 LED
https://en.wikipedia.org/wiki/Flora%20Brasiliensis
Flora Brasiliensis is a book published between 1840 and 1906 by the editors Carl Friedrich Philipp von Martius, August Wilhelm Eichler, Ignatz Urban and many others. It contains taxonomic treatments of 22,767 species, mostly Brazilian angiosperms. The work was begun by Stephan Endlicher and Martius. Von Martius completed 46 of the 130 fascicles before his death in 1868, with the monograph being completed in 1906. It was published by the Missouri Botanical Gardens. Book's structure The Flora's volumes are an attempt to systematically categorise the known plants of the region. 15 volumes 40 parts 10,367 pages See also Historia naturalis palmarum
https://en.wikipedia.org/wiki/Allergy%20to%20cats
Allergies to cats are one of the most common allergies among humans. Among the eight known cat allergens, the most prominent allergen is secretoglobin Fel d 1, which is produced in the anal glands, salivary glands, and, mainly, in sebaceous glands of cats, and is ubiquitous in the United States, even in households without cats. Allergic symptoms associated with cats include coughing, wheezing, chest tightening, itching, nasal congestion, rash, watery eyes, sneezing, chapped lips, and similar symptoms. In worst-case scenarios, allergies to cats can develop into more life-threatening conditions such as rhinitis and mild to severe forms of asthma. Despite these symptoms, there are many types of solutions to mitigate the allergic effects of cats, including medications, vaccines, and home remedies. Hypoallergenic cats are another solution for individuals who want pets without the allergic consequences. Furthermore, prospective pet owners can reduce allergic reactions by selecting female cats, which are associated with a lower production of allergens. Cat allergens Eight cat allergens have been recognized by the World Health Organization/International Union of Immunological Societies (WHO/IUIS) Allergen Nomenclature Sub-Committee. Fel d 1 is the most prominent cat allergen, accounting for 96% of human cat allergies. The remaining cat allergens are Fel d 2–8; of these, Fel d 4, a major urinary protein also found in the saliva of cats, is the one that most commonly sensitizes humans. All cats produce Fel d 1, including hypoallergenic cats. The main method of transmission is through a cat's saliva or dander, which adheres to clothing. A 2004 study found that 63% of people allergic to cats had antibodies against Fel d 4. Fel d 1 Fel d 1 is the most dominant cat allergen. It is part of the secretoglobin family, which are proteins found only in mammals and birds. Fel d 1 is primarily secreted through the sebaceous glands and can be found on the skin and
https://en.wikipedia.org/wiki/Scale%20space%20implementation
In the areas of computer vision, image analysis and signal processing, the notion of scale-space representation is used for processing measurement data at multiple scales, and specifically to enhance or suppress image features over different ranges of scale (see the article on scale space). A special type of scale-space representation is provided by the Gaussian scale space, where the image data in N dimensions is subjected to smoothing by Gaussian convolution. Most of the theory for Gaussian scale space deals with continuous images, whereas, when implementing this theory, one has to face the fact that most measurement data are discrete. Hence, the theoretical problem arises concerning how to discretize the continuous theory while either preserving or well approximating the desirable theoretical properties that lead to the choice of the Gaussian kernel (see the article on scale-space axioms). This article describes basic approaches for this that have been developed in the literature. Statement of the problem The Gaussian scale-space representation L(x; t) of an N-dimensional continuous signal fC(x) is obtained by convolving fC with an N-dimensional Gaussian kernel g(x; t); in other words, L(·; t) = fC * g(·; t), so the signal is smoothed with Gaussian kernels of increasing width as the scale parameter t grows. However, for implementation, this definition is impractical, since it is continuous. When applying the scale space concept to a discrete signal fD, different approaches can be taken. This article is a brief summary of some of the most frequently used methods. Separability Using the separability property of the Gaussian kernel the N-dimensional convolution operation can be decomposed into a set of separable smoothing steps with a one-dimensional Gaussian kernel G along each dimension, where the standard deviation of the Gaussian σ is related to the scale parameter t according to t = σ². Separability will be assumed in all that follows, even when the kernel is not exactly Gaussian, since separation of the dimensions is the most practical way to implement multidimensional smoothing, especia
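As an illustration of the separability idea described above, the following sketch (not taken from the article; the function names and truncation radius are choices made here) smooths a 2-D image by applying a sampled, truncated 1-D Gaussian kernel along each axis in turn, with the scale parameter t treated as the kernel variance (t = σ²):

# Illustrative sketch: separable Gaussian smoothing of a 2-D image using a
# sampled, truncated 1-D Gaussian kernel along each axis (t = sigma^2).
import numpy as np

def gaussian_kernel_1d(t, radius_in_sigmas=4.0):
    """Sampled 1-D Gaussian with variance t, truncated at +/- radius_in_sigmas * sigma."""
    sigma = np.sqrt(t)
    radius = max(1, int(np.ceil(radius_in_sigmas * sigma)))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * t))
    return kernel / kernel.sum()            # normalize so a constant signal is preserved

def smooth_separable(image, t):
    """Approximate Gaussian scale-space smoothing L(., t) of a 2-D signal."""
    kernel = gaussian_kernel_1d(t)
    out = image.astype(float)
    for axis in range(out.ndim):            # one 1-D convolution per dimension
        out = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"),
                                  axis, out)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = rng.random((64, 64))
    L = smooth_separable(f, t=4.0)          # scale t = sigma^2 = 4, i.e. sigma = 2
    print(L.shape)                          # (64, 64): same shape, smoothed values
    print(float(f.var()), float(L.var()))   # variance drops after smoothing

Sampling and truncating the continuous Gaussian in this way is only one possible discretization, which is exactly the kind of choice the statement of the problem above is concerned with.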
https://en.wikipedia.org/wiki/Bone%20exercise%20monitor
A bone exercise monitor is an instrument which is used to measure and analyze the bone strengthening qualities of physical activity and to help in prevention of osteoporosis with physical activity and exercise. The bone strengthening quality of physical exercise is very difficult to assess and monitor due to the great variability of intensity of exercise modes and individual differences in exercise patterns. Bone exercise monitors utilize accelerometers for measurement, and the collected data is analyzed with a specially developed algorithm. Bone is stimulated by acceleration and deceleration forces, also known as G-forces, that cause impacts on the body; these impacts promote bone growth by adding new bone and by improving its architectural strength. The Bone Exercise Monitor is worn on the hip during daily chores or during exercise. The monitor measures the accelerations and decelerations of the body and analyzes the results. The daily (and weekly) achieved bone exercise is shown on the monitor's display.
https://en.wikipedia.org/wiki/Polymelia
Polymelia is a birth defect in which an affected individual has more than the usual number of limbs. It is a type of dysmelia. In humans and most land-dwelling vertebrates, this means having five or more limbs. The extra limb is most commonly shrunken and/or deformed. The term is from Greek πολυ- "many", μέλεα "limbs". Sometimes an embryo started as conjoined twins, but one twin degenerated completely except for one or more limbs, which end up attached to the other twin. Sometimes small extra legs between the normal legs are caused by the body axis forking in the dipygus condition. Notomelia (from Greek for "back-limb-condition") is polymelia where the extra limb is rooted along or near the midline of the back. Notomelia has been reported in Angus cattle often enough to be of concern to farmers. Cephalomelia (from Greek for "head-limb-condition") is polymelia where the extra limb is rooted on the head. Origin Tetrapod legs evolved in the Devonian or Carboniferous geological period from the pectoral fins and pelvic fins of their crossopterygian fish ancestors. Fish fins develop along a "fin line", which runs from the back of the head along the midline of the back, round the end of the tail, and forwards along the underside of the tail, and at the cloaca splits into left and right fin lines which run forwards to the gills. In the paired ventral part of the fin line, normally only the pectoral and pelvic fins survive (but the Devonian acanthodian fish Mesacanthus developed a third pair of paired fins); but along the non-paired parts of the fin line, other fins develop. In tetrapods, only the four paired fins normally persisted, and became the four legs. Notomelia and cephalomelia are atavistic reappearances of dorsal fins. Some other cases of polymelia are extra development along the paired part of the fin lines, or along the ventral posterior non-paired part of the fin line. Notable cases Humans 1995: Somali baby girl born with three left arms. In March 2006
https://en.wikipedia.org/wiki/DCMU
DCMU (3-(3,4-dichlorophenyl)-1,1-dimethylurea) is an algicide and herbicide of the arylurea class that inhibits photosynthesis. It was introduced by Bayer in 1954 under the trade name of Diuron. History In 1952, chemists at E. I. du Pont de Nemours and Company patented a series of aryl urea derivatives as herbicides. Several compounds covered by this patent were commercialized as herbicides: monuron (4-chlorophenyl), chlortoluron (3-chloro-4-methylphenyl) and DCMU, the (3,4-dichlorophenyl) example. Subsequently, over thirty related urea analogs with the same mechanism of action reached the market worldwide. Synthesis As described in the du Pont patent, the starting material is a substituted aryl amine, an aniline, which is treated with phosgene to form its isocyanate derivative. This is subsequently reacted with dimethylamine to give the final product. Aryl-NH2 + COCl2 → Aryl-NCO Aryl-NCO + NH(CH3)2 → Aryl-NHCON(CH3)2 Mechanism of action DCMU is a very specific and sensitive inhibitor of photosynthesis. It blocks the QB plastoquinone binding site of photosystem II, disallowing the electron flow from photosystem II to plastoquinone. This interrupts the photosynthetic electron transport chain in photosynthesis and thus reduces the ability of the plant to turn light energy into chemical energy (ATP and reductant potential). DCMU only blocks electron flow from photosystem II, it has no effect on photosystem I or other reactions in photosynthesis, such as light absorption or carbon fixation in the Calvin cycle. However, because it blocks electrons produced from water oxidation in PS II from entering the plastoquinone pool, "linear" photosynthesis is effectively shut down, as there are no available electrons to exit the photosynthetic electron flow cycle for reduction of NADP+ to NADPH. In fact, it was found that DCMU not only does not inhibit the cyclic photosynthetic pathway, but, under certain circumstances, actually stimulates it. Because of these effects, DCMU
https://en.wikipedia.org/wiki/Primary%20interatrial%20foramen
In the developing heart, the atria are initially open to each other, with the opening known as the primary interatrial foramen or ostium primum (or interatrial foramen primum). The foramen lies beneath the edge of septum primum and the endocardial cushions. It progressively decreases in size as the septum grows downwards, and disappears with the formation of the atrial septum. Structure The foramen lies beneath the edge of septum primum and the endocardial cushions. It progressively decreases in size as the septum grows downwards, and disappears with the formation of the atrial septum. Closure The septum primum, which grows down to separate the primitive atrium into the left atrium and right atrium, grows in size over the course of heart development. The primary interatrial foramen is the gap between the septum primum and the septum intermedium, which gets progressively smaller until it closes. Clinical significance Failure of the septum primum to fuse with the endocardial cushion can lead to an ostium primum atrial septal defect. This is the second most common type of atrial septal defect and is commonly seen in Down syndrome. Typically this defect will cause a shunt to occur from the left atrium to the right atrium. Children born with this defect may be asymptomatic; however, over time pulmonary hypertension and the resulting hypertrophy of the right side of the heart will lead to a reversal of this shunt. This reversal is called Eisenmenger syndrome.
https://en.wikipedia.org/wiki/Septum%20spurium
During development of the heart, the orifice of the sinus venosus lies obliquely, and is guarded by two valves, the right and left venous valves; above the opening these unite with each other and are continuous with a fold named the septum spurium.
https://en.wikipedia.org/wiki/Spina%20vestibuli
Spina vestibuli, or vestibular spine, is a bony structure, in anatomical terms a 'mesenchymal condensation', which extends from the mediastinum of the heart and covers the leading edge of the primary atrial septum; its origin is outside of the heart. Below the opening of the orifice of the coronary sinus they fuse to form a triangular thickening — the spina vestibuli. It is believed to play a formative role in atrial septation, which is how the common atrium of the heart divides into two during gestation. Its lack of development is thought to contribute to the morphogenesis of atrioventricular septal defect (AVSD) in Down syndrome.
https://en.wikipedia.org/wiki/Valve%20of%20inferior%20vena%20cava
The valve of the inferior vena cava (eustachian valve) is a venous valve that lies at the junction of the inferior vena cava and right atrium. Development In prenatal development, the eustachian valve helps direct the flow of oxygen-rich blood through the right atrium into the left atrium and away from the right ventricle. Before birth, the fetal circulation directs oxygen-rich blood returning from the placenta to mix with blood from the hepatic veins in the inferior vena cava. Streaming this blood across the atrial septum via the foramen ovale increases the oxygen content of blood in the left atrium. This in turn increases the oxygen concentration of blood in the left ventricle, the aorta, the coronary circulation and the circulation of the developing brain. Following birth and separation from the placenta, the oxygen content in the inferior vena cava falls. With the onset of breathing, the left atrium receives oxygen-rich blood from the lungs via the pulmonary veins. As blood flow to the lungs increases, the amount of blood flow entering the left atrium increases. When the pressure in the left atrium exceeds the pressure in the right atrium, the foramen ovale begins to close and limits the blood flow between the left and right atrium. While the eustachian valve persists in adult life, it essentially does not have a specific function after the gestational period. Variation There is a large variability in size, shape, thickness, and texture of the persistent eustachian valve, and in the extent to which it encroaches on neighboring structures such as the atrial septum. At one end of the spectrum, the embryonic eustachian valve disappears completely or is represented only by a thin ridge. Most commonly, it is a crescentic fold of endocardium arising from the anterior rim of the IVC orifice. The lateral horn of the crescent tends to meet the lower end of the crista terminalis, while the medial horn joins the thebesian valve, a semicircular valvular fold at t
https://en.wikipedia.org/wiki/Left%20atrioventricular%20orifice
The left atrioventricular orifice (left atrioventricular opening or mitral orifice) is placed below and to the left of the aortic orifice. It is a little smaller than the corresponding aperture of the opposite side. It is surrounded by a dense fibrous ring, covered by the lining membrane of the heart, and is guarded by the bicuspid or mitral valve.
https://en.wikipedia.org/wiki/Aortic%20orifice
The aortic orifice, (aortic opening) is a circular opening, in front and to the right of the left atrioventricular orifice, from which it is separated by the anterior cusp of the bicuspid valve. It is guarded by the aortic semilunar valve. The portion of the ventricle immediately below the aortic orifice is termed the aortic vestibule, and has fibrous instead of muscular walls.
https://en.wikipedia.org/wiki/Rapid%20modes%20of%20evolution
Rapid modes of evolution have been proposed by several notable biologists after Charles Darwin proposed his theory of evolutionary descent by natural selection. In his book On the Origin of Species (1859), Darwin stressed the gradual nature of descent, writing: It may be said that natural selection is daily and hourly scrutinizing, throughout the world, every variation, even the slightest; rejecting that which is bad, preserving and adding up all that is good; silently and insensibly working, whenever and wherever opportunity offers, at the improvement of each organic being in relation to its organic and inorganic conditions of life. We see nothing of these slow changes in progress, until the hand of time has marked the long lapses of ages, and then so imperfect is our view into long past geological ages, that we only see that the forms of life are now different from what they formerly were. (1859) Evolutionary developmental biology Work in developmental biology has identified dynamical and physical mechanisms of tissue morphogenesis that may underlie such abrupt morphological transitions. Consequently, consideration of mechanisms of phylogenetic change that are actually (not just apparently) non-gradual is increasingly common in the field of evolutionary developmental biology, particularly in studies of the origin of morphological novelty. A description of such mechanisms can be found in the multi-authored volume Origination of Organismal Form. See also Evolution Evolutionary developmental biology Otto Schindewolf Punctuated equilibrium Quantum evolution Richard Goldschmidt Saltationism Bibliography Darwin, C. (1859) On the Origin of Species London: Murray. Goldschmidt, R. (1940) The Material Basis of Evolution. New Haven, Conn.: Yale University Press. Gould, S. J. (1977) "The Return of Hopeful Monsters" Natural History 86 (June/July): 22-30. Gould, S. J. (2002) The Structure of Evolutionary Theory. Cambridge MA: Harvard Univ. Press. Müller, G. B. and Newman,
https://en.wikipedia.org/wiki/Right%20atrioventricular%20orifice
The right atrioventricular orifice (right atrioventricular opening) is the large oval aperture of communication between the right atrium and ventricle in the heart. Situated at the base of the atrium, it measures about 3.8 to 4 cm. in diameter and is surrounded by a fibrous ring, covered by the lining membrane of the heart; it is considerably larger than the corresponding aperture on the left side, being sufficient to admit the ends of four fingers. It is guarded by the tricuspid valve. See also Left atrioventricular orifice
https://en.wikipedia.org/wiki/Universal%20quadratic%20form
In mathematics, a universal quadratic form is a quadratic form over a ring that represents every element of the ring. A non-singular form over a field which represents zero non-trivially is universal. Examples Over the real numbers, the form x² in one variable is not universal, as it cannot represent negative numbers: the two-variable form x² − y² over R is universal. Lagrange's four-square theorem states that every positive integer is the sum of four squares. Hence the form x² + y² + z² + w² over Z is universal. Over a finite field, any non-singular quadratic form of dimension 2 or more is universal. Forms over the rational numbers The Hasse–Minkowski theorem implies that a form is universal over Q if and only if it is universal over Qp for all p (where we include p = ∞, letting Q∞ denote R). A form over R is universal if and only if it is not definite; a form over Qp is universal if it has dimension at least 4. One can conclude that all indefinite forms of dimension at least 4 over Q are universal. See also The 15 and 290 theorems give conditions for a quadratic form to represent all positive integers.
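As a small worked illustration of the universality of the four-square form over Z (a brute-force check written for this summary, not part of the article), the following snippet verifies that every positive integer up to a small bound is represented by x² + y² + z² + w²:

# Illustrative brute-force check that x^2 + y^2 + z^2 + w^2 represents every
# positive integer up to a bound, as Lagrange's four-square theorem guarantees.
from itertools import product

def representable_as_four_squares(n):
    bound = int(n**0.5) + 1
    return any(x*x + y*y + z*z + w*w == n
               for x, y, z, w in product(range(bound + 1), repeat=4))

if __name__ == "__main__":
    assert all(representable_as_four_squares(n) for n in range(1, 200))
    print("every n in 1..199 is a sum of four squares")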
https://en.wikipedia.org/wiki/Immunochemistry
Immunochemistry is the study of the chemistry of the immune system. This involves the study of the properties, functions, interactions and production of the chemical components (antibodies/immunoglobulins, toxins, epitopes of proteins like CD4, antitoxins, cytokines/chemokines, antigens) of the immune system. It also includes immune responses and the determination of immune materials/products by immunochemical assays. In addition, immunochemistry is the study of the identities and functions of the components of the immune system. Immunochemistry is also used to describe the application of immune system components, in particular antibodies, to chemically labelled antigen molecules for visualization. Various methods in immunochemistry have been developed and refined, and used in scientific study, from virology to molecular evolution. Immunochemical techniques include: enzyme-linked immunosorbent assay, immunoblotting (e.g., Western blot assay), precipitation and agglutination reactions, immunoelectrophoresis, immunophenotyping, immunochromatographic assay and flow cytometry. One of the earliest examples of immunochemistry is the Wassermann test to detect syphilis. Svante Arrhenius was also one of the pioneers in the field; he published Immunochemistry in 1907, which described the application of the methods of physical chemistry to the study of the theory of toxins and antitoxins. Immunochemistry is also studied from the aspect of using antibodies to label epitopes of interest in cells (immunocytochemistry) or tissues (immunohistochemistry).
https://en.wikipedia.org/wiki/Differentiation%20in%20Fr%C3%A9chet%20spaces
In mathematics, in particular in functional analysis and nonlinear analysis, it is possible to define the derivative of a function between two Fréchet spaces. This notion of differentiation, as it is Gateaux derivative between Fréchet spaces, is significantly weaker than the derivative in a Banach space, even between general topological vector spaces. Nevertheless, it is the weakest notion of differentiation for which many of the familiar theorems from calculus hold. In particular, the chain rule is true. With some additional constraints on the Fréchet spaces and functions involved, there is an analog of the inverse function theorem called the Nash–Moser inverse function theorem, having wide applications in nonlinear analysis and differential geometry. Mathematical details Formally, the definition of differentiation is identical to the Gateaux derivative. Specifically, let and be Fréchet spaces, be an open set, and be a function. The directional derivative of in the direction is defined by if the limit exists. One says that is continuously differentiable, or if the limit exists for all and the mapping is a continuous map. Higher order derivatives are defined inductively via A function is said to be if It is or smooth if it is for every Properties Let and be Fréchet spaces. Suppose that is an open subset of is an open subset of and are a pair of functions. Then the following properties hold: Fundamental theorem of calculus. If the line segment from to lies entirely within then The chain rule. For all and Linearity. is linear in More generally, if is then is multilinear in the 's. Taylor's theorem with remainder. Suppose that the line segment between and lies entirely within If is then where the remainder term is given by Commutativity of directional derivatives. If is then for every permutation σ of The proofs of many of these properties rely fundamentally on the fact that it is possible to de
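Because the formulas in the passage above were lost in extraction, here is a hedged restatement, in notation chosen here (X, Y Fréchet spaces, U ⊆ X open, P : U → Y), of the standard Gateaux-style definitions the text refers to:

% Standard definition sketch; the symbols X, Y, U, P are chosen here, since the
% article's own notation did not survive extraction.
\[
  DP(u)h \;=\; \lim_{t \to 0} \frac{P(u + t h) - P(u)}{t},
  \qquad u \in U,\; h \in X,
\]
\[
  P \in C^1(U)
  \iff
  DP(u)h \text{ exists for all } u \in U,\ h \in X,
  \ \text{and}\ (u,h) \mapsto DP(u)h \text{ is continuous on } U \times X,
\]
\[
  D^{k+1}P(u)\{h_1,\dots,h_k,h_{k+1}\}
  \;=\;
  \lim_{t\to 0}\frac{D^k P(u+t h_{k+1})\{h_1,\dots,h_k\} - D^k P(u)\{h_1,\dots,h_k\}}{t}.
\]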
https://en.wikipedia.org/wiki/Intervenous%20tubercle
The intervenous tubercle (tubercle of Lower) is a small projection on the posterior wall of the right atrium, above the fossa ovalis. It is distinct in the hearts of quadrupeds, but in man is scarcely visible. It was supposed by Lower to direct the blood from the superior vena cava toward the atrioventricular opening.
https://en.wikipedia.org/wiki/L%C3%BApin
Revista Lúpin (Lúpin Magazine) was a monthly Argentine comics magazine. Guillermo Guerrero and Héctor Mario Sidoli launched it on February 1, 1966. It was discontinued in 2007. The comic takes its name from the popular Argentine comic strip hero Lúpin el Piloto, which debuted in 1959. Concept The comic featured the air adventures of its pilot hero Lúpin, as well as other strips with regular characters, usually with a technical or scientific bent. It also included plans for building model aircraft, simple electronic devices, telescopes, and other technical objects. The name Lúpin is a lunfardo word for "looping the loop" that comes from the English word "looping". The accent on the u is not standard in Spanish spelling. Sidoli died in December 2006. It was announced in May 2007, that issue 499 of the Revista Lúpin had been the last because of a dispute between Sidoli's widow and Guerrero over author's rights. Some characters might have continued in a new magazine owned by Guerrero alone. However, Guerrero died on June 25, 2009. Cultural impact Although never a huge popular success, it received letters from Argentine engineers, pilots, and even a NASA astronaut who read it as children. Néstor Kirchner, the president of Argentina from 2003 to 2007, was nicknamed Lúpin because of his physical resemblance to the character.
https://en.wikipedia.org/wiki/Chain%20loading
Chain loading is a method used by computer programs to replace the currently executing program with a new program, using a common data area to pass information from the current program to the new program. It occurs in several areas of computing. Chain loading is similar to the use of overlays. Unlike overlays, however, chain loading replaces the currently executing program in its entirety. Overlays usually replace only a portion of the running program. Like the use of overlays, the use of chain loading increases the I/O load of an application. Chain loading in boot manager programs In operating system boot manager programs, chain loading is used to pass control from the boot manager to a boot sector. The target boot sector is loaded in from disk, replacing the in-memory boot sector from which the boot manager itself was bootstrapped, and executed. Chain loading in Unix In Unix (and in Unix-like operating systems), the exec() system call is used to perform chain loading. The program image of the current process is replaced with an entirely new image, and the current thread begins execution of that image. The common data area comprises the process' environment variables, which are preserved across the system call. Chain loading in Linux In addition to the process level chain loading Linux supports the system call to replace the entire operating system kernel with a different version. The new kernel boots as if it were started from power up and no running processes are preserved. Chain loading in BASIC programs In BASIC programs, chain loading is the purview of the CHAIN statement (or, in Commodore BASIC, the LOAD statement), which causes the current program to be terminated and the chained-to program to be loaded and invoked (with, on those dialects of BASIC that support it, an optional parameter specifying the line number from which execution is to commence, rather than the default of the first line of the new program). The common data area varies
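As a minimal sketch of the Unix-style chain loading described above (written for this summary; the target script name and variable names are hypothetical), the snippet below replaces the current process image via exec() and passes data forward through environment variables, which play the role of the common data area:

# Minimal sketch of Unix-style chain loading: the running program replaces its
# own process image with a new program via exec(), passing data forward through
# environment variables (the "common data area" mentioned above).
# "child.py" is a hypothetical target program, not something from the article.
import os
import sys

def chain_load(target, *args, **shared_data):
    env = dict(os.environ)
    # Stash values for the next program; it can read them back via os.environ.
    env.update({key.upper(): str(value) for key, value in shared_data.items()})
    # execvpe never returns on success: the current process image is replaced.
    os.execvpe(sys.executable, [sys.executable, target, *args], env)

if __name__ == "__main__":
    chain_load("child.py", "--verbose", user="alice", retries="3")
    print("never reached")  # only runs if exec failed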
https://en.wikipedia.org/wiki/Core%20common%20area
The core common area is the part of a program's memory that holds data preserved when one program chain loads another, allowing the two programs to share information. Named after magnetic core memory, the term has persisted into the modern era and is commonly used by both the Fortran and BASIC languages. See also Chain loading Operating system technology
https://en.wikipedia.org/wiki/Iterated%20binary%20operation
In mathematics, an iterated binary operation is an extension of a binary operation on a set S to a function on finite sequences of elements of S through repeated application. Common examples include the extension of the addition operation to the summation operation, and the extension of the multiplication operation to the product operation. Other operations, e.g., the set-theoretic operations union and intersection, are also often iterated, but the iterations are not given separate names. In print, summation and product are represented by special symbols; but other iterated operators often are denoted by larger variants of the symbol for the ordinary binary operator. Thus, the iterations of the four operations mentioned above are denoted and , respectively. More generally, iteration of a binary function is generally denoted by a slash: iteration of over the sequence is denoted by , following the notation for reduce in Bird–Meertens formalism. In general, there is more than one way to extend a binary operation to operate on finite sequences, depending on whether the operator is associative, and whether the operator has identity elements. Definition Denote by aj,k, with and , the finite sequence of length of elements of S, with members (ai), for . Note that if , the sequence is empty. For , define a new function Fl on finite nonempty sequences of elements of S, where Similarly, define If f has a unique left identity e, the definition of Fl can be modified to operate on empty sequences by defining the value of Fl on an empty sequence to be e (the previous base case on sequences of length 1 becomes redundant). Similarly, Fr can be modified to operate on empty sequences if f has a unique right identity. If f is associative, then Fl equals Fr, and we can simply write F. Moreover, if an identity element e exists, then it is unique (see Monoid). If f is commutative and associative, then F can operate on any non-empty finite multiset by applying it to an a
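A short sketch (ours, not from the article) of the two standard ways to extend a binary operation f to finite sequences, a left fold Fl and a right fold Fr, including the use of an identity element for the empty sequence:

# Illustrative sketch of the two canonical ways to iterate a binary operation f
# over a finite sequence: a left fold (F_l) and a right fold (F_r). If f has an
# identity element, the empty sequence can be given that value, as in the text.
from functools import reduce

def iterate_left(f, seq, identity=None):
    seq = list(seq)
    if not seq:
        if identity is None:
            raise ValueError("empty sequence needs an identity element")
        return identity
    return reduce(f, seq)                     # ((a1 f a2) f a3) f ...

def iterate_right(f, seq, identity=None):
    seq = list(seq)
    if not seq:
        if identity is None:
            raise ValueError("empty sequence needs an identity element")
        return identity
    result = seq[-1]
    for a in reversed(seq[:-1]):              # a1 f (a2 f (a3 f ...))
        result = f(a, result)
    return result

if __name__ == "__main__":
    add = lambda x, y: x + y
    sub = lambda x, y: x - y
    print(iterate_left(add, [1, 2, 3, 4], identity=0))   # 10: ordinary summation
    print(iterate_left(sub, [1, 2, 3, 4]))               # -8: left fold of a non-associative op
    print(iterate_right(sub, [1, 2, 3, 4]))              # -2: right fold differs

For an associative f the two folds agree, which is why the text can simply write F in that case; the subtraction example shows how they differ when f is not associative.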
https://en.wikipedia.org/wiki/Scale-space%20axioms
In image processing and computer vision, a scale space framework can be used to represent an image as a family of gradually smoothed images. This framework is very general and a variety of scale space representations exist. A typical approach for choosing a particular type of scale space representation is to establish a set of scale-space axioms, describing basic properties of the desired scale-space representation and often chosen so as to make the representation useful in practical applications. Once established, the axioms narrow the possible scale-space representations to a smaller class, typically with only a few free parameters. A set of standard scale space axioms, discussed below, leads to the linear Gaussian scale-space, which is the most common type of scale space used in image processing and computer vision. Scale space axioms for the linear scale-space representation The linear scale space representation of signal obtained by smoothing with the Gaussian kernel satisfies a number of properties 'scale-space axioms' that make it a special form of multi-scale representation: linearity where and are signals while and are constants, shift invariance where denotes the shift (translation) operator semi-group structure with the associated cascade smoothing property existence of an infinitesimal generator non-creation of local extrema (zero-crossings) in one dimension, non-enhancement of local extrema in any number of dimensions at spatial maxima and at spatial minima, rotational symmetry for some function , scale invariance for some functions and where denotes the Fourier transform of , positivity , normalization . In fact, it can be shown that the Gaussian kernel is a unique choice given several different combinations of subsets of these scale-space axioms: most of the axioms (linearity, shift-invariance, semigroup) correspond to scaling being a semigroup of shift-invariant linear operator, which is satisfied by a number of families
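Since the formulas attached to the axiom names above were stripped in extraction, the following is a hedged restatement of a few of them in notation chosen here (T_t the smoothing-to-scale-t operator, S_Δx the shift operator, g the Gaussian kernel):

% Hedged restatement of a few of the axioms in standard notation; the article's
% own expressions were lost in extraction.
\[
  T_t(a f + b h) = a\, T_t f + b\, T_t h                 \quad\text{(linearity)}
\]
\[
  T_t\,(S_{\Delta x} f) = S_{\Delta x}\,(T_t f)          \quad\text{(shift invariance)}
\]
\[
  T_{t_1}\, T_{t_2} = T_{t_1 + t_2},
  \qquad
  g(\cdot; t_1) * g(\cdot; t_2) = g(\cdot; t_1 + t_2)    \quad\text{(semigroup / cascade smoothing)}
\]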
https://en.wikipedia.org/wiki/High-Z%20Supernova%20Search%20Team
The High-Z Supernova Search Team was an international cosmology collaboration which used Type Ia supernovae to chart the expansion of the universe. The team was formed in 1994 by Brian P. Schmidt, then a post-doctoral research associate at Harvard University, and Nicholas B. Suntzeff, a staff astronomer at the Cerro Tololo Inter-American Observatory (CTIO) in Chile. The original team first proposed for the research on September 29, 1994 in a proposal called A Pilot Project to Search for Distant Type Ia Supernova to the CTIO. The original team as co-listed on the first observing proposal was: Nicholas Suntzeff (PI); Brian Schmidt (Co-I); (other Co-Is) R. Chris Smith, Robert Schommer, Mark M. Phillips, Mario Hamuy, Roberto Aviles, Jose Maza, Adam Riess, Robert Kirshner, Jason Spiromilio, and Bruno Leibundgut. The original project was awarded four nights of telescope time on the CTIO Víctor M. Blanco Telescope on the nights of February 25, 1995, and March 6, 24, and 29, 1995. The pilot project led to the discovery of supernova SN1995Y. In 1995, the HZT elected Brian P. Schmidt of the Mount Stromlo Observatory which is part of the Australian National University to manage the team. The team expanded to roughly 20 astronomers located in the United States, Europe, Australia, and Chile. They used the Víctor M. Blanco telescope to discover Type Ia supernovae out to redshifts of z = 0.9. The discoveries were verified with spectra taken mostly from the telescopes of the Keck Observatory, and the European Southern Observatory. In a 1998 study led by Adam Riess, the High-Z Team became the first to publish evidence that the expansion of the Universe is accelerating (Riess et al. 1998, AJ, 116, 1009, submitted March 13, 1998, accepted May 1998). The team later spawned Project ESSENCE led by Christopher Stubbs of Harvard University and the Higher-Z Team led by Adam Riess of Johns Hopkins University and Space Telescope Science Institute. In 2011, Riess and Schmidt, along with Sau
https://en.wikipedia.org/wiki/Xi%20baryon
The Xi baryons or cascade particles are a family of subatomic hadron particles which have the symbol and may have an electric charge () of +2 , +1 , 0, or −1 , where is the elementary charge. Like all conventional baryons, particles contain three quarks. baryons, in particular, contain either one up or one down quark and two other, more massive quarks. The two more massive quarks are any two of strange, charm, or bottom (doubles allowed). For notation, the assumption is that the two heavy quarks in the are both strange; subscripts "c" and "b" are added for each even heavier charm or bottom quark that replaces one of the two presumed strange quarks. They are historically called the cascade particles because of their unstable state; they are typically observed to decay rapidly into lighter particles, through a chain of decays (cascading decays). The first discovery of a charged Xi baryon was in cosmic ray experiments by the Manchester group in 1952. The first discovery of the neutral Xi particle was at Lawrence Berkeley Laboratory in 1959. It was also observed as a daughter product from the decay of the omega baryon () observed at Brookhaven National Laboratory in 1964. The Xi spectrum is important to nonperturbative quantum chromodynamics (QCD), such as lattice QCD. History The particle is also known as the cascade B particle and contains quarks from all three families. It was discovered by D0 and CDF experiments at Fermilab. The discovery was announced on 12 June 2007. It was the first known particle made of quarks from all three quark generations – namely, a down quark, a strange quark, and a bottom quark. The D0 and CDF collaborations reported the consistent masses of the new state. The Particle Data Group world average mass is . For notation, the assumption is that the two heavy quarks are both strange, denoted by a simple  ; a subscript "c" is added for each constituent charm quark, and a "b" for each bottom quark. Hence , , , , etc. Unless specified,
https://en.wikipedia.org/wiki/Indole%20test
The indole test is a biochemical test performed on bacterial species to determine the ability of the organism to convert tryptophan into indole. This division is performed by a chain of a number of different intracellular enzymes, a system generally referred to as "tryptophanase." Biochemistry Indole is generated by reductive deamination from tryptophan via the intermediate molecule indolepyruvic acid. Tryptophanase catalyzes the deamination reaction, during which the amine (-NH2) group of the tryptophan molecule is removed. Final products of the reaction are indole, pyruvic acid, ammonium (NH4+) and energy. Pyridoxal phosphate is required as a coenzyme. Performing a test Like many biochemical tests on bacteria, results of an indole test are indicated by a change in color following a reaction with an added reagent. Pure bacterial culture must be grown in sterile tryptophan or peptone broth for 24–48 hours before performing the test. Following incubation, five drops of Kovac's reagent (isoamyl alcohol, para-Dimethylaminobenzaldehyde, concentrated hydrochloric acid) are added to the culture broth. A positive result is shown by the presence of a red or reddish-violet color in the surface alcohol layer of the broth. A negative result appears yellow. A variable result can also occur, showing an orange color as a result. This is due to the presence of skatole, also known as methyl indole or methylated indole, another possible product of tryptophan degradation. The positive red color forms as a result of a series of reactions. The para-Dimethylaminobenzaldehyde reacts with indole present in the medium to form a red rosindole dye. The isoamyl alcohol forms a complex with rosindole dye, which causes it to precipitate. The remaining alcohol and the precipitate then rise to the surface of the medium. A variation on this test using Ehrlich's reagent (using ethyl alcohol in place of isoamyl alcohol, developed by Paul Ehrlich) is used when performing the test on nonferment
https://en.wikipedia.org/wiki/Refactorable%20number
A refactorable number or tau number is an integer n that is divisible by the count of its divisors, or to put it algebraically, n is such that τ(n) divides n, where τ(n) denotes the number of divisors of n. The first few refactorable numbers are 1, 2, 8, 9, 12, 18, 24, 36, 40, 56, 60, 72, 80, 84, 88, 96, 104, 108, 128, 132, 136, 152, 156, 180, 184, 204, 225, 228, 232, 240, 248, 252, 276, 288, 296, ... For example, 18 has 6 divisors (1 and 18, 2 and 9, 3 and 6) and is divisible by 6. There are infinitely many refactorable numbers. Properties Cooper and Kennedy proved that refactorable numbers have natural density zero. Zelinsky proved that no three consecutive integers can all be refactorable. Colton proved that no refactorable number is perfect. The equation gcd(n, x) = τ(n) has solutions only if n is a refactorable number, where gcd is the greatest common divisor function. Let R(x) be the number of refactorable numbers which are at most x. The problem of determining an asymptotic for R(x) is open. Spiro has proven that There are still unsolved problems regarding refactorable numbers. Colton asked if there are arbitrarily large n such that both n and n + 1 are refactorable. Zelinsky wondered if there exists a refactorable number , does there necessarily exist such that is refactorable and . History First defined by Curtis Cooper and Robert E. Kennedy where they showed that the tau numbers have natural density zero, they were later rediscovered by Simon Colton using a computer program he had made which invents and judges definitions from a variety of areas of mathematics such as number theory and graph theory. Colton called such numbers "refactorable". While computer programs had discovered proofs before, this discovery was one of the first times that a computer program had discovered a new or previously obscure idea. Colton proved many results about refactorable numbers, showing that there were infinitely many and proving a variety of congruence restrictions on their distribution. Colton was only later alerted that Kennedy and Cooper
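A short illustrative check of the definition (code written for this summary, not part of the article), reproducing the opening terms of the sequence listed above:

# Quick illustration of the definition: n is refactorable when tau(n), the number
# of divisors of n, divides n. This reproduces the opening terms listed above.
def tau(n):
    count = 0
    d = 1
    while d * d <= n:
        if n % d == 0:
            count += 2 if d * d != n else 1
        d += 1
    return count

def is_refactorable(n):
    return n % tau(n) == 0

if __name__ == "__main__":
    first = [n for n in range(1, 300) if is_refactorable(n)]
    print(first[:15])   # [1, 2, 8, 9, 12, 18, 24, 36, 40, 56, 60, 72, 80, 84, 88]
    assert tau(18) == 6 and 18 % 6 == 0   # the worked example from the text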
https://en.wikipedia.org/wiki/Gardasil
Gardasil is an HPV vaccine for use in the prevention of certain strains of human papillomavirus (HPV). It was developed by Merck & Co. High-risk human papilloma virus (hr-HPV) genital infection is the most common sexually transmitted infection among women. The HPV strains that Gardasil protects against are sexually transmitted, specifically HPV types 6, 11, 16 and 18. HPV types 16 and 18 cause an estimated 70% of cervical cancers, and are responsible for most HPV-induced anal, vulvar, vaginal, and penile cancer cases. HPV types 6 and 11 cause an estimated 90% of genital warts cases. HPV type 16 is responsible for almost 90% of HPV-positive oropharyngeal cancers, and the prevalence is higher in males than females. Though Gardasil does not treat existing infection, vaccination is still recommended for HPV-positive individuals, as it may protect against one or more different strains of the disease. The vaccine was approved for medical use in the United States in 2006, initially for use in females aged 9–26. In 2007, the Advisory Committee on Immunization Practices recommended Gardasil for routine vaccination of girls aged 11 and 12 years. As of August 2009, vaccination was recommended for both males and females before adolescence and the beginning of potential sexual activity. By 2011, the vaccine had been approved in 120 other countries. In 2014, the US Food and Drug Administration (FDA) approved a nine-valent version, Gardasil 9, to protect against infection with the strains covered by the first generation of Gardasil as well as five other HPV strains responsible for 20% of cervical cancers (types 31, 33, 45, 52, and 58). In 2018, the FDA approved expanded use of Gardasil 9 for individuals 27 to 45 years old. Types Gardasil is available as Gardasil which protects against 4 types of HPV (6, 11, 16, 18) and Gardasil 9 which protects against an additional 5 types (31, 33, 45, 52, 58). Medical uses In the United States, Gardasil is indicated for: girls and women 9
https://en.wikipedia.org/wiki/Divisia%20monetary%20aggregates%20index
In econometrics and official statistics, and particularly in banking, the Divisia monetary aggregates index is an index of money supply. It uses Divisia index methods. Background The monetary aggregates used by most central banks (notably the US Federal Reserve) are simple-sum indexes in which all monetary components are assigned the same weight: in which is one of the monetary components of the monetary aggregate . The summation index implies that all monetary components contribute equally to the money total, and it views all components as dollar-for-dollar perfect substitutes. It has been argued that such an index does not weight such components in a way that properly summarizes the services of the quantities of money. There have been many attempts at weighting monetary components within a simple-sum aggregate. An index can rigorously apply microeconomic- and aggregation-theoretic foundations in the construction of monetary aggregates. That approach to monetary aggregation was derived and advocated by William A. Barnett (1980) and has led to the construction of monetary aggregates based on Diewert's (1976) class of superlative quantity index numbers. The new aggregates are called the Divisia aggregates or Monetary Services Indexes. Salam Fayyad's 1986 PhD dissertation did early research with those aggregates using U.S. data. This index is a discrete-time approximation with this definition: Here, the growth rate of the aggregate is the weighted average of the growth rates of the component quantities. The discrete time Divisia weights are defined as the expenditure shares averaged over the two periods of the change for , where is the expenditure share of asset during period , and is the user cost of asset , derived by Barnett (1978), Which is the opportunity cost of holding a dollar's worth of the th asset. In the last equation, is the market yield on the th asset, and is the yield available on a benchmark asset, held only to carry wealth between diff
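Because the expressions in the passage above were lost in extraction, the following is a hedged restatement, in notation chosen here, of the standard formulas it describes: the simple-sum aggregate, the discrete Divisia growth rate with averaged expenditure shares, and Barnett's user-cost price:

% Standard forms of the formulas referred to above (notation ours; the article's
% own expressions did not survive extraction).
\[
  M_t = \sum_{j=1}^{n} x_{jt}                              \quad\text{(simple-sum aggregate)}
\]
\[
  \ln M_t^{D} - \ln M_{t-1}^{D}
  = \sum_{j=1}^{n} s_{jt}^{*}\,\bigl(\ln x_{jt} - \ln x_{j,t-1}\bigr),
  \qquad
  s_{jt}^{*} = \tfrac{1}{2}\,(s_{jt} + s_{j,t-1})
  \quad\text{(discrete Divisia index)}
\]
\[
  s_{jt} = \frac{\pi_{jt}\, x_{jt}}{\sum_{k=1}^{n} \pi_{kt}\, x_{kt}},
  \qquad
  \pi_{jt} = \frac{R_t - r_{jt}}{1 + R_t}
  \quad\text{(expenditure shares and the user cost of asset } j\text{)}
\]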
https://en.wikipedia.org/wiki/Laughter%20in%20the%20Rain
"Laughter in the Rain" is a song composed and recorded by Neil Sedaka, with lyrics by Phil Cody. It includes a 20-second saxophone solo by Jim Horn. Background The song was released on Elton John's Rocket Records. After hearing a version of "Laughter in the Rain" by singer Lea Roberts on the radio several weeks before the planned release of his single, Sedaka phoned Elton John to have MCA Records rush the Sedaka version to release within five days. The opening chord of the chorus was based on that used by John in "Goodbye Yellow Brick Road", which Sedaka has described as a "drop-dead chord." He combined that with a pentatonic melody inspired by Aaron Copland. Cody claims to have written the lyrics in about five minutes after smoking marijuana and falling asleep under a tree for a couple of hours. Hello - Philip Cody here. To clarify - Neil and I started writing the song early in the day and I just wasn't getting it. So, I went for a walk, smoked a very small amount of weed and sat under a tree and took a short nap. It was a bright sunny day, not a cloud in the sky and yet, when I got back to my post at Sedaka's right elbow, the lyric just fell onto the page with very little effort from me. Somewhere, in my consciousness, I guess I was having a Gene Kelly moment . . . Chart performance In the U.S., "Laughter in the Rain" reached #1 on the Billboard Hot 100 on February 1, 1975 (Sedaka's first single to top the Hot 100 since 1962). The song spent two weeks at the top of the adult contemporary chart. The record was also a major hit in Canada, reaching #2 on the pop singles chart and #1 on the adult contemporary chart. It was also released in the U.K., where it spent nine weeks on the Singles Chart, peaking at #15 on June 22, 1974. Weekly charts Lea Roberts Neil Sedaka Year-end charts Personnel Neil Sedaka – lead vocal, piano Danny Kortchmar – electric guitar Dean Parks - acoustic guitar Leland Sklar – bass guitar Russ Kunkel – drums Jim Horn – tenor saxophone
https://en.wikipedia.org/wiki/Derrick%20Lonsdale
Derrick Lonsdale (born April 22, 1924) is an American pediatrician and researcher into the benefits of certain nutrients in preventing disease and psychotic behavior. He is a Fellow of the American College of Nutrition (FACN), and also a Fellow of the American College for Advancement in Medicine (FACAM). Lonsdale is known for his research into thiamine and his controversial theory that thiamine deficiency is widespread among Americans and predictive of anti-social behavior. Positions Lonsdale was a practitioner in pediatrics at the Cleveland Clinic for 20 years. He became Head of the Section of Biochemical Genetics at the Clinic. In 1982, Lonsdale retired from the Cleveland Clinic and joined the Preventive Medicine Group to specialize in nutrient-based therapy. He is also on the Scientific Research Advisory Committee of the American College for Advancement in Medicine and is an editor of their Journal. Research work Lonsdale hypothesizes that healing comes from the body itself rather than from external medical interventions. Lonsdale has studied the use of nutrients to prevent diseases. He is particularly interested in Vitamin B1, also known as thiamine. The World Health Organisation has cited three of Lonsdale's thiamine deficiency papers on Sudden Infant Death Syndrome. Lonsdale has spoken at orthomolecular medicine conferences. Autism Lonsdale led an uncontrolled study on the treatment of autism spectrum children with thiamine (Lonsdale D, Shamberger RJ & Audhya T (2002), "Treatment of autism spectrum children with thiamine tetrahydrofurfuryl disulfide: a pilot study", Neuroendocrinol Lett, Vol 23:302-308). He also led a study (uncontrolled) of secretin, which he and Shamberger say led to an improvement in behaviour and bowel control of autistic children in his study. Both of these studies are controversial because they link nutrition with autism. Analysing the findings in the latter study, autism researchers say that while
https://en.wikipedia.org/wiki/Insular%20dwarfism
Insular dwarfism, a form of phyletic dwarfism, is the process and condition of large animals evolving or having a reduced body size when their population's range is limited to a small environment, primarily islands. This natural process is distinct from the intentional creation of dwarf breeds, called dwarfing. This process has occurred many times throughout evolutionary history, with examples including dinosaurs, like Europasaurus and Magyarosaurus dacus, and modern animals such as elephants and their relatives. This process, and other "island genetics" artifacts, can occur not only on islands, but also in other situations where an ecosystem is isolated from external resources and breeding. This can include caves, desert oases, isolated valleys and isolated mountains ("sky islands"). Insular dwarfism is one aspect of the more general "island effect" or "Foster's rule", which posits that when mainland animals colonize islands, small species tend to evolve larger bodies (island gigantism), and large species tend to evolve smaller bodies. This is itself one aspect of island syndrome, which describes the differences in morphology, ecology, physiology and behaviour of insular species compared to their continental counterparts. Possible causes There are several proposed explanations for the mechanism which produces such dwarfism. One is a selective process where only smaller animals trapped on the island survive, as food periodically declines to a borderline level. The smaller animals need fewer resources and smaller territories, and so are more likely to get past the break-point where population decline allows food sources to replenish enough for the survivors to flourish. Smaller size is also advantageous from a reproductive standpoint, as it entails shorter gestation periods and generation times. In the tropics, small size should make thermoregulation easier. Among herbivores, large size confers advantages in coping with both competitors and predators, so a reduc
https://en.wikipedia.org/wiki/List%20of%20flags%20of%20Bosnia%20and%20Herzegovina
This is a list of flags used in Bosnia and Herzegovina. For more information about the national flag, visit the article Flag of Bosnia and Herzegovina. National flag Subnational flags Entities Districts Military flags Cantons of the Federation Former flags of the Cantons of the Federation Political flags Ethnic groups flags Historical national flags Proposed flags Proposals for the Socialist Republic of Bosnia and Herzegovina Proposals before Dayton Agreement First set of proposals Second set of proposals Third set of proposals See also Flag of Bosnia and Herzegovina – main article on the Bosnian and Herzegovinian national flag Coat of arms of Bosnia and Herzegovina Flag of the Federation of Bosnia and Herzegovina
https://en.wikipedia.org/wiki/Fall%20prevention
Fall prevention includes any action taken to help reduce the number of accidental falls suffered by susceptible individuals, such as the elderly (idiopathic) and people with neurological (Parkinson's, Multiple sclerosis, stroke survivors, Guillain-Barre, traumatic brain injury, incomplete spinal cord injury) or orthopedic (lower limb or spinal column fractures or arthritis, post-surgery, joint replacement, lower limb amputation, soft tissue injuries) indications. Current approaches to fall prevention are problematic because even though awareness is high among professionals that work with seniors and fall prevention activities are pervasive among community living establishments, fall death rates among older adults have more than doubled. The challenges are believed to be three-fold. First, insufficient evidence exists that any fall risk screening instrument is adequate for predicting falls. While the strongest predictors of fall risk tend to include a history of falls during the past year, gait, and balance abnormalities, existing models show a strong bias and therefore mostly fail to differentiate between adults that are at low risk and high risk of falling. Second, current fall prevention interventions in the United States are limited between short-term individualized therapy provided by a high-cost physical therapist or longer-term wellness activity provided in a low-cost group setting. Neither arrangement is optimum in preventing falls over a large population, especially as these evidence-based physical exercise programs have limited effectiveness (approximately 25%). Even multifactorial interventions, which include extensive physical exercise, medication adjustment, and environmental modification only lower fall risk by 31% after 12 months. Questions around effectiveness of current approaches (physical exercise and multifactorial interventions) have been found in multiple settings, including long-term care facilities and hospitals. The final challenge is adh