https://en.wikipedia.org/wiki/Software%20walkthrough
In software engineering, a walkthrough or walk-through is a form of software peer review "in which a designer or programmer leads members of the development team and other interested parties through a software product, and the participants ask questions and make comments about possible errors, violation of development standards, and other problems". Such reviews are also performed by assessors, specialists, etc., and may be suggested or mandatory, as required by norms and standards. "Software product" normally refers to some kind of technical document. As indicated by the IEEE definition, this might be a software design document or program source code, but use cases, business process definitions, test case specifications, and a variety of other technical documentation may also be walked through. A walkthrough differs from software technical reviews in its openness of structure and its objective of familiarization. It differs from software inspection in its ability to suggest direct alterations to the product reviewed, and in its lack of a direct focus on training and process improvement, and on process and product measurement. Process A walkthrough may be quite informal, or may follow the process detailed in IEEE 1028 and outlined in the article on software reviews. Objectives and participants In general, a walkthrough has one or two broad objectives: to gain feedback about the technical quality or content of the document; and/or to familiarize the audience with the content. A walkthrough is normally organized and directed by the author of the technical document. Any combination of interested or technically qualified personnel (from within or outside the project) may be included as seems appropriate. IEEE 1028 recommends three specialist roles in a walkthrough: The author, who presents the software product in a step-by-step manner at the walk-through meeting, and is probably responsible for completing most action items; The walkthrough leader, who conducts the walkthrough, handle
https://en.wikipedia.org/wiki/Software%20technical%20review
A software technical review is a form of peer review in which "a team of qualified personnel ... examines the suitability of the software product for its intended use and identifies discrepancies from specifications and standards. Technical reviews may also provide recommendations of alternatives and examination of various alternatives" (IEEE Std. 1028-1997, IEEE Standard for Software Reviews, clause 3.7). "Software product" normally refers to some kind of technical document. This might be a software design document or program source code, but use cases, business process definitions, test case specifications, and a variety of other technical documentation may also be subject to technical review. Technical review differs from software walkthroughs in its specific focus on the technical quality of the product reviewed. It differs from software inspection in its ability to suggest direct alterations to the product reviewed, and its lack of a direct focus on training and process improvement. The term formal technical review is sometimes used to mean a software inspection. A 'Technical Review' may also refer to an acquisition lifecycle event or Design review. Objectives and participants The purpose of a technical review is to arrive at a technically superior version of the work product reviewed, whether by correction of defects or by recommendation or introduction of alternative approaches. While the latter aspect may offer facilities that software inspection lacks, there may be a penalty in time lost to technical discussions or disputes which may be beyond the capacity of some participants. IEEE 1028 recommends the inclusion of participants to fill the following roles: The Decision Maker (the person for whom the technical review is conducted) determines if the review objectives have been met. The Review Leader is responsible for performing administrative tasks relative to the review, ensuring orderly conduct, and ensuring that the review meets its objectives.
https://en.wikipedia.org/wiki/National%20Agency%20for%20Food%20and%20Drug%20Administration%20and%20Control
The National Agency for Food and Drug Administration and Control (NAFDAC) is a Nigerian federal agency under the Federal Ministry of Health that is responsible for regulating and controlling the manufacture, importation, exportation, advertisement, distribution, sale, and use of food, drugs, cosmetics, medical devices, chemicals, and packaged water. Since November 12, 2022, the agency has been headed in an acting capacity by Dr. Monica Eimunjeze, according to an internal memo dated November 17, 2022, signed by Oboli A.U. and copied to all Directors at the agency, following the expiration of the tenure of Prof. Mojisola Adeyeye. Eimunjeze was appointed on 12 November 2022 by the President of the Federal Republic of Nigeria as the acting Director-General of the National Agency for Food and Drug Administration and Control (NAFDAC). Formation The organization was established in 1993, under the country's health and safety law, to counter illicit and counterfeit products in Nigeria. Adulterated and counterfeit drugs are a problem in Nigeria. In one 1989 incident, over 150 children died as a result of paracetamol syrup containing diethylene glycol. The problem of fake drugs was so severe that neighboring countries such as Ghana and Sierra Leone officially banned the sale of drugs, foods, and beverage products made in Nigeria. Such problems led to the establishment of NAFDAC, with the goal of eliminating counterfeit pharmaceuticals, foods, and beverage products that are not manufactured in Nigeria and ensuring that available medications are safe and effective. The formation of NAFDAC was inspired by a 1988 World Health Assembly resolution requesting countries' help in combating the global health threat posed by counterfeit pharmaceuticals. In December 1992, NAFDAC's first governing council was formed. The council was chaired by Tanimu Saulawa. In January 1993, supporting legislation was approved as Legislative Decree No. 15 of 1993. On January 1, 1994, NAFDAC was officially established as a “parast
https://en.wikipedia.org/wiki/Mobile%20dating
Mobile dating services, also known as cell dating, cellular dating, or cell phone dating, allow individuals to chat, flirt, meet, and possibly become romantically involved by means of text messaging, mobile chatting, and the mobile web. These services allow their users to provide information about themselves in a short profile which is either stored in their phones as a dating ID or as a username on the mobile dating site. They can then search for other IDs online or by calling a certain phone number dictated by the service. The criteria include age, gender and sexual preference. Usually these sites are free to use but standard text messaging fees may still apply, as well as a small fee the dating service charges per message. Mobile dating websites, in order to increase the opportunities for meeting, focus attention on users that share the same social network and proximity. Some companies even offer services such as homing devices to alert users when another user is within thirty feet. Some systems involve Bluetooth technology to connect users in locations such as bars and clubs. This is known as proximity dating. These systems are actually more popular in some countries in Europe and Asia than online dating. With the advent of GPS phones and GSM localization, proximity dating is likely to rise sharply in popularity. According to The San Francisco Chronicle in 2005, "Mobile dating is the next big leap in online socializing." More than 3.6 million cell phone users logged into mobile dating sites in March 2007, with most users falling between the ages of 27 and 35. Some experts believe that the rise in mobile dating is due to the growing popularity of online dating. Others believe it is all about choice, as Joe Brennan Jr., vice president of Webdate, says, "It's about giving people a choice. They don't have to date on their computer. They can date on their handset, it's all about letting people decide what path is best for them." A study published in 201
https://en.wikipedia.org/wiki/Characterization%20%28materials%20science%29
Characterization, when used in materials science, refers to the broad and general process by which a material's structure and properties are probed and measured. It is a fundamental process in the field of materials science, without which no scientific understanding of engineering materials could be ascertained. The scope of the term often differs; some definitions limit the term's use to techniques which study the microscopic structure and properties of materials, while others use the term to refer to any materials analysis process including macroscopic techniques such as mechanical testing, thermal analysis and density calculation. The scale of the structures observed in materials characterization ranges from angstroms, such as in the imaging of individual atoms and chemical bonds, up to centimeters, such as in the imaging of coarse grain structures in metals. While many characterization techniques have been practiced for centuries, such as basic optical microscopy, new techniques and methodologies are constantly emerging. In particular, the advent of the electron microscope and secondary ion mass spectrometry in the 20th century has revolutionized the field, allowing the imaging and analysis of structures and compositions on much smaller scales than was previously possible, leading to a huge increase in the level of understanding as to why different materials show different properties and behaviors. In the last 30 years, atomic force microscopy has further increased the maximum possible resolution for the analysis of certain samples. Microscopy Microscopy is a category of characterization techniques which probe and map the surface and sub-surface structure of a material. These techniques can use photons, electrons, ions or physical cantilever probes to gather data about a sample's structure on a range of length scales. Some common examples of microscopy techniques include: Optical microscopy Scanning electron microscopy (SEM) Transmission electron mi
https://en.wikipedia.org/wiki/Ceiling%20level
In audio equipment, the ceiling level, also known as the point of distortion, is the maximum input signal amplitude above which output distortion exceeds an acceptable level. In occupational safety, the ceiling level or ceiling value is the maximum permissible concentration of a hazardous material in the working environment. This level should not be exceeded at any time. It is usually (but not invariably) set somewhat above the relevant time-weighted average for the chemical.
https://en.wikipedia.org/wiki/Pre-shared%20key
In cryptography, a pre-shared key (PSK) is a shared secret which was previously shared between the two parties using some secure channel before it needs to be used. Key To build a key from a shared secret, a key derivation function is typically used. Such systems almost always use symmetric key cryptographic algorithms. The term PSK is used in Wi-Fi encryption such as Wired Equivalent Privacy (WEP) and Wi-Fi Protected Access (WPA), where the method is called WPA-PSK or WPA2-PSK, and also in the Extensible Authentication Protocol (EAP), where it is known as EAP-PSK. In all these cases, both the wireless access points (AP) and all clients share the same key. The characteristics of this secret or key are determined by the system which uses it; some system designs require that such keys be in a particular format. It can be a password, a passphrase, or a hexadecimal string. The secret is used by all systems involved in the cryptographic processes used to secure the traffic between the systems. Crypto systems rely on one or more keys for confidentiality. One particular attack is always possible against keys, the brute-force key space search attack. A sufficiently long, randomly chosen key can resist any practical brute-force attack, though not in principle if an attacker has sufficient computational power (see password strength and password cracking for more discussion). Unavoidably, however, pre-shared keys are held by both parties to the communication, and so can be compromised at one end without the knowledge of anyone at the other. There are several tools available to help one choose strong passwords, though doing so over any network connection is inherently unsafe as one cannot in general know who, if anyone, may be eavesdropping on the interaction. Choosing keys used by cryptographic algorithms is somewhat different in that any pattern whatsoever should be avoided, as any such pattern may provide an attacker with a lower-effort attack than brute-force search. T
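As a concrete illustration of the key-derivation step, here is a minimal sketch using PBKDF2-HMAC-SHA256 from Python's standard hashlib; the passphrase, salt handling, and iteration count are illustrative assumptions, not a prescription.

```python
import hashlib
import os

def derive_key_from_psk(psk: str, salt: bytes, length: int = 32) -> bytes:
    """Stretch a passphrase-style pre-shared key into a fixed-length
    symmetric key with PBKDF2-HMAC-SHA256, one common KDF choice."""
    return hashlib.pbkdf2_hmac("sha256", psk.encode("utf-8"), salt,
                               100_000, dklen=length)

# Both parties must already hold the same PSK (and agree on the salt and
# iteration count), having exchanged it over a secure channel beforehand.
salt = os.urandom(16)   # hypothetical; real protocols fix or transmit the salt
key = derive_key_from_psk("correct horse battery staple", salt)
print(key.hex())        # 256-bit key ready for a symmetric cipher such as AES
```

WPA2-PSK works along these lines: it derives its pairwise master key from the passphrase with PBKDF2 (4096 iterations, using the network SSID as the salt).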
https://en.wikipedia.org/wiki/Software%20audit%20review
A software audit review, or software audit, is a type of software review in which one or more auditors who are not members of the software development organization conduct "An independent examination of a software product, software process, or set of software processes to assess compliance with specifications, standards, contractual agreements, or other criteria". "Software product" mostly, but not exclusively, refers to some kind of technical document. IEEE Std. 1028 offers a list of 32 "examples of software products subject to audit", including documentary products such as various sorts of plan, contracts, specifications, designs, procedures, standards, and reports, but also non-documentary products such as data, test data, and deliverable media. Software audits are distinct from software peer reviews and software management reviews in that they are conducted by personnel external to, and independent of, the software development organization, and are concerned with compliance of products or processes, rather than with their technical content, technical quality, or managerial implications. The term "software audit review" is adopted here to designate the form of software audit described in IEEE Std. 1028. Objectives and participants "The purpose of a software audit is to provide an independent evaluation of conformance of software products and processes to applicable regulations, standards, guidelines, plans, and procedures". The following roles are recommended: The Initiator (who might be a manager in the audited organization, a customer or user representative of the audited organization, or a third party), decides upon the need for an audit, establishes its purpose and scope, specifies the evaluation criteria, identifies the audit personnel, decides what follow-up actions will be required, and distributes the audit report. The Lead Auditor (who must be someone "free from bias and influence that could reduce his ability to make independent, objective evalu
https://en.wikipedia.org/wiki/Substantia%20gelatinosa%20of%20Rolando
The apex of the posterior grey column, one of the three grey columns of the spinal cord, is capped by a V-shaped or crescentic mass of translucent, gelatinous neuroglia, termed the substantia gelatinosa of Rolando (or SGR) (or gelatinous substance of posterior horn of spinal cord), which contains both neuroglia cells and small nerve cells. The gelatinous appearance is due to a very low concentration of myelinated fibers. It extends the entire length of the spinal cord and into the medulla oblongata where it becomes the spinal nucleus of the trigeminal nerve. It is named after Luigi Rolando. It corresponds to Rexed lamina II. Structure The SGR, or lamina II, is composed of an outer lamina II and an inner lamina II. In rodents, the inner lamina II is divided into a dorsal and ventral inner lamina II. The distinction between these laminae lies in the areas of the spinal cord that send information to and from the laminae (input and output projections). The cell types within the SGR include islet cells, central cells, stalked or large vertical cells, small vertical cells, and radial cells. The islet cells and small vertical cells are primarily GABAergic, while the large vertical cells and radial cells are primarily glutamatergic. The descriptors GABAergic and glutamatergic refer to the neurotransmitter (GABA and glutamate, respectively) that the cell releases. Typically, the release of GABA from one cell causes the next cell to stop firing. The release of glutamate typically causes the next cell to depolarize and fire. Central cells can be either glutamatergic or GABAergic. These cells synapse on each other to modulate pain signaling through the release of these different neurotransmitters and various neuropeptides. The cells in the SGR receive input from each other and primary afferent neurons and project outwards to other cells within the lamina. Complex circuits of excitation and inhibition lead to transmission and inhibition of pain signals through the spin
https://en.wikipedia.org/wiki/Tombs%20%26%20Treasure
Tombs & Treasure, known in Japan as Taiyō no Shinden: Asteka II, is an adventure game originally developed by Falcom in 1986 for the PC-8801, PC-9801, FM-7, MSX2 and X1 Japanese computer systems. A Famicom/NES version, released in 1988, was altered to be more story-based, and features new music and role-playing elements; an English-language NES version was published by Infocom in 1991. Japanese enhanced remakes were released for the Saturn and Windows systems in 1998 and 1999, respectively. The game takes place in the ancient Mayan city of Chichen Itza on the Yucatán Peninsula. It alternates between using a three-quarters overhead view for travelling from ruin to ruin, and switching to a first-person perspective upon entering a specific location. Plot At the start of the game, the player is allowed to name both the protagonist, a young brown-haired man, and the lead female, a green-haired lass who is the daughter of one Professor Imes; if no names are manually entered, the game will randomly choose from a pre-coded list of names for both characters. Professor Imes is a renowned archaeologist who has been investigating an artifact known as the Sun Key, which potentially has the ability to unlock the greatest secrets of the lost Mayan civilization, and is rumored to be housed somewhere within Chichen Itza. However, on his latest expedition, he and his team mysteriously disappeared while exploring different temples between June 22 and July 14; only his guide, José, was able to escape unharmed and return with some artifacts that the team found, hoping they will help the player in his quest to find the professor. The three, after talking with the professor's secretary Anne, travel to Mexico and into the ancient city to look for clues. Several actual sites of Chichen Itza are explored by the player, although their interiors and purposes are purposefully changed slightly in order to help create an atmosphere of fantasy and mystery-solving intrigue. Furthermore, each ruin is home to a demon that
https://en.wikipedia.org/wiki/Talairach%20coordinates
Talairach coordinates, also known as Talairach space, is a 3-dimensional coordinate system (known as an 'atlas') of the human brain, which is used to map the location of brain structures independent from individual differences in the size and overall shape of the brain. It is still common to use Talairach coordinates in functional brain imaging studies and to target transcranial stimulation of brain regions. However, alternative methods such as the MNI Coordinate System (originated at the Montreal Neurological Institute and Hospital) have largely replaced Talairach for stereotaxy and other procedures. History The coordinate system was first created by neurosurgeons Jean Talairach and Gabor Szikla in their work on the Talairach Atlas in 1967, creating a standardized grid for neurosurgery. The grid was based on the idea that distances to lesions in the brain are proportional to overall brain size (i.e., the distance between two structures is larger in a larger brain). In 1988 a second edition of the Talairach Atlas came out that was coauthored by Tournoux, and it is sometimes known as the Talairach-Tournoux system. This atlas was based on a single post-mortem dissection of a human brain. The Talairach Atlas uses Brodmann areas as the labels for brain regions. Description The Talairach coordinate system is defined by making two anchors, the anterior commissure and posterior commissure, lie on a straight horizontal line. Since these two points lie on the midsagittal plane, the coordinate system is completely defined by requiring this plane to be vertical. Distances in Talairach coordinates are measured from the anterior commissure as the origin (as defined in the 1988 edition). The y-axis runs posterior-anterior through the commissures, the x-axis runs left-right, and the z-axis runs ventral-dorsal (down and up). Once the brain is reoriented to these axes, the researchers must also identify the six cortical outlines of the brain: anterior, posteri
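The axis construction just described can be made concrete with a little linear algebra; the following numpy sketch builds an AC-origin, AC-PC-aligned frame from three hypothetical landmark positions (the coordinates are invented for illustration, and the full Talairach procedure additionally piecewise-scales the brain to the cortical outlines).

```python
import numpy as np

# Hypothetical scanner-space landmark positions (millimetres).
AC  = np.array([90.0, 108.0, 92.0])   # anterior commissure (origin)
PC  = np.array([90.0,  82.0, 88.0])   # posterior commissure
MID = np.array([90.0, 100.0, 140.0])  # any midsagittal point above the AC-PC line

y = AC - PC                            # anterior direction: PC -> AC
y /= np.linalg.norm(y)
z = MID - AC
z -= z.dot(y) * y                      # make the dorsal axis orthogonal to y
z /= np.linalg.norm(z)
x = np.cross(y, z)                     # left-right axis completes the frame

R = np.column_stack([x, y, z])         # columns are the Talairach-style axes

def to_acpc(p):
    """Express a scanner-space point in the AC-origin, AC-PC-aligned frame."""
    return R.T @ (np.asarray(p) - AC)

print(to_acpc(PC))   # lies on the negative y-axis: roughly [0, -26.3, 0]
```

The posterior commissure landing on the negative y-axis is exactly the defining property of the frame: both commissures lie on one horizontal line through the origin.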
https://en.wikipedia.org/wiki/Posterior%20commissure
The posterior commissure (also known as the epithalamic commissure) is a rounded band of white fibers crossing the middle line on the dorsal aspect of the rostral end of the cerebral aqueduct. It is important in the bilateral pupillary light reflex. It constitutes part of the epithalamus. Its fibers acquire their medullary sheaths early, but their connections have not been definitively determined. Most of them have their origin in a nucleus, the nucleus of the posterior commissure (nucleus of Darkschewitsch), which lies in the periaqueductal grey at rostral end of the cerebral aqueduct, in front of the oculomotor nucleus. Some are thought to be derived from the posterior part of the thalamus and from the superior colliculus, whereas others are believed to be continued downward into the medial longitudinal fasciculus. For the pupillary light reflex, the olivary pretectal nucleus innervates both Edinger-Westphal nuclei. To reach the contralateral Edinger-Westphal nucleus, the axons cross in the posterior commissure.
https://en.wikipedia.org/wiki/Comparison%20of%20free%20and%20open-source%20software%20licenses
This comparison only covers software licenses which have a linked Wikipedia article for details and which are approved by at least one of the following expert groups: the Free Software Foundation, the Open Source Initiative, the Debian Project and the Fedora Project. For a list of licenses not specifically intended for software, see List of free-content licences. FOSS licenses FOSS stands for "Free and Open Source Software". There is no one universally agreed-upon definition of FOSS, and various groups maintain approved lists of licenses. The Open Source Initiative (OSI) is one such organization keeping a list of open-source licenses. The Free Software Foundation (FSF) maintains a list of what it considers free. FSF's free software licenses and OSI's open-source licenses together are called FOSS licenses. There are licenses accepted by the OSI which are not free as per the Free Software Definition. The Open Source Definition allows for further restrictions like price, type of contribution and origin of the contribution, e.g. the case of the NASA Open Source Agreement, which requires the code to be "original" work. The OSI does not endorse FSF license analysis (interpretation) as per their disclaimer. The FSF's Free Software Definition focuses on the user's unrestricted rights to use a program, to study and modify it, to copy it, and redistribute it for any purpose, which are considered by the FSF the four essential freedoms. The OSI's open-source criteria focus on the availability of the source code and the advantages of an unrestricted and community-driven development model. Yet, many FOSS licenses, like the Apache License, and all Free Software licenses allow commercial use of FOSS components. General comparison For a simpler comparison across the most common licenses, see free-software license comparison. The following table compares various features of each license and is a general guide to the terms and conditions of each license, based on seven subjects o
https://en.wikipedia.org/wiki/Genetic%20divergence
Genetic divergence is the process in which two or more populations of an ancestral species accumulate independent genetic changes (mutations) through time, often leading to reproductive isolation; mutations continue to accumulate even after the populations have become reproductively isolated, as there is no longer any genetic exchange between them. In some cases, subpopulations living in ecologically distinct peripheral environments can exhibit genetic divergence from the remainder of a population, especially where the range of a population is very large (see parapatric speciation). The genetic differences among divergent populations can involve silent mutations (that have no effect on the phenotype) or give rise to significant morphological and/or physiological changes. Genetic divergence will always accompany reproductive isolation, either due to novel adaptations via selection and/or due to genetic drift, and is the principal mechanism underlying speciation. On a molecular genetics level, genetic divergence is due to changes in a small number of genes in a species, resulting in speciation. However, researchers argue that it is unlikely that divergence is a result of a significant, single, dominant mutation in a genetic locus because, if that were so, the individual with that mutation would have zero fitness. Consequently, they could not reproduce and pass the mutation on to further generations. Hence, it is more likely that divergence, and subsequently reproductive isolation, are the outcomes of multiple small mutations over evolutionary time accumulating in a population isolated from gene flow. Causes Founder effect One possible cause of genetic divergence is the founder effect, which is when a few individuals become isolated from their original population. Those individuals might overrepresent a certain genetic pattern, which means that certain biological characteristics are overrepresented. These individuals can form a new population with different gene
https://en.wikipedia.org/wiki/Vacant%20niche
The issue of what exactly defines a vacant niche, also known as an empty niche, and whether they exist in ecosystems is controversial. The subject is intimately tied into a much broader debate on whether ecosystems can reach equilibrium, where they could theoretically become maximally saturated with species. Given that saturation is a measure of the number of species per resource axis per ecosystem, the question becomes: is it useful to define unused resource clusters as niche 'vacancies'? History of the concept Whether vacant niches are permissible has been both affirmed and denied as the definition of a niche has changed over time. Within the pre-Hutchinsonian niche frameworks of Grinnell (1917) and Elton (1927), vacant niches were allowable. In the framework of Grinnell, the species niche was largely equivalent to its habitat, such that a niche vacancy could be looked upon as a habitat vacancy. The Eltonian framework considered the niche to be equivalent to a species' position in a trophic web, or food chain, and in this respect there is always going to be a vacant niche at the top predator level. Whether this position gets filled depends upon the ecological efficiency of the species filling it, however. The Hutchinsonian niche framework, on the other hand, directly precludes the possibility of there being vacant niches. Hutchinson defined the niche as an n-dimensional hyper-volume whose dimensions correspond to resource gradients over which species are distributed in a unimodal fashion. In this we see that the operational definition of his niche rests on the fact that a species is needed in order to rationally define a niche in the first place. This fact did not stop Hutchinson from making statements inconsistent with it, such as: “The question raised by cases like this is whether the three Nilghiri Corixinae fill all the available niches ... or whether there are really empty niches. ... The rapid spread of introduced species often gives evidence of empty niches,
https://en.wikipedia.org/wiki/Tree%20moss
Tree moss is a common name for several organisms and may refer to: Climacium, a genus of mosses which resemble miniature trees Climacium dendroides, a common species of Climacium Evernia, a genus of lichens which grow on trees Usnea, a genus of lichens which grow on trees
https://en.wikipedia.org/wiki/Set%20theory%20of%20the%20real%20line
Set theory of the real line is an area of mathematics concerned with the application of set theory to aspects of the real numbers. For example, one knows that all countable sets of reals are null, i.e. have Lebesgue measure 0; one might therefore ask the least possible size of a set which is not Lebesgue null. This invariant is called the uniformity of the ideal of null sets, denoted non(N). There are many such invariants associated with this and other ideals, e.g. the ideal of meagre sets, plus more which do not have a characterisation in terms of ideals. If the continuum hypothesis (CH) holds, then all such invariants are equal to ℵ1, the least uncountable cardinal. For example, we know non(N) is uncountable, but, being the size of some set of reals, under CH it can be at most ℵ1. On the other hand, if one assumes Martin's Axiom (MA), all common invariants are "big", that is equal to 𝔠, the cardinality of the continuum. Martin's Axiom is consistent with 𝔠 > ℵ1. In fact one should view Martin's Axiom as a forcing axiom that negates the need to do specific forcings of a certain class (those satisfying the ccc), since the consistency of MA with large continuum is proved by doing all such forcings (up to a certain size shown to be sufficient). Each invariant can be made large by some ccc forcing, thus each is big given MA. If one restricts to specific forcings, some invariants will become big while others remain small. Analysing these effects is the major work of the area, seeking to determine which inequalities between invariants are provable and which are inconsistent with ZFC. The inequalities among the ideals of measure (null sets) and category (meagre sets) are captured in Cichoń's diagram. Seventeen models (forcing constructions) were produced during the 1980s, starting with work of Arnold Miller, to demonstrate that no other inequalities are provable. These are analysed in detail in the book by Tomek Bartoszyński and Haim Judah, two of the eminent workers in the field. One curious
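For orientation, the ZFC-provable inequalities that Cichoń's diagram records can be written out explicitly; the following LaTeX snippet lists them in what I take to be the standard presentation (here 𝒩 is the null ideal, ℳ the meagre ideal, and 𝔟, 𝔡 the bounding and dominating numbers), reproduced from the usual form of the diagram rather than from this article's truncated text.

```latex
\begin{aligned}
&\aleph_1 \le \operatorname{add}(\mathcal N) \le \operatorname{add}(\mathcal M)
  \le \operatorname{cov}(\mathcal M) \le \operatorname{non}(\mathcal N)
  \le \operatorname{cof}(\mathcal N) \le \mathfrak c, \\
&\operatorname{add}(\mathcal N) \le \operatorname{cov}(\mathcal N)
  \le \operatorname{non}(\mathcal M) \le \operatorname{cof}(\mathcal M)
  \le \operatorname{cof}(\mathcal N), \\
&\operatorname{add}(\mathcal M) \le \mathfrak b \le \mathfrak d
  \le \operatorname{cof}(\mathcal M), \qquad
  \mathfrak b \le \operatorname{non}(\mathcal M), \qquad
  \operatorname{cov}(\mathcal M) \le \mathfrak d, \\
&\operatorname{add}(\mathcal M) = \min(\mathfrak b, \operatorname{cov}(\mathcal M)),
  \qquad
  \operatorname{cof}(\mathcal M) = \max(\mathfrak d, \operatorname{non}(\mathcal M)).
\end{aligned}
```

Under CH every entry collapses to ℵ1; under MA every entry equals 𝔠.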
https://en.wikipedia.org/wiki/Undervalued%20stock
An undervalued stock is defined as a stock that is selling at a price significantly below what is assumed to be its intrinsic value. For example, if a stock is selling for $50, but it is worth $100 based on predictable future cash flows, then it is an undervalued stock. In other words, an undervalued stock trades at a market price below the investment's true intrinsic value. Numerous popular books discuss undervalued stocks. Examples are The Intelligent Investor by Benjamin Graham, also known as "The Dean of Wall Street," and The Warren Buffett Way by Robert Hagstrom. The Intelligent Investor puts forth Graham's principles that are based on mathematical calculations such as the price/earnings ratio. He was less concerned with the qualitative aspects of a business such as the nature of a business, its growth potential and its management. For example, Amazon, Facebook, Netflix and Tesla in 2016, although they had a promising future, would not have appealed to Graham, since their price-earnings ratios were too high. Graham's ideas had a significant influence on the young Warren Buffett, who later became a famous US billionaire. Determining factors Morningstar uses five factors to determine when something is a value stock, namely: price/prospective earnings (a predictive version of price/earnings ratio sometimes called Forward P/E) price/book price/sales price/cash flow dividend yield Warren Buffett, also known as "The Oracle of Omaha," stated that the value of a business is the sum of the cash flows over the life of the business discounted at an appropriate interest rate. This is in reference to the ideas of John Burr Williams. Therefore, one would not be able to predict whether a stock is undervalued without predicting the future profits of a company and future interest rates. Buffett stated that he is interested in predictable businesses and he uses the interest rate on the 10-year treasury bond in his calculations. Therefore, an investor has to be fairly certain that a c
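Buffett's description is essentially a discounted-cash-flow (DCF) calculation; here is a minimal sketch with invented numbers (the per-share cash flows and the 4% rate standing in for a 10-year treasury yield are assumptions for illustration, not real data).

```python
def intrinsic_value(cash_flows, rate):
    """Sum of future cash flows discounted back to today at `rate`."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Hypothetical: $10 of owner earnings per share each year for 10 years,
# no growth, discounted at a 4% ten-year treasury yield.
value = intrinsic_value([10.0] * 10, 0.04)
print(f"intrinsic value per share: ${value:.2f}")   # about $81.11

# Against a $50 market price, this (deliberately simplistic) model would
# flag the stock as undervalued; the result hinges entirely on how
# predictable the projected cash flows and rates actually are.
```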
https://en.wikipedia.org/wiki/Ecological%20goods%20and%20services
Ecological goods and services (EG&S) are the economic benefits (goods and services) arising from the ecological functions of ecosystems. Such benefits accrue to all living organisms, including animals and plants, rather than to humans alone. However, there is a growing recognition of the importance to society that ecological goods and services provide for health, social, cultural, and economic needs. Introduction Examples of ecological goods include clean air, and abundant fresh water. Examples of ecological services include purification of air and water, maintenance of biodiversity, decomposition of wastes, soil and vegetation generation and renewal, pollination of crops and natural vegetation, groundwater recharge through wetlands, seed dispersal, greenhouse gas mitigation, and aesthetically pleasing landscapes. The products and processes of ecological goods and services are complex and occur over long periods of time. They are a sub-category of public goods. The concern over ecological goods and services arises because we are losing them at an unsustainable rate, and therefore land use managers must devise a host of tools to encourage the provision of more ecological goods and services. Rural and suburban settings are especially important, as lands that are developed and converted from their natural state lose their ecological functions. Therefore, ecological goods and services provided by privately held lands become increasingly important. Markets A market may be created wherein ecological goods and services are demanded by society and supplied by public and private landowners. Some believe that public lands alone are not adequate to supply this market, and that privately held lands are needed to close this gap. What has emerged is the notion that rural landowners who provide ecological goods and services to society through good stewardship practices on their land should be duly compensated. The main tool to accomplish this to date has been to pay farmers
https://en.wikipedia.org/wiki/Homomorphic%20encryption
Homomorphic encryption is a form of encryption that allows computations to be performed on encrypted data without first having to decrypt it. The resulting computations are left in an encrypted form which, when decrypted, results in an output that is identical to that produced had the operations been performed on the unencrypted data. Homomorphic encryption can be used for privacy-preserving outsourced storage and computation. This allows data to be encrypted and outsourced to commercial cloud environments for processing, all while encrypted. Homomorphic encryption eliminates the need for processing data in the clear, thereby preventing attacks that would enable a hacker to access that data while it is being processed, using privilege escalation. For sensitive data, such as health care information, homomorphic encryption can be used to enable new services by removing privacy barriers inhibiting data sharing or increasing security to existing services. For example, predictive analytics in health care can be hard to apply via a third party service provider due to medical data privacy concerns, but if the predictive analytics service provider can operate on encrypted data instead, these privacy concerns are diminished. Moreover, even if the service provider's system is compromised, the data would remain secure. Description Homomorphic encryption is a form of encryption with an additional evaluation capability for computing over encrypted data without access to the secret key. The result of such a computation remains encrypted. Homomorphic encryption can be viewed as an extension of public-key cryptography. Homomorphic refers to homomorphism in algebra: the encryption and decryption functions can be thought of as homomorphisms between plaintext and ciphertext spaces. Homomorphic encryption includes multiple types of encryption schemes that can perform different classes of computations over encrypted data. The computations are represented as either Boolean or arith
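As a toy illustration of the underlying algebra (not of any modern scheme): unpadded "textbook" RSA is multiplicatively homomorphic, so multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. The tiny key below is for demonstration only and is completely insecure.

```python
# Textbook RSA with a toy key: n = 61 * 53 = 3233, e = 17, d = 2753.
n, e, d = 3233, 17, 2753

def enc(m: int) -> int:
    return pow(m, e, n)       # c = m^e mod n

def dec(c: int) -> int:
    return pow(c, d, n)       # m = c^d mod n

m1, m2 = 7, 11
c1, c2 = enc(m1), enc(m2)

# Multiply the ciphertexts without ever decrypting them...
c_prod = (c1 * c2) % n

# ...and decryption yields the product of the plaintexts (mod n),
# since (m1^e)(m2^e) = (m1*m2)^e (mod n).
assert dec(c_prod) == (m1 * m2) % n
print(dec(c_prod))            # 77
```

Schemes like this, which support a single operation, are called partially homomorphic; fully homomorphic encryption, supporting arbitrary combinations of addition and multiplication, was first realized by Craig Gentry in 2009.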
https://en.wikipedia.org/wiki/Segregate%20%28taxonomy%29
In taxonomy, a segregate, or segregate taxon, is created when a taxon is split off from another taxon. This other taxon will be better known, usually bigger, and will continue to exist, even after the segregate taxon has been split off. A segregate will be either new or ephemeral: there is a tendency for taxonomists to disagree on segregates, and later workers often reunite a segregate with the 'mother' taxon. If a segregate is generally accepted as a 'good' taxon it ceases to be a segregate. Thus, this is a way of indicating change in the taxonomic status. It should not be confused with, for example, the subdivision of a genus into subgenera. For example, the genus Alsobia is a segregate from the genus Episcia; the genera Filipendula and Aruncus are segregates from the genus Spiraea.
https://en.wikipedia.org/wiki/Noga%20Alon
Noga Alon (born 1956) is an Israeli mathematician and a professor of mathematics at Princeton University noted for his contributions to combinatorics and theoretical computer science, having authored hundreds of papers. Education and career Alon was born in 1956 in Haifa, where he graduated from the Hebrew Reali School in 1974. He graduated summa cum laude from the Technion – Israel Institute of Technology in 1979, earned a master's degree in mathematics in 1980 from Tel Aviv University, and received his Ph.D. in Mathematics at the Hebrew University of Jerusalem in 1983 with the dissertation Extremal Problems in Combinatorics supervised by Micha Perles. After postdoctoral research at the Massachusetts Institute of Technology he returned to Tel Aviv University as a senior lecturer in 1985, obtained a permanent position as an associate professor there in 1986, and was promoted to full professor in 1988. He was head of the School of Mathematical Science from 1999 to 2001, and was given the Florence and Ted Baumritter Combinatorics and Computer Science Chair, before retiring as professor emeritus and moving to Princeton University in 2018. He was editor-in-chief of the journal Random Structures and Algorithms beginning in 2008. Research Alon has published more than five hundred research papers, mostly in combinatorics and in theoretical computer science, and one book, on the probabilistic method. He has also published under the pseudonym "A. Nilli", based on the name of his daughter Nilli Alon. His research contributions include the combinatorial Nullstellensatz, an algebraic tool with many applications in combinatorics; color-coding, a technique for fixed-parameter tractability of pattern-matching algorithms in graphs; and the Alon–Boppana bound in spectral graph theory. Selected works Book The Probabilistic Method, with Joel Spencer, Wiley, 1992. 2nd ed., 2000; 3rd ed., 2008; 4th ed., 2016. Research articles Previously in the ACM Symposium on Theory o
https://en.wikipedia.org/wiki/Monitor%20filter
A monitor filter is an accessory to the computer display to filter out the light reflected from the smooth glass surface of a CRT or flat panel display. Many also include a ground to dissipate static buildup. A secondary use for monitor filters is privacy, as they decrease the viewing angle of a monitor, preventing it from being viewed from the side; in this case, they are also called privacy screens. The standard type of anti-glare filter consists of a coating that reduces the reflection from a glass or plastic surface. These are manufactured from polycarbonate or acrylic plastic. An older variety of anti-glare filter used a mesh filter that had the appearance of a nylon screen. Although effective, a mesh filter also caused degradation of the image quality. Marketing names of privacy filters include: HP's "SureView" Lenovo's "PrivacyGuard" Dell's "SafeScreen" Support for privacy screens has been available since Linux kernel 5.17, which exposes them through the Direct Rendering Manager; this support is used by GNOME 42.
https://en.wikipedia.org/wiki/Pallister%E2%80%93Hall%20syndrome
Pallister–Hall syndrome (PHS) is a rare genetic disorder that affects various body systems. The main features are a non-cancerous mass on the hypothalamus (hypothalamic hamartoma) and extra digits (polydactyly). The prevalence of Pallister–Hall syndrome is unknown; about 100 cases have been reported in publication. History The syndrome was originally described by American and Canadian geneticists Philip Pallister and Judith Hall in their research of newborn deaths due to pituitary failure. Subsequent discovery of living children and adults expanded the understanding of the syndrome and established the transmission pattern within families. Presentation The main characteristics of the syndrome are extra fingers and/or toes (polydactyly), with the skin between some fingers or toes potentially fused or "webbed" (cutaneous syndactyly), and a benign mass or lesion in the brain called a hypothalamic hamartoma. This benign tumor may not cause any medical problems; however, some hypothalamic hamartomas lead to seizures or hormone abnormalities. Other features of Pallister–Hall syndrome include a split opening of the airway called bifid epiglottis, laryngeal cleft, blockage of the anal opening (imperforate anus), and kidney abnormalities. Signs and symptoms of this disorder vary from mild to severe. Seizures The most common type of seizure that occurs in this disorder is known as gelastic epilepsy, or "laughing" seizures. Seizures may begin at any age but usually before three or four years of age. The seizures usually start with laughter described as being "hollow" or "empty" and unpleasant. The laughter occurs suddenly for no obvious reason and is often out of place. Other traditional seizure types such as tonic-clonic and absence seizures may also develop due to temporal lobe epilepsy. People with Pallister–Hall may experience less severe seizures than people with a hypothalamic hamartoma only. Transmission Pallister–Hall syndrome occurs due to a mutation in the G
https://en.wikipedia.org/wiki/Pronunciation%20assessment
Automatic pronunciation assessment is the use of speech recognition to verify the correctness of pronounced speech, as distinguished from manual assessment by an instructor or proctor. Also called speech verification, pronunciation evaluation, and pronunciation scoring, the main application of this technology is computer-aided pronunciation teaching (CAPT) when combined with computer-aided instruction for computer-assisted language learning (CALL), speech remediation, or accent reduction. Pronunciation assessment does not determine unknown speech (as in dictation or automatic transcription) but instead, knowing the expected word(s) in advance, it attempts to verify the correctness of the learner's pronunciation and ideally their intelligibility to listeners, sometimes along with often inconsequential prosody such as intonation, pitch, tempo, rhythm, and stress. Pronunciation assessment is also used in reading tutoring, for example in products such as Microsoft Teams and from Amira Learning. Automatic pronunciation assessment can also be used to help diagnose and treat speech disorders such as apraxia. The earliest work on pronunciation assessment avoided measuring genuine listener intelligibility, a shortcoming corrected in 2011 at the Toyohashi University of Technology, and included in the Versant high-stakes English fluency assessment from Pearson and mobile apps from 17zuoye Education & Technology, but still missing in 2023 products from Google Search, Microsoft, Educational Testing Service, Speechace, and ELSA. Assessing authentic listener intelligibility is essential for avoiding inaccuracies from accent bias, especially in high-stakes assessments; from words with multiple correct pronunciations; and from phoneme coding errors in machine-readable pronunciation dictionaries. In the Common European Framework of Reference for Languages (CEFR) assessment criteria for "overall phonological control", intelligibility outweighs formally correct pronunciation at all le
https://en.wikipedia.org/wiki/Multiseat%20configuration
A multiseat, multi-station or multiterminal system is a single computer which supports multiple independent local users at the same time. A "seat" consists of all hardware devices assigned to a specific workplace at which one user sits and interacts with the computer. It consists of at least one graphics device (graphics card or just an output (e.g. HDMI/VGA/DisplayPort port) and the attached monitor/video projector) for the output and a keyboard and a mouse for the input. It can also include video cameras, sound cards and more. Motivation Since the 1960s computers have been shared between users. Especially in the early days of computing when computers were extremely expensive, the usual paradigm was a central mainframe computer connected to numerous terminals. With the advent of personal computing this paradigm has been largely replaced by personal computers (or one computer per user). Multiseat setups are a return to this multiuser paradigm but based around a PC which supports a number of zero-clients usually consisting of a terminal per user (screen, keyboard, mouse). In some situations a multiseat setup is more cost-effective because it is not necessary to buy separate motherboards, microprocessors, RAM, hard disks and other components for each user. For example, buying one high-speed CPU usually costs less than buying several slower CPUs. History In the 1970s, it was very commonplace to connect multiple computer terminals to a single mainframe computer, even graphical terminals. Early terminals were connected with RS-232 type serial connections, either directly, or through modems. With the advent of Internet Protocol based networking, it became possible for multiple users to log into a host using telnet or – for a graphic environment – an X Window System "server". These systems would retain a physically secure "root console" for system administration and direct access to the host machine. Support for multiple consoles in a PC running the X interface w
https://en.wikipedia.org/wiki/Periodic%20point
In mathematics, in the study of iterated functions and dynamical systems, a periodic point of a function is a point which the system returns to after a certain number of function iterations or a certain amount of time. Iterated functions Given a mapping f from a set X into itself, a point x in X is called a periodic point if there exists an n > 0 so that f^n(x) = x, where f^n is the nth iterate of f. The smallest positive integer n satisfying the above is called the prime period or least period of the point x. If every point in X is a periodic point with the same period n, then f is called periodic with period n (this is not to be confused with the notion of a periodic function). If there exist distinct n and m such that f^n(x) = f^m(x), then x is called a preperiodic point. All periodic points are preperiodic. If f is a diffeomorphism of a differentiable manifold, so that the derivative (f^n)′ is defined, then one says that a periodic point x of period n is hyperbolic if |(f^n)′(x)| ≠ 1, that it is attractive if |(f^n)′(x)| < 1, and that it is repelling if |(f^n)′(x)| > 1. If the dimension of the stable manifold of a periodic point or fixed point is zero, the point is called a source; if the dimension of its unstable manifold is zero, it is called a sink; and if both the stable and unstable manifold have nonzero dimension, it is called a saddle or saddle point. Examples A period-one point is called a fixed point. The logistic map x_{t+1} = r x_t (1 − x_t) exhibits periodicity for various values of the parameter r. For r between 0 and 1, 0 is the sole periodic point, with period 1 (giving the sequence 0, 0, 0, ..., which attracts all orbits). For r between 1 and 3, the value 0 is still periodic but is not attracting, while the value (r − 1)/r is an attracting periodic point of period 1. With r greater than 3 but less than 1 + √6 ≈ 3.449, there are a pair of period-2 points which together form an attracting sequence, as well as the non-attracting period-1 points 0 and (r − 1)/r. As the value of parameter r rises toward 4, there arise groups of periodic points with any positive integer for the period; for some values of r, one of these repeating sequences is
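A quick numerical check of the period-2 claim, assuming the illustrative value r = 3.2 (which lies between 3 and 1 + √6): iterating the logistic map from a generic starting point settles onto the attracting 2-cycle.

```python
r = 3.2                          # between 3 and 1 + sqrt(6), so a 2-cycle attracts
f = lambda x: r * x * (1 - x)    # the logistic map

x = 0.123                        # generic initial condition in (0, 1)
for _ in range(1000):            # iterate long enough to discard the transient
    x = f(x)

print(sorted([x, f(x)]))         # the 2-cycle, approx [0.513045, 0.799455]
assert abs(f(f(x)) - x) < 1e-9   # period 2: applying f twice returns to x
```

The non-attracting fixed points 0 and (r − 1)/r = 0.6875 never show up in this experiment, precisely because they repel nearby orbits.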
https://en.wikipedia.org/wiki/Schedule
A schedule or a timetable, as a basic time-management tool, consists of a list of times at which possible tasks, events, or actions are intended to take place, or of a sequence of events in the chronological order in which such things are intended to take place. The process of creating a schedule — deciding how to order these tasks and how to commit resources between the variety of possible tasks — is called scheduling, and a person responsible for making a particular schedule may be called a scheduler. Making and following schedules is an ancient human activity. Some scenarios associate this kind of planning with learning life skills. Schedules are necessary, or at least useful, in situations where individuals need to know what time they must be at a specific location to receive a specific service, and where people need to accomplish a set of goals within a set time. Schedules can usefully span both short periods, such as a daily or weekly schedule, and long-term planning for periods of several months or years. They are often made using a calendar, where the person making the schedule can note the dates and times at which various events are planned to occur. Schedules that do not set forth specific times for events to occur may instead list algorithmically an expected order in which events either can or must take place. In some situations, schedules can be uncertain, such as where the conduct of daily life relies on environmental factors outside human control. People who are vacationing or otherwise seeking to reduce stress and achieve relaxation may intentionally avoid having a schedule for a certain period of time. Kinds of schedules Publicly available schedules Certain kinds of schedules reflect information that is generally made available to the public, so that members of the public can plan certain activities around them. These may include things like: Hours of operation of businesses, tourist attractions, and government offices, which allow consumers of
https://en.wikipedia.org/wiki/Temporal%20fascia
The temporal fascia (or deep temporal fascia) is a fascia of the head that covers the temporalis muscle and structures situated superior to the zygomatic arch. The fascia is attached superiorly at the superior temporal line; inferiorly, it splits into two layers at the superior border of the zygomatic arch - the superficial layer then attaches to the lateral aspect of the superior border of the arch, and the deep layer to its medial aspect. The space between the two layers is occupied by adipose tissue and contains a branch of the superficial temporal artery, and the zygomaticotemporal nerve. Anatomy The temporal fascia is a strong fibrous investment. Structure Superiorly, it is a single layer, attached to the entire extent of the superior temporal line. Inferiorly, where it is fixed to the zygomatic arch, it consists of two layers, one of which is inserted into the lateral, and the other into the medial border of the arch. Contents A small quantity of fat, the orbital branch of the superficial temporal artery, and a filament from the zygomatic branch of the maxillary nerve, are contained between the two layers created by the inferior split of the fascia. Attachments The superficial fibers of the temporalis muscle attach onto the deep surface of the temporal fascia. Relations Superficial to the temporal fascia are the auricularis anterior muscle and superior auricular muscle, the galea aponeurotica, and (part of) the orbicularis oculi muscle. The superficial temporal vessels and the auriculotemporal nerve cross it inferoposteriorly. The parotid fascia proceeds to the temporal fascia.
https://en.wikipedia.org/wiki/Conjugacy%20problem
In abstract algebra, the conjugacy problem for a group G with a given presentation is the decision problem of determining, given two words x and y in G, whether or not they represent conjugate elements of G. That is, the problem is to determine whether there exists an element z of G such that zxz⁻¹ = y. The conjugacy problem is also known as the transformation problem. The conjugacy problem was identified by Max Dehn in 1911 as one of the fundamental decision problems in group theory; the other two being the word problem and the isomorphism problem. The conjugacy problem contains the word problem as a special case: if x and y are words, deciding if they represent the same element of G is equivalent to deciding if xy⁻¹ represents the identity, which is the same as deciding if it is conjugate to the identity. In 1912 Dehn gave an algorithm that solves both the word and conjugacy problem for the fundamental groups of closed orientable two-dimensional manifolds of genus greater than or equal to 2 (the genus 0 and genus 1 cases being trivial). It is known that the conjugacy problem is undecidable for many classes of groups. Classes of group presentations for which it is known to be soluble include: free groups (no defining relators) one-relator groups with torsion braid groups knot groups finitely presented conjugacy separable groups finitely generated abelian groups (relators include all commutators) Gromov-hyperbolic groups biautomatic groups CAT(0) groups Fundamental groups of geometrizable 3-manifolds
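In any finite group the problem is trivially decidable by exhaustive search over candidate conjugators; the sketch below does this for permutations in the symmetric group S_n (a toy setting chosen only to make the definition zxz⁻¹ = y concrete; the hard cases are infinite finitely presented groups, where no such search can terminate).

```python
from itertools import permutations

def compose(a, b):
    """Product a∘b of permutations of {0, ..., n-1}: apply b first, then a."""
    return tuple(a[b[i]] for i in range(len(a)))

def inverse(a):
    inv = [0] * len(a)
    for i, v in enumerate(a):
        inv[v] = i
    return tuple(inv)

def find_conjugator(x, y):
    """Return some z with z x z^(-1) == y, or None if x, y are not conjugate."""
    for z in permutations(range(len(x))):
        if compose(compose(z, x), inverse(z)) == y:
            return z
    return None

x = (1, 0, 2, 3)   # the transposition (0 1) in S_4
y = (0, 1, 3, 2)   # the transposition (2 3): same cycle type, hence conjugate
print(find_conjugator(x, y))   # -> (2, 3, 0, 1)
```

In S_n two permutations are conjugate exactly when they have the same cycle type, so the search here is pure demonstration; for a general finitely presented infinite group no algorithm exists at all.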
https://en.wikipedia.org/wiki/Lateral%20grey%20column
The lateral grey column (lateral column, lateral cornu, lateral horn of spinal cord, intermediolateral column) is one of the three grey columns of the spinal cord (which together give the grey matter its butterfly shape); the others being the anterior and posterior grey columns. The lateral grey column is primarily involved with activity in the sympathetic division of the autonomic motor system. It projects to the side as a triangular field in the thoracic and upper lumbar regions (specifically T1-L2) of the postero-lateral part of the anterior grey column. Background information Nervous system The nervous system is the system of neurons, or nerve cells, that relay electrical signals through the brain and body. A nerve cell receives signals from other nerve cells through tree-branch-like extensions called dendrites and passes signals through a long extension called an axon (or nerve fiber). Synapses are places where one cell's axon passes information to another cell's dendrite by sending chemicals called neurotransmitters across a small gap called a synaptic cleft. Synapses occur in various locations, including ganglia (singular: ganglion), which are masses of nerve cell bodies. Preganglionic nerve cells in the sympathetic nervous system (all of which come from the lateral grey column) use the neurotransmitter acetylcholine, while postganglionic sympathetic nerve cells use norepinephrine. Grey matter in the brain and spinal cord is any accumulation of cell bodies and neuropil (neuropil is tissue rich in nerve cell bodies and dendrites). White matter consists of nerve tracts (groups of axons) and commissures (tracts that cross the brain's midline). Sympathetic nervous system The nervous system is divided into the central nervous system (brain and spinal cord) and the peripheral nervous system (everything else). The peripheral nervous system is divided into the somatic nervous system (voluntary processes) and the autonomic nervous system (involuntary processes). The autonomic nervou
https://en.wikipedia.org/wiki/Interspecies%20communication
Interspecies communication is communication between different species of animals, plants, or microorganisms. Mutualism Cooperative interspecies communication implies sharing and understanding information between two or more species that work towards the benefit of both species (mutualism). Since the 1970s, primatologist Sue Savage-Rumbaugh has been working with primates at Georgia State University's Language Research Center (LRC), and more recently, the Iowa Primate Learning Sanctuary. In 1985, using lexigram symbols, a keyboard and monitor, and other computer technology, Savage-Rumbaugh began her groundbreaking work with Kanzi, a male bonobo (P. paniscus). Her research has made significant contributions to a growing body of work in sociobiology studying language learning in non-human primates and exploring the role of language and communication as an evolutionary mechanism. Koko, a lowland gorilla, began learning a modified American Sign Language as an infant, when Francine "Penny" Patterson, PhD, started working with her in 1975. Penny and Koko worked together at the Gorilla Foundation in one of the longest interspecies communication studies ever conducted, until Koko's death in 2018. Koko had a vocabulary of over 1000 signs, and understood an even greater amount of spoken English. In April 1998, Koko took part in an AOL live chat. Sign language was used to relay to Koko questions from the online audience of 7,811 AOL members. The following is an excerpt from the live chat. AOL: MInyKitty asks Koko are you going to have a baby in the future? PENNY: OK, is that for Koko? Koko are you going to have a baby in the future? KOKO: Koko-love eat ... sip. AOL: Me too! PENNY: What about a baby? You going to have baby? She's just thinking...her hands are together... KOKO: Unattention. PENNY: Oh poor sweetheart. She said 'unattention.' She covered her face with her hands..which means it's not happening, basically, or it hasn't happened yet. . . I don't see it. AOL: That's sad! PEN
https://en.wikipedia.org/wiki/U.S.%20National%20Vegetation%20Classification
The U.S. National Vegetation Classification (NVC or USNVC) is a scheme for classifying the natural and cultural vegetation communities of the United States. The purpose of this standardized vegetation classification system is to facilitate communication between land managers, scientists, and the public when managing, researching, and protecting plant communities. The non-profit group NatureServe maintains the NVC for the U.S. government. See also British National Vegetation Classification Vegetation classification External links The U.S. National Vegetation Classification website "National Vegetation Classification Standard, Version 2" FGDC-STD-005-2008, Vegetation Subcommittee, Federal Geographic Data Committee, February 2008 U.S. Geological Survey page about the Vegetation Characterization Program Federal Geographic Data Committee page about the NVC Environment of the United States Flora of the United States NatureServe Biological classification
https://en.wikipedia.org/wiki/Hydraulic%20seal
A hydraulic seal is a relatively soft, non-metallic ring, captured in a groove or fixed in a combination of rings forming a seal assembly, to block or separate fluid in reciprocating-motion applications. Hydraulic seals are vital in machinery: their use is critical in providing a way for fluid power to be converted to linear motion. Materials Hydraulic seals can be made from a variety of materials such as polyurethane, rubber or PTFE. The type of material is determined by the specific operating conditions or limits due to fluid type, pressure, fluid chemical compatibility or temperature. Static A static hydraulic seal is located in a groove and sees no movement; it seals only within its confined space, acting like a gasket. To achieve this, the gasket must be kept under pressure, which is applied by tightening the bolts. Dynamic A type of dynamic hydraulic seal called a rod seal is exposed to movement on its inner diameter along the shaft or rod of a hydraulic cylinder. Piston seals prevent fluid from crossing the area of the piston head. Rod seals ensure that fluid does not leak from the cylinder and that adequate pressure is maintained. Wiper seals are installed to prevent contamination from entering the hydraulic system. A rod seal prevents leakage of hydraulic fluid to the outside of the sealing system and, in combination with a wiper seal, helps prevent contamination of the environment. A type of dynamic hydraulic seal called a piston seal is exposed to movement on its outer diameter along the tube or bore of a hydraulic cylinder. Different piston seal profiles offer distinct advantages depending on the sealing application; temperature, pressure, stroke speed and other environmental considerations lead users to different profiles and materials.
https://en.wikipedia.org/wiki/Anticarcinogen
An anticarcinogen (also known as a carcinopreventive agent) is a substance that counteracts the effects of a carcinogen or inhibits the development of cancer. Anticarcinogens are different from anticarcinoma agents (also known as anticancer or anti-neoplastic agents) in that anticarcinoma agents are used to selectively destroy or inhibit cancer cells after cancer has developed. Interest in anticarcinogens is motivated primarily by the principle that preventing disease (preventive medicine) is preferable to having to treat it (rescue medicine). In theory, anticarcinogens may act via different mechanisms, including enhancement of natural defences against cancer, deactivation of carcinogens, and blocking of the mechanisms by which carcinogens act (such as free radical damage to DNA). Confirmation that a substance possesses anticarcinogenic activity requires extensive in vitro, in vivo, and clinical investigation. Health claims for anticarcinogens are regulated by various national and international organizations, such as the US Food and Drug Administration (FDA) and the European Food Safety Authority (EFSA). See also Mutagen Carcinogenesis
https://en.wikipedia.org/wiki/Pfeilstorch
A Pfeilstorch (German for "arrow stork"; plural Pfeilstörche) is a stork that gets injured by an arrow while wintering in Africa and returns to Europe with the arrow stuck in its body. As of 2003, about 25 have been documented in Germany. The first and most famous was a white stork found in 1822 near the German village of Klütz, in the state of Mecklenburg-Vorpommern. It was carrying a spear from central Africa in its neck. The specimen was stuffed and can be seen today in the zoological collection of the University of Rostock; it is therefore referred to as the Rostocker Pfeilstorch. This specimen was crucial in understanding the migration of European birds. Before migration was understood, people struggled to explain the sudden annual disappearance of birds like the white stork and barn swallow. Besides migration, some theories of the time held that they turned into other kinds of birds or mice, or hibernated underwater during the winter, and such theories were even propagated by zoologists of the day. The Rostocker Pfeilstorch in particular proved that birds migrate long distances to wintering grounds.
https://en.wikipedia.org/wiki/Switched%20capacitor
A switched capacitor (SC) is an electronic circuit that implements a function by moving charges into and out of capacitors when electronic switches are opened and closed. Usually, non-overlapping clock signals are used to control the switches, so that not all switches are closed simultaneously. Filters implemented with these elements are termed switched-capacitor filters, which depend only on the ratios between capacitances and the switching frequency, and not on precise resistors. This makes them much more suitable for use within integrated circuits, where accurately specified resistors and capacitors are not economical to construct, but accurate clocks and accurate relative ratios of capacitances are economical. SC circuits are typically implemented using metal–oxide–semiconductor (MOS) technology, with MOS capacitors and MOS field-effect transistor (MOSFET) switches, and they are commonly fabricated using the complementary MOS (CMOS) process. Common applications of MOS SC circuits include mixed-signal integrated circuits, digital-to-analog converter (DAC) chips, analog-to-digital converter (ADC) chips, pulse-code modulation (PCM) codec-filters, and PCM digital telephony. Parallel resistor simulation using a switched capacitor The simplest switched-capacitor (SC) circuit is made of one capacitor C and two switches S1 and S2 which alternately connect the capacitor to either in or out at a switching frequency of f_s. Recall that Ohm's law expresses the relationship between voltage, current, and resistance as V = I·R. The following equivalent resistance calculation will show how, during each switching cycle, this switched-capacitor circuit transfers an amount of charge from in to out such that it behaves according to a similar linear current–voltage relationship, with R_equivalent = 1/(C·f_s). Equivalent resistance calculation By definition, the charge q on any capacitor of capacitance C with a voltage V between its plates is q = C·V. Therefore, when S1 is closed while S2 is open, the charge stored in the capacitor will b
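The charge-transfer argument above lends itself to a quick numerical check. The following Python sketch (the component values and function names are illustrative assumptions, not from the article) computes the average current moved per second and confirms that it matches R_equivalent = 1/(C·f_s):

```python
# Minimal sketch: numerically checking that a switched capacitor
# behaves like a resistor with R = 1 / (C * f_s).
def switched_cap_current(v_in, v_out, capacitance, f_switch):
    """Average current moved from `in` to `out` per second.

    Each cycle the capacitor charges to v_in (S1 closed), then
    discharges to v_out (S2 closed), transferring q = C * (v_in - v_out).
    """
    charge_per_cycle = capacitance * (v_in - v_out)
    return charge_per_cycle * f_switch  # I = q * f_s

C = 1e-12        # 1 pF (assumed value for the demo)
f_s = 100e3      # 100 kHz switching frequency (assumed)
v_in, v_out = 5.0, 0.0

i_avg = switched_cap_current(v_in, v_out, C, f_s)
r_equiv = (v_in - v_out) / i_avg
print(f"I_avg = {i_avg:.3e} A, R_equiv = {r_equiv:.3e} ohms")
print(f"1/(C*f_s) = {1 / (C * f_s):.3e} ohms")  # matches R_equiv
```

Because only the product C·f_s matters, a designer can trade capacitor size against clock rate, which is one reason large effective resistances are cheap to realize on-chip this way.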
https://en.wikipedia.org/wiki/Fairy%20Flag
The Fairy Flag (Scottish Gaelic: Am Bratach Sìth) is an heirloom of the chiefs of Clan MacLeod. It is held in Dunvegan Castle along with other notable heirlooms, such as the Dunvegan Cup and Sir Rory Mor's Horn. The Fairy Flag is known for the numerous traditions of Celtic fairies and magical properties associated with it. The flag is made of silk, is yellow or brown in colour, and is a square of side about . It has been examined numerous times in the last two centuries, and its condition has somewhat deteriorated. It is ripped and tattered, and is considered to be extremely fragile. The flag is covered in small red "elf dots". In the early part of the 19th century, the flag was also marked with small crosses, but these have since disappeared. The silk of the flag has been stated to have originated in the Far East, and was therefore extremely precious, which led some to believe that the flag may have been an important relic of some sort. Others have attempted to associate the flag with the Crusades or even a raven banner, which was said to have been used by various Viking leaders in the British Isles. There are numerous traditions and stories associated with the flag, most of which deal with its magical properties and mysterious origins. The flag is said to have originated as: a gift from the fairies to an infant chieftain; a gift to a chief from a departing fairy-lover; or a reward for defeating an evil spirit. The various powers attributed to the Fairy Flag include: the ability to multiply a clan's military forces; the ability to save the lives of certain clanfolk; the ability to cure a plague on cattle; the ability to increase the chances of fertility; and the ability to bring herring into the loch at Dunvegan. Some traditions relate that if the flag were to be unfurled and waved more than three times, it would either vanish or lose its powers forever. Clan tradition, preserved in the early 19th century, tells how the Fairy Flag was entrusted to a family of here
https://en.wikipedia.org/wiki/Relative%20velocity
The relative velocity is the velocity of an object or observer B in the rest frame of another object or observer A. Classical mechanics In one dimension (non-relativistic) We begin with relative motion in the classical (or non-relativistic, Newtonian) approximation, in which all speeds are much less than the speed of light. This limit is associated with the Galilean transformation. The figure shows a man on top of a train, at the back edge. At 1:00 pm he begins to walk forward at a walking speed of 10 km/h (kilometers per hour). The train is moving at 40 km/h. The figure depicts the man and train at two different times: first, when the journey began, and also one hour later at 2:00 pm. The figure suggests that the man is 50 km from the starting point after having traveled (by walking and by train) for one hour. This, by definition, is 50 km/h, which suggests that the prescription for calculating relative velocity in this fashion is to add the two velocities. The diagram displays clocks and rulers to remind the reader that while the logic behind this calculation seems flawless, it makes false assumptions about how clocks and rulers behave. (See The train-and-platform thought experiment.) To recognize that this classical model of relative motion violates special relativity, we generalize the example into an equation: v_ME = v_MT + v_TE, where: v_ME is the velocity of the Man relative to Earth, v_MT is the velocity of the Man relative to the Train, v_TE is the velocity of the Train relative to Earth. Fully legitimate expressions for "the velocity of A relative to B" include "the velocity of A with respect to B" and "the velocity of A in the coordinate system where B is always at rest". The violation of special relativity occurs because this equation for relative velocity falsely predicts that different observers will measure different speeds when observing the motion of light. In two dimensions (non-relativistic) The figure shows two objects A and B moving at constant
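To make the contrast with special relativity concrete, here is a small Python sketch (the variable names and demo values are my own assumptions) comparing the Galilean sum used above with the relativistic composition law for collinear velocities:

```python
# Classical (Galilean) velocity addition vs. the special-relativistic
# composition law, for motion along a single line.
C = 299_792_458.0  # speed of light, m/s

def galilean(v_mt, v_te):
    """Classical rule: velocities simply add."""
    return v_mt + v_te

def relativistic(v_mt, v_te):
    """Relativistic composition for collinear velocities."""
    return (v_mt + v_te) / (1.0 + v_mt * v_te / C**2)

# Man walking at 10 km/h on a train moving at 40 km/h:
kmh = 1000.0 / 3600.0  # km/h expressed in m/s
print(galilean(10 * kmh, 40 * kmh) / kmh)      # 50.0 km/h
print(relativistic(10 * kmh, 40 * kmh) / kmh)  # ~50.0 km/h (difference negligible)

# Light emitted from the train still travels at c for all observers:
print(relativistic(C, 40 * kmh))  # exactly c; the Galilean sum would exceed c
```

At everyday speeds the two rules agree to many decimal places, which is why the classical prescription works so well for trains and pedestrians; the discrepancy only becomes visible as speeds approach c.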
https://en.wikipedia.org/wiki/Ganglion%20impar
The pelvic portion of each sympathetic trunk is situated in front of the sacrum, medial to the anterior sacral foramina. It consists of four or five small sacral ganglia, connected together by interganglionic cords, and continuous above with the abdominal portion. Below, the two pelvic sympathetic trunks converge, and end on the front of the coccyx in a small ganglion, the ganglion impar, also known as azygos or ganglion of Walther. Clinical significance A study found that in some cases a single injection of nerve block at the ganglion impar offered complete relief from coccydynia.
https://en.wikipedia.org/wiki/Aztec%20codex
Aztec codices (singular: codex) are Mesoamerican manuscripts made by the pre-Columbian Aztec and their Nahuatl-speaking descendants during the colonial period in Mexico. History Before the start of the Spanish colonization of the Americas, the Mexica and their neighbors in and around the Valley of Mexico relied on painted books and records to document many aspects of their lives. Painted manuscripts contained information about their history, science, land tenure, tribute, and sacred rituals. According to the testimony of Bernal Díaz del Castillo, Moctezuma had a library full of such books, known as amatl or amoxtli, kept by a calpixqui or nobleman in his palace, some of them dealing with tribute. After the conquest of Tenochtitlan, indigenous nations continued to produce painted manuscripts, and the Spaniards came to accept and rely on them as valid and potentially important records. The native tradition of pictorial documentation and expression continued strongly in the Valley of Mexico for several generations after the arrival of Europeans; the latest examples of this tradition reach into the early seventeenth century. Formats Since the 19th century, the word codex has been applied to all Mesoamerican pictorial manuscripts, regardless of format or date, despite the fact that pre-Hispanic Aztec manuscripts were (strictly speaking) non-codical in form. Aztec codices were usually made from long sheets of fig-bark paper (amate) or stretched deerskins sewn together to form long and narrow strips; others were painted on big cloths. Thus, usual formats include screenfold books, strips known as tiras, rolls, and cloths, also known as lienzos. While no Aztec codex preserves its covers, it is assumed from the example of Mixtec codices that Aztec screenfold books had wooden covers, perhaps decorated with mosaics in turquoise, as the surviving wooden covers of Codex Vaticanus B suggest. Writing and pictography Aztec codices differ from European books in that most of their c
https://en.wikipedia.org/wiki/Reynolds%20syndrome
Reynolds syndrome is a rare secondary laminopathy consisting of the combination of primary biliary cirrhosis and progressive systemic sclerosis. In some patients this syndrome has also been associated with Sjögren's syndrome and hemolytic anemia. Typical clinical features include jaundice, elevated blood levels of alkaline phosphatase, calcinosis cutis, telangiectasias, and pruritus. The disease may cause white or yellowish spots on the arms or legs. The syndrome, a special case of scleroderma, is named after the American physician Telfer B. Reynolds, MD (1921–2004), who first described it. He is also known for creating one of the world's first hepatology programs, at the University of Southern California. It should not be confused with the more common Raynaud's phenomenon.
https://en.wikipedia.org/wiki/Allostasis
Allostasis is the efficient regulation by which the body prepares to satisfy its needs before they arise, budgeting resources such as oxygen and insulin, as opposed to homeostasis, in which the goal is a steady state. Allostasis, stability through variation, was proposed by Peter Sterling and Joseph Eyer in 1988 as a new model of physiological regulation. The goal of every living being is to "find and maintain a steady state for survival", which is achieved through allostasis and homeostasis. The term allostasis is now used more frequently because it better accommodates the idea that the body is not held at a single steady state: physiological set points and energy levels vary. Evolution and allostasis A second perspective places allostasis in the evolutionary account of how the brain arose. Barrett argues that, during evolution, organisms' internal systems became much more complex; a body run by just a few small groups of cells would have managed these new systems poorly. A brain was needed instead, because its larger size makes it far more capable of efficient management. However, in rare cases animal species rely neither on brains nor on a similar allostatic process. The sea squirt is one example: once the larvae have fully grown, they "absorb their brain." The sea squirt's allostatic process would not be as complex as a human's, since the two species occupy ecological niches of different complexities (i.e., "All animals have brains that are adapted to their environmental niches and life cycles"). Etymology Allostasis was coined from the Greek ἄλλος (állos, "other," "different") + στάσις (stasis, "standing still"). The concept was named by Sterling and Eyer in 1988 to mean "remaining stable by being variable". Nature of concept Allostasis proposes a broader hypothesis than homeostasis: the key goal of physiological regulation is not rigid constancy; rather, it is flexible va
https://en.wikipedia.org/wiki/Ouchterlony%20double%20immunodiffusion
Ouchterlony double immunodiffusion (also known as passive double immunodiffusion) is an immunological technique used in the detection, identification and quantification of antibodies and antigens, such as immunoglobulins and extractable nuclear antigens. The technique is named after Örjan Ouchterlony, the Swedish physician who developed the test in 1948 to evaluate the production of diphtheria toxin by isolated bacteria. Procedure A gel plate is cut to form a series of holes ("wells") in an agar or agarose gel. A sample extract of interest (for example, human cells harvested from tonsil tissue) is placed in one well, sera or purified antibodies are placed in another well, and the plate is left for 48 hours to develop. During this time the antigens in the sample extract and the antibodies each diffuse out of their respective wells. Where the two diffusion fronts meet, if any of the antibodies recognize any of the antigens, they will bind to the antigens and form an immune complex. The immune complex precipitates in the gel to give a thin white line (precipitin line), which is a visual signature of antigen recognition. The method can be conducted in parallel, with multiple wells filled with different antigen mixtures and multiple wells with different antibodies or mixtures of antibodies, and antigen-antibody reactivity can be seen by observing between which wells the precipitate forms. When more than one well is used there are many possible outcomes based on the reactivity of the antigen and antibody selected. The zone-of-equivalence lines may show full identity (i.e. a continuous line), partial identity (i.e. a continuous line with a spur at one end), or non-identity (i.e. the two lines cross completely). The sensitivity of the assay can be increased by using a stain such as Coomassie brilliant blue; this is done by repeated staining and destaining of the assay until the precipitin lines are at maximum visibility. Theory Precipitation occurs with most antige
https://en.wikipedia.org/wiki/Modifier%20letter%20apostrophe
The modifier letter apostrophe is a letter found in Unicode encoding, used primarily for various glottal sounds. Encoding The letter apostrophe is encoded at U+02BC, which is in the Spacing Modifier Letters Unicode block. In Unicode code charts it looks identical to the right single quotation mark (U+2019), but this is not true for all fonts. The primary difference between the letter apostrophe and U+2019 is that the letter apostrophe U+02BC has the Unicode General Category "Letter, modifier" (Lm), while U+2019 has the category "Punctuation, Final quote" (Pf). In early Unicode (versions 1.0–2.1.9) U+02BC was preferred for the punctuation apostrophe in English. Since version 3.0.0, however, U+2019 is preferred, because it is defined as a punctuation mark. The behavior of Unicode letters and punctuation marks differs, causing complications if punctuation code points are used for letters or vice versa. Use In the International Phonetic Alphabet, it is used to mark ejective consonants. It denotes a glottal stop in orthographies of many languages, such as Nenets (in Cyrillic script) and the constructed Klingon language. In one version of the Kildin Sami alphabet, it denotes preaspiration. In the Ukrainian alphabet and in the Belarusian alphabet, U+02BC is used for the semi-letter 'apostrophe' (which plays a role similar to the Russian hard sign ъ) in certain contexts, such as in internationalized domain names, where a punctuation mark would be disallowed. In Bodo and Dogri written in Devanagari, it marks high tone and low-rising tone on short vowels, respectively. See also Apostrophe Hamza Modifier letter double apostrophe Saltillo and Okina, other letters used for glottal stop Spacing Modifier Letters
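The category difference described above is easy to inspect programmatically; this short Python sketch (illustrative, not part of the article) uses the standard unicodedata module:

```python
# Inspecting the Unicode general categories of the two apostrophes.
import re
import unicodedata

letter_apostrophe = "\u02BC"  # MODIFIER LETTER APOSTROPHE
punct_apostrophe = "\u2019"   # RIGHT SINGLE QUOTATION MARK

for ch in (letter_apostrophe, punct_apostrophe):
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}: "
          f"category = {unicodedata.category(ch)}")
# U+02BC MODIFIER LETTER APOSTROPHE: category = Lm
# U+2019 RIGHT SINGLE QUOTATION MARK: category = Pf

# Practical consequence for word matching: Lm counts as a word
# character, Pf does not, so the two spellings tokenize differently.
print(re.findall(r"\w+", "don\u02BCt"))  # ['donʼt']  (one word)
print(re.findall(r"\w+", "don\u2019t"))  # ['don', 't']  (split at the mark)
```

The word-matching example shows one practical reason orthographies that treat the apostrophe as a letter prefer U+02BC: text processors keep it inside the word rather than treating it as a boundary.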
https://en.wikipedia.org/wiki/Data%20Protection%20API
Data Protection Application Programming Interface (DPAPI) is a simple cryptographic application programming interface available as a built-in component in Windows 2000 and later versions of Microsoft Windows operating systems. In theory, the Data Protection API can enable symmetric encryption of any kind of data; in practice, its primary use in the Windows operating system is to perform symmetric encryption of asymmetric private keys, using a user or system secret as a significant contribution of entropy. A detailed analysis of DPAPI's inner workings was published in 2011 by Bursztein et al. For nearly all cryptosystems, one of the most difficult challenges is "key management": in particular, how to securely store the decryption key. If the key is stored in plain text, then any user that can access the key can access the encrypted data. If the key is to be encrypted, another key is needed, and so on. DPAPI allows developers to encrypt keys using a symmetric key derived from the user's logon secrets, or, in the case of system encryption, using the system's domain authentication secrets. The DPAPI keys used for encrypting the user's RSA keys are stored in the %APPDATA%\Microsoft\Protect\{SID} directory, where {SID} is the Security Identifier of that user. The DPAPI key is stored in the same file as the master key that protects the user's private keys. It usually is 64 bytes of random data. Security properties DPAPI doesn't store any persistent data for itself; instead, it simply receives plaintext and returns ciphertext (or conversely). DPAPI security relies upon the Windows operating system's ability to protect the master key and RSA private keys from compromise, which in most attack scenarios is most highly reliant on the security of the end user's credentials. A main encryption/decryption key is derived from the user's password by the PBKDF2 function. Particular data binary large objects can be encrypted in a way that salt is added and/or an external user-prompted password (aka "
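DPAPI itself is an opaque Windows component, but the PBKDF2 derivation step mentioned above is easy to illustrate. The sketch below is a minimal illustration using Python's hashlib; the salt size, iteration count, hash choice, and key length are assumptions made for the example, not DPAPI's actual internal parameters:

```python
# Illustrative only: deriving a key from a password with PBKDF2,
# the same family of function DPAPI applies to the user's password.
import hashlib
import os

password = b"correct horse battery staple"  # stand-in for a logon secret
salt = os.urandom(16)                       # per-key random salt (assumed size)
iterations = 100_000                        # assumed count for the demo

master_key = hashlib.pbkdf2_hmac(
    "sha512", password, salt, iterations,
    dklen=64,  # 64 bytes, echoing the master-key size mentioned above
)
print(master_key.hex())
```

The design point the snippet makes concrete: the derived key never needs to be stored, because it can be recomputed from the password and the (non-secret) salt, which is what lets DPAPI avoid keeping persistent key material of its own.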
https://en.wikipedia.org/wiki/Horizontal%20transmission
Horizontal transmission is the transmission of organisms between biotic and/or abiotic members of an ecosystem that are not in a parent-progeny relationship. This concept has been generalized to include transmission of infectious agents, symbionts, and cultural traits between humans. Because the evolutionary fate of the agent is not tied to the reproductive success of the host, horizontal transmission tends to favor the evolution of virulence. It is therefore a critical concept for evolutionary medicine. Biological Pathogen transmission In biological, but not cultural, transmission the carriers (also known as vectors) may include other species. The two main biological modes of transmission are anterior station and posterior station. In anterior station, transmission occurs via the bite of an infected organism (the vector), as in malaria, dengue fever, and bubonic plague. Posterior station is transmission via contact with infected feces. Examples are rickettsiae-driven diseases (like typhus), which are contracted when a body louse's fecal material is scratched into the bloodstream. The vector is not necessarily another species, however. For example, a dog infected with rabies may infect another dog via anterior station transmission. Moreover, there are other modes of biological transmission, such as generalized bleeding in ebola. Symbiont transmission Symbiosis describes a relationship in which at least two organisms are in an intimately integrated state, such that one organism acts as a host and the other as the symbiont. Symbionts may be obligate (requiring the host for survival) or facultative (able to survive independently of the host). Symbionts can follow vertical, horizontal, or a mixed mode of transmission to their host. Horizontal, or lateral, transmission describes the acquisition of a facultative symbiont from the environment or from a nearby host. The life cycle of the host includes both symbiotic and aposymbiotic phases. The aposymbiotic p
https://en.wikipedia.org/wiki/Funiculus%20%28neuroanatomy%29
A funiculus or column is a small bundle of axons (nerve fibres), enclosed by the perineurium. A small nerve may consist of a single funiculus, but a larger nerve will have several funiculi collected together into larger bundles known as fascicles. Fascicles are bound together in a common membrane, the epineurium. Funiculi in the spinal cord are portions of white matter. Examples include: Anterior funiculus of the spinal cord Lateral funiculus of the spinal cord Posterior funiculus of the spinal cord Funiculus separans of the rhomboid fossa
https://en.wikipedia.org/wiki/Anosmin-1
Anosmin-1 is a secreted, extracellular matrix (ECM)-associated glycoprotein found in humans and other organisms, responsible for normal development, and expressed in the brain, spinal cord and kidney. Absence of or damage to the protein results in Kallmann syndrome in humans, which is characterized by loss of the olfactory bulbs and of GnRH secretion, leading to anosmia and hypothalamic hypogonadotropic hypogonadism. Anosmin-1 is coded by the KAL-1 gene, which is found on the X chromosome. Anosmin-1 is 100 kilodaltons and is expressed on the outside of cells. Because of this, and because of its contribution to the normal migration of nerve cells, a role in the extracellular matrix has been postulated. Function During neural crest cell development, anosmin-1 plays a role in cranial neural crest cell formation by spatiotemporal regulation. Secreted anosmin-1 enhances FGF activity by promoting FGF8–FGFR1 complex formation, whereas it inhibits both BMP5 and WNT3A activities. As a result, the orchestrated regulation of FGF, BMP, and WNT by anosmin-1 controls EMT and MET during neural crest cell development. In human retinal pigment epithelial (RPE) cells, the expression of anosmin-1 is regulated by TGF-β through a mechanism that remains to be investigated. Structure and pathology Anosmin-1 is encoded by the gene ANOS1 (earlier called ADMLX, KAL, KAL1, KALIG1). In humans it is located on the X chromosome at Xp22.3 and is affected in some male individuals with Kallmann syndrome. This gene codes for a protein of the extracellular matrix named anosmin-1, which is involved in the migration of certain nerve cell precursors (neuroendocrine GnRH cells) during embryogenesis. Deletion or mutation of this gene results in loss of the functional protein and affects the proper development of the olfactory nerves and olfactory bulbs. In addition, neural cells that produce GnRH fail to migrate to the hypothalamus. Clinically, mutation results in the X-linked form of Kallmann syndrome. Individuals with Kallmann syndrome experience anosmia (lack of smell
https://en.wikipedia.org/wiki/Archaeoacoustics
Archaeoacoustics is a sub-field of archaeology and acoustics which studies the relationship between people and sound throughout history. It is an interdisciplinary field with methodological contributions from acoustics, archaeology, and computer simulation, and is broadly related to topics within cultural anthropology such as experimental archaeology and ethnomusicology. Since many cultural practices have sonic components, applying acoustical methods to the study of archaeological sites and artifacts may reveal new information about the civilizations examined. Disciplinary methodology As the study of archaeoacoustics is concerned with a variety of cultural phenomena, the methodologies depend on the subject of inquiry. The majority of archaeoacoustic studies can be grouped into either the study of artefacts (like musical instruments) or of places (buildings or sites). For archaeological or historical sites that still exist in the present day, measurement methods from the realm of architectural acoustics may be used to characterise the behaviour of the site's acoustic field. When sites have been altered from their original state, a mixed-methodology approach may be used in which acoustic measurements are combined with virtual reconstructions and simulations. The output of these simulations may be used to listen to the historical state of the site (via auralization), to aid in analysis based on the principles of psychoacoustics, and for societal contextualization when historically relevant sources are taken into consideration. For archaeological objects, acoustic measurements and simulations may be used to investigate the possible acoustic behaviour of artifacts found at a site, as in the case of an acoustic jar. In a similar vein, the relationship between cultural uses of a space and artifacts found within it can be examined experimentally, as with lithophones and ringing rocks. Notable applications Natural formations Iegor Reznikoff and Michel Dauvois studied the prehistoric
https://en.wikipedia.org/wiki/Environmental%20indicator
Environmental indicators are simple measures that tell us what is happening in the environment. Since the environment is very complex, indicators provide a more practical and economical way to track its state than attempting to record every possible environmental variable. For example, the concentration of ozone-depleting substances (ODS) in the atmosphere, tracked over time, is a good indicator with respect to the environmental issue of stratospheric ozone depletion. Environmental indicators have been defined in different ways, but common themes exist. "An environmental indicator is a numerical value that helps provide insight into the state of the environment or human health. Indicators are developed based on quantitative measurements or statistics of environmental condition that are tracked over time. Environmental indicators can be developed and used at a wide variety of geographic scales, from local to regional to national levels." "A parameter or a value derived from parameters that describe the state of the environment and its impact on human beings, ecosystems and materials, the pressures on the environment, the driving forces and the responses steering that system. An indicator has gone through a selection and/or aggregation process to enable it to steer action." Discussion Environmental indicator criteria and frameworks have been used to help in their selection and presentation. It can be considered, for example, that there are major subsets of environmental indicators in line with the Pressure-State-Response model developed by the OECD. One subset of environmental indicators is the collection of ecological indicators, which can include physical, biological and chemical measures such as atmospheric temperature, the concentration of ozone in the stratosphere or the number of breeding bird pairs in an area. These are also referred to as "state" indicators, as their focus is on the state of the environment or conditions in the en
https://en.wikipedia.org/wiki/Shepherd%20moon
A shepherd moon, also called a herder moon or watcher moon, is a small natural satellite that clears a gap in planetary-ring material or keeps particles within a ring contained. The name reflects the way such moons confine the "herd" of ring particles, as a shepherd does. Through their gravitational effect, they pick up particles and deflect them from their original orbits through orbital resonance. This causes gaps in the ring system, such as the particularly striking Cassini Division, as well as other characteristic bands, or strange "twisted" deformations of rings. Discovery The existence of shepherd moons was theorized in early 1979. Observations of the rings of Uranus showed that they are very thin and well defined, with sharp gaps between rings. To explain this, Goldreich and Tremaine suggested that two small satellites that were undetected at the time might be confining each ring. The first images of shepherd satellites were taken later that year by Voyager 1. Examples Jupiter Several of Jupiter's small innermost moons, namely Metis and Adrastea, are within Jupiter's ring system and are also within Jupiter's Roche limit. It is possible that these rings are composed of material that is being pulled off these two bodies by Jupiter's tidal forces, possibly facilitated by impacts of ring material on their surfaces. Saturn The complex ring system of Saturn has several such satellites. These include Prometheus (F ring), Daphnis (Keeler Gap), Pan (Encke Gap), Janus, and Epimetheus (both A ring). Uranus Uranus also has shepherd moons on its ε ring, Cordelia and Ophelia. They are the interior and exterior shepherds, respectively. Both moons are well within Uranus's synchronous orbit radius, and their orbits are therefore slowly decaying due to tidal deceleration. Neptune Neptune's rings are very unusual in that they first appeared to be composed of incomplete arcs in Earth-based observations, but Voyager 2's images showed them to be complete rings with bright clumps. It is
https://en.wikipedia.org/wiki/Tarski%27s%20axiomatization%20of%20the%20reals
In 1936, Alfred Tarski set out an axiomatization of the real numbers and their arithmetic, consisting of only the 8 axioms shown below and a mere four primitive notions: the set of reals denoted R, a binary total order over R, denoted by the infix operator <, a binary operation of addition over R, denoted by the infix operator +, and the constant 1. The literature occasionally mentions this axiomatization but never goes into detail, notwithstanding its economy and elegant metamathematical properties. This axiomatization appears little known, possibly because of its second-order nature. Tarski's axiomatization can be seen as a version of the more usual definition of the real numbers as the unique Dedekind-complete ordered field; it is however made much more concise by using unorthodox variants of standard algebraic axioms and other subtle tricks (see e.g. axioms 4 and 5, which combine the usual four axioms of abelian groups). The term "Tarski's axiomatization of real numbers" also refers to the theory of real closed fields, which Tarski showed completely axiomatizes the first-order theory of the structure 〈R, +, ·, <〉. The axioms Axioms of order (primitives: R, <): Axiom 1 If x < y, then not y < x. That is, "<" is an asymmetric relation. This implies that "<" is irreflexive, i.e. for all x, x < x is false. Axiom 2 If x < z, there exists a y such that x < y and y < z. In other words, "<" is dense in R. Axiom 3 "<" is Dedekind-complete. More formally, for all X, Y ⊆ R, if for all x ∈ X and y ∈ Y, x < y, then there exists a z such that for all x ∈ X and y ∈ Y, if z ≠ x and z ≠ y, then x < z and z < y. To clarify the above statement somewhat, let X ⊆ R and Y ⊆ R. We now define two common English verbs in a particular way that suits our purpose: X precedes Y if and only if for every x ∈ X and every y ∈ Y, x < y. The real number z separates X and Y if and only if for every x ∈ X with x ≠ z and every y ∈ Y with y ≠ z, x < z and z < y. Axiom 3 can the
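For readability, the three order axioms can be restated compactly in standard logical notation. The transcription below follows the prose above directly (it is not Tarski's original typography); in Axiom 3, X and Y range over subsets of R, which is where the second-order character enters:

```latex
% Axiom 1 (asymmetry):
\forall x \,\forall y \; \bigl( x < y \rightarrow \neg\, (y < x) \bigr)
% Axiom 2 (density):
\forall x \,\forall z \; \bigl( x < z \rightarrow \exists y \, (x < y \wedge y < z) \bigr)
% Axiom 3 (Dedekind completeness; second-order, X, Y \subseteq R):
\forall X \,\forall Y \;
  \Bigl[ \bigl( \forall x \in X \,\forall y \in Y \; x < y \bigr)
  \rightarrow
  \exists z \,\forall x \in X \,\forall y \in Y \;
  \bigl( (z \neq x \wedge z \neq y) \rightarrow (x < z \wedge z < y) \bigr) \Bigr]
```

In the vocabulary defined just above, Axiom 3 says: if X precedes Y, then some real number z separates X and Y.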
https://en.wikipedia.org/wiki/Hydrogen%E2%80%93deuterium%20exchange
Hydrogen–deuterium exchange (also called H–D or H/D exchange) is a chemical reaction in which a covalently bonded hydrogen atom is replaced by a deuterium atom, or vice versa. It can be applied most easily to exchangeable protons and deuterons, for which such a transformation occurs in the presence of a suitable deuterium source without any catalyst. The use of acid, base or metal catalysts, coupled with conditions of increased temperature and pressure, can facilitate the exchange of non-exchangeable hydrogen atoms, so long as the substrate is robust to the conditions and reagents employed. This often results in perdeuteration: hydrogen–deuterium exchange of all non-exchangeable hydrogen atoms in a molecule. An example of exchangeable protons commonly examined in this way are the protons of the amides in the backbone of a protein. The method gives information about the solvent accessibility of various parts of the molecule, and thus the tertiary structure of the protein. The theoretical framework for understanding hydrogen exchange in proteins was first described by Kaj Ulrik Linderstrøm-Lang, who was the first to apply H/D exchange to the study of proteins. Exchange reaction In protic solution, exchangeable protons such as those in hydroxyl or amine groups exchange with the solvent. If D2O is the solvent, deuterons will be incorporated at these positions. The exchange reaction can be followed using a variety of methods (see Detection). Since this exchange is an equilibrium reaction, the molar amount of deuterium should be high compared to the exchangeable protons of the substrate. For instance, deuterium is added to a protein in H2O by diluting the H2O solution with D2O (e.g. tenfold). Usually exchange is performed at physiological pH (7.0–8.0), where proteins are in their most native ensemble of conformational states. The H/D exchange reaction can also be catalysed by acid, base or metal catalysts such as platinum. For the backbone amide hydrogen atoms of p
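As a small worked example of the dilution step (the tenfold figure is the article's own illustration; treating H2O and D2O as having equal molar volumes is a simplifying assumption of this sketch):

```python
# Deuterium mole fraction of the solvent after diluting a protein
# solution in H2O with D2O, assuming equal molar volumes of the two.
def deuterium_fraction(vol_h2o, vol_d2o):
    return vol_d2o / (vol_h2o + vol_d2o)

# One volume of protein-in-H2O diluted with nine volumes of D2O:
print(deuterium_fraction(1.0, 9.0))  # 0.9: exchangeable sites equilibrate toward ~90% D
```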
https://en.wikipedia.org/wiki/Foramen%20singulare
The foramen singulare is a foramen in the wall of the internal auditory meatus that gives passage to the branch of the inferior division of the vestibular nerve that innervates the ampulla of the posterior semicircular canal.
https://en.wikipedia.org/wiki/Lateral%20vestibular%20nucleus
The lateral vestibular nucleus (Deiters's nucleus) is the continuation upward and lateralward of the principal nucleus, and in it terminate many of the ascending branches of the vestibular nerve. Structure It consists of very large multipolar cells whose axons form an important part of the posterior longitudinal bundle (also known as the medial longitudinal fasciculus) of the same and the opposite side. The axons bifurcate as they enter the posterior longitudinal bundle. The ascending branches send terminals and collaterals to the motor nuclei of the abducens, trochlear and oculomotor nerves via the ascending component of the medial longitudinal fasciculus, and are concerned with coordinating the movements of the eyes with alterations in the position of the head; the descending branches pass down in the posterior longitudinal bundle into the anterior funiculus of the spinal cord as the vestibulospinal fasciculus (anterior marginal bundle) and are distributed to the motor nuclei of the anterior column by terminals and collaterals. Other fibers are said to pass directly to the vestibulospinal fasciculus without passing into the posterior longitudinal bundle. The fibers which pass into the vestibulospinal fasciculus are intimately concerned with equilibratory reflexes. Other axons from Deiters's nucleus are thought to cross and ascend in the opposite medial lemniscus to the ventro-lateral nuclei of the thalamus; still other fibers pass into the cerebellum with the inferior peduncle and are distributed to the cortex of the vermis and the roof nuclei of the cerebellum; according to Cajal they merely pass through the nucleus fastigii on their way to the cortex of the vermis and the hemisphere. History Eponym Deiters's nucleus was named after the German neuroanatomist Otto Friedrich Karl Deiters (1834–1863).
https://en.wikipedia.org/wiki/Superior%20vestibular%20nucleus
The superior vestibular nucleus (Bechterew's nucleus) is the dorso-lateral part of the vestibular nucleus and receives collaterals and terminals from the ascending branches of the vestibular nerve. It sends uncrossed fibers to cranial nerves III and IV via the medial longitudinal fasciculus (MLF).
https://en.wikipedia.org/wiki/Membrane%20biology
Membrane biology is the study of the biological and physicochemical characteristics of membranes, with applications in the study of cellular physiology. Membrane bioelectrical impulses are described by the Hodgkin cycle. Biophysics Membrane biophysics is the study of biological membrane structure and function using physical, computational, mathematical, and biophysical methods. A combination of these methods can be used to create phase diagrams of different types of membranes, which yields information on the thermodynamic behavior of a membrane and its components. As opposed to membrane biology, membrane biophysics focuses on quantitative information and modeling of various membrane phenomena, such as lipid raft formation, rates of lipid and cholesterol flip-flop, protein-lipid coupling, and the effect of bending and elasticity functions of membranes on inter-cell connections.
https://en.wikipedia.org/wiki/Femoral%20ring
The femoral ring is the opening at the proximal, abdominal end of the femoral canal, and represents the (superiorly directed/oriented) base of the conically-shaped femoral canal. The femoral ring is oval-shaped, with its long diameter being directed transversely and measuring about 1.25 cm. The opening of the femoral ring is filled in by extraperitoneal fat, forming the femoral septum. Part of the intestine can sometimes pass through the femoral ring into the femoral canal causing a femoral hernia. Boundaries The femoral ring is bounded as follows: anteriorly by the inguinal ligament. posteriorly by the pectineus and its covering fascia. medially by the crescentic base of the lacunar ligament. laterally by the fibrous septum on the medial side of the femoral vein. Additional images See also Femoral canal Femoral hernia Inguinal canal
https://en.wikipedia.org/wiki/FreeOTFE
FreeOTFE is a discontinued open-source computer program for on-the-fly disk encryption (OTFE). On Microsoft Windows and Windows Mobile (using FreeOTFE4PDA), it can create a virtual drive within a file or partition, to which anything written is automatically encrypted before being stored on a computer's hard or USB drive. It is similar in function to other disk encryption programs, including TrueCrypt and Microsoft's BitLocker. The author, Sarah Dean, has been absent since 2011. The FreeOTFE website has been unreachable since June 2013, and the domain name is now registered by a domain squatter. The original program can be downloaded from a mirror at SourceForge. In June 2014, a fork of the project, now named LibreCrypt, appeared on GitHub. Overview FreeOTFE was initially released by Sarah Dean in 2004, and was the first open-source disk encryption system to provide a modular architecture allowing third parties to implement additional algorithms if needed. Older FreeOTFE licensing required that any modification to the program be placed in the public domain. This does not technically conform to section 3 of the Open Source Definition. Newer program licensing omits this condition. The FreeOTFE license has not been approved by the Open Source Initiative and is not certified to be labeled with the open-source certification mark. This software is compatible with Linux encrypted volumes (e.g. LUKS, cryptoloop, dm-crypt), allowing data encrypted under Linux to be read (and written) freely. It was the first open-source transparent disk encryption system to support Windows Vista and PDAs. Optional two-factor authentication using smart cards and/or hardware security modules (HSMs, also termed security tokens) was introduced in v4.0, using the PKCS#11 (Cryptoki) standard developed by RSA Laboratories. FreeOTFE also allows any number of "hidden volumes" to be created, giving plausible deniability and deniable encryption, and also has the option of encrypting full partitions or d
https://en.wikipedia.org/wiki/HTTP%20compression
HTTP compression is a capability that can be built into web servers and web clients to improve transfer speed and bandwidth utilization. HTTP data is compressed before it is sent from the server: compliant browsers announce which methods they support to the server before downloading data in the appropriate format; browsers that do not support a compliant compression method download the uncompressed data. The most common compression schemes include gzip and Brotli; a full list of available schemes is maintained by the IANA. There are two different ways compression can be done in HTTP. At a lower level, a Transfer-Encoding header field may indicate that the payload of an HTTP message is compressed. At a higher level, a Content-Encoding header field may indicate that a resource being transferred, cached, or otherwise referenced is compressed. Compression using Content-Encoding is more widely supported than Transfer-Encoding, and some browsers do not advertise support for Transfer-Encoding compression to avoid triggering bugs in servers. Compression scheme negotiation The negotiation is done in two steps, described in RFC 2616 and RFC 9110: 1. The web client advertises which compression schemes it supports by including a list of tokens in the HTTP request. For Content-Encoding, the list is in a field called Accept-Encoding; for Transfer-Encoding, the field is called TE. GET /encrypted-area HTTP/1.1 Host: www.example.com Accept-Encoding: gzip, deflate 2. If the server supports one or more compression schemes, the outgoing data may be compressed by one or more methods supported by both parties. If this is the case, the server will add a Content-Encoding or Transfer-Encoding field to the HTTP response with the schemes used, separated by commas. HTTP/1.1 200 OK Date: Mon, 26 Jun 2016 22:38:34 GMT Server: Apache/1.3.3.7 (Unix) (Red-Hat/Linux) Last-Modified: Wed, 08 Jan 2003 23:11:55 GMT Accept-Ranges: bytes Content-Length: 438 Connection: close Content-Type: text/html; charset=UT
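The server side of this negotiation can be sketched in a few lines of Python with the standard gzip module. The function name and header handling below are illustrative assumptions, and the parsing is deliberately naive (it ignores q-values and multiple codings):

```python
# Minimal sketch of Content-Encoding negotiation: honor gzip if the
# client offered it in Accept-Encoding, else send the body as-is.
import gzip

def negotiate_and_encode(body: bytes, accept_encoding: str):
    """Return (payload, extra_headers) for a simple Accept-Encoding list."""
    offered = {token.strip().split(";")[0]
               for token in accept_encoding.split(",")}
    if "gzip" in offered:
        return gzip.compress(body), {"Content-Encoding": "gzip"}
    return body, {}  # fall back to the uncompressed representation

payload, headers = negotiate_and_encode(b"<html>hello</html>" * 100,
                                        "gzip, deflate")
print(headers, len(payload), "bytes")    # gzip shrinks the repetitive HTML
print(len(gzip.decompress(payload)))     # round-trips to the original 1800 bytes
```

Real servers also recompute Content-Length (or switch to chunked transfer) after compressing, which is one reason the Content-Encoding and Transfer-Encoding mechanisms are kept distinct.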
https://en.wikipedia.org/wiki/Femoral%20canal
The femoral canal is the medial (and smallest) compartment of the three compartments of the femoral sheath. It is conical in shape. The femoral canal contains lymphatic vessels and adipose and loose connective tissue, as well as, sometimes, a deep inguinal lymph node. The function of the femoral canal is to accommodate the distension of the femoral vein when venous return from the leg is increased or temporarily restricted (e.g. during a Valsalva maneuver). The proximal, abdominal end of the femoral canal forms the femoral ring. The femoral canal should not be confused with the nearby adductor canal. Anatomy The femoral canal is bordered: anterosuperiorly by the inguinal ligament; posteriorly by the pectineal ligament, lying anterior to the superior pubic ramus; medially by the lacunar ligament; laterally by the femoral vein. Physiological significance The position of the femoral canal medial to the femoral vein is of physiological importance. The space of the canal allows for the expansion of the femoral vein when venous return from the lower limbs is increased or when increased intra-abdominal pressure (Valsalva maneuver) causes a temporary stasis in the venous flow. Clinical significance The entrance to the femoral canal is the femoral ring, through which bowel can sometimes enter, causing a femoral hernia. Though femoral hernias are rare, their passage through the inflexible femoral ring puts them at particular risk of strangulation, giving them surgical priority. See also Femoral ring
https://en.wikipedia.org/wiki/List%20of%20common%20coordinate%20transformations
This is a list of some of the most commonly used coordinate transformations. 2-dimensional Let (x, y) be the standard Cartesian coordinates, and (r, θ) the standard polar coordinates. To Cartesian coordinates From polar coordinates x = r cos θ, y = r sin θ. From log-polar coordinates x = e^ρ cos θ, y = e^ρ sin θ. By using complex numbers z = x + iy, the transformation can be written as z = e^(ρ + iθ). That is, it is given by the complex exponential function. From bipolar coordinates From 2-center bipolar coordinates From Cesàro equation To polar coordinates From Cartesian coordinates r = √(x² + y²), with θ obtained from tan θ = y/x. Note: solving tan θ = y/x for θ returns the resultant angle in the first quadrant (0 ≤ θ < π/2). To find θ, one must refer to the original Cartesian coordinates, determine the quadrant in which the point lies (for example, (3,−3) [Cartesian] lies in QIV), then apply the appropriate quadrant correction to solve for θ. The value for θ must be solved for in this manner because tan θ is periodic (with period π), so its inverse function only gives values restricted to a single period. Hence, the range of the inverse function is only half a full circle. Note that one can also use the two-argument arctangent, θ = atan2(y, x), which resolves the quadrant directly. From 2-center bipolar coordinates Where 2c is the distance between the poles. To log-polar coordinates from Cartesian coordinates ρ = ln √(x² + y²), θ = arctan(y/x) (with the same quadrant considerations as above). Arc-length and curvature In Cartesian coordinates In polar coordinates 3-dimensional Let (x, y, z) be the standard Cartesian coordinates, and (ρ, θ, φ) the spherical coordinates, with θ the angle measured away from the +Z axis (see conventions in spherical coordinates). As φ has a range of 360° the same considerations as in polar (2 dimensional) coordinates apply whenever an arctangent of it is taken. θ has a range of 180°, running from 0° to 180°, and does not pose any problem when calculated from an arccosine, but beware for an arctangent. If, in the alternative definition, θ is chosen to run from −90° to +90°, in opposite direction of the earlier definition, it can be found uniquely from an arcsine, but beware of an arccota
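The quadrant caveat in the note above is exactly what the two-argument arctangent resolves. A small Python sketch using the standard math module (the helper names are my own):

```python
# Cartesian <-> polar conversion; math.atan2 handles the quadrant
# ambiguity that a plain arctan(y/x) leaves unresolved.
import math

def to_polar(x, y):
    r = math.hypot(x, y)      # r = sqrt(x^2 + y^2)
    theta = math.atan2(y, x)  # correct quadrant, range (-pi, pi]
    return r, theta

def to_cartesian(r, theta):
    return r * math.cos(theta), r * math.sin(theta)

# The point (3, -3) lies in quadrant IV, as in the example above:
r, theta = to_polar(3.0, -3.0)
print(r, math.degrees(theta))   # ~4.243, -45.0 degrees (not +45)
print(to_cartesian(r, theta))   # recovers (3.0, -3.0)
```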
https://en.wikipedia.org/wiki/Galvanic%20isolation
Galvanic isolation is a principle of isolating functional sections of electrical systems to prevent current flow; no direct conduction path is permitted. Energy or information can still be exchanged between the sections by other means, such as capacitive, inductive, radiative, optical, acoustic, or mechanical coupling. Galvanic isolation is used where two or more electric circuits must communicate but their grounds may be at different potentials. It is an effective method of breaking ground loops by preventing unwanted current from flowing between two units sharing a ground conductor. Galvanic isolation is also used for safety, preventing accidental electric shocks. Methods Transformer Transformers are probably the most common means of galvanic isolation. They are almost universally used in power supplies because they are a mature technology that can carry significant power. They are also used to isolate data signals in Ethernet over twisted pair. Transformers couple by magnetic flux. Except for the autotransformer, the primary and secondary windings of a transformer are not electrically connected to each other. The voltage difference that may safely be applied between windings without risk of breakdown (the isolation voltage) is specified in kilovolts by an industry standard. The same applies to magnetic amplifiers and transductors. While transformers are usually used to step voltages up or down, isolation transformers with a 1:1 ratio are used in safety applications, keeping the voltage the same. If two electronic systems have a common ground, they are not galvanically isolated. The common ground may not normally or intentionally be connected to functional poles, but may become connected. For this reason isolation transformers do not supply a GND/earth pole. Opto-isolator Opto-isolators transmit information by modulating light. The sender (light source) and receiver (photosensitive device) are not electrically connected. Typical
https://en.wikipedia.org/wiki/Amaranth%20oil
Amaranth oil is extracted from the seeds of two species of the genus Amaranthus — A. cruentus and A. hypochondriacus — that are called, collectively, amaranth grain. Amaranth oil is a light-to-medium-colored, clear liquid that is pourable at low temperatures. It is a source of fatty acids, with oleic acid, linoleic acid, and palmitic acid having the highest proportions. The oil is valued for its ability to add temperature stability at both high and low temperatures. Commercial uses of amaranth oil include foods, cosmetics, shampoos, and intermediates for manufacture of lubricants, pharmaceuticals, rubber chemicals, aromatics, and surface active agents. As a food oil, amaranth oil has a delicate taste. The oil content of the actual amaranth grain ranges from 4.8 to 8.1%, which is relatively low compared to other sources of seed oil. The melting point of amaranth oil is . Chemically, the major constituents of amaranth oil are: {| class="wikitable" ! Fatty acid !! Content |- | Linoleic acid || align="right" | 50% |- | Oleic acid || align="right" | 23% |- | Palmitic acid || align="right" | 19% |- | Stearic acid || align="right" | 3% |}
https://en.wikipedia.org/wiki/Ben%20oil
Ben oil is pressed from the seeds of the Moringa oleifera, known variously as the horseradish tree, ben oil tree, or drumstick tree. The oil is characterized by an unusually long shelf life and a mild, but pleasant taste. The name of the oil is derived from the presence of behenic acid. The oil's components are: The seeds offer a relatively high yield of 22–38% oil. Ben oil has been used for thousands of years as a perfume base, and continues to be used in that capacity today. The oil can also be used as a fuel. Burkill reports: It burns with a clear light and without smoke. It is an excellent salad oil, and gives a good soap... It can be used for oiling machinery, and indeed has a reputation for this purpose as watch oil, but is now superseded by sperm oil. History Greece The ancient Greeks manufactured ben oil and other herbal oils. Theophrastos, in the fourth century BC, had very strong opinions about which oils to use to make perfumes, and ben oil was firmly at the top of the list. Rome In Rome, around 70 AD, Pliny the Elder described the tree and its fruits under the name myrobalanum, after the Greek word meaning "ointment". Around the same time Dioscorides described the fruit as balanos myrepsike (roughly "acorn-shaped fruit well-suited for preparation of fragrant ointments"). He observed that "grinding the kernels, like bitter almonds, produces a liquid which is used instead of oil to prepare precious ointments." Dioscorides' recommendation was influential in promoting the "balanos" fruits and their oil for medicinal purposes. Alexandria During the same era, Alexandria in Egypt had become the predominant center for the production of perfumes. This was still true when the Arabs captured the city in 642 AD and became familiar with both the fruits and the oil. The corresponding Arabic word included not merely the fruit, but also the oil and the herbal oils extracted from it. That Arabic herbal oils usually included myrrh resin, Indian cardamom, and other types of cardamom led to
https://en.wikipedia.org/wiki/Westlothiana
Westlothiana ("animal from West Lothian") is a genus of reptile-like tetrapod that lived about 338 million years ago during the latest part of the Viséan age of the Carboniferous. Members of the genus bore a superficial resemblance to modern-day lizards. The genus is known from a single species, Westlothiana lizziae. The type specimen was discovered in the East Kirkton Limestone at the East Kirkton Quarry, West Lothian, Scotland in 1984. This specimen was nicknamed "Lizzie the lizard" by fossil hunter Stan Wood, and this name was quickly adopted by other paleontologists and the press. When the specimen was formally named in 1990, it was given the specific name "lizziae" in homage to this nickname. However, despite its similar body shape, Westlothiana is not considered a true lizard. Westlothiana's anatomy contained a mixture of both "labyrinthodont" and reptilian features, and was originally regarded as the oldest known reptile or amniote. However, updated studies have shown that this identification is not entirely accurate. Instead of being one of the first amniotes (tetrapods laying hard-shelled eggs, including synapsids, reptiles, and their descendants), Westlothiana was rather a close relative of Amniota. As a result, most paleontologists since the original description place the genus within the group Reptiliomorpha, among other amniote relatives such as diadectomorphs and seymouriamorphs. Later analyses usually place the genus as the earliest diverging member of Lepospondyli, a collection of unusual tetrapods which may be close to amniotes or lissamphibians (modern amphibians like frogs and salamanders), or potentially both at the same time. Description This species probably lived near a freshwater lake, and probably hunted for other small creatures that lived in the same habitat. It was a slender animal, with rather small legs and a long tail. Together with Casineria, another transitional fossil found in Scotland, it is one of the smallest reptile-like amph
https://en.wikipedia.org/wiki/Palatal%20hook
The palatal hook (◌̡) is a type of hook diacritic formerly used in the International Phonetic Alphabet to represent palatalized consonants. It is a small, leftwards-facing hook joined to the bottom-right side of a letter, and is distinguished from various other hooks indicating retroflexion, etc. It was withdrawn by the IPA in 1989, in favour of a superscript j following the consonant (i.e., ⟨ƫ⟩ becomes ⟨tʲ⟩). The IPA recommended that esh ⟨ʃ⟩ and ezh ⟨ʒ⟩ not use the palatal hook, but instead get special curled symbols: ⟨ʆ⟩ and ⟨ʓ⟩. However, versions with the hook have also been used and are supported by Unicode. Palatal hooks are also used in Lithuanian dialectology by the Lithuanian Phonetic Transcription System (or Lithuanian Phonetic Alphabet) and in the orthography of Nez Perce. Computer encoding Unicode includes both a combining character for the palatal hook (U+0321 COMBINING PALATALIZED HOOK BELOW), as well as several precomposed characters, including superscript letters with palatal hooks. While LATIN SMALL LETTER T WITH PALATAL HOOK (U+01AB) has been in Unicode since 1991, the rest were not added until 2005 or later. As such, font support for the latter characters is much weaker than for the former.
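A quick way to inspect these encodings is Python's unicodedata module. The minimal sketch below uses the code points named in this section (the precomposed U+01AB, the combining hook U+0321, and the post-1989 superscript-j spelling U+02B2); the variable names are mine:

```python
import unicodedata

old = "\u01AB"          # precomposed LATIN SMALL LETTER T WITH PALATAL HOOK
combining = "t\u0321"   # t + COMBINING PALATALIZED HOOK BELOW
modern = "t\u02B2"      # post-1989 IPA spelling: t + MODIFIER LETTER SMALL J

for s in (old, combining, modern):
    print(s, [unicodedata.name(c) for c in s])
```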
https://en.wikipedia.org/wiki/Archaeothyris
Archaeothyris is an extinct genus of ophiacodontid synapsid that lived during the Late Carboniferous and is known from Nova Scotia. Dated to 306 million years ago, Archaeothyris and the more poorly known synapsid Echinerpeton are the oldest undisputed synapsids known. The name means "ancient window" (Greek), and refers to the opening in the skull, the temporal fenestra, which indicates this is an early synapsid. Protoclepsydrops, also from Nova Scotia, is slightly older but is known from very fragmentary material. Description Archaeothyris was more advanced than the early sauropsids, having strong jaws that could open wider than those of the early reptiles. Its sharp teeth were all of the same shape, but it possessed a pair of enlarged canines, suggesting that it was a carnivore. Archaeothyris' legs were articulated laterally at its pelvis and shoulders, which gave it a sprawling stance. The first toe is smaller than the second. Classification Archaeothyris belonged to the family Ophiacodontidae, a group of early pelycosaurs that evolved early in the Late Carboniferous. It was one of the earliest and most basal synapsids (the group which includes mammals). Below is a cladogram modified from the analysis of Benson (2012): Discovery and paleoecology Fossils of Archaeothyris were first described in 1972 from the Joggins fossil cliffs, the same locality in which the early reptiles Hylonomus and Petrolacosaurus (both of which resemble Archaeothyris) were found. Archaeothyris lived in what is now Nova Scotia, about 306 million years ago in the Carboniferous Period (Pennsylvanian). Nova Scotia at this time was a swamp, similar to today's Everglades in Florida. The "trees" (actually giant club mosses such as Lepidodendron) were very tall. Archaeothyris and the other early amniotes lived in the moist vegetation on the forest floor, together with the more terrestrially adapted labyrinthodont amphibians. See also List of pel
https://en.wikipedia.org/wiki/Unknotting%20problem
In mathematics, the unknotting problem is the problem of algorithmically recognizing the unknot, given some representation of a knot, e.g., a knot diagram. There are several types of unknotting algorithms. A major unresolved challenge is to determine if the problem admits a polynomial-time algorithm; that is, whether the problem lies in the complexity class P. Computational complexity First steps toward determining the computational complexity placed the problem in larger complexity classes, which contain the class P. By using normal surfaces to describe the Seifert surfaces of a given knot, Hass, Lagarias & Pippenger (1999) showed that the unknotting problem is in the complexity class NP. Hara, Tani & Yamamoto (2005) claimed the weaker result that unknotting is in AM ∩ co-AM; however, they later retracted this claim. In 2011, Greg Kuperberg proved that (assuming the generalized Riemann hypothesis) the unknotting problem is in co-NP, and in 2016, Marc Lackenby provided an unconditional proof of co-NP membership. The unknotting problem has the same computational complexity as testing whether an embedding of an undirected graph in Euclidean space is linkless. Unknotting algorithms Several algorithms solving the unknotting problem are based on Haken's theory of normal surfaces: Haken's algorithm uses the theory of normal surfaces to find a disk whose boundary is the knot. Haken originally used this algorithm to show that unknotting is decidable, but did not analyze its complexity in more detail. Hass, Lagarias, and Pippenger showed that the set of all normal surfaces may be represented by the integer points in a polyhedral cone and that a surface witnessing the unknottedness of a curve (if it exists) can always be found on one of the extreme rays of this cone. Therefore, vertex enumeration methods can be used to list all of the extreme rays and test whether any of them corresponds to a bounding disk of the knot. Hass, Lagarias, and Pippenger used this method to show that the unknottedness is i
https://en.wikipedia.org/wiki/Arnold%20Walfisz
Arnold Walfisz (2 July 1892 – 29 May 1962) was a Jewish-Polish mathematician working in analytic number theory. Life After the Abitur in Warsaw (Poland), Arnold Walfisz studied (1909–14 and 1918–21) in Germany at Munich, Berlin, Heidelberg and Göttingen. Edmund Landau was his doctoral-thesis supervisor at the University of Göttingen. Walfisz lived in Wiesbaden from 1922 through 1927, then he returned to Warsaw, worked at an insurance company and at the mathematical institute of the university (habilitation in 1930). In 1935, together with Salomon Lubelski, he founded the mathematical journal Acta Arithmetica. In 1936, Walfisz became professor at the University of Tbilisi in the nation of Georgia (at the time a part of the Soviet Union). He wrote approximately 100 mathematical articles and three books. Work By using a theorem by Carl Ludwig Siegel providing an upper bound for the real zeros (see Siegel zero) of Dirichlet L-functions formed with real non-principal characters, Walfisz obtained the Siegel–Walfisz theorem, from which the prime number theorem for arithmetic progressions can be deduced. By using estimates on exponential sums due to I. M. Vinogradov and others, Walfisz obtained the currently best O-estimates for the remainder terms of the summatory functions of both the sum-of-divisors function and the Euler function (in: "Weylsche Exponentialsummen in der neueren Zahlentheorie", see below). Works Pell's equation (in Russian), Tbilisi, 1952 Gitterpunkte in mehrdimensionalen Kugeln [Lattice points in multi-dimensional spheres], Państwowe Wydawnictwo Naukowe, Monografie Matematyczne, vol. 33, Warszawa, 1957, online Weylsche Exponentialsummen in der neueren Zahlentheorie [Weyl exponential sums in recent number theory], VEB Deutscher Verlag der Wissenschaften, Berlin, 1963. Further reading Ekkehard Krätzel, Christoph Lamm: Von Wiesbaden nach Tiflis – Die wechselvolle Lebensgeschichte des Zahlentheoretikers Arnold Walfisz [From Wiesbaden to Tiflis: The eventful life st
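For reference, the Siegel–Walfisz theorem is usually stated as follows (a standard textbook formulation, not quoted from Walfisz's own paper):

```latex
% Siegel–Walfisz: for any fixed N > 0 there is a constant C_N > 0 such that,
% uniformly for moduli q <= (log x)^N and residues a with gcd(a, q) = 1,
\psi(x; q, a) = \frac{x}{\varphi(q)}
  + O\!\left( x \exp\!\left( -C_N \sqrt{\log x} \right) \right),
% where \psi(x; q, a) = \sum_{n \le x,\ n \equiv a \ (\mathrm{mod}\ q)} \Lambda(n)
% is the Chebyshev function over the arithmetic progression a mod q.
```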
https://en.wikipedia.org/wiki/Eucalyptus%20oil
Eucalyptus oil is the generic name for distilled oil from the leaf of Eucalyptus, a genus of the plant family Myrtaceae native to Australia and cultivated worldwide. Eucalyptus oil has a history of wide application as a pharmaceutical, antiseptic, repellent, flavouring and fragrance, and in industry. The leaves of selected Eucalyptus species are steam distilled to extract eucalyptus oil. Types and production Eucalyptus oils in the trade are categorized into three broad types according to their composition and main end-use: medicinal, perfumery and industrial. The most prevalent is the standard cineole-based "oil of eucalyptus", a colourless mobile liquid (yellow with age) with a penetrating, camphoraceous, woody-sweet scent. China produces about 75% of the world trade, but most of this is derived from the cineole fractions of camphor laurel rather than being true eucalyptus oil. Significant producers of true eucalyptus oil include South Africa, Portugal, Spain, Brazil, Australia, Chile, and Eswatini. Global production is dominated by Eucalyptus globulus. However, Eucalyptus kochii and Eucalyptus polybractea have the highest cineole content, ranging from 80 to 95%. The British Pharmacopoeia states that the oil must have a minimum cineole content of 70% if it is pharmaceutical grade. Rectification is used to bring lower-grade oils up to the high cineole standard required. In 1991, global annual production was estimated at 3,000 tonnes for the medicinal eucalyptus oil, with another 1,500 tonnes for the main perfumery oil (produced from Eucalyptus citriodora). The eucalyptus genus also produces non-cineole oils, including piperitone, phellandrene, citral, methyl cinnamate and geranyl acetate. Uses Herbal medicine The European Medicines Agency Committee on Herbal Medicinal Products concluded that traditional medicines based on eucalyptus oil can be used for treating cough associated with the common cold, and to relieve symptoms of localized muscle pain. Repellent and b
https://en.wikipedia.org/wiki/Constrained%20Shortest%20Path%20First
Constrained Shortest Path First (CSPF) is an extension of shortest-path algorithms. The path computed using CSPF is a shortest path that fulfils a set of constraints: CSPF simply runs a shortest-path algorithm after pruning the links that violate a given set of constraints. A constraint could be minimum bandwidth required per link (also known as a bandwidth-guaranteed constraint), end-to-end delay, maximum number of links traversed, or include/exclude nodes. CSPF is widely used in MPLS Traffic Engineering. Routing using CSPF is known as Constraint-Based Routing (CBR). The path computed using CSPF could be exactly the same as the path computed by OSPF and IS-IS, or it could be completely different, depending on the set of constraints to be met. Example with bandwidth constraint Consider the example network, where a route has to be computed from router A to router C satisfying a bandwidth constraint of x units, and the link cost for each link is based on hop count (i.e., 1). If x = 50 units then CSPF will give path A → B → C. If x = 55 units then CSPF will give path A → D → E → C. If x = 90 units then CSPF will give path A → D → E → F → C. In all of these cases OSPF and IS-IS will result in path A → B → C. However, if the link costs in this topology are different, CSPF may accordingly determine a different path. For example, suppose that, as before, hop count is used as the link cost for all links except A → B and B → C, for which the cost is 4. In this case: If x = 50 units then CSPF will give path A → D → E → C. If x = 55 units then CSPF will give path A → D → E → C. If x = 90 units then CSPF will give path A → D → E → F → C.
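As a concrete illustration of the pruning idea, here is a minimal Python sketch (my own toy encoding, not any router's implementation): links below the requested bandwidth are discarded, then an ordinary Dijkstra search runs on what remains. The link bandwidths in the sample topology are hypothetical values chosen so the results match the example above.

```python
import heapq

def cspf(graph, src, dst, min_bw):
    """Constrained shortest path: prune links with bandwidth < min_bw,
    then run Dijkstra on the remaining links.

    graph: {node: [(neighbor, cost, bandwidth), ...]} adjacency list.
    Returns the path as a list of nodes, or None if dst is unreachable.
    """
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, cost, bw in graph.get(u, []):
            if bw < min_bw:
                continue  # constraint check: prune this link
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

# Hypothetical topology: (neighbor, cost, bandwidth-in-units) per link.
graph = {
    "A": [("B", 1, 50), ("D", 1, 100)],
    "B": [("C", 1, 50)],
    "D": [("E", 1, 100)],
    "E": [("C", 1, 60), ("F", 1, 100)],
    "F": [("C", 1, 100)],
}
print(cspf(graph, "A", "C", 50))  # ['A', 'B', 'C']
print(cspf(graph, "A", "C", 55))  # ['A', 'D', 'E', 'C']
print(cspf(graph, "A", "C", 90))  # ['A', 'D', 'E', 'F', 'C']
```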
https://en.wikipedia.org/wiki/Acta%20Arithmetica
Acta Arithmetica is a scientific journal of mathematics publishing papers on number theory. It was established in 1935 by Salomon Lubelski and Arnold Walfisz. The journal is published by the Institute of Mathematics of the Polish Academy of Sciences.
https://en.wikipedia.org/wiki/Pulse-per-second%20signal
A pulse per second (PPS or 1PPS) is an electrical signal that has a width of less than one second and a sharply rising or abruptly falling edge that accurately repeats once per second. PPS signals are output by radio beacons, frequency standards, other types of precision oscillators and some GPS receivers. Precision clocks are sometimes manufactured by interfacing a PPS signal generator to processing equipment that aligns the PPS signal to the UTC second and converts it to a useful display. Atomic clocks usually have an external PPS output, although internally they may operate at 9,192,631,770 Hz. PPS signals have an accuracy ranging from 12 picoseconds to a few microseconds per second, or 2.0 nanoseconds to a few milliseconds per day, based on the resolution and accuracy of the device generating the signal. Physical representation PPS signals are usually generated as a TTL signal capable of driving a 1-kilohm load. Some sources use line drivers in order to be capable of driving 50-ohm transmission lines. Because of its broad frequency content, a PPS signal can be significantly reshaped along the transmission line by dispersion and by the dielectric properties of the line. It is common to set t0 at the voltage level of the steepest slope of a PPS signal. PPS signals are therefore notoriously unreliable when time-transfer accuracies better than a nanosecond are needed, although the stability of a PPS signal can reach into the picosecond regime depending on the generating device. Uses PPS signals are used for precise timekeeping and time measurement. One increasingly common use is in computer timekeeping, including NTP. Because GPS is considered a stratum-0 source, a common use for the PPS signal is to connect it to a PC using a low-latency, low-jitter wire connection and allow a program to synchronize to it. This makes the PC a stratum-1 time source. Note that because the PPS signal does not specify the time, bu
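To make the timekeeping use concrete, here is a small sketch (entirely hypothetical code, not part of NTP) that estimates a local clock's frequency error from the local timestamps recorded at successive PPS edges; real PPS disciplining additionally filters jitter and steers the clock.

```python
import statistics

def frequency_error_ppm(local_timestamps):
    """Estimate local-clock frequency error from timestamps of successive
    PPS edges (one edge per true second).

    Fits local time against edge index by least squares; an ideal clock
    has a slope of exactly 1 second per edge.
    """
    xs = range(len(local_timestamps))
    slope = statistics.linear_regression(xs, local_timestamps).slope
    return (slope - 1.0) * 1e6  # parts per million

# A clock running 5 ppm fast stamps each "second" 1.000005 s apart:
print(frequency_error_ppm([i * 1.000005 for i in range(60)]))  # ~5.0
```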
https://en.wikipedia.org/wiki/Hubble%E2%80%93Reynolds%20law
The Hubble–Reynolds law models the surface brightness of elliptical galaxies as $I(R) = \frac{I_0}{\left(1 + R/R_H\right)^2}$, where $I(R)$ is the surface brightness at radius $R$, $I_0$ is the central brightness, and $R_H$ is the radius at which the surface brightness falls to one quarter of the central value. It is asymptotically similar to de Vaucouleurs's law, which is a special case of the Sérsic profile for elliptical galaxies. The law is named for the astronomers Edwin Hubble and John Henry Reynolds. It was first formulated by Reynolds in 1913 from his observations of galaxies (then still known as nebulae). It was later re-derived by Hubble in 1930 specifically in observations of elliptical galaxies.
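A one-line implementation makes the normalization easy to check (a minimal sketch; the variable names are mine):

```python
def hubble_reynolds(r, i0, r_h):
    """Hubble-Reynolds surface brightness I(R) = I0 / (1 + R/R_H)**2."""
    return i0 / (1.0 + r / r_h) ** 2

# At R = R_H the brightness is one quarter of the central value:
assert hubble_reynolds(1.0, 100.0, 1.0) == 25.0
```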
https://en.wikipedia.org/wiki/Fraction%20of%20absorbed%20photosynthetically%20active%20radiation
The fraction of absorbed photosynthetically active radiation (FAPAR, sometimes also noted fAPAR or fPAR) is the fraction of the incoming solar radiation in the photosynthetically active radiation spectral region that is absorbed by a photosynthetic organism, typically describing the light absorption across an integrated plant canopy. This biophysical variable is directly related to the primary productivity of photosynthesis, and some models use it to estimate the assimilation of carbon dioxide in vegetation in conjunction with the leaf area index. FAPAR can also be used as an indicator of the state and evolution of the vegetation cover; with this function, it advantageously replaces the Normalized Difference Vegetation Index (NDVI), provided it is itself properly estimated. FAPAR can be measured directly on the ground by placing a spectrometer above and below the canopy cover. On a larger scale, however, it is estimated from space measurements in the solar spectral range, and a number of state-of-the-art algorithms have been proposed to derive this important environmental variable. Currently, there are remote-sensing FAPAR products derived from instruments such as AVHRR and MODIS. FAPAR is one of the 50 Essential Climate Variables recognized by the UN Global Climate Observing System (GCOS) as necessary to characterize the climate of the Earth. GCOS has issued specific recommendations to monitor this variable systematically, both through a reanalysis of existing databases and in the future with current and forthcoming instruments.
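The ground measurement described above amounts to a simple energy balance. The sketch below is an illustrative four-flux version under simplifying assumptions of my own (real satellite retrievals such as the MODIS product use radiative-transfer models instead):

```python
def fapar(par_down_top, par_up_top, par_down_bottom, par_up_bottom):
    """Toy four-flux FAPAR: PAR absorbed by the canopy over incoming PAR.

    par_down_top: incoming PAR measured above the canopy
    par_up_top: PAR reflected upward, measured above the canopy
    par_down_bottom: PAR transmitted to below the canopy
    par_up_bottom: PAR reflected back upward by the soil below the canopy
    """
    absorbed = par_down_top - par_up_top - par_down_bottom + par_up_bottom
    return absorbed / par_down_top

# e.g. 100 units incoming, 5 reflected by the canopy, 20 reaching the soil,
# 3 bounced back up from the soil -> FAPAR = 0.78
print(fapar(100.0, 5.0, 20.0, 3.0))
```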
https://en.wikipedia.org/wiki/Normal%20cone
In algebraic geometry, the normal cone of a subscheme of a scheme is a scheme analogous to the normal bundle or tubular neighborhood in differential geometry. Definition The normal cone $C_{X/Y}$ of an embedding $i: X \to Y$, defined by some sheaf of ideals $I$, is defined as the relative Spec $\operatorname{Spec}_X\left(\bigoplus_{n \ge 0} I^n/I^{n+1}\right)$. When the embedding $i$ is regular, the normal cone is the normal bundle, the vector bundle on $X$ corresponding to the dual of the sheaf $I/I^2$. If $X$ is a point, then the normal cone and the normal bundle to it are also called the tangent cone and the tangent space (Zariski tangent space) to the point. When $Y = \operatorname{Spec} R$ is affine, the definition means that the normal cone to $X = \operatorname{Spec} R/I$ is the Spec of the associated graded ring of $R$ with respect to $I$. If $Y$ is the product $X \times X$ and the embedding $i$ is the diagonal embedding, then the normal bundle to $X$ in $Y$ is the tangent bundle to $X$. The normal cone (or rather its projective cousin) appears as the result of a blow-up. Precisely, let $\pi: \operatorname{Bl}_X Y \to Y$ be the blow-up of $Y$ along $X$. Then, by definition, the exceptional divisor is the pre-image $E = \pi^{-1}(X)$, which is the projective cone of $C_{X/Y}$; thus $E = \mathbb{P}(C_{X/Y})$. The global sections of the normal bundle classify embedded infinitesimal deformations of $X$ in $Y$; there is a natural bijection between the set of closed subschemes of $Y \times_k D$, flat over the ring $D$ of dual numbers and having $X$ as the special fiber, and $H^0(X, N_{X/Y})$. Properties Compositions of regular embeddings If $i: X \hookrightarrow Y$ and $j: Y \hookrightarrow Z$ are regular embeddings, then $j \circ i$ is a regular embedding and there is a natural exact sequence of vector bundles on $X$: $0 \to N_{X/Y} \to N_{X/Z} \to i^* N_{Y/Z} \to 0$. If $X_1, X_2 \subseteq Y$ are regular embeddings of codimensions $c_1, c_2$ and if $X_1 \cap X_2 \subseteq Y$ is a regular embedding of codimension $c_1 + c_2$, then $N_{X_1 \cap X_2/Y} = N_{X_1/Y}|_{X_1 \cap X_2} \oplus N_{X_2/Y}|_{X_1 \cap X_2}$. In particular, if $X \to S$ is a smooth morphism, then the normal bundle to the $r$-fold diagonal embedding $X \hookrightarrow X \times_S \cdots \times_S X$ is the direct sum of $r - 1$ copies of the relative tangent bundle $T_{X/S}$. Analogous statements describe the behavior of normal cones under flat base change along a closed immersion, and if $f$ is a smooth morphism and $i$ is a regular embedding, there is a natural exact sequence of vector bundles on $X$ relating the normal bundle and the relative tangent bundle (a special case of an exact sequence for cotangent sheaves).
https://en.wikipedia.org/wiki/Winogradsky%20column
The Winogradsky column is a simple device for culturing a large diversity of microorganisms. Invented in the 1880s by Sergei Winogradsky, the device is a column of pond mud and water mixed with a carbon source such as newspaper (containing cellulose), blackened marshmallows or egg-shells (containing calcium carbonate), and a sulfur source such as gypsum (calcium sulfate) or egg yolk. Incubating the column in sunlight for months results in an aerobic/anaerobic gradient as well as a sulfide gradient. These two gradients promote the growth of different microorganisms such as Clostridium, Desulfovibrio, Chlorobium, Chromatium, Rhodomicrobium, and Beggiatoa, as well as many other species of bacteria, cyanobacteria, and algae. The column provides numerous gradients, depending on additive nutrients, from which the variety of aforementioned organisms can grow. The aerobic water phase and anaerobic mud or soil phase are one such distinction. Because of oxygen's low solubility in water, the water quickly becomes anoxic towards the interface of the mud and water. Anaerobic phototrophs are still present to a large extent in the mud phase, where there is still capacity for biofilm creation and colony expansion. Algae and other aerobic phototrophs are present along the surface and in the water of the upper half of the column. Construction The column is a rough mixture of ingredients – exact measurements are not critical. A tall glass (30 cm long, >5 cm wide) is filled one third full of pond mud, omitting any sticks, debris, and air bubbles. Supplementation of ~0.25% w/w calcium carbonate and ~0.50% w/w calcium sulfate or sodium sulfate is required (ground eggshell and egg yolk, respectively, are rich in these minerals), mixed in with some shredded newspaper, filter paper or hay (for cellulose); a helper for converting these percentages into gram amounts follows below. An additional anaerobic layer, this time of unsupplemented mud, brings the container to two thirds full. Alternatively, some procedures call for sand to be
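Since the supplement amounts are specified as weight fractions, a trivial helper (hypothetical code, simply restating the percentages above) converts a chosen mud mass into gram amounts:

```python
def supplements(mud_mass_g):
    """Supplement masses for the lower mud layer at ~0.25% w/w calcium
    carbonate and ~0.50% w/w calcium (or sodium) sulfate, per the text."""
    return {
        "calcium_carbonate_g": 0.0025 * mud_mass_g,
        "sulfate_source_g": 0.0050 * mud_mass_g,
    }

print(supplements(500))  # 500 g mud -> 1.25 g CaCO3, 2.5 g sulfate
```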
https://en.wikipedia.org/wiki/Leading%20Edge%20Products
Leading Edge Products, Inc., was a computer manufacturer in the 1980s and the 1990s. It was based in Canton, Massachusetts. History Leading Edge was founded in 1980 by Thomas Shane and Michael Shane. At the outset, they were a PC peripherals company selling aftermarket products such as Elephant Memory Systems brand floppy disk media ("Elephant. Never forgets") and printer ribbons, and acting as the sole North American distributor/reseller of printers from the Japanese manufacturer C. Itoh, the most memorable being the popular low-end dot-matrix printer, "The Gorilla Banana". In 1984 the company sold the computer aftermarket product line and sales division to Dennison Computer Supplies, a division of Dennison Manufacturing. The same year, they began to use Daewoo parts, and in 1989, they were acquired by Daewoo as part of their recovery from Chapter 11 bankruptcy (Shane declared that the costs of a legal dispute with Mitsubishi led to the bankruptcy). In January 1990, Daewoo hired Al Agbay, a veteran executive from Panasonic, to lead the company out of Chapter 11 bankruptcy. In the three years that followed, Agbay and his executive team repaid dealers approximately $16 million and increased annual revenues to over $250 million before a contract dispute severed Agbay and Daewoo's relationship. In October 1995, Daewoo sold the company to Manuhold Investment AG, a Swiss electronics company. Leading Edge had sold 185,000 of its PC clones in the United States in 1994, but in 1995 sales fell from 90,000 in the first half to almost none in the second half. By 1997 the company was defunct. Products Hardware The first known computer produced by Leading Edge was the Model M, released in 1982. By 1986 it sold for US$1,695 with a monitor and two floppy drives. It used an Intel 8088-2 processor, running at a maximum of 7.16 MHz on an 8-bit bus, compared to 6 MHz for the IBM PC-AT on a 16-bit bus. The 'M' stands for Mitsubishi, their parts provider. They began produci
https://en.wikipedia.org/wiki/Coarse%20structure
In the mathematical fields of geometry and topology, a coarse structure on a set X is a collection of subsets of the Cartesian product X × X with certain properties which allow the large-scale structure of metric spaces and topological spaces to be defined. The concern of traditional geometry and topology is with the small-scale structure of the space: properties such as the continuity of a function depend on whether the inverse images of small open sets, or neighborhoods, are themselves open. Large-scale properties of a space—such as boundedness, or the degrees of freedom of the space—do not depend on such features. Coarse geometry and coarse topology provide tools for measuring the large-scale properties of a space, and just as a metric or a topology contains information on the small-scale structure of a space, a coarse structure contains information on its large-scale properties. Properly, a coarse structure is not the large-scale analog of a topological structure, but of a uniform structure. Definition A coarse structure on a set $X$ is a collection $\mathcal{E}$ of subsets of $X \times X$ (therefore falling under the more general categorization of binary relations on $X$) called controlled sets, such that $\mathcal{E}$ possesses the identity relation, is closed under taking subsets, inverses, and finite unions, and is closed under composition of relations. Explicitly: Identity/diagonal: The diagonal $\Delta = \{(x, x) : x \in X\}$ is a member of $\mathcal{E}$—the identity relation. Closed under taking subsets: If $E \in \mathcal{E}$ and $F \subseteq E$, then $F \in \mathcal{E}$. Closed under taking inverses: If $E \in \mathcal{E}$, then the inverse (or transpose) $E^{-1} = \{(y, x) : (x, y) \in E\}$ is a member of $\mathcal{E}$—the inverse relation. Closed under taking unions: If $E, F \in \mathcal{E}$, then their union $E \cup F$ is a member of $\mathcal{E}$. Closed under composition: If $E, F \in \mathcal{E}$, then their product $E \circ F = \{(x, z) : (x, y) \in E \text{ and } (y, z) \in F \text{ for some } y \in X\}$ is a member of $\mathcal{E}$—the composition of relations. A set $X$ endowed with a coarse structure $\mathcal{E}$ is a coarse space. For a subset $K$ of $X$, the set $E[K]$ is defined as $\{x \in X : (x, k) \in E \text{ for some } k \in K\}$. We define the section of $E$ by $x$ to be the set $E[\{x\}]$, also denoted $E_x$; the symbol $E^y$ denotes the set $E^{-1}[\{y\}]$. These are forms of projections. A subset $B$ of $X$ is said to be a bounded set if $B \times B$ is a controlled set. Intuition Th
https://en.wikipedia.org/wiki/Geoglobus
Geoglobus is a hyperthermophilic member of the Archaeoglobaceae within the Euryarchaeota. It consists of two species: the first, G. ahangari, was isolated from the Guaymas Basin hydrothermal system located deep within the Gulf of California. As a hyperthermophile, it grows best at a temperature of 88 °C and cannot grow at temperatures below 65 °C or above 90 °C. It possesses an S-layer cell wall and a single flagellum. G. ahangari is an anaerobe, using poorly soluble ferric iron (Fe3+) as a terminal electron acceptor. It can grow either autotrophically using hydrogen gas (H2) or heterotrophically using a large number of organic compounds, including several types of fatty acids, as energy sources. G. ahangari was the first archaeon isolated that is capable of using hydrogen gas coupled to iron reduction as an energy source, and the first anaerobe isolated that is capable of using long-chain fatty acids as an energy source. A second species, G. acetivorans, was later described; it also uses iron as its terminal electron acceptor. See also List of Archaea genera
https://en.wikipedia.org/wiki/Leading%20Edge%20Model%20D
The Leading Edge Model D is an IBM clone first released by Leading Edge Hardware in July 1985. It was initially priced at $1,495 and configured with dual 5.25" floppy drives, 256 KB of RAM, and a monochrome monitor. It was manufactured by South Korean conglomerate Daewoo and distributed by Canton, Massachusetts-based Leading Edge. Engineer Stephen Kahng spent about four months designing the Model D at a cost of $200,000. Kahng later became CEO of Macintosh clone maker Power Computing. In August 1986, Leading Edge cut the price of the base model by $200, to $1,295, and increased the base memory of the machine to 512 KB. The Model D was an immediate success, selling 100,000 units in its first year of production. It sold well for several years, until a dispute with dealers forced Leading Edge into bankruptcy in 1989. Hardware The Model D initially featured an Intel 8088 microprocessor at 4.77 MHz, although later models had a switch in the back to run at 4.77 MHz (normal) or 7.16 MHz (high). Earlier models have no turbo switch and run only at 4.77 MHz, while a few of the later ones (seemingly very rare) are 7.16 MHz only. Four models are known: DC-2010, DC-2011, DC-2010E, and DC-2011E. The "E" seems to correlate with the capability of running at 7.16 MHz. The addition of the Intel 8087 floating point unit (FPU) coprocessor is supported in all Leading Edge Model D revisions with an onboard 40-pin DIP socket. Unlike the IBM PC and IBM PC/XT, the Model D integrates video, the disk controller, a battery backed clock (real-time clock or RTC), serial, and parallel ports directly onto the motherboard rather than putting them on plug-in cards. This allows the Model D to be half the size of the IBM PC, with four free ISA expansion slots compared to the PC's one slot after installing necessary cards. The motherboard came in eight different revisions: Revision 1, 5, 7, 8, CC1, CC2, WC1, and WC2. Revisions 1 through 7 are usually found in models DC-2010 and DC-2011, with revi
https://en.wikipedia.org/wiki/Claytonia%20sibirica
Claytonia sibirica, the pink purslane, candy flower, Siberian spring beauty or Siberian miner's lettuce, is a flowering plant in the family Montiaceae, native to the Commander Islands (including Bering Island) of Siberia, and western North America from the Aleutian Islands and coastal Alaska south through Haida Gwaii, Vancouver Island, and the Cascade and Coast Ranges, to a southern limit in the Santa Cruz Mountains. Populations are also known from the Wallowa Mountains, Klamath Mountains, northern Idaho, and the Kootenai. A synonym is Montia sibirica. The plant was introduced into the United Kingdom by the 18th century, where it has become very widespread. Habitat and description It is found in moist woods. It is a long-lived perennial, biennial, or annual with hermaphroditic flowers, which are protandrous and self-fertile. The numerous fleshy stems form a rosette and the leaves are linear, lanceolate, or deltate. The flowers are 8–20 mm in diameter, with five white, candy-striped, or pink petals; flowering is typically between February and August, but some plants continue to bloom late into autumn. Invasiveness The species is now found in most of the UK, especially the west and north. It continues to spread but is not considered invasive. However, it is reported to cause local problems due to its growth timing. The fleshy leaves appear early in the season and then collapse, and may suppress the growth of later species. Uses Much like Claytonia perfoliata, the aboveground portion of the plant can be eaten raw or cooked. Some leaves have a poor taste or aftertaste. The Stewarton flower An example of the variation found in Claytonia sibirica is the subspecies known as the Stewarton flower, so named due to its local abundance in that part of North Ayrshire, Scotland, and recorded as such by the Kilmarnock Glenfield Ramblers. In 1915 it was stated to have been in the Stewarton area for over 60 years and was abundant on the Corsehill Burn. As the plant is very adept at repr
https://en.wikipedia.org/wiki/Transcytosis
Transcytosis (also known as cytopempsis) is a type of transcellular transport in which various macromolecules are transported across the interior of a cell. Macromolecules are captured in vesicles on one side of the cell, drawn across the cell, and ejected on the other side. Examples of macromolecules transported include IgA, transferrin, and insulin. While transcytosis is most commonly observed in epithelial cells, the process is also present elsewhere. Blood capillaries are a well-known site for transcytosis, though it occurs in other cells, including neurons, osteoclasts and M cells of the intestine. Regulation The regulation of transcytosis varies greatly due to the many different tissues in which this process is observed. Various tissue-specific mechanisms of transcytosis have been identified. Brefeldin A, a commonly used inhibitor of ER-to-Golgi apparatus transport, has been shown to inhibit transcytosis in dog kidney cells, which provided the first clues as to the nature of transcytosis regulation. Transcytosis in dog kidney cells has also been shown to be regulated at the apical membrane by Rab17, as well as Rab11a and Rab25. Further work on dog kidney cells has shown that a signaling cascade involving the phosphorylation of EGFR by Yes, leading to the activation of Rab11FIP5 by MAPK1, upregulates transcytosis. Transcytosis has been shown to be inhibited by the combination of progesterone and estradiol, followed by activation mediated by prolactin, in the rabbit mammary gland during pregnancy. In the thyroid, follicular cell transcytosis is regulated positively by TSH. The phosphorylation of caveolin 1 induced by hydrogen peroxide has been shown to be critical to the activation of transcytosis in pulmonary vascular tissue. It can therefore be concluded that the regulation of transcytosis is a complex process that varies between tissues. Role in pathogenesis Due to the function of transcytosis as a process that transports macromolecules across cells, it can be a
https://en.wikipedia.org/wiki/Supersampling
Supersampling or supersampling anti-aliasing (SSAA) is a spatial anti-aliasing method, i.e. a method used to remove aliasing (jagged and pixelated edges, colloquially known as "jaggies") from images rendered in computer games or other computer programs that generate imagery. Aliasing occurs because, unlike real-world objects, which have continuous smooth curves and lines, a computer screen shows the viewer a large number of small squares. These pixels all have the same size, and each one has a single color. A line can only be shown as a collection of pixels, and therefore appears jagged unless it is perfectly horizontal or vertical. The aim of supersampling is to reduce this effect. Color samples are taken at several locations inside the pixel (not just at the center, as normal), and an average color value is calculated. This is achieved by rendering the image at a much higher resolution than the one being displayed, then shrinking it to the desired size, using the extra pixels for calculation. The result is a downsampled image with smoother transitions from one line of pixels to another along the edges of objects. The number of samples determines the quality of the output. Motivation Aliasing is manifested in the case of 2D images as moiré patterns and pixelated edges, colloquially known as "jaggies". Common signal-processing and image-processing knowledge suggests that to achieve perfect elimination of aliasing, proper spatial sampling at the Nyquist rate (or higher) after applying a 2D anti-aliasing filter is required. As this approach would require a forward and inverse Fourier transformation, computationally less demanding approximations like supersampling were developed to avoid domain switches by staying in the spatial domain ("image domain"). Method Computational cost and adaptive supersampling Supersampling is computationally expensive because it requires much greater video card memory and memory bandwidth, since the amount of buffer used is several time
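The shrink step is just block averaging. A minimal NumPy sketch (my own, assuming the supersampled frame has already been rendered into an array) shows a factor-n box-filter downsample:

```python
import numpy as np

def ssaa_downsample(image, factor):
    """Box-filter downsample: average each factor x factor block of the
    supersampled frame down to one display pixel.

    image: float array of shape (H*factor, W*factor, channels), assumed
    to come from a hypothetical renderer drawing at factor x resolution.
    """
    h, w, c = image.shape
    blocks = image.reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3))

# 4x supersampling of a 256x256 frame rendered at 1024x1024:
frame = np.random.rand(1024, 1024, 3)
display = ssaa_downsample(frame, 4)
print(display.shape)  # (256, 256, 3)
```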
https://en.wikipedia.org/wiki/NSA%20Suite%20A%20Cryptography
NSA Suite A Cryptography is NSA cryptography which "contains classified algorithms that will not be released." "Suite A will be used for the protection of some categories of especially sensitive information (a small percentage of the overall national security-related information assurance market)." Incomplete list of Suite A algorithms: ACCORDION BATON CDL 1 CDL 2 FFC FIREFLY JOSEKI KEESEE MAYFLY MEDLEY MERCATOR SAVILLE SHILLELAGH WALBURN WEASEL See also Commercial National Security Algorithm Suite NSA Suite B Cryptography
https://en.wikipedia.org/wiki/Otis%20King
Otis Carter Formby King (1876–1944) was an electrical engineer in London who invented and produced a cylindrical slide rule with helical scales, intended primarily for business use. The product was named Otis King's Patent Calculator, and was manufactured and sold by Carbic Ltd. in London from about 1922 to about 1972. With a log-scale decade length of 66 inches, the Otis King calculator should be about a full digit more accurate than a 6-inch pocket slide rule. But due to inaccuracies in tick-mark placement, some portions of its scales read off by more than they should. For example, a reading of 4.630 might represent an answer of 4.632, an error of almost one part in 2000, when it should be accurate to one part in 6000 (66 in/6000 = 0.011 in estimated interpolation accuracy). The Geniac brand cylindrical slide rule sold by Oliver Garfield Company in New York was initially a relabelled Otis King; Garfield later made his own, probably unauthorized, version of the Otis King (around 1959). The UK patents covering the mechanical device(s) would have expired in about 1941–1942 (i.e., 20 years after filing of the patent), but copyright in the drawings would typically only expire 70 years after the author's death. Patents UK patent GB 207,762 (1922) UK patent GB 183,723 (1921) UK patent GB 207,856 (1922) US patent US 1,645,009 (1923) Canadian patent CA 241986 Canadian patent CA 241076 French patent FR569985 French patent FR576616 German patent DE 418814 See also Bygrave slide rule Fuller's cylindrical slide rule External links Dick Lyon's Otis King pages Cylindrical rules at Museum of HP Calculators
https://en.wikipedia.org/wiki/Sphenacodontoidea
Sphenacodontoidea is a node-based clade that is defined to include the most recent common ancestor of Sphenacodontidae and Therapsida and its descendants (including mammals). Sphenacodontoids are characterised by a number of synapomorphies concerning proportions of the bones of the skull and the teeth. The sphenacodontoids evolved from earlier sphenacodonts such as Haptodus via a number of transitional stages of small, unspecialised pelycosaurs. Classification The following taxonomy follows Fröbisch et al. (2011) and Benson (2012) unless otherwise noted. Class Synapsida Sphenacodontoidea Family †Sphenacodontidae Therapsida See also Evolution of mammals
https://en.wikipedia.org/wiki/Hua%27s%20lemma
In mathematics, Hua's lemma, named for Hua Loo-keng, is an estimate for exponential sums. It states that if $P$ is an integral-valued polynomial of degree $k$, $\varepsilon$ is a positive real number, and $f$ is the exponential sum defined by $f(\alpha) = \sum_{x=1}^{N} e^{2\pi i P(x)\alpha}$, then $\int_0^1 |f(\alpha)|^{2^\nu}\,d\alpha \ll_{P,\varepsilon} N^{2^\nu - \nu + \varepsilon}$ for $1 \le \nu \le k$; more generally, $\int_0^1 |f(\alpha)|^{\mu}\,d\alpha \ll_{P,\varepsilon} N^{\theta(\mu) + \varepsilon}$, where the point $(\mu, \theta(\mu))$ lies on a polygonal line with vertices $(2^\nu, 2^\nu - \nu)$, $\nu = 1, \ldots, k$.
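To see the objects concretely, the toy script below (my own illustration, taking $P(x) = x^2$, so $k = 2$) numerically estimates the fourth moment $\int_0^1 |f(\alpha)|^4\,d\alpha$, which the lemma with $\nu = 2$ bounds by $O(N^{2+\varepsilon})$:

```python
import cmath

def weyl_sum(alpha, N, P=lambda x: x * x):
    """f(alpha) = sum_{x=1}^{N} exp(2*pi*i*P(x)*alpha), here with P(x) = x^2."""
    return sum(cmath.exp(2j * cmath.pi * P(x) * alpha) for x in range(1, N + 1))

def moment(N, power, steps=None):
    """Riemann-sum estimate of the integral of |f(alpha)|**power over [0, 1].
    The step count scales with N**2 because f oscillates at frequencies up to P(N)."""
    steps = steps or 8 * N * N
    return sum(abs(weyl_sum(k / steps, N)) ** power
               for k in range(steps)) / steps

# The ratio moment / N**2 should stay roughly bounded as N grows:
for N in (10, 20, 40):
    print(N, moment(N, 4) / N**2)
```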
https://en.wikipedia.org/wiki/Smart%20mine
Smart mine refers to a number of next-generation land mine designs being developed by military forces around the world. Many are designed to self-destruct or self-deactivate at the end of a conflict or a preset period of time. Others are so-called "self-healing" minefields, which can detect when a gap in the field has been created and will direct their mines to reposition themselves to eliminate that gap, making it much more difficult and dangerous to create a safe path through the minefield. The development of smart mines began as a response to the International Campaign to Ban Landmines, as a way of reducing non-combatant and civilian injury. Critics claim that the new technology is unreliable, and that the perception of a "safe mine" will lead to increased deployment of land mines in future conflicts. Current guidelines allow for a 10% failure rate, leaving a significant number of mines to pose a threat. Additionally, in the case of self-destructing mines, civilians are still at risk of injury when the mine self-destructs, and are denied access to land which has been mined. As mines are inherently indiscriminate weapons, even smart mines may injure civilians during a time of war. Many critics believe this to be unacceptable under international law. The human security paradigm is outspoken on the issue of reducing the use of land mines because of the extremely individual nature of their impact, a facet that is ignored by traditional security concerns, which focus on military and state-level security issues.
https://en.wikipedia.org/wiki/Glossary%20of%20ichthyology
This glossary of ichthyology is a list of definitions of terms and concepts used in ichthyology, the study of fishes.
https://en.wikipedia.org/wiki/Satellite%20chromosome
Satellite or SAT chromosomes are chromosomes that contain secondary constrictions that serve as identifying markers. They are observed in acrocentric chromosomes. In addition to the centromere, one or more secondary constrictions can be observed in some chromosomes at metaphase. These chromosomes are called satellite chromosomes. In humans the satellite is usually associated with the short arm of an acrocentric chromosome, such as chromosomes 13, 14, 15, 21, and 22. The Y chromosome can also contain satellites, although these are thought to be translocations from autosomes. The secondary constriction always keeps its position, so it can be used as a marker to identify specific chromosomes. The name derives from the small chromosomal segment behind the secondary constriction, called a satellite, named by Sergei Navashin in 1912. Later, Heitz (1931) qualified the secondary constriction as the SAT state (Sine Acido Thymonucleinico, meaning "without thymonucleic acid"), because it did not stain with the Feulgen reaction. With time, the term "SAT-chromosome" simply became a synonym and also an abbreviation for satellite chromosome. The satellite at metaphase appears to be attached to the chromosomes by a thread of chromatin. SAT-chromosomes whose secondary constriction is associated with the formation of the nucleolus are referred to as nucleolar SAT-chromosomes. There are at least 4 SAT chromosomes in each diploid nucleus, and the constriction corresponds to a nucleolar organizer region (NOR), a region containing multiple copies of the 18S and 28S ribosomal genes that synthesize ribosomal RNA required by ribosomes. The appearance of secondary constrictions at NORs is thought to be due to rRNA transcription and/or structural features of the nucleolus impeding chromosome condensation.
https://en.wikipedia.org/wiki/Voltinism
Voltinism is a term used in biology to indicate the number of broods or generations of an organism in a year. The term is most often applied to insects, and is particularly in use in sericulture, where silkworm varieties vary in their voltinism. Univoltine (monovoltine) – (adjective) referring to organisms having one brood or generation per year Bivoltine (divoltine) – (adjective) referring to organisms having two broods or generations per year Trivoltine – (adjective) referring to organisms having three broods or generations per year Multivoltine (polyvoltine) – (adjective) referring to organisms having more than two broods or generations per year Semivoltine – There are two meanings: (biology) Less than univoltine; having a brood or generation less often than once per year or (adjective) referring to organisms whose generation time is more than one year. Examples The speckled wood butterfly is univoltine in the northern part of its range, e.g. northern Scandinavia. Adults emerge in late spring, mate, and die shortly after laying eggs; their offspring will grow until pupation, enter diapause in anticipation of the winter, and emerge as adults the following year – thus resulting in a single generation of butterflies per year. In southern Scandinavia, the same species is bivoltine – here, the offspring of spring-emerging adults will develop directly into adults during the summer, mate, and die. Their offspring in turn constitute a second generation, which is the generation that will enter winter diapause and emerge as adults (and mate) in the spring of the following year. This results in a pattern of one short-lived generation (c. 2–3 months) that breeds during the summer, and one long-lived generation (c. 9–10 months) that diapauses through the winter and breeds in the spring. The Rocky Mountain parnassian and the High brown fritillary are more examples of univoltine butterfly species. The bee species Macrotera portalis is bivoltine, and is estimated to ha
https://en.wikipedia.org/wiki/Antichamber
Antichamber is a first-person puzzle-platform game created by Australian developer Alexander "Demruth" Bruce. Many of the puzzles are based on phenomena that occur within impossible objects created by the game engine, such as passages that lead the player to different locations depending on which way they face, and structures that seem otherwise impossible within normal three-dimensional space. The game includes elements of psychological exploration through brief messages of advice to help the player figure out solutions to the puzzles as well as adages for real life. The game was released on Steam for Microsoft Windows on January 31, 2013. A version originally sold with the Humble Indie Bundle 11 in 2014 added support for Linux and OS X. Gameplay and plot In Antichamber, the player controls the unnamed protagonist from a first-person perspective as they wander through levels. Regarding typical notions of Euclidean space, Bruce has stated that "breaking down all those expectations and then remaking them is essentially the core mechanic of the game". The player starts in an antechamber that contains four walls. One is a diegetic menu to set the various game options as well as a countdown timer starting at ninety minutes. A second wall provides a map of the game's space that will fill in as the player visits specific rooms, highlighting passages the player has yet to explore, and allows the player, upon return to this room, to jump to any room they've visited before. A third wall shows a series of cartoonish iconographs and obfuscated hint text that are added as the player finds these on walls of the puzzle space. The fourth wall is a window, showing the ultimate goal, the exit from the space, which the player must figure out how to get to. Puzzle elements in various chambers involve manoeuvring themselves around the spaces, where level elements can change after passing certain points, or even based on which direction the player is facing when traversing the level
https://en.wikipedia.org/wiki/La%20petite%20mort
(; "the little death") is an expression that means "the brief loss or weakening of consciousness" and in modern usage refers specifically to "the sensation of post orgasm as likened to death." The first attested use of the expression in English was in 1572 with the meaning of "fainting fit." It later came to mean "nervous spasm" as well. The first attested use with the meaning of "orgasm" was in 1882. In modern usage, this term has generally been interpreted to describe the post-orgasmic state of unconsciousness that certain people perceive after having some sexual experiences. More widely, it can refer to the spiritual release that comes with orgasm or to a short period of melancholy or transcendence as a result of the expenditure of the "life force". Literary critic Roland Barthes spoke of as the chief objective of reading literature, the feeling one should get when experiencing any great literature. The term does not always apply to sexual experiences. It can also be used when some undesired thing has happened to a person and has affected them so much that "a part of them dies inside." A literary example of this is found in Thomas Hardy's Tess of the D'Urbervilles when he uses the phrase to describe how Tess feels after she comes across a particularly gruesome omen after meeting with her own rapist: She felt the petite mort at this unexpectedly gruesome information, and left the solitary man behind her. The term "little death", a direct translation of , can also be used in English to essentially the same effect. Specifically, it is defined as "a state or event resembling or prefiguring death; a weakening or loss of consciousness, specifically in sleep or during an orgasm," a nearly identical definition to that of the original French. As with , the earlier attested uses are not related to sex or orgasm. See also Georges Bataille Sexual headache Dhat syndrome Post-coital tristesse Postorgasmic illness syndrome Refractory period
https://en.wikipedia.org/wiki/Langmuir%20%28journal%29
Langmuir is a peer-reviewed scientific journal that was established in 1985 and is published by the American Chemical Society. It focuses on the science and application of systems and materials in which the interface dominates structure and function. Research areas covered include surface and colloid chemistry. In 2021 the journal received 129,693 total citations and had an impact factor of 4.331. Langmuir publishes original research articles, invited feature articles, perspectives, and editorials. The title honors Irving Langmuir, winner of the 1932 Nobel Prize in Chemistry. The founding editor-in-chief was Arthur W. Adamson. Abstracting and indexing Langmuir is indexed in Chemical Abstracts Service, Scopus, EBSCOhost, British Library, PubMed, Web of Science, and SwetsWise.
https://en.wikipedia.org/wiki/Tandy%20Video%20Information%20System
The Tandy Memorex Video Information System (VIS) is an interactive, multimedia CD-ROM player produced by the Tandy Corporation starting in 1992. It is similar in function to the Philips CD-i and Commodore CDTV systems (particularly the CDTV, since both the VIS and CDTV were adaptations of existing computer platforms and operating systems to the set-top-box design). The VIS systems were sold only at Radio Shack, under the Memorex brand, both of which Tandy owned at the time. Modular Windows Modular Windows is a special version of Microsoft Windows 3.1, designed to run on the Tandy Video Information System. Microsoft intended Modular Windows to be an embedded operating system for various devices, especially those designed to be connected to televisions. However, the VIS is the only known product that actually used this Windows version. It has been claimed that Microsoft created a new, incompatible version of Modular Windows ("1.1") shortly after the VIS shipped. No products are known to have actually used Modular Windows 1.1. Reception The VIS was not a successful product; by some reports Radio Shack only sold 11,000 units during the lifetime of the product. Radio Shack store employees jokingly referred to the VIS as "Virtually Impossible to Sell". Tandy discontinued the product in early 1994 and all remaining units were sold to a liquidator. Spinoffs While Modular Windows was discontinued, other modular, embedded versions of Windows were later released. These include Windows CE and Windows XP Embedded. VIS applications could be written using tools and techniques similar to those used to write software for IBM PC compatible personal computers running Microsoft Windows. This concept was carried forward in the Microsoft Xbox. Specifications Details of the system include: CPU: Intel 80286; video: Cirrus Logic; sound: Yamaha; chipset: NCR Corporation; CD-ROM: double-speed (2×) IDE drive by Mitsumi; OS: Microsoft Modular Windows. Additional details: Intel 80286 processo
https://en.wikipedia.org/wiki/Quasi-triangular%20quasi-Hopf%20algebra
A quasi-triangular quasi-Hopf algebra is a specialized form of a quasi-Hopf algebra defined by the Ukrainian mathematician Vladimir Drinfeld in 1989. It is also a generalized form of a quasi-triangular Hopf algebra. A quasi-triangular quasi-Hopf algebra is a pair $(\mathcal{H}_A, R)$ where $\mathcal{H}_A$ is a quasi-Hopf algebra with underlying algebra $A$, coproduct $\Delta$ and associator $\Phi$, and where $R \in A \otimes A$, known as the R-matrix, is an invertible element such that $R\,\Delta(a)\,R^{-1} = (\sigma \circ \Delta)(a)$ for all $a \in A$, where $\sigma$ is the switch map given by $x \otimes y \mapsto y \otimes x$, and such that $R$ satisfies two hexagon-type compatibility conditions expressing $(\Delta \otimes \operatorname{id})(R)$ and $(\operatorname{id} \otimes \Delta)(R)$ in terms of the components $R_{12}, R_{13}, R_{23}$ and the associator $\Phi$. The quasi-Hopf algebra becomes triangular if, in addition, $R_{21}R_{12} = 1 \otimes 1$. The twisting of $\mathcal{H}_A$ by an invertible element $F \in A \otimes A$ is the same as for a quasi-Hopf algebra, with the additional definition of the twisted R-matrix $R' = F_{21}\,R\,F^{-1}$. A quasi-triangular (resp. triangular) quasi-Hopf algebra with $\Phi = 1 \otimes 1 \otimes 1$ is a quasi-triangular (resp. triangular) Hopf algebra, as the latter two conditions in the definition reduce to the quasi-triangularity conditions of a Hopf algebra. Similarly to the twisting properties of the quasi-Hopf algebra, the property of being a quasi-triangular or triangular quasi-Hopf algebra is preserved by twisting. See also Ribbon Hopf algebra
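For orientation, when the associator is trivial ($\Phi = 1 \otimes 1 \otimes 1$) the defining conditions reduce to the standard quasi-triangular Hopf algebra axioms, supplied here for reference (standard formulation, not quoted from Drinfeld's paper):

```latex
R \, \Delta(a) \, R^{-1} = (\sigma \circ \Delta)(a) \quad \text{for all } a \in A,
\qquad
(\Delta \otimes \mathrm{id})(R) = R_{13} R_{23},
\qquad
(\mathrm{id} \otimes \Delta)(R) = R_{13} R_{12}.
```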
https://en.wikipedia.org/wiki/OBO%20Foundry
The Open Biological and Biomedical Ontologies (OBO) Foundry is a group of people dedicated to building and maintaining ontologies related to the life sciences. The OBO Foundry establishes a set of principles for ontology development, aimed at creating a suite of interoperable reference ontologies in the biomedical domain. Currently, more than a hundred ontologies follow the OBO Foundry principles. The OBO Foundry effort makes it easier to integrate biomedical results and carry out analyses in bioinformatics. It does so by offering a structured reference for terms of different research fields and their interconnections (e.g., a phenotype in a mouse model and its related phenotype in zebrafish). Introduction The Foundry initiative aims at improving the integration of data in the life sciences. One approach to integration is the annotation of data from different sources using controlled vocabularies. Ideally, such controlled vocabularies take the form of ontologies, which support logical reasoning over the data annotated using the terms in the vocabulary. The formalization of concepts in the biomedical domain is especially known via the work of the Gene Ontology Consortium, a part of the OBO Foundry. This has led to the development of certain proposed principles of good practice in ontology development, which are now being put into practice within the framework of the Open Biomedical Ontologies consortium through its OBO Foundry initiative. OBO ontologies form part of the resources of the National Center for Biomedical Ontology, where they form a central component of the NCBO's BioPortal. Open Biological and Biomedical Ontologies The Open Biological and Biomedical Ontologies (OBO; formerly Open Biomedical Ontologies) is an effort to create ontologies (controlled vocabularies) for use across biological and medical domains. A subset of the original OBO ontologies has started the OBO Foundry, which has led the OBO efforts since 2007. The creation of OBO in 2001 w
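For a flavor of what these controlled vocabularies look like on disk, here is an abridged term stanza in the OBO flat-file format used by the Gene Ontology (the identifiers are real GO terms, but the stanza is trimmed to the basic fields):

```
[Term]
id: GO:0006915
name: apoptotic process
namespace: biological_process
is_a: GO:0012501 ! programmed cell death
```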