https://en.wikipedia.org/wiki/National%20Chicken%20Council
The National Chicken Council (NCC) is a non-profit trade association based in Washington, D.C. that represents the interests of the United States chicken industry to the United States Congress and United States federal agencies. The association changed its name to the NCC from the National Broiler Council in 1999. Members of the NCC include chicken producers and processors, poultry distributors, and industry firms. Chicken producers and processors in the NCC account for approximately 95% of the chickens produced in the United States. Issues important to the council include biosecurity in the poultry industry and avian influenza. The council sponsors EatChicken.com, a website providing chicken recipes, cooking tips, and food safety information. In October 2011, Lampkin Butts, the President and Chief Executive Officer of Sanderson Farms, was named the Chairman of the National Chicken Council. He served for one year.
https://en.wikipedia.org/wiki/Plasma%20Physics%20Laboratory%20%28Saskatchewan%29
The Plasma Physics Laboratory at the University of Saskatchewan was established in 1959 by H. M. Skarsgard. Early work centered on research with a betatron. Facilities STOR-1M STOR-1M, built in 1983, is Canada's first tokamak. In 1987 STOR-1M achieved the world's first demonstration of alternating current operation in a tokamak. STOR-M STOR-M stands for Saskatchewan Torus-Modified. STOR-M is a tokamak located at the University of Saskatchewan. STOR-M is a small tokamak (major radius = 46 cm, minor radius = 12.5 cm) designed for studying plasma heating, anomalous transport and developing novel tokamak operation modes and advanced diagnostics. STOR-M is capable of a 30–40 millisecond plasma discharge with a toroidal magnetic field of between 0.5 and 1 tesla and a plasma current of between 20 and 50 kiloamperes. STOR-M has also demonstrated improved confinement induced by a turbulent heating pulse, electrode biasing and compact torus injection.
https://en.wikipedia.org/wiki/Range%20%28particle%20radiation%29
In passing through matter, charged particles ionize and thus lose energy in many steps, until their energy is (almost) zero. The distance to this point is called the range of the particle. The range depends on the type of particle, on its initial energy and on the material through which it passes. For example, if the ionising particle passing through the material is a positive ion like an alpha particle or proton, it will collide with atomic electrons in the material via Coulombic interaction. Since the mass of the proton or alpha particle is much greater than that of the electron, there will be no significant deviation from the radiation's incident path and very little kinetic energy will be lost in each collision. As such, it will take many successive collisions for such heavy ionising radiation to come to a halt within the stopping medium or material. Maximum energy loss will take place in a head-on collision with an electron. Since large angle scattering is rare for positive ions, a range may be well defined for that radiation, depending on its energy and charge, as well as the ionisation energy of the stopping medium. Since the nature of such interactions is statistical, the number of collisions required to bring a radiation particle to rest within the medium will vary slightly with each particle (i.e., some may travel further and undergo fewer collisions than others). Hence, there will be a small variation in the range, known as straggling. The energy loss per unit distance (and hence, the density of ionization), or stopping power also depends on the type and energy of the particle and on the material. Usually, the energy loss per unit distance increases while the particle slows down. The curve describing this fact is called the Bragg curve. Shortly before the end, the energy loss passes through a maximum, the Bragg Peak, and then drops to zero (see the figures in Bragg Peak and in stopping power). This fact is of great practical importance for radiation th
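Assuming the continuous-slowing-down idealization (an assumption of this sketch, not stated above), the mean range follows directly from the energy loss per unit distance (stopping power) just described:

```latex
% Mean (CSDA) range of a particle entering the medium with energy E_0:
% integrate the reciprocal stopping power over the energy it loses.
R(E_0) = \int_0^{E_0} \frac{\mathrm{d}E}{S(E)}, \qquad S(E) = -\frac{\mathrm{d}E}{\mathrm{d}x}
```

Straggling then appears as the small particle-to-particle spread around this mean value, as described above.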
https://en.wikipedia.org/wiki/Flag%20of%20Mars
A flag of Mars is a concept of a possible flag design, meant to symbolize the planet Mars or to represent a fictional Martian government, in works of fiction. Proposed flags Thomas O. Paine's design Thomas O. Paine, who served as the third Administrator of NASA, designed a Mars flag in 1984. Paine's Mars flag includes a sliver of Earth near the hoist side of the flag "as a reminder of where we came from, and a star near to the other side, to remind us of where we are going. In the center of the field is a representation of the Mars planet symbol, with its arrow pointing out to the star, acknowledging that Mars is not our destination, merely a way station on a journey that has no ending". Paine's flag design was illustrated by artist Carter Emmart. That illustration was published on the cover of a periodical titled The Planetary Report. According to Emmart, Paine "created the Mars flag as an award to the person or organization that he felt had contributed most to advancing the human exploration of Mars". On November 12, 2005, Ray Bradbury received a Mars flag as a part of the "Thomas O. Paine Award for the Advancement of Human Exploration of Mars". The award was presented to Bradbury during The Planetary Society's 25th Anniversary Awards Dinner. Pascal Lee's design Pascal Lee, a former NASA research engineer, designed a tricolor flag for Mars in 1999. It was flown into space on STS-103 by astronaut John M. Grunsfeld. The sequence of colors, from red, to green, and finally blue, represents the transformation of Mars from a lifeless planet to one teeming with life, as inspired by Kim Stanley Robinson's Mars trilogy of novels. It is also flown at the Flashline Mars Arctic Research Station, on behalf of the Mars Society. In science fiction In the 1953 Chuck Jones animated cartoon featuring Daffy Duck, Duck Dodgers in the 24½th Century, the character Marvin the Martian carries a pink triangular flag with a red circle. In other depictions, the flag may be rectangu
https://en.wikipedia.org/wiki/Self-Protecting%20Digital%20Content
Self Protecting Digital Content (SPDC) is a copy protection (digital rights management) architecture which allows restriction of access to, and copying of, the next generation of optical discs and streaming/downloadable content. Overview Designed by Cryptography Research, Inc. of San Francisco, SPDC executes code from the encrypted content on the DVD player, enabling the content providers to change DRM systems in case an existing system is compromised. It adds functionality to make the system "dynamic", as opposed to "static" systems in which the system and keys for encryption and decryption do not change, meaning that one compromised key can decode all content released using that encryption system. "Dynamic" systems attempt to make future content released immune to existing methods of circumvention. Playback method If a method of playback used in previously released content is revealed to have a weakness, either by review or because it has already been exploited, code embedded into content released in the future will change the method, and any attackers will have to start over and attack it again. Targeting compromised players If a certain model of player is compromised, code specific to that model can be activated to verify that a particular player has not been compromised. The player can be "fingerprinted" if found to be compromised and the information can be used later. Forensic marking Code inserted into content can add information to the output that specifically identifies the player, and in a large-scale distribution of the content, can be used to trace the player. This may include the fingerprint of a specific player. Weaknesses If an entire class of players is compromised, it is infeasible to revoke the ability to use the content on the entire class because many customers may have purchased players in the class. A fingerprint may be used to try to work around this limitation, but an attacker with access to multiple sources of video may "s
https://en.wikipedia.org/wiki/4-Androstene-3%2C6%2C17-trione
4-Androstene-3,6,17-trione (4-AT; also marketed as 6-OXO or 4-etioallocholen-3,6,17-trione) is a drug or nutritional supplement that may increase the testosterone-estrogen ratio, but has no proven effect on body composition. Its use can be detected in urine. 4-AT is a potent irreversible aromatase inhibitor that inhibits estrogen biosynthesis by permanently binding and inactivating aromatase in adipose and peripheral tissue. Aromatase is responsible for the conversion of testosterone to estradiol. Blocking aromatase lowers the body's levels of estradiol, which in turn increases LH and, consequently, testosterone. Since testosterone has myotropic activity and estradiol does not, elevated testosterone levels increase muscle mass. However, there appear to be no human or animal studies testing the hypothesis that 4-AT will produce an anabolic effect. 4-AT is used by steroid or prohormone users to counteract estrogen level increases caused by aromatization during their steroid cycle. This helps minimize side effects such as gynecomastia but can lead to acne. Also, after a steroid cycle, the compound may be used to shorten the recovery from the testicular suppression that can be the result of the use of steroids. Baylor University conducted an eight-week study to determine the effects of 300 mg or 600 mg of 4-AT in resistance-trained males. Compared to baseline, free testosterone increased by 90% in the 300 mg group and by 84% in the 600 mg group. Dihydrotestosterone and the ratio of free testosterone to estradiol also increased significantly. The report concluded that "[t]he results of this study indicate that eight weeks of 6-OXO supplementation had no effect on body composition or clinical safety markers, but incompletely inhibited aromatase activity and significantly increased endogenous DHT levels that were attenuated after a three-week washout period". This study did not utilize a control group and was funded in part by two producers of
https://en.wikipedia.org/wiki/Isle%20of%20Wight%20Garlic%20Festival
The Isle of Wight Garlic Festival is a fundraising event that is held annually on the Isle of Wight to support the island's garlic industry and to raise funds for other agricultural farms on the island. History The Garlic Festival has been held every year since 1983, except in 2020 and 2021, when it was cancelled because of the COVID-19 pandemic; it resumed in 2022. From 1985 to 2006, the Newchurch Parish Sports & Community Association organised the annual Garlic Festival, achieving its major fundraising goals. It has recently drawn 20,000 visitors a year. Further entertainment has included live music from artists such as The Wurzels, Chas & Dave, Alvin Stardust, the Glitter Band, Chesney Hawkes, Kiki Dee, and Jim Diamond. See also Gilroy Garlic Festival
https://en.wikipedia.org/wiki/Clinical%20supervision
Supervision is used in counselling, psychotherapy, and other mental health disciplines, as well as many other professions engaged in working with people. Supervision may also be applied to practitioners in somatic disciplines, both for their preparatory work for patients and for their collateral work with patients. Supervision serves as a substitute for formal retrospective inspection, delivering evidence about the skills of the supervised practitioners. It consists of the practitioner meeting regularly with another professional, not necessarily more senior, but normally with training in the skills of supervision, to discuss casework and other professional issues in a structured way. This is often known as clinical or counselling supervision (consultation differs in being optional advice from someone without a supervisor's formal authority). The purpose is to assist the practitioner to learn from his or her experience and progress in expertise, as well as to ensure good service to the client or patient. Learning is applied to planning work as well as to diagnostic and therapeutic work. Milne (2007) defined clinical supervision as: "The formal provision, by approved supervisors, of a relationship-based education and training that is work-focused and which manages, supports, develops and evaluates the work of colleague/s". The main methods that supervisors use are corrective feedback on the supervisee's performance, teaching, and collaborative goal-setting. It therefore differs from related activities, such as mentoring and coaching, by incorporating an evaluative component. Supervision's objectives are "normative" (e.g. quality control), "restorative" (e.g. encourage emotional processing) and "formative" (e.g. maintaining and facilitating supervisees' competence, capability and general effectiveness). Some practitioners (e.g. art, music and drama therapists, chaplains, psychologists, and mental health occupational therapists) have used this practice for many years
https://en.wikipedia.org/wiki/Stopping%20power%20%28particle%20radiation%29
In nuclear and materials physics, stopping power is the retarding force acting on charged particles, typically alpha and beta particles, due to interaction with matter, resulting in loss of particle kinetic energy. Stopping power is also interpreted as the rate at which a material absorbs the kinetic energy of a charged particle. Its application is important in a wide range of thermodynamic areas such as radiation protection, ion implantation and nuclear medicine. Definition and Bragg curve Both charged and uncharged particles lose energy while passing through matter. Positive ions are considered in most cases below. The stopping power depends on the type and energy of the radiation and on the properties of the material it passes. Since the production of an ion pair (usually a positive ion and a (negative) electron) requires a fixed amount of energy (for example, 33.97 eV in dry air), the number of ionizations per path length is proportional to the stopping power. The stopping power S of the material is numerically equal to the loss of energy E per unit path length x: S = -dE/dx. The minus sign makes S positive. The force usually increases toward the end of range and reaches a maximum, the Bragg peak, shortly before the energy drops to zero. The curve that describes the force as a function of the material depth is called the Bragg curve. This is of great practical importance for radiation therapy. The equation above defines the linear stopping power, which in the international system is expressed in newtons (N) but is usually indicated in other units like MeV/mm or similar. If a substance is compared in gaseous and solid form, then the linear stopping powers of the two states are very different just because of the different density. One therefore often divides the force by the density of the material to obtain the mass stopping power, which in the international system is expressed in m⁴/s² but is usually found in units like MeV/(mg/cm²) or similar. The mass stopping power then depends
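A minimal Python sketch of the two unit conventions just described, using the 33.97 eV ion-pair energy quoted above; the particle and its energy loss are invented for illustration:

```python
# Minimal sketch (illustrative values, not from the article): relate
# linear stopping power, mass stopping power, and ionization density.

W_AIR_EV = 33.97          # mean energy per ion pair in dry air (from the text)

def mass_stopping_power(linear_mev_per_cm: float, density_g_cm3: float) -> float:
    """MeV/(g/cm^2): linear stopping power divided by material density."""
    return linear_mev_per_cm / density_g_cm3

def ion_pairs_per_cm(linear_mev_per_cm: float, w_ev: float = W_AIR_EV) -> float:
    """Ionizations per cm of path: energy deposited per cm / energy per ion pair."""
    return linear_mev_per_cm * 1e6 / w_ev

# Example: a hypothetical particle losing 2.5 keV per cm in dry air
# (density roughly 1.2e-3 g/cm^3); both numbers are assumptions.
S_lin = 2.5e-3  # MeV/cm
print(mass_stopping_power(S_lin, 1.2e-3))  # ~2.08 MeV/(g/cm^2)
print(ion_pairs_per_cm(S_lin))             # ~73.6 ion pairs per cm
```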
https://en.wikipedia.org/wiki/Getting%20the%20wind%20knocked%20out%20of%20you
Getting the wind knocked out of you is an idiom that refers to the difficulty of breathing and temporary paralysis of the diaphragm caused by reflex diaphragmatic spasm when sudden force is applied to the upper central region of the abdomen and solar plexus. This often happens in contact sports, from a forceful blow to the abdomen, or by falling on the back. The sensation of being unable to breathe can lead to anxiety and there may be residual pain from the original blow, but the condition typically clears spontaneously in a minute or two. Victims of such a "winding" episode often groan in a strained manner until normal breathing resumes. Loosening restrictive garments and flexing the hips and knees can help relieve the symptoms.
https://en.wikipedia.org/wiki/Tallyman
A tallyman is an individual who keeps a numerical record with tally marks, historically often on tally sticks. Vote counter In Ireland, it is common for political parties to provide private observers when ballot boxes are opened. These tallymen keep a tally of the preferences of visible voting papers and allow an early initial estimate of which candidates are likely to win in the drawn-out single transferable vote counting process. Since the public voting process is by then complete, it is usual for tallymen from different parties to share information. Head counter Another possible definition is a person who called at houses to take a literal head count, presumably on behalf of either the town council or the house owners. This is rumoured to have occurred in Liverpool in the years after the First World War. Mechanical tally counters can make such head counts easier, by removing the need to make any marks. Debt collector In poorer parts of England (including the north and the East End of London), the tallyman was the hire purchase collector, who visited each week to collect the payments for goods purchased on the 'never never', or hire purchase. These people still had such employment up until the 1960s. The title tallyman extended to the keeper of a village pound, as animals were often held against debts, and tally sticks were used to prove they could be released. In popular culture "'The tallyman,' Mum told me, 'slice off the top of the stems of the bunches as they take them in. Then him count the little stubs he just sliced off and pay the farmer.'" explains a Ms. Wade in Andrea Levy's novel "Fruit of the Lemon". Harry Belafonte addresses the tallyman in "Day-O (The Banana Boat Song)." In 1967 Graham Gouldman wrote a song called "Tallyman," which was recorded by Jeff Beck and reached #30 on the British charts. Heavy metal singer Udo Dirkschneider produced a song called "Tallyman." The Tally Man is the name of two super villains in the DC Universe, usually ene
https://en.wikipedia.org/wiki/Maxwell%20bridge
A Maxwell bridge is a modification to a Wheatstone bridge used to measure an unknown inductance (usually of low Q value) in terms of calibrated resistance and inductance or resistance and capacitance. When the calibrated components are a parallel resistor and capacitor, the bridge is known as a Maxwell bridge. It is named for James C. Maxwell, who first described it in 1873. It uses the principle that the positive phase angle of an inductive impedance can be compensated by the negative phase angle of a capacitive impedance when put in the opposite arm and the circuit is at resonance; i.e., there is no potential difference across the detector (an AC voltmeter or ammeter) and hence no current flowing through it. The unknown inductance then becomes known in terms of this capacitance. With reference to the picture, in a typical application R1 and R4 are known fixed entities, and R2 and C2 are known variable entities. R2 and C2 are adjusted until the bridge is balanced. R3 and L3 can then be calculated based on the values of the other components: R3 = R1·R4/R2 and L3 = R1·R4·C2. To avoid the difficulties associated with determining the precise value of a variable capacitance, sometimes a fixed-value capacitor will be installed and more than one resistor will be made variable. The bridge cannot be used for the measurement of high Q values, and it is also unsuited to coils with Q values below one because of balance convergence problems; its use is limited to the measurement of moderate Q values, roughly from 1 to 10. The frequency of the AC current used to assess the unknown inductor should match the frequency of the circuit the inductor will be used in - the impedance and therefore the assigned inductance of the component varies with frequency. For ideal inductors, this relationship is linear, so that the inductance value at an arbitrary frequency can be calculated from the inductance value measured at some reference frequency. Unfortunately, for real components, this relationship is not linear, and using a derived or calculated v
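Under the component labels reconstructed above (R1, R4 fixed; R2, C2 variable), the balance equations can be evaluated directly; a minimal Python sketch with invented example values:

```python
# Minimal sketch of the Maxwell bridge balance equations, using the
# component labels from the text (R1, R4 fixed; R2, C2 variable).

def maxwell_bridge_unknowns(r1: float, r4: float, r2: float, c2: float):
    """Return (R3, L3) of the unknown inductor at bridge balance."""
    r3 = r1 * r4 / r2        # series resistance of the inductor under test
    l3 = r1 * r4 * c2        # inductance; frequency drops out at balance
    return r3, l3

# Example (assumed values): R1 = R4 = 10 kOhm, balance reached
# with R2 = 50 kOhm and C2 = 10 nF.
r3, l3 = maxwell_bridge_unknowns(10e3, 10e3, 50e3, 10e-9)
print(r3)  # 2000.0 ohms
print(l3)  # 1.0 henry
```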
https://en.wikipedia.org/wiki/Multiple%20complex%20developmental%20disorder
Multiple complex developmental disorder (MCDD) is a research category, proposed to involve several neurological and psychological symptoms where at least some symptoms are first noticed during early childhood and persist throughout life. It was originally suggested to be a subtype of pervasive developmental disorders (PDD) with co-morbid schizophrenia or another psychotic disorder; however, there is some controversy in that not everyone with MCDD meets criteria for both PDD and psychosis. The term multiplex developmental disorder was coined by Donald J. Cohen in 1986. Diagnostic criteria The diagnostic criteria for MCDD are a matter of debate because the disorder appears in neither the DSM-5 nor the ICD-10, and different sources give different criteria. At least three of the following categories should be present. Co-occurring clusters of symptoms must also not be better explained by being symptoms of another disorder, such as experiencing mood swings due to autism, cognitive difficulties due to schizophrenia, and so on. The exact diagnostic criteria for MCDD remain unclear, but it may be a useful diagnosis for people who do not fall into any specific category. It could also be argued that MCDD is a vague and unhelpful term for these patients. Psychotic symptoms Criteria are met for a psychotic disorder. Some symptoms may include: Delusions, such as thought insertion, paranoid preoccupations, fantasies of personal omnipotence, over engagement with fantasy figures, grandiose fantasies of special powers, referential ideation, and confusion between fantasy and real life. Hallucinations and/or unusual perceptual experiences. Negative symptoms (anhedonia, affective flattening, alogia, avolition). Disorganized behavior and/or speech such as thought disorder, easy confusability, inappropriate emotions/facial expressions, uncontrollable laughter, etc. Catatonic behavior. Affective and behavioral symptoms These symptoms are not due to situations such as, person is depressed because of
https://en.wikipedia.org/wiki/Biomolecular%20structure
Biomolecular structure is the intricate folded, three-dimensional shape that is formed by a molecule of protein, DNA, or RNA, and that is important to its function. The structure of these molecules may be considered at any of several length scales ranging from the level of individual atoms to the relationships among entire protein subunits. This useful distinction among scales is often expressed as a decomposition of molecular structure into four levels: primary, secondary, tertiary, and quaternary. The scaffold for this multiscale organization of the molecule arises at the secondary level, where the fundamental structural elements are the molecule's various hydrogen bonds. This leads to several recognizable domains of protein structure and nucleic acid structure, including such secondary-structure features as alpha helixes and beta sheets for proteins, and hairpin loops, bulges, and internal loops for nucleic acids. The terms primary, secondary, tertiary, and quaternary structure were introduced by Kaj Ulrik Linderstrøm-Lang in his 1951 Lane Medical Lectures at Stanford University. Primary structure The primary structure of a biopolymer is the exact specification of its atomic composition and the chemical bonds connecting those atoms (including stereochemistry). For a typical unbranched, un-crosslinked biopolymer (such as a molecule of a typical intracellular protein, or of DNA or RNA), the primary structure is equivalent to specifying the sequence of its monomeric subunits, such as amino acids or nucleotides. The primary structure of a protein is reported starting from the amino N-terminus to the carboxyl C-terminus, while the primary structure of DNA or RNA molecule is known as the nucleic acid sequence reported from the 5' end to the 3' end. The nucleic acid sequence refers to the exact sequence of nucleotides that comprise the whole molecule. Often, the primary structure encodes sequence motifs that are of functional importance. Some examples of such motif
https://en.wikipedia.org/wiki/Mitotic%20recombination
Mitotic recombination is a type of genetic recombination that may occur in somatic cells during their preparation for mitosis in both sexual and asexual organisms. In asexual organisms, the study of mitotic recombination is one way to understand genetic linkage because it is the only source of recombination within an individual. Additionally, mitotic recombination can result in the expression of recessive alleles in an otherwise heterozygous individual. This expression has important implications for the study of tumorigenesis and lethal recessive alleles. Mitotic homologous recombination occurs mainly between sister chromatids subsequent to replication (but prior to cell division). Inter-sister homologous recombination is ordinarily genetically silent. During mitosis the incidence of recombination between non-sister homologous chromatids is only about 1% of that between sister chromatids. Discovery The discovery of mitotic recombination came from the observation of twin spotting in Drosophila melanogaster. This twin spotting, or mosaic spotting, was observed in D. melanogaster as early as 1925, but it was only in 1936 that Curt Stern explained it as a result of mitotic recombination. Prior to Stern's work, it was hypothesized that twin spotting happened because certain genes had the ability to eliminate the chromosome on which they were located. Later experiments uncovered when mitotic recombination occurs in the cell cycle and the mechanisms behind recombination. Occurrence Mitotic recombination can happen at any locus but is observable in individuals that are heterozygous at a given locus. If a crossover event between non-sister chromatids affects that locus, then both homologous chromosomes will have one chromatid containing each genotype. The resulting phenotype of the daughter cells depends on how the chromosomes line up on the metaphase plate. If the chromatids containing different alleles line up on the same side of the plate, then the resulting
https://en.wikipedia.org/wiki/SiRFstarIII
SiRFstarIII is a range of high sensitivity GPS microcontroller chips manufactured by SiRF Technology. GPS microcontroller chips interpret signals from GPS satellites and determine the position of the GPS receiver. It was announced in 2004. Features SiRFstarIII features:
- A 20-channel receiver, which can process the signals of all visible GPS and WAAS satellites simultaneously.
- Power consumption of 62 mW during continuous operation.
- Assisted GPS client capability, which can reduce TTFF to less than one second.
- Receiver sensitivity of -159 dBm while tracking.
- SBAS (WAAS, MSAS, EGNOS) support.
https://en.wikipedia.org/wiki/Yojimbo%20%28software%29
Yojimbo is a personal information manager for macOS by Bare Bones Software. It can store notes, images and media, URLs, web pages, and passwords. Yojimbo can also encrypt any of its contents and store the password in the Keychain. It is Bare Bones' second Cocoa application. History Yojimbo was first released on January 23, 2006. At the time, Bare Bones called it "a completely new information organizer". Subsequent releases included versions 1.1, 1.3, 1.5, 2.0, 2.1, 2.2, 3 (alongside a new iPad version), 4, and 4.5. In 2007, another developer, Adrian Ross, created Webjimbo, a web interface through which users can access their Yojimbo libraries. Like other developers, Bare Bones Software faced difficulties adding iCloud sync due to early limitations in Apple's service. Tech reporter Christophe Laporte criticized Yojimbo's transition to iCloud as bungled, and expressed frustration at the lack of updates to the app.
https://en.wikipedia.org/wiki/Ku%20%28protein%29
Ku is a dimeric protein complex that binds to DNA double-strand break ends and is required for the non-homologous end joining (NHEJ) pathway of DNA repair. Ku is evolutionarily conserved from bacteria to humans. The ancestral bacterial Ku is a homodimer (two copies of the same protein bound to each other). Eukaryotic Ku is a heterodimer of two polypeptides, Ku70 (XRCC6) and Ku80 (XRCC5), so named because the molecular weight of the human Ku proteins is around 70 kDa and 80 kDa. The two Ku subunits form a basket-shaped structure that threads onto the DNA end. Once bound, Ku can slide down the DNA strand, allowing more Ku molecules to thread onto the end. In higher eukaryotes, Ku forms a complex with the DNA-dependent protein kinase catalytic subunit (DNA-PKcs) to form the full DNA-dependent protein kinase, DNA-PK. Ku is thought to function as a molecular scaffold to which other proteins involved in NHEJ can bind, orienting the double-strand break for ligation. The Ku70 and Ku80 proteins consist of three structural domains. The N-terminal domain is an alpha/beta domain. This domain only makes a small contribution to the dimer interface. The domain comprises a six-stranded beta sheet of the Rossmann fold. The central domain of Ku70 and Ku80 is a DNA-binding beta-barrel domain. Ku makes only a few contacts with the sugar-phosphate backbone, and none with the DNA bases, but it fits sterically to major and minor groove contours forming a ring that encircles duplex DNA, cradling two full turns of the DNA molecule. By forming a bridge between the broken DNA ends, Ku acts to structurally support and align the DNA ends, to protect them from degradation, and to prevent promiscuous binding to unbroken DNA. Ku effectively aligns the DNA, while still allowing access of polymerases, nucleases and ligases to the broken DNA ends to promote end joining. The C-terminal arm is an alpha helical region which embraces the central beta-barrel domain of the opposite subunit. In some ca
https://en.wikipedia.org/wiki/Aircrack-ng
Aircrack-ng is a network software suite consisting of a detector, packet sniffer, WEP and WPA/WPA2-PSK cracker and analysis tool for 802.11 wireless LANs. It works with any wireless network interface controller whose driver supports raw monitoring mode and can sniff 802.11a, 802.11b and 802.11g traffic. Packages are released for Linux and Windows. Aircrack-ng is a fork of the original Aircrack project. It can be found as a preinstalled tool in many security-focused Linux distributions such as Kali Linux and Parrot Security OS, which share common attributes because both are developed on top of the same project (Debian). Development Aircrack was originally developed by French security researcher Christophe Devine; its main goal was to recover 802.11 wireless network WEP keys using an implementation of the Fluhrer, Mantin and Shamir (FMS) attack alongside attacks shared by a hacker named KoreK. Aircrack was forked by Thomas D'Otreppe in February 2006 and released as Aircrack-ng (Aircrack Next Generation). Wi-Fi security history WEP Wired Equivalent Privacy was the first security algorithm to be released, with the intention of providing data confidentiality comparable to that of a traditional wired network. It was introduced in 1997 as part of the IEEE 802.11 technical standard and based on the RC4 cipher and the CRC-32 checksum algorithm for integrity. Due to U.S. restrictions on the export of cryptographic algorithms, WEP was effectively limited to 64-bit encryption. Of this, 40 bits were allocated to the key and 24 bits to the initialization vector (IV), to form the RC4 key. After the restrictions were lifted, versions of WEP with stronger, 128-bit encryption were released: 104 bits for the key size and 24 bits for the initialization vector, known as WEP2. The initialization vector works as a seed, which is prepended to the key. Via the key-scheduling algorithm (KSA), the seed is used to initialize the RC4 cipher's state. The output of RC4's pseudo random ge
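To make the seed construction concrete, here is a minimal Python sketch (educational only, not Aircrack-ng's code; the example IV and key bytes are invented) of WEP-64's IV-prepended seed and the standard RC4 key-scheduling algorithm:

```python
# Minimal sketch of how WEP forms the RC4 seed and runs the RC4
# key-scheduling algorithm (KSA), as described above. WEP is broken;
# this illustrates the construction, nothing more.

def wep_seed(iv: bytes, key: bytes) -> bytes:
    """WEP-64: a 24-bit IV prepended to a 40-bit key gives the RC4 seed."""
    assert len(iv) == 3 and len(key) == 5
    return iv + key

def rc4_ksa(seed: bytes) -> list:
    """Standard RC4 key scheduling: permute the state array S using the seed."""
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + seed[i % len(seed)]) % 256
        s[i], s[j] = s[j], s[i]
    return s

# Hypothetical IV and key, for illustration only.
state = rc4_ksa(wep_seed(b"\x01\x02\x03", b"SECRE"))
# Because the 24-bit IV is sent in the clear and the IV space is small,
# IVs repeat quickly; attacks such as FMS exploit many observed IVs.
```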
https://en.wikipedia.org/wiki/Non-interference%20%28security%29
Noninterference is a strict multilevel security policy model, first described by Goguen and Meseguer in 1982, and amplified further in 1984. Introduction In simple terms, a computer is modeled as a machine with inputs and outputs. Inputs and outputs are classified as either low (low sensitivity, not highly classified) or high (sensitive, not to be viewed by uncleared individuals). A computer has the noninterference property if and only if any sequence of low inputs will produce the same low outputs, regardless of what the high level inputs are. That is, if a low (uncleared) user is working on the machine, it will respond in exactly the same manner (on the low outputs) whether or not a high (cleared) user is working with sensitive data. The low user will not be able to acquire any information about the activities (if any) of the high user. Formal expression Let M be a memory configuration, and let M_L and M_H be the projections of the memory M to the low and high parts, respectively. Let =_L be the relation that compares the low parts of two memory configurations, i.e., M =_L M' iff M_L = M'_L. Let P(M) denote the execution of the program P starting with memory configuration M and terminating with the memory configuration P(M). The definition of noninterference for a deterministic program P is the following: for all memory configurations M1 and M2, if M1 =_L M2 then P(M1) =_L P(M2). Limitations Strictness This is a very strict policy, in that a computer system with covert channels may comply with, say, the Bell–LaPadula model, but will not comply with noninterference. The reverse could be true (under reasonable conditions, being that the system should have labelled files, etc.) except for the "No classified information at startup" exceptions noted below. However, noninterference has been shown to be stronger than nondeducibility. This strictness comes with a price. It is very difficult to make a computer system with this property. There may be only one or two commercially available products that have been verified to comply with this policy, and these would essentially be as
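The definition can be checked directly for small deterministic programs. Below is a minimal Python sketch under assumed toy semantics (memory is a (low, high) pair of small integers; both example programs are invented):

```python
# Minimal sketch (toy model, assumed for illustration): brute-force check
# of the noninterference definition above for a deterministic program
# whose memory configuration is a pair (low, high).

def check_noninterference(program, lows, highs) -> bool:
    """True iff configurations that agree on the low part always produce
    results that agree on the low part, whatever the high part is."""
    for low in lows:
        # All runs below start =L-equal; collect their low outputs.
        low_outputs = {program((low, high))[0] for high in highs}
        if len(low_outputs) > 1:   # a high input influenced a low output
            return False
    return True

secure = lambda m: (m[0] + 1, m[1] * 2)    # low output ignores the high part
leaky  = lambda m: (m[0] + m[1], m[1])     # high part leaks into low output

domain = range(4)
print(check_noninterference(secure, domain, domain))  # True
print(check_noninterference(leaky, domain, domain))   # False
```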
https://en.wikipedia.org/wiki/Recursive%20grammar
In computer science, a grammar is informally called a recursive grammar if it contains production rules that are recursive, meaning that expanding a non-terminal according to these rules can eventually lead to a string that includes the same non-terminal again. Otherwise it is called a non-recursive grammar. For example, a grammar for a context-free language is left recursive if there exists a non-terminal symbol A that can be put through the production rules to produce a string with A (as the leftmost symbol). All types of grammars in the Chomsky hierarchy can be recursive and it is recursion that allows the production of infinite sets of words. Properties A non-recursive grammar can produce only a finite language; and each finite language can be produced by a non-recursive grammar. For example, a straight-line grammar produces just a single word. A recursive context-free grammar that contains no useless rules necessarily produces an infinite language. This property forms the basis for an algorithm that can test efficiently whether a context-free grammar produces a finite or infinite language.
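The finiteness test mentioned above can be sketched directly. The Python sketch below assumes a context-free grammar already stripped of useless rules, given in an invented dictionary format, and reports the language infinite exactly when some nonterminal lies on a derivation cycle:

```python
# Minimal sketch of the finiteness test, assuming a context-free grammar
# with no useless rules. Grammar format (an assumption of this sketch):
# {nonterminal: [list of right-hand sides, each a list of symbols]}.
# The language is infinite iff the "appears on a right-hand side of"
# graph over nonterminals has a cycle, i.e. some nonterminal is recursive.

def is_infinite(grammar: dict) -> bool:
    # Edge A -> B whenever nonterminal B occurs on a right-hand side of A.
    edges = {a: {s for rhs in rules for s in rhs if s in grammar}
             for a, rules in grammar.items()}

    def on_cycle(start):
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            for nxt in edges.get(node, ()):
                if nxt == start:
                    return True
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return False

    return any(on_cycle(a) for a in grammar)

# "S -> a S b | a b" is recursive, hence infinite; a straight-line grammar is not.
print(is_infinite({"S": [["a", "S", "b"], ["a", "b"]]}))   # True
print(is_infinite({"S": [["A", "A"]], "A": [["a"]]}))      # False
```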
https://en.wikipedia.org/wiki/FLAGS%20register
The FLAGS register is the status register that contains the current state of an x86 CPU. The size and meanings of the flag bits are architecture dependent. It usually reflects the result of arithmetic operations as well as information about restrictions placed on the CPU operation at the current time. Some of those restrictions may include preventing some interrupts from triggering, or prohibition of execution of a class of "privileged" instructions. Additional status flags may bypass memory mapping and define what action the CPU should take on arithmetic overflow. The carry, parity, auxiliary carry (or half carry), zero and sign flags are included in many architectures. In the i286 architecture, the register is 16 bits wide. Its successors, the EFLAGS and RFLAGS registers, are 32 bits and 64 bits wide, respectively. The wider registers retain compatibility with their smaller predecessors. FLAGS Each flag is conventionally queried by ANDing the FLAGS register value with that flag's hexadecimal bitmask. Usage All FLAGS registers contain the condition codes, flag bits that let the results of one machine-language instruction affect another instruction. Arithmetic and logical instructions set some or all of the flags, and conditional jump instructions take variable action based on the value of certain flags. For example, jz (Jump if Zero), jc (Jump if Carry), and jo (Jump if Overflow) depend on specific flags. Other conditional jumps test combinations of several flags. FLAGS registers can be moved from or to the stack. This is part of the job of saving and restoring CPU context, so that a routine such as an interrupt service routine does not let its changes to registers be seen by the calling code. Here are the relevant instructions: The PUSHF and POPF instructions transfer the 16-bit FLAGS register. PUSHFD/POPFD (introduced with the i386 architecture) transfer the 32-bit double register EFLAGS. PUSHFQ/POPFQ (introduced with the x64 architecture) tr
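As an illustration of how condition codes summarize an arithmetic result, the Python sketch below models the flags that jumps like jz, jc and jo test; this is a simplified 8-bit model with invented example values, not the full x86 semantics:

```python
# Minimal sketch (simplified model, not the complete x86 definition):
# compute common condition codes for an 8-bit addition.

def add8_flags(a: int, b: int) -> dict:
    result = (a + b) & 0xFF
    return {
        "CF": (a + b) > 0xFF,                          # carry out of bit 7
        "ZF": result == 0,                             # result is zero
        "SF": bool(result & 0x80),                     # sign bit (bit 7)
        "PF": bin(result).count("1") % 2 == 0,         # even number of 1 bits
        "OF": bool((~(a ^ b) & (a ^ result)) & 0x80),  # signed overflow
    }

print(add8_flags(0x7F, 0x01))  # SF and OF set: 0x7F + 1 leaves the signed range
print(add8_flags(0xFF, 0x01))  # CF and ZF set: wraps to 0x00 with a carry out
```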
https://en.wikipedia.org/wiki/Gempack
GEMPACK (General Equilibrium Modelling PACKage) is a modeling system for CGE economic models, used at the Centre of Policy Studies (CoPS) in Melbourne, Australia, and sold to other CGE modellers. Some of the more well-known CGE models solved using GEMPACK are the GTAP model of world trade, and the MONASH, MMRF, ORANI-G and TERM models used at CoPS. All these models share a distinctive feature: they are formulated as a system of differential equations in percentage change form; however, this is not required by GEMPACK. Main features A characteristic feature of CGE models is that an initial solution for the model can be readily constructed from a table of transaction values (such as an input-output table or a social accounting matrix) that satisfies certain basic accounting restrictions. GEMPACK builds on this feature by formulating the CGE model as an initial value problem which is solved using standard techniques. The GEMPACK user specifies her model by constructing a text file listing model equations and variables, and showing how variables relate to value flows stored on an initial data file. GEMPACK translates this file into a computer program which solves the model, i.e., computes how model variables might change in response to an external shock. The original equation system is linearized (reformulated as a system of first-order partial differential equations). If most variables are expressed in terms of percentage changes (akin to log changes) the coefficients of the linearized system are usually very simple functions of database value flows. Computer algebra is used at this point to greatly reduce (by substitution) the size of the system. Then it is solved by multistep methods such as the Euler method, midpoint method or Gragg's modified Midpoint method. These all require solution of a large system of linear equations, which is accomplished by sparse matrix techniques. Richardson extrapolation is used to improve accuracy. The final result is an accurate soluti
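As a toy illustration of this solution strategy (not GEMPACK itself; the model, numbers, and function names are invented for the sketch), consider the levels equation y = x², whose percentage-change linearization is p_y = 2·p_x. The sketch applies a 10% shock to x with one-step and two-step Euler solutions, then combines them by Richardson extrapolation:

```python
# Toy sketch of the GEMPACK-style strategy described above: linearize a
# levels equation in percentage changes, solve the shock by multistep
# Euler, then improve accuracy with Richardson extrapolation.
# Model (assumed): y = x**2, linearized as  p_y = 2 * p_x.

def euler_solve(shock: float, steps: int, elasticity: float = 2.0) -> float:
    """Cumulative proportional change in y after applying the x shock in
    `steps` equal sub-shocks, re-linearizing at each step."""
    x, y = 1.0, 1.0
    dx = shock / steps                 # equal absolute sub-shocks to x
    for _ in range(steps):
        p_x = dx / x                   # percentage change in x this step
        y *= 1.0 + elasticity * p_x    # linearized update: p_y = 2 * p_x
        x += dx
    return y - 1.0

one, two = euler_solve(0.10, 1), euler_solve(0.10, 2)
richardson = 2 * two - one             # eliminate the leading error term
print(one, two, richardson)            # 0.20, ~0.2048, ~0.2095; exact is 0.21
```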
https://en.wikipedia.org/wiki/Pixelplus
Pixel Plus is a proprietary digital filter image processing technology developed by Philips, which claims that it enhances the display of analogue broadcast signals on their TVs. Pixel Plus interpolates the broadcast signal to increase the picture size by one third, from 625 lines to 833 lines. It also doubles the horizontal resolution, although each horizontal line remains analogue. Other features include motion interpolation, a processing technique that interpolates (or creates) video fields (or frames) by analyzing the fields (or frames) before and after the insertion point. This process is primarily focused on film-based content, which is filmed at either 24 fps or 25 fps. The motion interpolation function of Pixel Plus is an alternative to 3:2 pulldown processing, which is the standard process of converting film to video. In 2005, Pixelplus 2 was launched. This version was the first to be able to perform motion interpolation on 480p and 576p material. In 2006, Pixelplus 3 was launched. This version was the first to be able to perform motion interpolation on 720p and 1080i material, except for US products. In 2007, Pixel Perfect HD Engine was launched. This version was the first to be able to perform motion interpolation on 1080p material, and introduced 720p and 1080i motion interpolation in US products. Not to be confused with Pixelplus Co., Ltd. (Nasdaq: PXPL), a fabless semiconductor company in Korea that designs, develops, and markets CMOS image sensors for various consumer electronics applications.
https://en.wikipedia.org/wiki/Vinca%20alkaloid
Vinca alkaloids are a set of anti-mitotic and anti-microtubule alkaloid agents originally derived from the periwinkle plant Catharanthus roseus (basionym Vinca rosea) and other vinca plants. They block beta-tubulin polymerization in a dividing cell. Sources The Madagascan periwinkle Catharanthus roseus L. is the source for a number of important natural products, including catharanthine and vindoline and the vinca alkaloids it produces from them: leurosine and the chemotherapy agents vinblastine and vincristine, all of which can be obtained from the plant. The newer semi-synthetic chemotherapeutic agent vinorelbine is used in the treatment of non-small-cell lung cancer and is not known to occur naturally. However, it can be prepared either from vindoline and catharanthine or from leurosine, in both cases by synthesis of anhydrovinblastine, which "can be considered as the key intermediate for the synthesis of vinorelbine." The leurosine pathway uses the Nugent–RajanBabu reagent in a highly chemoselective de-oxygenation of leurosine. Anhydrovinblastine is then reacted sequentially with N-bromosuccinimide and trifluoroacetic acid followed by silver tetrafluoroborate to yield vinorelbine. Applications Vinca alkaloids are used in chemotherapy for cancer. They are a class of cell cycle–specific cytotoxic drugs that work by inhibiting the ability of cancer cells to divide: Acting upon tubulin, they prevent it from forming into microtubules, a necessary component for cellular division. The vinca alkaloids thus prevent microtubule polymerization, as opposed to the mechanism of action of taxanes. Vinca alkaloids are now produced synthetically and used as drugs in cancer therapy and as immunosuppressive drugs. These compounds include vinblastine, vincristine, vindesine, and vinorelbine. Additional researched vinca alkaloids include vincaminol, vineridine, and vinburnine. Vinpocetine is a semi-synthetic derivative of vincamine (sometimes described as "a synthetic ethyl
https://en.wikipedia.org/wiki/List%20of%20desktop%20publishing%20software
The following is a list of major desktop publishing software. A wide range of related software tools exist in this field, including many plug-ins and tools related to the applications listed below. Several software directories provide more comprehensive listings of desktop publishing software, including VersionTracker and Tucows. Free software This section lists free software for desktop publishing; all of it is, by definition, open-source. While not required, the software listed in this section is also available free of charge. (In principle, in rare cases, free software is sold without being distributed over the Internet.) Desktop publishing software for Windows, macOS, Linux and other operating systems
- Collabora Online Draw and Collabora Online Writer. The applications for Windows, macOS, Linux and ChromeOS are also known as Collabora Office.
- LibreOffice Draw and LibreOffice Writer for Windows, macOS, Linux, BSDs and others
- LyX for Windows, macOS, Linux, UNIX, OS/2 and Haiku, based on the LaTeX typesetting system, initial release in 1995
- Scribus for Windows, macOS, Linux, BSD, Unix, Haiku, OS/2, based on the free Qt toolkit, initial release in 2003
Online desktop publishing software
- Collabora Online Draw and Collabora Online Writer
- Scenari, open source single-source publishing tool with support for chain publication
Proprietary Desktop publishing software for Windows
- XEditpro Automated Publishing Tool - DiacriTech, 1997
- Adobe InDesign
- Adobe FrameMaker
- Adobe PageMaker, discontinued in 2004
- Affinity Publisher
- CatBase
- Calamus
- CorelDRAW
- Corel Ventura, previously Ventura Publisher, originally developed by Xerox, now owned by Corel
- FrameMaker, now owned by Adobe
- InPage - DTP which works with English + Urdu, Arabic, Persian, Pashto etc.
- MadCap Flare
- Microsoft Publisher
- PageStream, formerly known as Publishing Partner
- Prince XML, by YesLogic
- QuarkXPress
- RagTime
- Ready, Set, Go!
- Xara Designer Pro X
- Xara Page & Layout Designer
Deskt
https://en.wikipedia.org/wiki/Initiation%20factor
Initiation factors are proteins that bind to the small subunit of the ribosome during the initiation of translation, a part of protein biosynthesis. Initiation factors can interact with repressors to slow down or prevent translation. They also have the ability to interact with activators to help them start or increase the rate of translation. In bacteria, they are simply called IFs (i.e., IF1, IF2, and IF3) and in eukaryotes they are known as eIFs (i.e., eIF1, eIF2, eIF3). Translation initiation is sometimes described as a three-step process which initiation factors help to carry out. First, the tRNA carrying a methionine amino acid binds to the small ribosomal subunit; the complex then binds to the mRNA, and finally joins together with the large ribosomal subunit. The initiation factors that help with this process each have different roles and structures. Types The initiation factors are divided into three major groups by taxonomic domain, with some homologies shared among the domain-specific factors. Structure and function Many structural domains have been conserved through evolution, as prokaryotic initiation factors share similar structures with eukaryotic factors. The prokaryotic initiation factor IF3 assists with start site specificity, as well as mRNA binding, functions also performed by the eukaryotic initiation factor eIF1. The eIF1 structure is similar to the C-terminal domain of IF3, as they each contain a five-stranded beta sheet against two alpha helices. The prokaryotic initiation factors IF1 and IF2 are also homologs of the eukaryotic initiation factors eIF1A and eIF5B. IF1 and eIF1A, both containing an OB-fold, bind to the A site and assist in the assembly of initiation complexes at the start codon. IF2 and eIF5B assist in the joining of the small and large ribosomal subunits. The eIF5B factor also contains elongation factor-like features. Domain IV of eIF5B is closely related to the C-terminal domain of IF2, as they both c
https://en.wikipedia.org/wiki/Mercury%28II%29%20iodide
Mercury(II) iodide is a chemical compound with the molecular formula HgI2. It is typically produced synthetically but can also be found in nature as the extremely rare mineral coccinite. Unlike the related mercury(II) chloride it is hardly soluble in water (<100 ppm). Production Mercury(II) iodide is produced by adding an aqueous solution of potassium iodide to an aqueous solution of mercury(II) chloride with stirring; the precipitate is filtered off, washed and dried at 70 °C. HgCl2 + 2 KI → HgI2 + 2 KCl Properties Mercury(II) iodide displays thermochromism; when heated above 126 °C (400 K) it undergoes a phase transition, from the red alpha crystalline form to a pale yellow beta form. As the sample cools, it gradually reacquires its original colour. It has often been used for thermochromism demonstrations. A third form, which is orange, is also known; this can be formed by recrystallisation and is also metastable, eventually converting back to the red alpha form. The various forms can exist in a diverse range of crystal structures and as a result mercury(II) iodide possesses a surprisingly complex phase diagram. Uses Mercury(II) iodide is used for preparation of Nessler's reagent, used for detection of presence of ammonia. Mercury(II) iodide is a semiconductor material, used in some x-ray and gamma ray detection and imaging devices operating at room temperatures. In veterinary medicine, mercury(II) iodide is used in blister ointments in exostoses, bursal enlargement, etc. It can appear as a precipitate in many reactions. See also Mercury(I) iodide, Hg2I2
https://en.wikipedia.org/wiki/Computable%20general%20equilibrium
Computable general equilibrium (CGE) models are a class of economic models that use actual economic data to estimate how an economy might react to changes in policy, technology or other external factors. CGE models are also referred to as AGE (applied general equilibrium) models. Overview A CGE model consists of equations describing model variables and a database (usually very detailed) consistent with these model equations. The equations tend to be neoclassical in spirit, often assuming cost-minimizing behaviour by producers, average-cost pricing, and household demands based on optimizing behaviour. However, most CGE models conform only loosely to the theoretical general equilibrium paradigm. For example, they may allow for:
- non-market clearing, especially for labour (unemployment) or for commodities (inventories)
- imperfect competition (e.g., monopoly pricing)
- demands not influenced by price (e.g., government demands)
A CGE model database consists of:
- tables of transaction values, showing, for example, the value of coal used by the iron industry. Usually the database is presented as an input-output table or as a social accounting matrix (SAM). In either case, it covers the whole economy of a country (or even the whole world), and distinguishes a number of sectors, commodities, primary factors and perhaps types of households. Sectoral coverage ranges from relatively simple representations of capital, labor and intermediates to highly detailed representations of specific sub-sectors (e.g., the electricity sector in GTAP-Power).
- elasticities: dimensionless parameters that capture behavioural response. For example, export demand elasticities specify by how much export volumes might fall if export prices went up. Other elasticities may belong to the constant elasticity of substitution class. Amongst these are Armington elasticities, which show whether products of different countries are close substitutes, and elasticities measuring how easily inputs to productio
https://en.wikipedia.org/wiki/Park%20Grass%20Experiment
The Park Grass Experiment is a biological study originally set up to test the effect of fertilizers and manures on hay yields. The scientific experiment is located at the Rothamsted Research in the English county of Hertfordshire, and is notable as one of the longest-running experiments of modern science, as it was initiated in 1856 and has been continually monitored ever since. The experiment was originally designed to answer agricultural questions but has since proved an invaluable resource for studying natural selection and biodiversity. The treatments under study were found to be affecting the botanical make-up of the plots and the ecology of the field and it has been studied ever since. In spring, the field is a colourful tapestry of flowers and grasses, some plots still having the wide range of plants that most meadows probably contained hundreds of years ago. Over its history, Park Grass has: demonstrated that conventional field trials probably underestimate threats to plant biodiversity from long term changes, such as soil acidification, shown how plant species richness, biomass and pH are related, demonstrated that competition between plants can make the effects of climatic variation on communities more extreme, provided one of the first demonstrations of local evolutionary change under different selection pressures and endowed us with an archive of soil and hay samples that have been used to track the history of atmospheric pollution, including nuclear fallout.
https://en.wikipedia.org/wiki/Applied%20general%20equilibrium
In mathematical economics, applied general equilibrium (AGE) models were pioneered by Herbert Scarf at Yale University in 1967, in two papers, and a follow-up book with Terje Hansen in 1973, with the aim of empirically estimating the Arrow–Debreu model of general equilibrium theory with empirical data, to provide "a general method for the explicit numerical solution of the neoclassical model" (Scarf with Hansen 1973: 1). Scarf's method iterated a sequence of simplicial subdivisions which would generate a decreasing sequence of simplices around any solution of the general equilibrium problem. With sufficiently many steps, the sequence would produce a price vector that clears the market. "Brouwer's Fixed Point theorem states that a continuous mapping of a simplex into itself has at least one fixed point. This paper describes a numerical algorithm for approximating, in a sense to be explained below, a fixed point of such a mapping" (Scarf 1967a: 1326). Scarf never built an AGE model, but hinted that "these novel numerical techniques might be useful in assessing consequences for the economy of a change in the economic environment" (Kehoe et al. 2005, citing Scarf 1967b). His students elaborated the Scarf algorithm into a tool box, where the price vector could be solved for any changes in policies (or exogenous shocks), giving the equilibrium 'adjustments' needed for the prices. This method was first used by Shoven and Whalley (1972 and 1973), and then was developed through the 1970s by Scarf's students and others. Most contemporary applied general equilibrium models are numerical analogs of traditional two-sector general equilibrium models popularized by James Meade, Harry Johnson, Arnold Harberger, and others in the 1950s and 1960s. Earlier analytic work with these models examined the distortionary effects of taxes, tariffs, and other policies, along with functional incidence questions. More recent applied models, including those discussed here, provide numerical
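Scarf's simplicial-subdivision algorithm itself is more involved; as a stand-in, the sketch below uses simple tatonnement price adjustment on a toy two-good Cobb-Douglas exchange economy (all names and numbers invented) just to illustrate what "a price vector that clears the market" means:

```python
# Illustrative sketch only: not Scarf's algorithm. A tatonnement
# (price-adjustment) iteration finds the market-clearing price in a toy
# two-good, two-consumer Cobb-Douglas exchange economy (assumed numbers).

ALPHAS = [0.3, 0.6]                 # consumer i spends share alpha_i on good 1
ENDOW  = [(1.0, 0.0), (0.0, 1.0)]   # endowments of (good 1, good 2)

def excess_demand_good1(p1: float) -> float:
    p = (p1, 1.0 - p1)              # prices normalized onto the unit simplex
    z = 0.0
    for a, (e1, e2) in zip(ALPHAS, ENDOW):
        income = p[0] * e1 + p[1] * e2
        z += a * income / p[0] - e1  # Cobb-Douglas demand minus endowment
    return z

p1 = 0.5
for _ in range(200):                # raise the price where demand exceeds supply
    p1 = min(max(p1 + 0.1 * excess_demand_good1(p1), 1e-6), 1 - 1e-6)
print(p1)  # converges to ~0.4615, the equilibrium relative price of good 1
```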
https://en.wikipedia.org/wiki/Potassium%20acetate
Potassium acetate (also called potassium ethanoate), (CH3COOK) is the potassium salt of acetic acid. It is a hygroscopic solid at room temperature. Preparation It can be prepared by treating a potassium-containing base such as potassium hydroxide or potassium carbonate with acetic acid: CH3COOH + KOH → CH3COOK + H2O This sort of reaction is known as an acid-base neutralization reaction. The sesquihydrate in water solution (CH3COOK·1½H2O) begins to form the semihydrate at 41.3 °C. Applications Deicing Potassium acetate (as a substitute for calcium chloride or magnesium chloride) can be used as a deicer to remove ice or prevent its formation. It offers the advantage of being less aggressive on soils and much less corrosive: for this reason, it is preferred for airport runways, although it is more expensive. Fire extinguishing Potassium acetate is the extinguishing agent used in Class K fire extinguishers because of its ability to cool and form a crust over burning oils. Food additive Potassium acetate is used in processed foods as a preservative and acidity regulator. In the European Union, it is labeled by the E number E261; it is also approved for usage in the USA, Australia, and New Zealand. Potassium hydrogen diacetate (CAS #) with formula KH(OOCCH3)2 is a related food additive with the same E number as potassium acetate. Medicine and biochemistry In medicine, potassium acetate is used as part of electrolyte replacement protocols in the treatment of diabetic ketoacidosis because of its ability to break down to bicarbonate to help neutralize the acidotic state. In molecular biology, potassium acetate is used to precipitate dodecyl sulfate (DS) and DS-bound proteins, allowing their removal from DNA, and is used in the ethanol precipitation of DNA. Potassium acetate is used in mixtures applied for tissue preservation, fixation, and mummification. Most museums today use a formaldehyde-based method recommended by Kaiserling in 1897 which contains potassium acetate. This process was used to soak Lenin's corpse. Use
https://en.wikipedia.org/wiki/Government%20Paperwork%20Elimination%20Act
The Government Paperwork Elimination Act (GPEA, Title XVII) requires that, when practicable, federal agencies use electronic forms, electronic filing, and electronic signatures to conduct official business with the public by 2003. In doing this, agencies will create records with business, legal and, in some cases, historical value. This guidance focuses on records management issues involving records that have been created using electronic signature technology. The Act requires agencies, by October 21, 2003, to allow individuals or entities that deal with the agencies the option to submit information or transact with the agency electronically, when practicable, and to maintain records electronically, when practicable. The Act specifically states that electronic records and their related electronic signatures are not to be denied legal effect, validity, or enforceability merely because they are in electronic form, and encourages Federal government use of a range of electronic signature alternatives. The Act seeks to "preclude agencies or courts from systematically treating electronic documents and signatures less favorably than their paper counterparts", so that citizens can interact with the Federal government electronically. It also addresses the matter of private employers being able to use electronic means to store, and file with Federal agencies, information pertaining to their employees. The Act is technology-neutral, meaning that the act does not r
https://en.wikipedia.org/wiki/Postage%20stamp%20problem
The postage stamp problem is a mathematical riddle that asks for the smallest postage value which cannot be placed on an envelope, if the envelope can hold only a limited number of stamps, and these may only have certain specified face values. For example, suppose the envelope can hold only three stamps, and the available stamp values are 1 cent, 2 cents, 5 cents, and 20 cents. Then the solution is 13 cents, since any smaller value can be obtained with at most three stamps (e.g. 4 = 2 + 2, 8 = 5 + 2 + 1, etc.), but to get 13 cents one must use at least four stamps.

Mathematical definition

Mathematically, the problem can be formulated as follows: Given an integer m and a set V of positive integers, find the smallest integer z that cannot be written as the sum v1 + v2 + ··· + vk of some number k ≤ m of (not necessarily distinct) elements of V.

Complexity

This problem can be solved by brute force search or backtracking with maximum time proportional to |V|^m, where |V| is the number of distinct stamp values allowed. Therefore, if the capacity of the envelope m is fixed, it is a polynomial time problem. If the capacity m is arbitrary, the problem is known to be NP-hard.

See also

Coin problem
Knapsack problem
Subset sum problem
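The brute-force search mentioned above is short enough to sketch directly. The following Python snippet is my own illustration (the helper name is made up); it enumerates all multisets of at most m stamps and reports the smallest unreachable value:

from itertools import combinations_with_replacement

def smallest_unreachable(values, m):
    """Smallest postage value not expressible as a sum of at most m stamps."""
    reachable = {0}
    for k in range(1, m + 1):
        for combo in combinations_with_replacement(values, k):
            reachable.add(sum(combo))
    z = 1
    while z in reachable:
        z += 1
    return z

print(smallest_unreachable([1, 2, 5, 20], 3))  # prints 13, matching the example

The nested enumeration visits on the order of |V|^m combinations, which is the brute-force bound stated in the Complexity section.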
https://en.wikipedia.org/wiki/Metaplasticity
Metaplasticity is a term originally coined by W.C. Abraham and M.F. Bear to refer to the plasticity of synaptic plasticity. Until that time, synaptic plasticity had referred to the plastic nature of individual synapses. However, this new form referred to the plasticity of the plasticity itself, hence the term metaplasticity. The idea is that the synapse's previous history of activity determines its current plasticity. This may play a role in some of the underlying mechanisms thought to be important in memory and learning, such as long-term potentiation (LTP), long-term depression (LTD) and so forth. These mechanisms depend on the current synaptic "state", as set by ongoing extrinsic influences such as the level of synaptic inhibition, the activity of modulatory afferents such as catecholamines, and the pool of hormones affecting the synapses under study. Recently, it has become clear that the prior history of synaptic activity is an additional variable that influences the synaptic state, and thereby the degree, of LTP or LTD produced by a given experimental protocol. In a sense, then, synaptic plasticity is governed by an activity-dependent plasticity of the synaptic state; such plasticity of synaptic plasticity has been termed metaplasticity. Little is known about metaplasticity, and much research is currently underway on the subject, despite the difficulty of studying it, because of its theoretical importance in brain and cognitive science. Most research of this type is done via cultured hippocampal cells or hippocampal slices.

Hebbian plasticity

The brain is "plastic", meaning it can be molded and formed. This plasticity is what allows you to learn throughout your lifetime; your synapses change based on your experience. New synapses can be made, old ones destroyed, or existing ones can be strengthened or weakened. The original theory of plasticity is called "Hebbian plasticity", named after Donald Hebb, who proposed it in 1949. A quick but effective summary of Hebbian theory is t
https://en.wikipedia.org/wiki/Unihemispheric%20slow-wave%20sleep
Unihemispheric slow-wave sleep (USWS) is sleep where one half of the brain rests while the other half remains alert. This is in contrast to normal sleep, where both eyes are shut and both halves of the brain show unconsciousness. In USWS, also known as asymmetric slow-wave sleep, one half of the brain is in deep sleep, a form of non-rapid eye movement sleep, and the eye corresponding to this half is closed while the other eye remains open. When examined by low-voltage electroencephalography (EEG), the characteristic slow-wave sleep tracings are seen from one side while the other side shows a characteristic tracing of wakefulness. The phenomenon has been observed in a number of terrestrial, aquatic and avian species. Unique physiology, including the differential release of the neurotransmitter acetylcholine, has been linked to the phenomenon. USWS offers a number of benefits, including the ability to rest in areas of high predation or during long migratory flights. The behaviour remains an important research topic because USWS is possibly the first animal behaviour which uses different regions of the brain to simultaneously control sleep and wakefulness. The greatest theoretical importance of USWS is its potential role in elucidating the function of sleep by challenging various current notions. Researchers have looked to animals exhibiting USWS to determine if sleep must be essential; otherwise, species exhibiting USWS would have eliminated the behaviour altogether through evolution. The amount of time spent sleeping during the unihemispheric slow-wave stage is considerably less than in bilateral slow-wave sleep. Aquatic mammals, such as dolphins and seals, must regularly surface in order to breathe and regulate body temperature; USWS might have been generated by the need to perform these vital activities simultaneously with sleep. On land, birds can switch between sleeping with both hemispheres and sleeping with one. Due to their poorly webbed feet and
https://en.wikipedia.org/wiki/Trivers%E2%80%93Willard%20hypothesis
In evolutionary biology and evolutionary psychology, the Trivers–Willard hypothesis, formally proposed by Robert Trivers and Dan Willard in 1973, suggests that female mammals adjust the sex ratio of offspring in response to maternal condition, so as to maximize their reproductive success (fitness). For example, it may predict greater parental investment in males by parents in "good conditions" and greater investment in females by parents in "poor conditions" (relative to parents in good conditions). The reasoning for this prediction is as follows: Assume that parents have information on the sex of their offspring and can influence their survival differentially. While selection pressures exist to maintain a 1:1 sex ratio, evolution will favor local deviations from this if one sex has a likely greater reproductive payoff than is usual. Trivers and Willard also identified a circumstance in which reproducing individuals might experience deviations from expected offspring reproductive value—namely, varying maternal condition. In polygynous species, males may mate with multiple females, and low-condition males will achieve fewer or no matings. Parents in relatively good condition would then be under selection for mutations causing production and investment in sons (rather than daughters), because of the increased chance of mating experienced by these good-condition sons. Mating with multiple females conveys a large reproductive benefit, whereas daughters could translate their condition into only smaller benefits. An opposite prediction holds for poor-condition parents—selection will favor production and investment in daughters, so long as daughters are likely to be mated, while sons in poor condition are likely to be out-competed by other males and end up with zero mates (i.e., those sons will be a reproductive dead end). The hypothesis was used to explain why, for example, red deer mothers would produce more sons when they are in good condition, and more daughters when
https://en.wikipedia.org/wiki/Firefly%20luciferin
Firefly luciferin (also known as beetle luciferin) is the luciferin, or light-emitting compound, used for the firefly (Lampyridae), railroad worm (Phengodidae), starworm (Rhagophthalmidae), and click-beetle (Pyrophorini) bioluminescent systems. It is the substrate of luciferase (EC 1.13.12.7), which is responsible for the characteristic yellow light emission from many firefly species. As with all other luciferins, oxygen is required to elicit light; however, it has also been found that adenosine triphosphate (ATP) and magnesium are required for light emission.

History

Much of the early work on the chemistry of the firefly luminescence was done in the lab of William D. McElroy at Johns Hopkins University. The luciferin was first isolated and purified in 1949, though it would be several years until a procedure was developed to crystallize the compound in high yield. This, along with the synthesis and structure elucidation, was accomplished by Dr. Emil H. White at the Johns Hopkins University Department of Chemistry. The procedure was an acid-base extraction, given the carboxylic acid group on the luciferin. The luciferin could be effectively extracted using ethyl acetate at low pH from powder of approximately 15,000 firefly lanterns. The structure was later confirmed by combined use of infrared spectroscopy, UV-vis spectroscopy and synthetic methods to degrade the compound into identifiable fragments.

Properties

Crystalline luciferin was found to be fluorescent, absorbing ultraviolet light with a peak at 327 nm and emitting light with a peak at 530 nm. Visible emission occurs upon relaxation of the oxyluciferin from a singlet excited state down to its ground state. Alkaline solutions caused a redshift of the absorption, likely due to deprotonation of the hydroxyl group on the benzothiazole, but did not affect the fluorescence emission. It was found that the luciferyl adenylate (the AMP ester of luciferin) spontaneously emits light in solution. Different species of fireflie
https://en.wikipedia.org/wiki/Gyration%20tensor
In physics, the gyration tensor is a tensor that describes the second moments of position of a collection of particles,

S_mn = Σ_{i=1}^{N} r_m^(i) r_n^(i),

where r_m^(i) is the m-th Cartesian coordinate of the position vector r^(i) of the i-th particle. The origin of the coordinate system has been chosen such that

Σ_{i=1}^{N} r^(i) = 0,

i.e. it lies in the system of the center of mass. Another definition, which is mathematically identical but gives an alternative calculation method, is:

S_mn = (1/2N) Σ_{i=1}^{N} Σ_{j=1}^{N} (r_m^(i) − r_m^(j)) (r_n^(i) − r_n^(j)).

Therefore, the x-y component of the gyration tensor for particles in Cartesian coordinates would be:

S_xy = Σ_{i=1}^{N} x_i y_i.

In the continuum limit,

S_mn = ∫ dr ρ(r) r_m r_n,

where ρ(r) represents the number density of particles at position r. Although they have different units, the gyration tensor is related to the moment of inertia tensor. The key difference is that the particle positions are weighted by mass in the inertia tensor, whereas the gyration tensor depends only on the particle positions; mass plays no role in defining the gyration tensor.

Diagonalization

Since the gyration tensor is a symmetric 3x3 matrix, a Cartesian coordinate system can be found in which it is diagonal,

S = diag(λ_x², λ_y², λ_z²),

where the axes are chosen such that the diagonal elements are ordered λ_x² ≤ λ_y² ≤ λ_z². These diagonal elements are called the principal moments of the gyration tensor.

Shape descriptors

The principal moments can be combined to give several parameters that describe the distribution of particles. The squared radius of gyration is the sum of the principal moments divided by the number of particles N:

R_g² = (λ_x² + λ_y² + λ_z²) / N.

The asphericity is defined by

b = λ_z² − ½ (λ_x² + λ_y²),

which is always non-negative and zero only when the three principal moments are equal, λ_x = λ_y = λ_z. This zero condition is met when the distribution of particles is spherically symmetric (hence the name asphericity) but also whenever the particle distribution is symmetric with respect to the three coordinate axes, e.g., when the particles are distributed uniformly on a cube, tetrahedron or other Platonic solid. Similarly, the acylindricity is defined by

c = λ_y² − λ_x²,

which is always non-negative and zero only when
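The tensor and its shape descriptors are straightforward to compute numerically. The following Python/NumPy sketch is my own illustration, using the unnormalized convention above (the variable names and the random test cloud are assumptions of this example, not from any particular library):

import numpy as np

rng = np.random.default_rng(0)
r = rng.normal(size=(1000, 3))          # positions of N = 1000 particles
r -= r.mean(axis=0)                     # move the origin to the center of mass

N = len(r)
S = r.T @ r                             # S_mn = sum_i r_m^(i) r_n^(i)

lam = np.sort(np.linalg.eigvalsh(S))    # principal moments, ordered ascending
rg2 = lam.sum() / N                     # squared radius of gyration
b = lam[2] - 0.5 * (lam[0] + lam[1])    # asphericity
c = lam[1] - lam[0]                     # acylindricity
print(rg2, b, c)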
https://en.wikipedia.org/wiki/Active-pixel%20sensor
An active-pixel sensor (APS) is an image sensor, invented by Peter J.W. Noble in 1968, in which each pixel sensor unit cell has a photodetector (typically a pinned photodiode) and one or more active transistors. In a metal–oxide–semiconductor (MOS) active-pixel sensor, MOS field-effect transistors (MOSFETs) are used as amplifiers. There are different types of APS, including the early NMOS APS and the now much more common complementary MOS (CMOS) APS, also known as the CMOS sensor. CMOS sensors are used in digital camera technologies such as cell phone cameras, web cameras, most modern digital pocket cameras, most digital single-lens reflex cameras (DSLRs), mirrorless interchangeable-lens cameras (MILCs), and lensless imaging for cells. CMOS sensors emerged as an alternative to charge-coupled device (CCD) image sensors and eventually outsold them by the mid-2000s. The term active pixel sensor is also used to refer to the individual pixel sensor itself, as opposed to the image sensor as a whole. In this case, the image sensor is sometimes called an active pixel sensor imager, or active-pixel image sensor.

History

Background

While researching metal–oxide–semiconductor (MOS) technology, Willard Boyle and George E. Smith realized that an electric charge could be stored on a tiny MOS capacitor, which became the basic building block of the charge-coupled device (CCD), which they invented in 1969. An issue with CCD technology was its need for nearly perfect charge transfer in read out, which, "makes their radiation [tolerance?] 'soft', difficult to use under low light conditions, difficult to manufacture in large array sizes, difficult to integrate with on-chip electronics, difficult to use at low temperatures, difficult to use at high frame rates, and difficult to manufacture in non-silicon materials that extend wavelength response." At RCA Laboratories, a research team including Paul K. Weimer, W.S. Pike and G. Sadasiv in 1969 proposed a solid-state image sensor with sca
https://en.wikipedia.org/wiki/LCS35
LCS35 is a cryptographic challenge and a puzzle set by Ron Rivest in 1999. The challenge is to calculate the value

w = 2^(2^t) (mod n),

where t is a 14-digit (or 47-bit) integer, namely 79685186856218, and n is a 616-digit (or 2048-bit) integer which is the product of two large primes (which are not given). The value of w can then be used to decrypt the ciphertext z, another 616-digit integer. The plaintext provides the concealed information about the factorisation of n, allowing the solution to be easily verified. The idea behind the challenge is that the only known way to find the value of w without knowing the factorisation of n is by t successive squarings. The value of t was chosen to make this brute force calculation take about 35 years, using 1999 chip speeds as a starting point and taking into account Moore's law. Rivest notes that "just as a failure of Moore's Law could make the puzzle harder than intended, a breakthrough in the art of factoring would make the puzzle easier than intended." The challenge was set at (and takes its name from) the 35th anniversary celebrations of the MIT Laboratory for Computer Science, now part of the MIT Computer Science and Artificial Intelligence Laboratory. The LCS35 challenge was solved on April 15, 2019, twenty years later, by the programmer Bernard Fabrot. The actual text was a "!!! Happy Birthday LCS !!!" message. On May 14, 2019, Ronald L. Rivest published a new version of LCS35 (named CSAIL2019) to extend the puzzle out to the year 2034.
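The intended solving procedure is simply t modular squarings in a row, which is inherently sequential. A toy-scale sketch in Python (the small modulus and exponent below are made-up illustration values, not the actual 2048-bit challenge parameters):

def solve_timelock(t, n):
    """Compute 2^(2^t) mod n by t successive squarings."""
    w = 2
    for _ in range(t):
        w = (w * w) % n   # each step depends on the previous one
    return w

# Illustration only; the real puzzle uses t = 79685186856218 and a
# secret n that is the product of two large primes.
print(solve_timelock(10, 1000003))

Knowing the factorisation of n provides the shortcut that makes the puzzle verifiable: the exponent 2^t can first be reduced modulo the order of 2 in the multiplicative group, after which a single fast modular exponentiation suffices.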
https://en.wikipedia.org/wiki/God%27s%20algorithm
God's algorithm is a notion originating in discussions of ways to solve the Rubik's Cube puzzle, but which can also be applied to other combinatorial puzzles and mathematical games. It refers to any algorithm which produces a solution having the fewest possible moves. The allusion to the deity is based on the notion that an omniscient being would know an optimal step from any given configuration. Scope Definition The notion applies to puzzles that can assume a finite number of "configurations", with a relatively small, well-defined arsenal of "moves" that may be applicable to configurations and then lead to a new configuration. Solving the puzzle means to reach a designated "final configuration", a singular configuration, or one of a collection of configurations. To solve the puzzle a sequence of moves is applied, starting from some arbitrary initial configuration. Solution An algorithm can be considered to solve such a puzzle if it takes as input an arbitrary initial configuration and produces as output a sequence of moves leading to a final configuration (if the puzzle is solvable from that initial configuration, otherwise it signals the impossibility of a solution). A solution is optimal if the sequence of moves is as short as possible. The highest value of this, among all initial configurations, is known as God's number, or, more formally, the minimax value. God's algorithm, then, for a given puzzle, is an algorithm that solves the puzzle and produces only optimal solutions. Some writers, such as David Joyner, consider that for an algorithm to be properly referred to as "God's algorithm", it should also be practical, meaning that the algorithm does not require extraordinary amounts of memory or time. For example, using a giant lookup table indexed by initial configurations would allow solutions to be found very quickly, but would require an extraordinary amount of memory. Instead of asking for a full solution, one can equivalently ask for a single move f
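For puzzles whose configuration space fits in memory, God's algorithm can be realized directly by breadth-first search from the final configuration, which yields the optimal distance (and hence an optimal move) for every position. A Python sketch for a toy puzzle of my own invention (three tokens, two self-inverse moves; not any standard puzzle):

from collections import deque

# Toy puzzle: a state is a tuple of 3 tokens; the legal moves are
# "swap the first two tokens" and "swap the last two tokens".
def moves(state):
    a, b, c = state
    yield (b, a, c)
    yield (a, c, b)

def gods_distances(goal):
    """BFS from the goal: optimal number of moves for every reachable state."""
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        s = queue.popleft()
        for t in moves(s):
            if t not in dist:
                dist[t] = dist[s] + 1
                queue.append(t)
    return dist

dist = gods_distances((1, 2, 3))
print(max(dist.values()))  # God's number for this toy puzzle (3)
print(dist[(3, 2, 1)])     # optimal move count from one configuration

Because both moves are their own inverses, distances from the goal equal distances to the goal; the maximum over all configurations is exactly the minimax value called God's number above.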
https://en.wikipedia.org/wiki/HTTP%20cookie
HTTP cookies (also called web cookies, Internet cookies, browser cookies, or simply cookies) are small blocks of data created by a web server while a user is browsing a website and placed on the user's computer or other device by the user's web browser. Cookies are placed on the device used to access a website, and more than one cookie may be placed on a user's device during a session. Cookies serve useful and sometimes essential functions on the web. They enable web servers to store stateful information (such as items added to the shopping cart in an online store) on the user's device or to track the user's browsing activity (including clicking particular buttons, logging in, or recording which pages were visited in the past). They can also be used to save, for subsequent use, information that the user previously entered into form fields, such as names, addresses, passwords, and payment card numbers. Authentication cookies are commonly used by web servers to authenticate that a user is logged in, and with which account they are logged in. Without the cookie, users would need to authenticate themselves by logging in on each page containing sensitive information that they wish to access. The security of an authentication cookie generally depends on the security of the issuing website and the user's web browser, and on whether the cookie data is encrypted. Security vulnerabilities may allow a cookie's data to be read by an attacker, used to gain access to user data, or used to gain access (with the user's credentials) to the website to which the cookie belongs (see cross-site scripting and cross-site request forgery for examples). Tracking cookies, and especially third-party tracking cookies, are commonly used as ways to compile long-term records of individuals' browsing histories, a potential privacy concern that prompted European and U.S. lawmakers to take action in 2011. European law requires that all websites targeting European Union member states gain "informed
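Concretely, a cookie is created by a Set-Cookie response header and returned by the browser in a Cookie request header on subsequent requests. A schematic exchange (all names and values here are illustrative, not from any particular site):

GET /index.html HTTP/1.1
Host: www.example.org

HTTP/1.1 200 OK
Set-Cookie: session_id=abc123; Path=/; Secure; HttpOnly

GET /cart.html HTTP/1.1
Host: www.example.org
Cookie: session_id=abc123

The Secure and HttpOnly attributes restrict the cookie to encrypted connections and hide it from client-side scripts, respectively, which mitigates some of the attacks mentioned above.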
https://en.wikipedia.org/wiki/Aristaeus%20the%20Elder
Aristaeus the Elder (370–300 BC) was a Greek mathematician who worked on conic sections. He was a contemporary of Euclid.

Life

Little is known of his life. The mathematician Pappus of Alexandria refers to him as Aristaeus the Elder. Pappus gave Aristaeus great credit for a work entitled Five Books concerning Solid Loci, which was used by Pappus but has been lost. He may have also authored the book Concerning the Comparison of Five Regular Solids. This book has also been lost; it is known through a reference by the Greek mathematician Hypsicles. Heath 1921 notes, "Hypsicles (who lived in Alexandria) says also that Aristaeus, in a work entitled Comparison of the five figures, proved that the same circle circumscribes both the pentagon of the dodecahedron and the triangle of the icosahedron inscribed in the same sphere; whether this Aristaeus is the same as the Aristaeus of the Solid Loci, the elder contemporary of Euclid, we do not know."
https://en.wikipedia.org/wiki/Buccopharyngeal%20membrane
The region where the crescentic masses of the ectoderm and endoderm come into direct contact with each other constitutes a thin membrane, the buccopharyngeal membrane (or oropharyngeal membrane), which forms a septum between the primitive mouth and pharynx. In front of the buccopharyngeal area, where the lateral crescents of mesoderm fuse in the middle line, the pericardium is afterward developed, and this region is therefore designated the pericardial area. The buccopharyngeal membranes serve as a respiratory surface in a wide variety of amphibians and reptiles. In this type of respiration, membranes in the mouth and throat are permeable to oxygen and carbon dioxide. In some species that remain submerged in water for long periods, gas exchange by this route can be significant.
https://en.wikipedia.org/wiki/Widom%20scaling
Widom scaling (after Benjamin Widom) is a hypothesis in statistical mechanics regarding the free energy of a magnetic system near its critical point which leads to the critical exponents becoming no longer independent so that they can be parameterized in terms of two values. The hypothesis can be seen to arise as a natural consequence of the block-spin renormalization procedure, when the block size is chosen to be of the same size as the correlation length. Widom scaling is an example of universality.

Definitions

The critical exponents α, α′, β, γ, γ′ and δ are defined in terms of the behaviour of the order parameters and response functions near the critical point as follows:

M(t, 0) ≃ (−t)^β, for t ↑ 0,
M(0, H) ≃ |H|^(1/δ) sign(H), for H → 0,
χ(t, 0) ≃ t^(−γ) for t ↓ 0, χ(t, 0) ≃ (−t)^(−γ′) for t ↑ 0,
c(t, 0) ≃ t^(−α) for t ↓ 0, c(t, 0) ≃ (−t)^(−α′) for t ↑ 0,

where t = (T − T_c)/T_c measures the temperature relative to the critical point. Near the critical point, Widom's scaling relation reads

H(t, M) ≃ M |M|^(δ−1) f(t / |M|^(1/β)),

where f has an expansion f(x) ≈ 1 + const · x^ω, with ω being Wegner's exponent governing the approach to scaling.

Derivation

The scaling hypothesis is that near the critical point, the free energy f(t, H), in d dimensions, can be written as the sum of a slowly varying regular part f_r(t, H) and a singular part f_s(t, H), with the singular part being a scaling function, i.e., a homogeneous function, so that

f_s(λ^p t, λ^q H) = λ f_s(t, H).

Then taking the partial derivative with respect to H and the form of M(t, H) gives

λ^q M(λ^p t, λ^q H) = λ M(t, H).

Setting H = 0 and λ = (−t)^(−1/p) in the preceding equation yields

M(t, 0) = (−t)^((1−q)/p) M(−1, 0), for t ↑ 0.

Comparing this with the definition of β yields its value,

β = (1 − q)/p.

Similarly, putting t = 0 and λ = H^(−1/q) into the scaling relation for M yields

M(0, H) = H^((1−q)/q) M(0, 1).

Hence

1/δ = (1 − q)/q, i.e. δ = q/(1 − q).

Applying the expression for the isothermal susceptibility χ = ∂M/∂H in terms of M to the scaling relation yields

λ^(2q) χ(λ^p t, λ^q H) = λ χ(t, H).

Setting H = 0 and λ = t^(−1/p) for t ↓ 0 (resp. λ = (−t)^(−1/p) for t ↑ 0) yields

γ = γ′ = (2q − 1)/p.

Similarly, applying the expression for the specific heat c ∝ ∂²f/∂t² in terms of f to the scaling relation yields

λ^(2p) c(λ^p t, λ^q H) = λ c(t, H).

Taking H = 0 and λ = t^(−1/p) for t ↓ 0 (or λ = (−t)^(−1/p) for t ↑ 0) yields

α = α′ = 2 − 1/p.

As a consequence of Widom scaling, not all critical exponents are independent but they can be parameterized by the two numbers p and q, with the relations expressed as

γ = γ′ = β(δ − 1),
α + 2β + γ = 2.

The relations are experimentally well verified for magnetic systems and fluids.
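As a quick numerical sanity check (my own illustration), the exactly known exponents of the 2D Ising model satisfy both relations above:

# Exact 2D Ising critical exponents
alpha, beta, gamma, delta = 0.0, 1/8, 7/4, 15.0

# Widom relation and the Rushbrooke identity implied by the scaling form
print(gamma == beta * (delta - 1))      # True: 7/4 == (1/8) * 14
print(alpha + 2 * beta + gamma == 2.0)  # True: 0 + 1/4 + 7/4 == 2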
https://en.wikipedia.org/wiki/Square%20lattice%20Ising%20model
In statistical mechanics, the two-dimensional square lattice Ising model is a simple lattice model of interacting magnetic spins. The model is notable for having nontrivial interactions, yet having an analytical solution. The model was solved by Lars Onsager for the special case that the external magnetic field H = 0. An analytical solution for the general case H ≠ 0 has yet to be found.

Defining the partition function

Consider a 2D Ising model on a square lattice Λ with N sites and periodic boundary conditions in both the horizontal and vertical directions, which effectively reduces the topology of the model to a torus. Generally, the horizontal coupling J and the vertical coupling J* are not equal. With β = 1/(kT), absolute temperature T and Boltzmann's constant k, the partition function is

Z(K, L) = Σ_{σ} exp( K Σ_{⟨ij⟩H} σ_i σ_j + L Σ_{⟨ij⟩V} σ_i σ_j ),

where K = βJ, L = βJ*, and the inner sums run over pairs of horizontally and vertically adjacent sites respectively.

Critical temperature

The critical temperature T_c can be obtained from the Kramers–Wannier duality relation. Denoting the free energy per site as F(K, L), one has:

βF(K*, L*) = βF(K, L) + ½ log[sinh(2K) sinh(2L)],

where

sinh(2K*) sinh(2L) = 1, sinh(2L*) sinh(2K) = 1.

Assuming there is only one critical line in the (K, L) plane, the duality relation implies that this is given by:

sinh(2K) sinh(2L) = 1.

For the isotropic case J = J*, one finds the famous relation for the critical temperature T_c:

kT_c / J = 2 / ln(1 + √2) ≈ 2.26918...

Dual lattice

Consider a configuration of spins {σ} on the square lattice Λ. Let r and s denote the number of unlike neighbours in the vertical and horizontal directions respectively. Then the summand in Z corresponding to {σ} is given by

e^{K(N − 2s) + L(N − 2r)}.

Construct a dual lattice Λ_D as depicted in the diagram. For every configuration {σ}, a polygon is associated to the lattice by drawing a line on the edge of the dual lattice if the spins separated by the edge are unlike. Since by traversing a vertex the spins need to change an even number of times so that one arrives at the starting point with the same charge, every vertex of the dual lattice is connected to an even number of lines in the configuration, defining a polygon. This reduces the partition function to summing over all polygons in the dual lattice, where r and s are the number of horizontal and vert
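The isotropic critical point is easy to evaluate numerically. A small Python check of the self-dual condition and the resulting critical temperature (my own illustration):

import math

# Isotropic critical coupling K_c solves sinh(2K)^2 = 1, i.e. sinh(2K) = 1
K_c = 0.5 * math.asinh(1.0)   # equals ln(1 + sqrt(2)) / 2
print(math.isclose(K_c, math.log(1 + math.sqrt(2)) / 2))  # True

# Critical temperature in units of J/k: kT_c/J = 1/K_c
print(1.0 / K_c)              # approximately 2.269185...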
https://en.wikipedia.org/wiki/Glossary%20of%20entomology%20terms
This glossary of entomology describes terms used in the formal study of insect species by entomologists. (The headwords below, lost in extraction, are restored from the cross-references.)

A–C

Aldrin: A synthetic chlorinated hydrocarbon insecticide, toxic to vertebrates. Though its phytotoxicity is low, solvents in some formulations may damage certain crops. cf. the related Dieldrin, Endrin, Isodrin

D–F

Dieldrin: A synthetic chlorinated hydrocarbon insecticide, toxic to vertebrates. cf. the related Aldrin, Endrin, Isodrin

Endrin: A synthetic chlorinated hydrocarbon insecticide, toxic to vertebrates. Though its phytotoxicity is low, solvents in some formulations may damage certain crops. cf. the related Dieldrin, Aldrin, Isodrin

G–L

Isodrin: A synthetic chlorinated hydrocarbon insecticide, toxic to vertebrates. Though its phytotoxicity is low, solvents in some formulations may damage certain crops. cf. the related Dieldrin, Aldrin, Endrin

M–O

P–R

S–Z

Figures

See also

Anatomical terms of location
Butterfly
Caterpillar
Comstock–Needham system
External morphology of Lepidoptera
Glossary of ant terms
Glossary of spider terms
Glossary of scientific names
Insect wing
Pupa
https://en.wikipedia.org/wiki/Singapore%20Mathematical%20Olympiad
The Singapore Mathematical Olympiad (SMO) is a mathematics competition organised by the Singapore Mathematical Society. It comprises three sections, Junior, Senior and Open, each of which is open to all pre-university students studying in Singapore who meet the age requirements for the particular section. The competition is held annually, and the first round of each section is usually held in late May or early June. The second round is usually held in late June or early July.

History

The Singapore Mathematical Society (SMS) has been organising mathematical competitions since the 1950s, launching the first inter-school Mathematical Competition in 1956. The Mathematical Competition was renamed the Singapore Mathematical Olympiad in 1995. In 2016, the SMS attempted to make the SMO more inviting to students by aligning questions more closely with the school curriculum, although solutions still require considerable insight and creativity in addition to sound mathematical knowledge. In 2020 and 2021, the written round (Round 1) in all sections was postponed to September due to the COVID-19 pandemic, while the invitational round (Round 2) in all sections was cancelled. The normal competition timeline resumed in 2022.

Junior Section

There are two rounds in the Junior Section: a written round (Round 1) and an invitational round (Round 2). The paper in Round 1 comprises 5 multiple-choice questions, each with five options, and 20 short answer questions. The Junior section is geared towards Lower Secondary students, and topics tested include number theory, combinatorics, geometry, algebra, and probability. Beginning in 2006, a second round was added, based on the Senior Invitational Round, in the form of a 5-question, 3-hour long paper requiring full-length solutions. Only the top 10% of students from Round 1 are eligible to take Round 2.

Senior Section

There are two rounds in the Senior Section: a written round (Round 1) and an invitational round (Round 2). The
https://en.wikipedia.org/wiki/Mawsonites
Mawsonites is a fossil genus dating to the Ediacaran Period, 635–539 million years ago, in the late Precambrian. The fossils consist of a rounded diamond shape made up of lobes radiating out from a central circle, roughly 12 cm in diameter. There are about 19 radiations from the central circle. The type species is Mawsonites spriggi, named after Douglas Mawson and Reg Sprigg. It was named by Martin Glaessner and Mary Wade in 1966. Its biological affinities were called into question amidst suggestions that it might represent a mud volcano or other sedimentary structure, but further research showed that these structures could not satisfactorily account for its complexity. The fossil has been theorized to represent algae holdfasts, jellyfish (although this is considered unlikely), a filter feeder, a burrow, a microbial colony or invertebrate tracks. Several of these possibilities would indicate that Mawsonites represents a trace fossil, not an organism.

See also

List of Ediacaran genera
https://en.wikipedia.org/wiki/Mass%20distribution
In physics and mechanics, mass distribution is the spatial distribution of mass within a solid body. In principle, it is relevant also for gases or liquids, but on Earth their mass distribution is almost homogeneous.

Astronomy

In astronomy, mass distribution has decisive influence on the development of, e.g., nebulae, stars and planets. The mass distribution of a solid defines its center of gravity and influences its dynamical behaviour, e.g. its oscillations and eventual rotation.

Mathematical modelling

A mass distribution can be modeled as a measure. This allows point masses, line masses, surface masses, as well as masses given by a volume density function. Alternatively the latter can be generalized to a distribution. For example, a point mass is represented by a delta function defined in 3-dimensional space. A surface mass on a surface may be represented by a surface density distribution μ, where μ is the mass per unit area. The mathematical modelling can be done by potential theory, by numerical methods (e.g. a great number of mass points), or by theoretical equilibrium figures.

Geology

In geology the aspects of rock density are involved.

Rotating solids

Rotating solids are affected considerably by the mass distribution, whether homogeneous or inhomogeneous; see torque, moment of inertia, wobble, imbalance and stability.

See also

Bouguer plate
Gravity
Mass function
Mass concentration (astronomy)
https://en.wikipedia.org/wiki/Sterile%20male%20plant
Sterile male plants are plants which are incapable of producing pollen. This is sometimes attributed to mutations in the mitochondrial DNA that affect the tapetum cells in anthers, which are responsible for nursing developing pollen. The mutations cause the breakdown of the mitochondria in these specific cells, resulting in cell death, so pollen production is interrupted. These observations have led to transgenic sterile male plants being made in order to create hybrid seeds, by inserting transgenes which are specifically poisonous to tapetum cells.
https://en.wikipedia.org/wiki/Venezuelan%20equine%20encephalitis%20virus
Venezuelan equine encephalitis virus is a mosquito-borne viral pathogen that causes Venezuelan equine encephalitis or encephalomyelitis (VEE). VEE can affect all equine species, such as horses, donkeys, and zebras. After infection, equines may suddenly die or show progressive central nervous system disorders. Humans also can contract this disease. Healthy adults who become infected by the virus may experience flu-like symptoms, such as high fevers and headaches. People with weakened immune systems and the young and the elderly can become severely ill or die from this disease. The virus that causes VEE is transmitted primarily by mosquitoes that bite an infected animal and then bite and feed on another animal or human. The speed with which the disease spreads depends on the subtype of the VEE virus and the density of mosquito populations. Enzootic subtypes of VEE are diseases endemic to certain areas. Generally these serotypes do not spread to other localities. Enzootic subtypes are associated with the rodent-mosquito transmission cycle. These forms of the virus can cause human illness but generally do not affect equine health. Epizootic subtypes, on the other hand, can spread rapidly through large populations. These forms of the virus are highly pathogenic to equines and can also affect human health. Equines, rather than rodents, are the primary animal species that carry and spread the disease. Infected equines develop an enormous quantity of virus in their circulatory system. When a blood-feeding insect feeds on such animals, it picks up this virus and transmits it to other animals or humans. Although other animals, such as cattle, swine, and dogs, can become infected, they generally do not show signs of the disease or contribute to its spread. The virion is spherical and approximately 70 nm in diameter. It has a lipid membrane with glycoprotein surface proteins spread around the outside. Surrounding the nuclear material is a nucleocapsid that has an icosahedral
https://en.wikipedia.org/wiki/Jonathan%20Partington
Jonathan Richard Partington (born 4 February 1955) is an English mathematician who is Emeritus Professor of pure mathematics at the University of Leeds.

Education

Professor Partington was educated at Gresham's School, Holt, and Trinity College, Cambridge, where he completed his PhD thesis entitled "Numerical ranges and the Geometry of Banach Spaces" under the supervision of Béla Bollobás.

Career

Partington works in the area of functional analysis, sometimes applied to control theory, and is the author of several books in this area. He was formerly editor-in-chief of the Journal of the London Mathematical Society, a position he held jointly with his Leeds colleague John Truss. Partington's extra-mathematical activities include the invention of the March March march, an annual walk starting at March, Cambridgeshire. He is also known as a writer or co-writer of some of the earliest British text-based computer games, including Acheton, Hamil, Murdac, Avon, Fyleet, Crobe, Sangraal, and SpySnatcher, which started life on the Phoenix computer system at the University of Cambridge Computer Laboratory. These are still available on the IF Archive.
https://en.wikipedia.org/wiki/Fibre%20Channel%20network%20protocols
Communication between devices in a Fibre Channel network uses different elements of the Fibre Channel standards.

Transmission words and ordered sets

All Fibre Channel communication is done in units of four 10-bit codes. This group of 4 codes is called a transmission word. An ordered set is a transmission word that includes some combination of control (K) codes and data (D) codes.

AL_PAs

Each device has an Arbitrated Loop Physical Address (AL_PA). These addresses are defined by an 8-bit field but must have neutral disparity as defined in the 8b/10b coding scheme. That reduces the number of possible values from 256 to 134. The 134 possible values have been divided between the fabric, FC_AL ports, and other special purposes.

Metadata

In addition to the transfer of data, it is necessary for Fibre Channel communication to include some metadata. This allows for the setting up of links, sequence management, and other control functions. The metadata falls into two types: primitives, which consist of a four-character transmission word, and non-data frames, which are more complex structures.

Primitives

All primitives are four characters in length. They begin with the control character K28.5, followed by three data characters. In some primitives the three data characters are fixed; in others they can be varied to change the meaning or to act as parameters for the primitive. In some cases the last two parameter characters are identical. Parameters are shown in the table below in the form of their hexadecimal 8-bit values. This is clearer than their full 10-bit (Dxx.x) form as shown in the Fibre Channel standards.

Note 1: The first parameter byte of the EOF primitive can have one of four different values (8A, 95, AA, or B5). This is done so that the EOF primitive can rebalance the disparity of the whole frame. The remaining two parameter bytes define whether the frame is ending normally, terminating the transfer, or is to be aborted due to an error.

Note 2: The
https://en.wikipedia.org/wiki/Gerchberg%E2%80%93Saxton%20algorithm
The Gerchberg–Saxton (GS) algorithm is an iterative phase retrieval algorithm for retrieving the phase of a complex-valued wavefront from two intensity measurements acquired in two different planes. Typically, the two planes are the image plane and the far field (diffraction) plane, and the wavefront propagation between these two planes is given by the Fourier transform. The original paper by Gerchberg and Saxton considered the image and diffraction pattern of a sample acquired in an electron microscope. It is often necessary to know only the phase distribution from one of the planes, since the phase distribution on the other plane can be obtained by performing a Fourier transform on the plane whose phase is known. Although often used for two-dimensional signals, the GS algorithm is also valid for one-dimensional signals. The pseudocode below performs the GS algorithm to obtain a phase distribution for the plane "Source", such that its Fourier transform would have the amplitude distribution of the plane "Target".

Pseudocode algorithm

Let:
  FT – forward Fourier transform
  IFT – inverse Fourier transform
  i – the imaginary unit, √−1 (square root of −1)
  exp – exponential function (exp(x) = e^x)
  Target and Source – the Target and Source amplitude planes respectively
  A, B, C & D – complex planes with the same dimension as Target and Source
  Amplitude – amplitude-extracting function: e.g. for complex z = x + iy, Amplitude(z) = sqrt(x·x + y·y); for real x, Amplitude(x) = |x|
  Phase – phase-extracting function: e.g. Phase(z) = arctan(y / x)
end Let

algorithm Gerchberg–Saxton(Source, Target, Retrieved_Phase) is
    A := IFT(Target)
    while error criterion is not satisfied
        B := Amplitude(Source) × exp(i × Phase(A))
        C := FT(B)
        D := Amplitude(Target) × exp(i × Phase(C))
        A := IFT(D)
    end while
    Retrieved_Phase = Phase(A)

This is just one of the many ways to implement the GS algorithm. Aside from op
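The pseudocode translates almost line-for-line into NumPy. The following sketch is my own illustration (the array shapes, test pattern, and fixed iteration count are assumptions of the example; a real implementation would test a convergence criterion instead of iterating a fixed number of times):

import numpy as np

def gerchberg_saxton(source_amp, target_amp, iterations=200):
    """Retrieve a source-plane phase whose Fourier transform has the target amplitude."""
    A = np.fft.ifft2(target_amp)
    for _ in range(iterations):
        B = source_amp * np.exp(1j * np.angle(A))   # impose source amplitude
        C = np.fft.fft2(B)                          # propagate to far field
        D = target_amp * np.exp(1j * np.angle(C))   # impose target amplitude
        A = np.fft.ifft2(D)                         # propagate back
    return np.angle(A)

# Example: uniform illumination and a target amplitude from a random phase pattern
rng = np.random.default_rng(1)
source = np.ones((64, 64))
target = np.abs(np.fft.fft2(np.exp(2j * np.pi * rng.random((64, 64)))))
phase = gerchberg_saxton(source, target)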
https://en.wikipedia.org/wiki/Lattice%20Boltzmann%20methods
The lattice Boltzmann methods (LBM), which originated from the lattice gas automata (LGA) method (Hardy–Pomeau–Pazzis and Frisch–Hasslacher–Pomeau models), are a class of computational fluid dynamics (CFD) methods for fluid simulation. Instead of solving the Navier–Stokes equations directly, a fluid density on a lattice is simulated with streaming and collision (relaxation) processes. The method is versatile, as the model fluid can straightforwardly be made to mimic common fluid behaviour like vapour/liquid coexistence, and so fluid systems such as liquid droplets can be simulated. Also, fluids in complex environments such as porous media can be straightforwardly simulated, whereas with complex boundaries other CFD methods can be hard to work with.

Algorithm

Unlike CFD methods that solve the conservation equations of macroscopic properties (i.e., mass, momentum, and energy) numerically, LBM models the fluid as consisting of fictive particles, and such particles perform consecutive propagation and collision processes over a discrete lattice. Due to its particulate nature and local dynamics, LBM has several advantages over other conventional CFD methods, especially in dealing with complex boundaries, incorporating microscopic interactions, and parallelization of the algorithm. A different interpretation of the lattice Boltzmann equation is that of a discrete-velocity Boltzmann equation. The numerical methods of solution of the system of partial differential equations then give rise to a discrete map, which can be interpreted as the propagation and collision of fictitious particles. In an algorithm, there are collision and streaming steps. These evolve the density of the fluid f(x, t), where x is the position and t the time. As the fluid is on a lattice, the density has a number of components equal to the number of lattice vectors connected to each lattice point. As an example, the lattice vectors for a simple lattice used in simulations in two dimensions are shown here. This lattice is usu
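A minimal collision-and-streaming update for the common D2Q9 lattice can be sketched in a few lines of NumPy. This is my own illustration (BGK single-relaxation-time collision, periodic boundaries, no forcing or obstacles), not a production CFD code:

import numpy as np

# D2Q9 lattice vectors and weights (rest, 4 axis, 4 diagonal directions)
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Local equilibrium distribution, truncated to second order in velocity."""
    cu = np.einsum('ia,xya->ixy', c, u)              # c_i . u at every node
    usq = np.einsum('xya,xya->xy', u, u)
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f, tau=0.8):
    """One LBM update: BGK collision, then streaming along the lattice vectors."""
    rho = f.sum(axis=0)                              # macroscopic density
    u = np.einsum('ia,ixy->xya', c, f) / rho[..., None]  # macroscopic velocity
    f += -(f - equilibrium(rho, u)) / tau            # relax towards equilibrium
    for i, ci in enumerate(c):                       # propagate, periodic wrap
        f[i] = np.roll(f[i], shift=tuple(ci), axis=(0, 1))
    return f

f = equilibrium(np.ones((32, 32)), np.zeros((32, 32, 2)))  # start from rest
for _ in range(100):
    f = step(f)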
https://en.wikipedia.org/wiki/Level%20of%20free%20convection
The level of free convection (LFC) is the altitude in the atmosphere where an air parcel lifted adiabatically until saturation becomes warmer than the environment at the same level, so that positive buoyancy can initiate self-sustained convection.

Finding the LFC

The usual way of finding the LFC is to lift a parcel from a lower level along the dry adiabatic lapse rate until it crosses the saturated mixing ratio line of the parcel: this is the lifted condensation level (LCL). From there on, follow the moist adiabatic lapse rate. If the temperature of the parcel along the moist adiabat becomes warmer than the environment on further lift, one has found the LFC.

Use

Because the parcel above the LFC is warmer than its surroundings, by the ideal gas law (PV = nRT) it occupies a larger volume at the same pressure, so it is less dense and buoyant; it rises until its temperature again equals that of the surrounding airmass, at the equilibrium level (EL). If the airmass has one or more LFCs, it is potentially unstable and may develop convective clouds like cumulus and thunderstorms. From the level of free convection to the point where the ascending parcel again becomes colder than its surroundings, the equilibrium level (EL), an air parcel gains kinetic energy, which is quantified by its convective available potential energy (CAPE), giving the potential for severe weather.
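In its standard form (a well-known definition, reproduced here for convenience; in LaTeX notation), CAPE is the buoyancy integral between exactly these two levels:

\mathrm{CAPE} = \int_{z_{\mathrm{LFC}}}^{z_{\mathrm{EL}}} g \, \frac{T_{v,\mathrm{parcel}} - T_{v,\mathrm{env}}}{T_{v,\mathrm{env}}} \, dz

where T_v denotes virtual temperature and g the gravitational acceleration; positive CAPE corresponds to net upward buoyancy over the layer from the LFC to the EL.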
https://en.wikipedia.org/wiki/Sorption
Sorption is a physical and chemical process by which one substance becomes attached to another. Specific cases of sorption are treated in the following articles:

Absorption – "the incorporation of a substance in one state into another of a different state" (e.g., liquids being absorbed by a solid or gases being absorbed by a liquid);
Adsorption – the physical adherence or bonding of ions and molecules onto the surface of another phase (e.g., reagents adsorbed to a solid catalyst surface);
Ion exchange – an exchange of ions between two electrolytes or between an electrolyte solution and a complex.

The reverse of sorption is desorption.

Sorption rate

The adsorption and absorption rate of a diluted solute in gas or liquid solution to a surface or interface can be calculated using Fick's laws of diffusion.

See also

Sorption isotherm
https://en.wikipedia.org/wiki/Siegel%E2%80%93Walfisz%20theorem
In analytic number theory, the Siegel–Walfisz theorem was obtained by Arnold Walfisz as an application of a theorem by Carl Ludwig Siegel to primes in arithmetic progressions. It is a refinement both of the prime number theorem and of Dirichlet's theorem on primes in arithmetic progressions.

Statement

Define

ψ(x; q, a) = Σ_{n ≤ x, n ≡ a (mod q)} Λ(n),

where Λ denotes the von Mangoldt function, and let φ denote Euler's totient function. Then the theorem states that given any real number N there exists a positive constant C_N depending only on N such that

ψ(x; q, a) = x/φ(q) + O( x exp(−C_N (log x)^(1/2)) ),

whenever (a, q) = 1 and q ≤ (log x)^N.

Remarks

The constant C_N is not effectively computable because Siegel's theorem is ineffective. From the theorem we can deduce the following bound regarding the prime number theorem for arithmetic progressions: If, for (a, q) = 1, by π(x; q, a) we denote the number of primes less than or equal to x which are congruent to a mod q, then

π(x; q, a) = Li(x)/φ(q) + O( x exp(−(C_N/2) (log x)^(1/2)) ),

where N, a, q, C_N and φ are as in the theorem, and Li denotes the logarithmic integral.
https://en.wikipedia.org/wiki/Prime%20gap
A prime gap is the difference between two successive prime numbers. The n-th prime gap, denoted gn or g(pn), is the difference between the (n + 1)-st and the n-th prime numbers, i.e.

gn = pn+1 − pn.

We have g1 = 1, g2 = g3 = 2, and g4 = 4. The sequence (gn) of prime gaps has been extensively studied; however, many questions and conjectures remain unanswered. The first 60 prime gaps are:

1, 2, 2, 4, 2, 4, 2, 4, 6, 2, 6, 4, 2, 4, 6, 6, 2, 6, 4, 2, 6, 4, 6, 8, 4, 2, 4, 2, 4, 14, 4, 6, 2, 10, 2, 6, 6, 4, 6, 6, 2, 10, 2, 4, 2, 12, 12, 4, 2, 4, 6, 2, 10, 6, 6, 6, 2, 6, 4, 2, ... .

By the definition of gn, every prime can be written as

pn+1 = 2 + g1 + g2 + ⋯ + gn.

Simple observations

The first, smallest, and only odd prime gap is the gap of size 1 between 2, the only even prime number, and 3, the first odd prime. All other prime gaps are even. There is only one pair of consecutive gaps having length 2: the gaps g2 and g3 between the primes 3, 5, and 7. For any integer n, the factorial n! is the product of all positive integers up to and including n. Then in the sequence

n! + 2, n! + 3, …, n! + n

the first term is divisible by 2, the second term is divisible by 3, and so on. Thus, this is a sequence of n − 1 consecutive composite integers, and it must belong to a gap between primes having length at least n. It follows that there are gaps between primes that are arbitrarily large, that is, for any integer N, there is an integer m with gm ≥ N. However, prime gaps of n numbers can occur at numbers much smaller than n!. For instance, the first prime gap of size larger than 14 occurs between the primes 523 and 541, while 15! is the vastly larger number 1307674368000. The average gap between primes increases as the natural logarithm of these primes, and therefore the ratio of the prime gap to the primes involved decreases (and is asymptotically zero). This is a consequence of the prime number theorem. From a heuristic view, we expect the probability that the ratio of the length of the gap to the natural logarithm is greater than or equal to a fixed
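The opening run of gaps is easy to regenerate. A short Python sketch (plain trial division, adequate for small primes; the helper name is mine):

def primes(limit):
    """All primes up to limit, by trial division."""
    ps = []
    for n in range(2, limit + 1):
        if all(n % p for p in ps if p * p <= n):
            ps.append(n)
    return ps

ps = primes(300)
gaps = [q - p for p, q in zip(ps, ps[1:])]
print(gaps[:30])  # 1, 2, 2, 4, 2, 4, 2, 4, 6, 2, ... as listed above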
https://en.wikipedia.org/wiki/Pseudonormal%20space
In mathematics, in the field of topology, a topological space is said to be pseudonormal if, given two disjoint closed sets in it, one of which is countable, there are disjoint open sets containing them. Note the following:

Every normal space is pseudonormal.
Every pseudonormal space is regular.

An example of a pseudonormal Moore space that is not metrizable was given in connection with the conjecture that all normal Moore spaces are metrizable.
https://en.wikipedia.org/wiki/Perfect%20set
In general topology, a subset S of a topological space is perfect if it is closed and has no isolated points. Equivalently: the set S is perfect if S = S′, where S′ denotes the set of all limit points of S, also known as the derived set of S. In a perfect set, every point can be approximated arbitrarily well by other points from the set: given any point of S and any neighborhood of the point, there is another point of S that lies within the neighborhood. Furthermore, any point of the space that can be so approximated by points of S belongs to S. Note that the term perfect space is also used, incompatibly, to refer to other properties of a topological space, such as being a Gδ space. As another possible source of confusion, also note that having the perfect set property is not the same as being a perfect set.

Examples

Examples of perfect subsets of the real line are the empty set, all closed intervals, the real line itself, and the Cantor set. The latter is noteworthy in that it is totally disconnected. Whether a set is perfect or not (and whether it is closed or not) depends on the surrounding space. For instance, the set of rational numbers in the unit interval, S = Q ∩ [0, 1], is perfect as a subset of the space Q but not perfect as a subset of the space R.

Connection with other topological properties

Every topological space can be written in a unique way as the disjoint union of a perfect set and a scattered set. Cantor proved that every closed subset of the real line can be uniquely written as the disjoint union of a perfect set and a countable set. This is also true more generally for all closed subsets of Polish spaces, in which case the theorem is known as the Cantor–Bendixson theorem. Cantor also showed that every non-empty perfect subset of the real line has cardinality 2^ℵ0, the cardinality of the continuum. These results are extended in descriptive set theory as follows: If X is a complete metric space with no isolated points, then the Cantor space 2^ω can be continuously embedded into X. Thus X has cardinality at leas
https://en.wikipedia.org/wiki/Machinist%20calculator
A machinist calculator is a hand-held calculator programmed with built-in formulas making it easy and quick for machinists to establish speeds, feeds and time without guesswork or conversion charts. Formulas may include revolutions per minute (RPM), surface feet per minute (SFM), inches per minute (IPM), and feed per tooth (FPT). A cut time (CT) function takes the user, step by step, through a calculation to determine cycle time (execution time) for a given tool motion. Other features may include a metric–English conversion function, a stopwatch/timer function and a standard math calculator. This type of calculator is useful for machinists, programmers, inspectors, estimators, supervisors, and students. When handheld machinist calculators first came to market, they were complicated to use due to their small liquid-crystal displays and were fairly expensive, priced around $70–$80. These older units were missing many features and could not be upgraded. With the invention of the smartphone, machinist calculators now have many more features and are continually evolving through software upgrades. One popular example of a machinist calculator is an application called "CNC Machinist Calculator Pro". This machinist calculator has 35 subsections of machining calculations, which include turning, milling (machining), drilling, tapping, grinding (abrasive cutting), gun drilling, GD&T, M-codes, G-codes, thread data, threading (manufacturing), position tolerance, bolt circles, surface finish, over-wire thread pitch dimensions, center drill dimensions, a triangle solver, machinability data with surface feet per minute (SFM) and RPM conversions, material properties, Brinell scale material hardenability, hardness conversions, scientific calculator functions, etc.

Modern Machinist Speed and Feed Calculators

Because early machinist calculators were limited by the analog user interface and computing power, speeds and feeds were fairly rudimentary, most of the time providin
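The core speed-and-feed relations behind such calculators are simple. A Python sketch of the standard inch-unit formulas (the formula forms are well known; the function and variable names are my own):

import math

def rpm(sfm, diameter_in):
    """Spindle speed from surface feet per minute and tool diameter in inches."""
    return (sfm * 12) / (math.pi * diameter_in)

def feed_ipm(rpm_value, feed_per_tooth, teeth):
    """Table feed in inches per minute from RPM, chip load, and flute count."""
    return rpm_value * feed_per_tooth * teeth

def cut_time_min(length_in, ipm):
    """Minutes to feed through a cut of the given length."""
    return length_in / ipm

n = rpm(sfm=300, diameter_in=0.5)                # about 2292 RPM
f = feed_ipm(n, feed_per_tooth=0.002, teeth=4)   # about 18.3 IPM
print(n, f, cut_time_min(6.0, f))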
https://en.wikipedia.org/wiki/Chip%20PC%20Technologies
Chip PC Technologies is a developer and manufacturer of thin client solutions and management software for server-based computing, a network architecture in which applications are deployed, managed and fully executed on the server.

History

Chip PC was founded in 2000 by Aviv Soffer and Ora Meir Soffer and raised its first round of financing from R.H. Technologies Ltd., an electronics contract manufacturing group. In 2005 Elbit Systems acquired 20% of the company. Later, the company established partnerships with Dell, which distributes its products, and Microsoft. In June 2007, it raised NIS 26 million in stocks, bonds, and warrants in an IPO on the Tel Aviv Stock Exchange. In November 2007, the company won Europe's largest thin-client tender to that date, to supply 20,000 thin-client PCs and management software to RZF, the tax authority of the State of North Rhine-Westphalia in Germany.

Overview

Chip PC supplies thin clients to multinational and public-sector organizations, having won first place in an independent thin-client evaluation among 26 thin clients from 9 vendors worldwide. Among Chip PC's customers are top organizations from various verticals, such as healthcare, finance, defense (Israeli Navy), government (US police), and education. Although the company's main target markets are enterprises and large organizations, it modifies and customizes models to fit other markets, such as the networked home, SOHO (small office/home office), point of sale and others.

See also

Thin client
Mini PC
Jack PC
https://en.wikipedia.org/wiki/Ugly%20duckling%20theorem
The ugly duckling theorem is an argument showing that classification is not really possible without some sort of bias. More particularly, it assumes finitely many properties combinable by logical connectives, and finitely many objects; it asserts that any two different objects share the same number of (extensional) properties. The theorem is named after Hans Christian Andersen's 1843 story "The Ugly Duckling", because it shows that a duckling is just as similar to a swan as two swans are to each other. It was derived by Satosi Watanabe in 1969.

Mathematical formula

Suppose there are n things in the universe, and one wants to put them into classes or categories. One has no preconceived ideas or biases about what sorts of categories are "natural" or "normal" and what are not. So one has to consider all the possible classes that could be, all the possible ways of making a set out of the n objects. There are 2^n such ways, the size of the power set of n objects. One could try to use that to measure the similarity between two objects, by seeing how many sets they have in common. However, one cannot. Any two objects have exactly the same number of classes in common if we can form any possible class, namely 2^(n−1) (half the total number of classes there are). To see this is so, one may imagine each class is represented by an n-bit string (or binary encoded integer), with a zero for each element not in the class and a one for each element in the class. As one finds, there are 2^n such strings. As all possible choices of zeros and ones are there, any two bit-positions will agree exactly half the time. One may pick two elements and reorder the bits so they are the first two, and imagine the numbers sorted lexicographically. The first 2^(n−1) numbers will have bit #1 set to zero, and the second 2^(n−1) will have it set to one. Within each of those blocks, the top 2^(n−2) will have bit #2 set to zero and the other 2^(n−2) will have it as one, so they agree on two blocks of 2^(n−2), or on half of all the cases, no matter
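The counting argument can be verified directly by enumeration. A small Python check (my own illustration) that every pair of objects agrees on exactly half of all possible classes:

from itertools import combinations

n = 4
objects = range(n)
classes = range(2 ** n)   # each class encoded as an n-bit membership mask

def agreements(a, b):
    """Classes in which objects a and b are both members or both non-members."""
    return sum(1 for c in classes if ((c >> a) & 1) == ((c >> b) & 1))

# Every pair agrees on the same count: 2**(n-1) = 8 of the 16 classes
print({agreements(a, b) for a, b in combinations(objects, 2)})  # {8}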
https://en.wikipedia.org/wiki/Dice%20notation
Dice notation (also known as dice algebra, common dice notation, RPG dice notation, and several other titles) is a system to represent different combinations of dice in wargames and tabletop role-playing games using simple algebra-like notation such as d8+2.

Standard notation

In most tabletop role-playing games, die rolls required by the system are given in the form AdX. A and X are variables, separated by the letter d, which stands for die or dice. The letter d is most commonly lower-case, but some forms of notation use upper-case D (non-English texts can use the equivalent form of the first letter of the given language's word for "dice", but also often use the English "d"). A is the number of dice to be rolled (usually omitted if 1). X is the number of faces of each die. For example, if a game calls for a roll of d4 or 1d4, it means "roll one 4-sided die." If the final number is omitted, it is typically assumed to be a six, but in some contexts, other defaults are used. 3d6 would mean "roll three six-sided dice." Commonly, these dice are added together, but some systems could direct the player to use them in some other way, such as choosing the best die rolled. To this basic notation, an additive modifier can be appended, yielding expressions of the form AdX+B. The plus sign is sometimes replaced by a minus sign ("−") to indicate subtraction. B is a number to be added to the sum of the rolls. So, 1d20−10 would indicate a roll of a single 20-sided die with 10 being subtracted from the result. These expressions can also be chained (e.g. 2d6+1d8), though this usage is less common. Additionally, notation such as AdX−L is not uncommon, the L (or H, less commonly) being used to represent "the lowest result" (or "the highest result"). For instance, 4d6−L means a roll of 4 six-sided dice, dropping the lowest result. This application skews the probability curve towards the higher numbers; as a result, a roll of 3 can only occur when all four dice come up 1 (probabili
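A roller for the basic AdX±B form is a short exercise. A minimal Python sketch (the regex and function name are my own; it handles only the simple additive-modifier case, not chained or drop-lowest expressions):

import random
import re

def roll(notation):
    """Roll dice in simple AdX+B / AdX-B notation, e.g. 'd4', '3d6', '1d20-10'."""
    m = re.fullmatch(r'(\d*)d(\d+)([+-]\d+)?', notation.strip())
    if not m:
        raise ValueError(f'bad dice notation: {notation!r}')
    count = int(m.group(1) or 1)      # A defaults to 1 when omitted
    faces = int(m.group(2))
    modifier = int(m.group(3) or 0)   # B defaults to 0 when omitted
    return sum(random.randint(1, faces) for _ in range(count)) + modifier

print(roll('3d6'), roll('1d20-10'), roll('d4'))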
https://en.wikipedia.org/wiki/Black%20hole%20%28networking%29
In networking, a black hole refers to a place in the network where incoming or outgoing traffic is silently discarded (or "dropped"), without informing the source that the data did not reach its intended recipient. When examining the topology of the network, the black holes themselves are invisible, and can only be detected by monitoring the lost traffic; hence the name, as astronomical black holes likewise cannot be directly observed. Dead addresses The most common form of black hole is simply an IP address that specifies a host machine that is not running or an address to which no host has been assigned. Even though TCP/IP provides a means of communicating the delivery failure back to the sender via ICMP, traffic destined for such addresses is often just dropped. Note that a dead address will be undetectable only to protocols that are both connectionless and unreliable (e.g., UDP). Connection-oriented or reliable protocols (TCP, RUDP) will either fail to connect to a dead address or will fail to receive expected acknowledgements. For IPv6, the black hole prefix is 100::/64, described by RFC 6666. For IPv4, no black hole address is explicitly defined; however, certain reserved IP addresses can help achieve a similar effect. For example, 192.0.2.0/24 is reserved for use in documentation and examples by RFC 5737; while the RFC advises that the addresses in this range are not routed, this is not a requirement. Firewalls and "stealth" ports Most firewalls (and routers for household use) can be configured to silently discard packets addressed to forbidden hosts or ports, resulting in small or large "black holes" in the network. Personal firewalls that do not respond to ICMP echo requests ("ping") have been designated by some vendors as being in "stealth mode". Despite this, in most networks the IP addresses of hosts with firewalls configured in this way are easily distinguished from invalid or otherwise unreachable IP addresses: On encountering the latter, a router will generally respond with an ICMP host unreachable error.
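The behavioural difference between a live host refusing a connection and a black hole silently dropping packets can be observed with a simple probe. This is a hedged sketch for a connection-oriented protocol (TCP); the target address is from the RFC 5737 documentation range and is assumed not to answer:

    import socket

    def probe(host, port, timeout=3.0):
        """Classify a TCP connect attempt: open, actively refused, or silently dropped."""
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((host, port))
            return "open"
        except ConnectionRefusedError:
            return "closed (RST received, so the host is alive)"
        except socket.timeout:
            return "no response: black hole or 'stealth' firewall"
        finally:
            s.close()

    print(probe("192.0.2.1", 80))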
https://en.wikipedia.org/wiki/Regular%20tree%20grammar
In theoretical computer science and formal language theory, a regular tree grammar is a formal grammar that describes a set of directed trees, or terms. A regular word grammar can be seen as a special kind of regular tree grammar, describing a set of single-path trees. Definition A regular tree grammar G is defined by the tuple G = (N, Σ, Z, P), where N is a finite set of nonterminals, Σ is a ranked alphabet (i.e., an alphabet whose symbols have an associated arity) disjoint from N, Z is the starting nonterminal, with Z ∈ N, and P is a finite set of productions of the form A → t, with A ∈ N and t ∈ TΣ(N), where TΣ(N) is the associated term algebra, i.e. the set of all trees composed from symbols in Σ ∪ N according to their arities, where nonterminals are considered nullary. Derivation of trees The grammar G implicitly defines a set of trees: any tree that can be derived from Z using the rule set P is said to be described by G. This set of trees is known as the language of G. More formally, the relation ⇒G on the set TΣ(N) is defined as follows: A tree t1 ∈ TΣ(N) can be derived in a single step into a tree t2 ∈ TΣ(N) (in short: t1 ⇒G t2), if there is a context S and a production A → t ∈ P such that: t1 = S[A], and t2 = S[t]. Here, a context means a tree with exactly one hole in it; if S is such a context, S[t] denotes the result of filling the tree t into the hole of S. The tree language generated by G is the language L(G) = { t ∈ TΣ | Z ⇒G* t }. Here, TΣ denotes the set of all trees composed from symbols of Σ, while ⇒G* denotes successive applications of ⇒G. A language generated by some regular tree grammar is called a regular tree language. Examples Let G1 = (N1,Σ1,Z1,P1), where N1 = {Bool, BList } is our set of nonterminals, Σ1 = { true, false, nil, cons(.,.) } is our ranked alphabet, arities indicated by dummy arguments (i.e. the symbol cons has arity 2), Z1 = BList is our starting nonterminal, and the set P1 consists of the following productions: Bool → false Bool → true BList → nil BList → cons(Bool,BList) A
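As an illustration, the language of the example grammar G1 can be sampled by repeatedly expanding nonterminals. The sketch below (the data layout and names are my own, not standard notation) represents trees as nested tuples:

    import random

    # Productions of G1; nonterminals are the strings "Bool" and "BList",
    # and cons is the only symbol with arity > 0.
    PRODUCTIONS = {
        "Bool":  ["false", "true"],
        "BList": ["nil", ("cons", "Bool", "BList")],
    }

    def derive(term):
        """Recursively expand every nonterminal until only terminals remain."""
        if isinstance(term, str):
            if term in PRODUCTIONS:                 # a nonterminal: apply a production
                return derive(random.choice(PRODUCTIONS[term]))
            return term                             # a nullary terminal
        sym, *args = term
        return (sym, *(derive(a) for a in args))

    # Each call yields one tree in L(G1), e.g. ('cons', 'true', ('cons', 'false', 'nil')).
    print(derive("BList"))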
https://en.wikipedia.org/wiki/WiiConnect24
WiiConnect24 was a feature of the Nintendo Wi-Fi Connection for the Wii console. It was first announced at Electronic Entertainment Expo (E3) in mid-2006 by Nintendo. It enabled the user to remain connected to the Internet while the console was on standby. For example, in Animal Crossing: City Folk, a friend could send messages to another player without the recipient being present in the game at the same time as the sender. On June 27, 2013, WiiConnect24 service features were globally terminated. Consequently, the Wii channels that required it, online data exchange via Wii Message Board, and passive online features for certain games (the latter two of which made use of 16-digit Wii Friend Codes) have all been rendered unusable. The Wii U does not officially support WiiConnect24, so most preloaded and downloadable Wii channels were unavailable on the Wii U's Wii Mode menu and Wii Shop Channel respectively, even prior to WiiConnect24's termination. On the discontinuation date, the defunct downloadable Wii channels were removed from the Wii Shop Channel. WiiConnect24 has been succeeded by SpotPass, a different trademark name for similar content-pushing functions that the Nintendo Network service can perform for the newer Nintendo 3DS and Wii U consoles. In 2015, a fan-made service called RiiConnect24 was established as a replacement for WiiConnect24, aiming to bring back WiiConnect24 to those who have a homebrewed Wii console. As of today, the service offers online access to all of the Wii's channels released in America and Europe (other than video on demand services) as well as sending messages to other users in the Wii Message Board. Another notable example of a homebrew application meant to bring back WiiConnect24 functionality is WiiLink. Service WiiConnect24 was used to receive content such as Wii Message Board messages sent from other Wii consoles, Miis, emails, updated channel and game content, and notifications of software updates. If the Standby Connect
https://en.wikipedia.org/wiki/Walam%20Olum
The Walam Olum, Walum Olum or Wallam Olum, usually translated as "Red Record" or "Red Score", is purportedly a historical narrative of the Lenape (Delaware) Native American tribe. The document has provoked controversy as to its authenticity since its publication in the 1830s by botanist and antiquarian Constantine Samuel Rafinesque. Ethnographic studies in the 1980s and analysis in the 1990s of Rafinesque's manuscripts have produced significant evidence that the document may be a hoax. The work In 1836 in his first volume of The American Nations, Rafinesque published what he represented as an English translation of the entire text of the Walam Olum, as well as a portion in the Lenape language. The Walam Olum includes a creation myth, a deluge myth, and the narrative of a series of migrations. Rafinesque and others claimed or interpreted the migrations to have begun in Asia. The Walam Olum suggested a migration over the Bering Strait took place 3,600 years ago. The text included a long list of chiefs' names, which appears to provide a timescale for the epic. According to Rafinesque, the chiefs appeared as early as 1600 BCE. The story in summary The narrative begins with the formation of the universe, the shaping of the Earth, and the creation of the first people, by the Great Manitou. Then, as the Great Manitou creates more creatures, an evil manitou creates others, such as flies. Although all is harmonious at first, an evil being brings unhappiness, sickness, disasters and death. A great snake attacked the people and drove them from their homes. The snake flooded the land and made monsters in the water, but the Creator made a giant turtle, on which the surviving people rode out the flood, and prayed for the waters to recede. When land emerged again, they were in a place of snow and cold, so they developed their skills of house-building and hunting, and began explorations to find more temperate lands. Eventually, they chose to head east from the land of the Turtl
https://en.wikipedia.org/wiki/Nuclear%20magnetic%20resonance%20quantum%20computer
Nuclear magnetic resonance quantum computing (NMRQC) is one of several proposed approaches for constructing a quantum computer; it uses the spin states of nuclei within molecules as qubits. The quantum states are probed through the nuclear magnetic resonances, allowing the system to be implemented as a variation of nuclear magnetic resonance spectroscopy. NMR differs from other implementations of quantum computers in that it uses an ensemble of systems, in this case molecules, rather than a single pure state. Initially the approach was to use the spin properties of atoms of particular molecules in a liquid sample as qubits - this is known as liquid state NMR (LSNMR). This approach has since been superseded by solid state NMR (SSNMR) as a means of quantum computation. Liquid state NMR The ideal picture of liquid state NMR (LSNMR) quantum information processing (QIP) is based on a molecule in which some of its atoms' nuclei behave as spin-½ systems. Depending on which nuclei we are considering, they will have different energy levels and different interactions with their neighbours, and so we can treat them as distinguishable qubits. In this system we tend to consider the inter-atomic bonds as the source of interactions between qubits and exploit these spin-spin interactions to perform 2-qubit gates such as CNOTs that are necessary for universal quantum computation. In addition to the spin-spin interactions native to the molecule, an external magnetic field can be applied (in NMR laboratories), and these fields implement single-qubit gates. By exploiting the fact that different spins will experience different local fields, we have control over the individual spins. The picture described above is far from realistic, since we are treating a single molecule. NMR is performed on an ensemble of molecules, usually with as many as 10^15 molecules. This introduces complications to the model, one of which is the introduction of decoherence. In particular we have the problem of an open quant
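At the level of linear algebra, a two-qubit gate is just a unitary on the four-dimensional joint state space. The sketch below checks the action of a CNOT matrix on a basis state; it is a generic illustration of the gate that such spin-spin interactions are used to build, not an NMR pulse sequence:

    import numpy as np

    # Basis ordering |00>, |01>, |10>, |11>; the first spin is the control.
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)

    state = np.zeros(4, dtype=complex)
    state[2] = 1.0                        # prepare |10>
    print(np.round(CNOT @ state).real)    # -> [0. 0. 0. 1.], i.e. |11>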
https://en.wikipedia.org/wiki/Cuzick%E2%80%93Edwards%20test
In statistics, the Cuzick–Edwards test is a significance test whose aim is to detect the possible clustering of sub-populations within a clustered or non-uniformly-spread overall population. Possible applications of the test include examining the spatial clustering of childhood leukemia and lymphoma within the general population, given that the general population is spatially clustered. The test is based on: using control locations within the general population as the basis of a second or "control" sub-population in addition to the original "case" sub-population; using "nearest-neighbour" analyses to form statistics based on either: the number of other "cases" among the neighbours of each case; the number of "cases" which are nearer to each given case than the k-th nearest "control" for that case. An example application of this test was to spatial clustering of leukaemias and lymphomas among young people in New Zealand. See also Clustering (demographics)
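A minimal sketch of the first statistic (often written T_k: for each case, count the cases among its k nearest neighbours) can be written with SciPy. The function name and synthetic data are illustrative; in practice the observed value is compared against a permutation distribution obtained by randomly relabelling cases and controls:

    import numpy as np
    from scipy.spatial import cKDTree

    def cuzick_edwards_tk(case_xy, control_xy, k=1):
        """T_k: summed over cases, the number of cases among each case's k nearest neighbours."""
        points = np.vstack([case_xy, control_xy])
        is_case = np.zeros(len(points), dtype=bool)
        is_case[:len(case_xy)] = True
        tree = cKDTree(points)
        _, idx = tree.query(case_xy, k=k + 1)   # k+1: each point is its own nearest neighbour
        return int(is_case[np.atleast_2d(idx)[:, 1:]].sum())

    rng = np.random.default_rng(0)
    cases = rng.normal(0.0, 1.0, size=(20, 2))       # cases clustered near the origin
    controls = rng.uniform(-5.0, 5.0, size=(80, 2))  # controls spread over the region
    print(cuzick_edwards_tk(cases, controls, k=3))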
https://en.wikipedia.org/wiki/Extraordinary%20optical%20transmission
Extraordinary optical transmission (EOT) is the phenomenon of greatly enhanced transmission of light through a subwavelength aperture in an otherwise opaque metallic film which has been patterned with a regularly repeating periodic structure. Generally, when light of a certain wavelength falls on a subwavelength aperture, it is diffracted isotropically in all directions, with minimal far-field transmission. This is the understanding from classical aperture theory as described by Bethe. In EOT, however, the regularly repeating structure enables much higher transmission efficiency to occur, up to several orders of magnitude greater than that predicted by classical aperture theory. It was first described in 1998. This phenomenon, which was later fully analyzed with a microscopic scattering model, is partly attributed to the presence of surface plasmon resonances and constructive interference. A surface plasmon (SP) is a collective excitation of the electrons at the junction between a conductor and an insulator, and is one of a series of interactions between light and a metal surface called plasmonics. Currently, there is experimental evidence of EOT outside the optical range. Analytical approaches also predict EOT on perforated plates with a perfect conductor model. Holes can somewhat emulate plasmons at other regions of the electromagnetic spectrum where they do not exist. Thus, the plasmonic contribution is a very particular peculiarity of the EOT resonance and should not be taken as the main contribution to the phenomenon. More recent work has shown a strong contribution from overlapping evanescent wave coupling, which explains why surface plasmon resonance enhances the EOT effect on both sides of a metallic film at optical frequencies, but also accounts for the terahertz-range transmission. Simple analytical explanations of this phenomenon have been elaborated, emphasizing the similarity between arrays of particles and arrays of holes, and establishing that the phenomen
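To see how small the classical prediction is, Bethe's result for a single subwavelength circular hole in a perfectly conducting screen gives an area-normalized transmission of (64/27π²)(kr)⁴, valid for kr ≪ 1. A quick numerical check, with values chosen purely for illustration:

    import numpy as np

    def bethe_transmission(radius_nm, wavelength_nm):
        """Area-normalized Bethe transmission (64/27pi^2)(kr)^4 for a hole with kr << 1."""
        kr = 2.0 * np.pi * radius_nm / wavelength_nm
        return 64.0 / (27.0 * np.pi**2) * kr**4

    # A hole of 50 nm radius probed at 600 nm passes only ~2% of the light incident on its area.
    print(bethe_transmission(50, 600))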
https://en.wikipedia.org/wiki/Electromechanical%20coupling%20coefficient
The electromechanical coupling coefficient is a numerical measure of the conversion efficiency between electrical and acoustic energy in piezoelectric materials. Qualitatively the electromechanical coupling coefficient, k, can be described as: k² = (mechanical energy stored)/(electrical energy applied), or, for conversion in the opposite direction, k² = (electrical energy stored)/(mechanical energy applied).
https://en.wikipedia.org/wiki/Autapomorphy
In phylogenetics, an autapomorphy is a distinctive feature, known as a derived trait, that is unique to a given taxon. That is, it is found in only one taxon, and not in any other taxa, including outgroup taxa, not even those most closely related to the focal taxon (which may be a species, family or in general any clade). It can therefore be considered an apomorphy in relation to a single taxon. The word autapomorphy, introduced in 1950 by German entomologist Willi Hennig, is derived from the Greek words αὐτός, autos "self"; ἀπό, apo "away from"; and μορφή, morphḗ "shape". Discussion Because autapomorphies are only present in a single taxon, they do not convey information about relationships. Therefore, autapomorphies are not useful to infer phylogenetic relationships. However, autapomorphy, like synapomorphy and plesiomorphy, is a relative concept depending on the taxon in question. An autapomorphy at a given level may well be a synapomorphy at a less-inclusive level. An example of an autapomorphy can be described in modern snakes. Snakes have lost the two pairs of legs that characterize all of Tetrapoda, and the closest taxa to Ophidia – as well as their common ancestors – all have two pairs of legs. Therefore, the Ophidia taxon presents an autapomorphy with respect to its absence of legs. The autapomorphic species concept is one of many methods that scientists might use to define and distinguish species from one another. This definition assigns species on the basis of the amount of divergence associated with reproductive incompatibility, which is measured essentially by the number of autapomorphies. This grouping method is often referred to as the "monophyletic species concept" or the "phylospecies" concept and was popularized by D.E. Rosen in 1979. Within this definition, a species is seen as "the least inclusive monophyletic group definable by at least one autapomorphy". While this model of speciation is useful in that it avoids non-monophyletic groupings, it has its cr
https://en.wikipedia.org/wiki/Prelink
In computing, prebinding, also called prelinking, is a method for optimizing application load times by resolving library symbols prior to launch. Background Most computer programs consist of code that requires external shared libraries to execute. These libraries are normally integrated with the program at run time by a loader, in a process called dynamic linking. While dynamic linking has advantages in code size and management, there are drawbacks as well. Every time a program is run, the loader needs to resolve (find) the relevant libraries. Since libraries move around in memory, there is a performance penalty for resolution. This penalty increases for each additional library needing resolution. Prelinking reduces this penalty by resolving libraries in advance. Afterward, resolution only occurs if the libraries have changed since being prelinked, such as after an upgrade. Mac OS Mac OS stores executables in the Mach-O file format. Mac OS X Mac OS X performs prebinding in the "Optimizing" stage of installing system software or certain applications. Prebinding has changed a few times within the Mac OS X series. Before 10.2, prebinding only happened during the installation procedure (the aforementioned "Optimizing" stage). From 10.2 through 10.3 the OS checked for prebinding at launch time for applications, and the first time an application ran it would be prebound, making subsequent launches faster. This could also be manually run, which some OS-level installs did. In 10.4, only OS libraries were prebound. In 10.5 and later, Apple replaced prebinding with a dyld shared cache mechanism, which provided better OS performance. Linux On Linux, prelinking is accomplished via the prelink program, a free program written by Jakub Jelínek of Red Hat for ELF binaries. Performance results have been mixed, but it seems to aid systems with a large number of libraries, such as KDE. prelink randomization When run with the "-R" option, prelink will randomly
https://en.wikipedia.org/wiki/Kurchatov%20Medal
The Kurchatov Medal, or the Gold Medal in honour of Igor Kurchatov is an award given for outstanding achievements in nuclear physics and in the field of nuclear energy. The USSR Academy of Sciences established this award on February 9, 1960 in honour of Igor Kurchatov and in recognition of his lifetime contributions to the fields of nuclear physics, nuclear energy and nuclear engineering. In the USSR, the Kurchatov Medal award was given every three years starting in 1962. Honorarium was included as part of the award through 1989. Later in Russia, the Kurchatov Gold Medal award has been resumed, and the medal has been given since 1998. Soviet award recipients Source: Russian Academy of Sciences 1962: Pyotr Spivak and Yuri Prokoviev 1965: Yuriy Prokoshkin, Vladimir Rykalin, Valentin Petruhin and Anatoly Danubians 1968: Anatoly Aleksandrov 1971: Isaak Kikoin 1974: Julii Khariton and Savely Moiseevich Feinberg 1977: Yakov Zeldovich and 1980: Isai Izrailevich Gurevich and Boris Nikolsky 1981: William d'Haeseleer 1983: Vladimir Mostovoy 1986: Venedikt Dzhelepov and Leonid Ponomarev 1989: Georgy Flyorov and Yuri Oganessian Russian awards 1998: Aleksey Ogloblin 2000: Nikolay Dollezhal 2003: Yuri Trutnev 2008: Oleg Gennadievich Filatov 2013: Yevgeny Avrorin 2018: Nikolay Evgenievich Kukharkin See also Awards and decorations of the Russian Federation Medal "For Merit in the Development of Nuclear Energy" List of physics awards External links Kurchatov Gold Medal. The Russian Academy of Sciences official listing Orders, decorations, and medals of the Soviet Union Nuclear physics Physics awards Awards established in 1960 Orders, decorations, and medals of Russia 1960 establishments in the Soviet Union Nuclear history of the Soviet Union Awards of the Russian Academy of Sciences Nuclear energy in Russia USSR Academy of Sciences
https://en.wikipedia.org/wiki/Remote%20error%20indication
Remote error indication (REI), formerly far end block error (FEBE), is an alarm signal used in synchronous optical networking (SONET). It indicates to the transmitting node that the receiver has detected a block error. Overview REI or FEBE errors are mostly seen on DS3 circuits; however, they are known to be present on other circuit types (SONET, T1s, etc.). Each terminating device (router or otherwise) monitors the incoming signal for CP-bit path errors. If an error is detected on the incoming DS3, the terminating elements transmit a FEBE bit in the outgoing direction of the DS3. Network monitoring equipment located anywhere along the path then measures these FEBEs in each direction to gauge the quality of the circuit while in service. Suppose a DS3 runs from New York to Atlanta, and there is a problem within one of the central offices in Virginia. The errors are generated by a device in that central office and detected by the terminating device (a NID, M13 mux, or router). The terminating device then sends the FEBE error signal outbound to alert further devices that there were problems. So, errors are generated on the incoming side of the loop, the device terminating that end picks up the errors, and transmits a "FEBE errors" message on the outgoing side. This specific setup of error reporting is what causes confusion among many technicians trying to perform repairs. Technical jargon: An error detected by extracting the 4-bit FEBE field from the path status byte (G1). The legal range for the 4-bit field is between 0000 and 1000, representing zero to eight errors. Any other value is interpreted as zero errors. The DS-3 M-frame uses P bits to check the line parity. The M-subframe uses C bits in a format called C-bit parity, which copies the result of the P bits at the source and checks the result at the destination. An ATM interface reports detected C-bit parity errors back to the source via a far-end block error (FEBE).
https://en.wikipedia.org/wiki/Juvenile%20%28organism%29
A juvenile is an individual organism (especially an animal) that has not yet reached its adult form, sexual maturity or size. Juveniles can look very different from the adult form, particularly in colour, and may not fill the same niche as the adult form. In many organisms the juvenile has a different name from the adult (see List of animal names). Some organisms reach sexual maturity in a short metamorphosis, such as ecdysis in many insects and some other arthropods. For others, the transition from juvenile to fully mature is a more prolonged process—puberty in humans and other species (like higher primates and whales), for example. In such cases, juveniles during this transformation are sometimes called subadults. Many invertebrates cease development upon reaching adulthood. The stages of such invertebrates are larvae or nymphs. In vertebrates and some invertebrates (e.g. spiders), larval forms (e.g. tadpoles) are usually considered a development stage of their own, and "juvenile" refers to a post-larval stage that is not fully grown and not sexually mature. In amniotes, the embryo represents the larval stage. Here, a "juvenile" is an individual in the time between hatching/birth/germination and reaching maturity. Examples For animal larval juveniles, see larva Juvenile birds or bats can be called fledglings For cat juveniles, see kitten For dog juveniles, see puppy For human juvenile life stages, see childhood and adolescence, an intermediary period between the onset of puberty and full physical, psychological, and social adulthood
https://en.wikipedia.org/wiki/Normal%20form%20%28abstract%20rewriting%29
In abstract rewriting, an object is in normal form if it cannot be rewritten any further, i.e. it is irreducible. Depending on the rewriting system, an object may rewrite to several normal forms or none at all. Many properties of rewriting systems relate to normal forms. Definitions Stated formally, if (A,→) is an abstract rewriting system, x∈A is in normal form if no y∈A exists such that x→y, i.e. x is an irreducible term. An object a is weakly normalizing if there exists at least one particular sequence of rewrites starting from a that eventually yields a normal form. A rewriting system has the weak normalization property or is (weakly) normalizing (WN) if every object is weakly normalizing. An object a is strongly normalizing if every sequence of rewrites starting from a eventually terminates with a normal form. An abstract rewriting system is strongly normalizing, terminating, noetherian, or has the (strong) normalization property (SN), if each of its objects is strongly normalizing. A rewriting system has the normal form property (NF) if for all objects a and normal forms b, b can be reached from a by a series of rewrites and inverse rewrites only if a reduces to b. A rewriting system has the unique normal form property (UN) if for all normal forms a, b ∈ S, a can be reached from b by a series of rewrites and inverse rewrites only if a is equal to b. A rewriting system has the unique normal form property with respect to reduction (UN→) if for every term reducing to normal forms a and b, a is equal to b. Results This section presents some well known results. First, SN implies WN. Confluence (abbreviated CR) implies NF implies UN implies UN→. The reverse implications do not generally hold. {a→b,a→c,c→c,d→c,d→e} is UN→ but not UN, since b and e are distinct normal forms that are convertible (b ← a → c ← d → e). {a→b,a→c,b→b} is UN but not NF, since b and c are convertible, c is a normal form, and b does not reduce to c. {a→b,a→c,b→b,c→c} is NF as there are no normal forms, but not CR as a reduces to b and c, and b,c have no common reduct.
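For finite systems, the definitions can be explored mechanically. The sketch below encodes the first example above as a relation and collects the normal forms reachable from each object (a is weakly but not strongly normalizing, since a → c → c → … never terminates):

    # One-step reducts of the first example system {a→b, a→c, c→c, d→c, d→e}.
    RULES = {"a": {"b", "c"}, "c": {"c"}, "d": {"c", "e"}}

    def is_normal_form(t):
        """An object is in normal form if it has no one-step reduct."""
        return not RULES.get(t)

    def reachable_normal_forms(t, seen=None):
        """All normal forms reachable from t (assumes a finite system)."""
        seen = set() if seen is None else seen
        if t in seen:
            return set()
        seen.add(t)
        if is_normal_form(t):
            return {t}
        result = set()
        for u in RULES[t]:
            result |= reachable_normal_forms(u, seen)
        return result

    print(reachable_normal_forms("a"))  # {'b'}: the branch through c loops forever
    print(reachable_normal_forms("d"))  # {'e'}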
https://en.wikipedia.org/wiki/Causes%20and%20origins%20of%20Tourette%20syndrome
Causes and origins of Tourette syndrome have not been fully elucidated. Tourette syndrome (abbreviated as Tourette's or TS) is an inherited neurodevelopmental disorder that begins in childhood or adolescence, characterized by the presence of multiple motor tics and at least one phonic tic, which characteristically wax and wane. Tourette's syndrome occurs along a spectrum of tic disorders, which includes transient tics and chronic tics. The exact cause of Tourette's is unknown, but it is well established that both genetic and environmental factors are involved. The overwhelming majority of cases of Tourette's are inherited, although the exact mode of inheritance is not yet known, and no gene has been identified. Tics are believed to result from dysfunction in the thalamus, basal ganglia, and frontal cortex of the brain, involving abnormal activity of the brain chemical, or neurotransmitter, dopamine. In addition to dopamine, multiple neurotransmitters, like serotonin, GABA, glutamate, and histamine (H3-receptor), are involved. Non-genetic factors—while not causing Tourette's—can influence the severity of the disorder. Some forms of Tourette's may be genetically linked to obsessive-compulsive disorder (OCD), while the relationship between Tourette's and attention-deficit hyperactivity disorder (ADHD) is not yet fully understood. Genetic factors The exact cause of Tourette's is unknown, but it is well established that both genetic and environmental factors are involved. Genetic epidemiology studies have shown that Tourette's is highly heritable, and 10 to 100 times more likely to be found among close family members than in the general population. The exact mode of inheritance is not known; no single gene has been identified, and hundreds of genes are likely involved. Genome-wide association studies were published in 2013 and 2015 in which no finding reached a threshold for significance. Twin studies show that 50 to 77% of identical twins share a TS diagnosis, w
https://en.wikipedia.org/wiki/Mini-DVI
The Mini-DVI connector is used on certain Apple computers as a digital alternative to the Mini-VGA connector. Its size is between the full-sized DVI and the tiny Micro-DVI. It is found on the 12-inch PowerBook G4 (except the original 12-inch 867 MHz PowerBook G4, which used Mini-VGA), the Intel-based iMac, the Intel-based MacBook laptop, the Intel-based Xserve, the 2009 Mac mini, and some late model eMacs. In October 2008, Apple announced the company was phasing Mini-DVI out in favor of Mini DisplayPort. Mini-DVI connectors on Apple hardware are capable of carrying DVI, VGA, or TV signals through the use of adapters, detected with EDID (Extended Display Identification Data) via DDC. This connector is often used in place of a DVI connector in order to save physical space on devices. Mini-DVI does not support dual-link connections and hence cannot support resolutions higher than 1920×1200 @60 Hz. There are various types of Mini-DVI adapter: Apple Mini-DVI to VGA Adapter Apple part number M9320G/A (discontinued) Apple Mini-DVI to Video Adapter Apple part number M9319G/A, provided both S-Video and Composite video connectors (discontinued) Apple Mini-DVI to DVI Adapter (DVI-D) Apple part number M9321G/B (discontinued) Non-OEM Mini-DVI to HDMI adapters are also available at online stores such as eBay and Amazon, and from some retail stores, but were not sold by Apple. The physical connector is similar to Mini-VGA, but is differentiated by having four rows of pins arranged in two vertically stacked slots rather than the two rows of pins in the Mini-VGA. Connecting to a DVI-I connector requires a Mini-DVI to DVI-D cable plus a DVI-D to DVI-I adapter. Criticisms Apple's Mini-DVI to DVI-D cable does not carry the analog signal coming from the mini-DVI port on the Apple computer. This means that it is not possible to use this cable with an inexpensive DVI-to-VGA adapter for VGA output; Apple's mini-DVI to VGA cable must be used instead. This could be avoided if Apple pro
https://en.wikipedia.org/wiki/Quantities%2C%20Units%20and%20Symbols%20in%20Physical%20Chemistry
Quantities, Units and Symbols in Physical Chemistry, also known as the Green Book, is a compilation of terms and symbols widely used in the field of physical chemistry. It also includes a table of physical constants, tables listing the properties of elementary particles, chemical elements, and nuclides, and information about conversion factors that are commonly used in physical chemistry. The Green Book is published by the International Union of Pure and Applied Chemistry (IUPAC) and is based on published, citeable sources. Information in the Green Book is synthesized from recommendations made by IUPAC, the International Union of Pure and Applied Physics (IUPAP) and the International Organization for Standardization (ISO), including recommendations listed in the IUPAP Red Book Symbols, Units, Nomenclature and Fundamental Constants in Physics and in the ISO 31 standards. History, list of editions, and translations to non-English languages The third edition of the Green Book () was first published by IUPAC in 2007. A second printing of the third edition was released in 2008; this printing made several minor revisions to the 2007 text. A third printing of the third edition was released in 2011. The text of the third printing is identical to that of the second printing. A Japanese translation of the third edition of the Green Book () was published in 2009. A French translation of the third edition of the Green Book () was published in 2012. A Portuguese translation (Brazilian Portuguese and European Portuguese) of the third edition of the Green Book () was published in 2018, with updated values of the physical constants and atomic weights; it is referred to as the "Livro Verde". A concise four-page summary of the most important material in the Green Book was published in the July–August 2011 issue of Chemistry International, the IUPAC news magazine. The second edition of the Green Book () was first published in 1993. It was reprinted in 1995, 1996, and 1998.
https://en.wikipedia.org/wiki/Fracton
A fracton is a collective quantized vibration on a substrate with a fractal structure. Fractons are the fractal analog of phonons. Phonons are the result of applying translational symmetry to the potential in a Schrödinger equation. Fractal self-similarity can be thought of as a symmetry somewhat comparable to translational symmetry. Translational symmetry is symmetry under displacement or change of position, and fractal self-similarity is symmetry under change of scale. The quantum mechanical solutions to such a problem in general lead to a continuum of states with different frequencies. In other words, a fracton band is comparable to a phonon band. The vibrational modes are restricted to part of the substrate and are thus not fully delocalized, unlike phonon vibrational modes. Instead, there is a hierarchy of vibrational modes that encompass smaller and smaller parts of the substrate.
https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds%20debate
The Tanenbaum–Torvalds debate was a written debate between Andrew S. Tanenbaum and Linus Torvalds, regarding the Linux kernel and kernel architecture in general. Tanenbaum, the creator of Minix, began the debate in 1992 on the Usenet discussion group comp.os.minix, arguing that microkernels are superior to monolithic kernels and therefore Linux was, even in 1992, obsolete. The debate has sometimes been considered a flame war. The debate While the debate initially started out as relatively moderate, with both parties involved making only banal statements about kernel design, it grew progressively more detailed and sophisticated with every round of posts. Besides just kernel design, the debate branched into several other topics, such as which microprocessor architecture would win out over others in the future. Besides Tanenbaum and Torvalds, several other people joined the debate, including Peter MacDonald, an early Linux kernel developer and creator of one of the first distributions, Softlanding Linux System; David S. Miller, one of the core developers of the Linux kernel; and Theodore Ts'o, the first North American Linux kernel developer. The debate opened on January 29, 1992, when Tanenbaum first posted his criticism of the Linux kernel to comp.os.minix, noting how the monolithic design was detrimental to its abilities, in a post titled "LINUX is obsolete". While he initially did not go into great technical detail to explain why he felt that the microkernel design was better, he did suggest that it was mostly related to portability, arguing that the Linux kernel was too closely tied to the x86 line of processors to be of any use in the future, as this architecture would be superseded by then. To put things into perspective, he mentioned how writing a monolithic kernel in 1991 was "a giant step back into the 1970s". Since the criticism was posted in a public newsgroup, Torvalds was able to respond to it directly. He did so a day later, arguing that MINIX has inherent design flaws (naming
https://en.wikipedia.org/wiki/Cellular%20compartment
Cellular compartments in cell biology comprise all of the closed parts within the cytosol of a eukaryotic cell, usually surrounded by a single or double lipid layer membrane. These compartments are often, but not always, defined as membrane-bound organelles. The formation of cellular compartments is called compartmentalization. Both organelles, the mitochondria and chloroplasts (in photosynthetic organisms), are compartments that are believed to be of endosymbiotic origin. Other compartments such as peroxisomes, lysosomes, the endoplasmic reticulum, the cell nucleus or the Golgi apparatus are not of endosymbiotic origin. Smaller elements like vesicles, and sometimes even microtubules, can also be counted as compartments. It was long thought that compartmentalization is not found in prokaryotic cells, but the discovery of carboxysomes and many other metabolosomes revealed that prokaryotic cells are capable of making compartmentalized structures, albeit ones that are in most cases not surrounded by a lipid bilayer but built purely of protein. Types In general, there are four main cellular compartments: The nuclear compartment comprising the nucleus The intercisternal space which comprises the space between the membranes of the endoplasmic reticulum (which is continuous with the nuclear envelope) Organelles (the mitochondrion in all eukaryotes and the plastid in phototrophic eukaryotes) The cytosol Function Compartments have three main roles. One is to establish physical boundaries for biological processes, enabling the cell to carry out different metabolic activities at the same time. This may include keeping certain biomolecules within a region, or keeping other molecules outside. Within the membrane-bound compartments, different intracellular pH, different enzyme systems, and other differences are isolated from other organelles and cytosol. With mitochondria, the cytosol has an oxidizing environment which converts NADH to NAD+. With these cases, the
https://en.wikipedia.org/wiki/Willi%20Apel
Willi Apel (10 October 1893 – 14 March 1988) was a German-American musicologist and noted author of a number of books devoted to music. Among his most important publications are the 1944 edition of The Harvard Dictionary of Music and French Secular Music of the Late Fourteenth Century. Life and career Apel was born in Konitz, West Prussia, now Chojnice in Poland. He studied mathematics from 1912 to 1914, and then again after World War I from 1918 to 1922, in various universities in Weimar Germany. Throughout his studies, he had an interest in music and taught piano lessons. He then turned to music full-time, and essentially taught himself about musicology. He received his Ph.D. in 1936 in Berlin (with a dissertation on 15th and 16th century tonality) and immigrated to the USA the same year. He taught at Harvard from 1938 to 1942, but moved on to spend twenty years at Indiana University beginning in 1950. In 1972 he was awarded an honorary doctorate by the university. Apel's work of the 1940s included books of broad scope, such as The Harvard Dictionary of Music (1944), which he edited, and Historical Anthology of Music (1947–1950, co-authored with Archibald Thompson Davison). His approach was to give as much attention to Medieval, Renaissance and world music as was given to familiar subjects such as Mozart and Beethoven; this influenced higher music education in the USA. His book on the notation of early polyphonic music was also written in the 1940s, and still serves as one of the essential works on the subject. In 1950 Apel's interest in early polyphonic notation resulted in an important edition, French Secular Music of the Late Fourteenth Century. In 1958 he published a large work on plainchant, which provided a comprehensive guide to the repertoire and its sources. In the early 1960s he founded the Corpus of Early Keyboard Music (CEKM), a series of editions devoted to early keyboard music. Over the years, CEKM presented the music of lesser-known composers such
https://en.wikipedia.org/wiki/Wolf%20Barth
Wolf Paul Barth (20 October 1942, Wernigerode – 30 December 2016, Nuremberg) was a German mathematician who discovered Barth surfaces and whose work on vector bundles has been important for the ADHM construction. Until 2011 Barth was working in the Department of Mathematics at the University of Erlangen-Nuremberg in Germany. Barth received a PhD degree in 1967 from the University of Göttingen. His dissertation, written under the direction of Reinhold Remmert and Hans Grauert, was entitled Einige Eigenschaften analytischer Mengen in kompakten komplexen Mannigfaltigkeiten (Some properties of analytic sets in compact, complex manifolds). Publications See also Barth surfaces Barth–Nieto quintic
https://en.wikipedia.org/wiki/Acute%20abdomen
An acute abdomen refers to a sudden, severe abdominal pain. It is in many cases a medical emergency, requiring urgent and specific diagnosis. Several causes need immediate surgical treatment. Differential diagnosis The differential diagnosis of acute abdomen includes: Acute appendicitis Acute peptic ulcer and its complications Acute cholecystitis Acute pancreatitis Acute intestinal ischemia (see section below) Acute diverticulitis Ectopic pregnancy with tubal rupture Ovarian torsion Acute peritonitis (including hollow viscus perforation) Acute ureteric colic Bowel volvulus Bowel obstruction Acute pyelonephritis Adrenal crisis Biliary colic Abdominal aortic aneurysm Familial Mediterranean fever Hemoperitoneum Ruptured spleen Kidney stone Sickle cell anaemia Carcinoid Peritonitis Acute abdomen is occasionally used synonymously with peritonitis. While this is not entirely incorrect, peritonitis is the more specific term, referring to inflammation of the peritoneum. It manifests on physical examination as rebound tenderness, or pain upon removal of pressure more than on application of pressure to the abdomen. Peritonitis may result from several of the above diseases, notably appendicitis and pancreatitis. While rebound tenderness is commonly associated with peritonitis, the most specific finding is rigidity. Ischemic acute abdomen Vascular disorders are more likely to affect the small bowel than the large bowel. Arterial supply to the intestines is provided by the superior and inferior mesenteric arteries (SMA and IMA respectively), both of which are direct branches of the aorta. The superior mesenteric artery supplies: Small bowel Ascending and proximal two-thirds of the transverse colon The inferior mesenteric artery supplies: Distal one-third of the transverse colon Descending colon Sigmoid colon Of note, the splenic flexure, or the junction between the transverse and descending colon, is supplied by the most distal portions o
https://en.wikipedia.org/wiki/Bitopological%20space
In mathematics, a bitopological space is a set endowed with two topologies. Typically, if the set is X and the topologies are τ₁ and τ₂, then the bitopological space is referred to as (X, τ₁, τ₂). The notion was introduced by J. C. Kelly in the study of quasimetrics, i.e. distance functions that are not required to be symmetric. Continuity A map f : X → X' from a bitopological space (X, τ₁, τ₂) to another bitopological space (X', σ₁, σ₂) is called continuous or sometimes pairwise continuous if f is continuous both as a map from (X, τ₁) to (X', σ₁) and as a map from (X, τ₂) to (X', σ₂). Bitopological variants of topological properties Corresponding to well-known properties of topological spaces, there are versions for bitopological spaces. A bitopological space (X, τ₁, τ₂) is pairwise compact if each cover {Uᵢ | i ∈ I} of X with Uᵢ ∈ τ₁ ∪ τ₂ contains a finite subcover. In this case, {Uᵢ | i ∈ I} must contain at least one member from τ₁ and at least one member from τ₂. A bitopological space (X, τ₁, τ₂) is pairwise Hausdorff if for any two distinct points x, y ∈ X there exist disjoint U₁ ∈ τ₁ and U₂ ∈ τ₂ with x ∈ U₁ and y ∈ U₂. A bitopological space (X, τ₁, τ₂) is pairwise zero-dimensional if opens in (X, τ₁) which are closed in (X, τ₂) form a basis for τ₁, and opens in (X, τ₂) which are closed in (X, τ₁) form a basis for τ₂. A bitopological space (X, τ₁, τ₂) is called binormal if for every τ₁-closed set F₁ and τ₂-closed set F₂ with F₁ ∩ F₂ = ∅ there are a τ₂-open set U₁ and a τ₁-open set U₂ such that F₁ ⊆ U₁, F₂ ⊆ U₂, and U₁ ∩ U₂ = ∅. Notes
https://en.wikipedia.org/wiki/Reference%20atmospheric%20model
A reference atmospheric model describes how the ideal gas properties (namely: pressure, temperature, density, and molecular weight) of an atmosphere change, primarily as a function of altitude, and sometimes also as a function of latitude, day of year, etc. A static atmospheric model has a more limited domain, excluding time. A standard atmosphere is defined by the World Meteorological Organization as "a hypothetical vertical distribution of atmospheric temperature, pressure and density which, by international agreement, is roughly representative of year-round, midlatitude conditions. Typical usages are as a basis for pressure altimeter calibrations, aircraft performance calculations, aircraft and rocket design, ballistic tables, and meteorological diagrams." For example, the U.S. Standard Atmosphere derives the values for air temperature, pressure, and mass density, as a function of altitude above sea level. Other static atmospheric models may have other outputs, or depend on inputs besides altitude. Basic assumptions The gas which comprises an atmosphere is usually assumed to be an ideal gas, which is to say: P = ρRT/M, where ρ is mass density, M is average molecular weight, P is pressure, T is temperature, and R is the ideal gas constant. The gas is held in place by so-called "hydrostatic" forces. That is to say, for a particular layer of gas at some altitude: the downward (towards the planet) force of its weight, the downward force exerted by pressure in the layer above it, and the upward force exerted by pressure in the layer below, all sum to zero. Mathematically this is the hydrostatic equation: dP/dz = −ρg_0. Finally, these variables describing the system do not change with time; i.e. it is a static system. g_0, the gravitational acceleration, is used here as a constant, with the same value as standard gravity (the average acceleration due to gravity on the surface of the Earth or another large body). For simplicity it doesn't vary with latitude, altitude or location. The variation due to all these
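Combining the two assumptions for a column at constant temperature yields the familiar isothermal barometric formula P(h) = P₀·exp(−M·g₀·h/(R·T)). A minimal numerical sketch follows (constants are for dry air; the isothermal column is a simplification, not the U.S. Standard Atmosphere):

    import math

    R_GAS = 8.314462618   # J/(mol*K), ideal gas constant
    M_AIR = 0.0289644     # kg/mol, mean molecular weight of dry air
    G0    = 9.80665       # m/s^2, standard gravity

    def pressure(h_m, p0=101325.0, t_k=288.15):
        """Pressure (Pa) at altitude h_m for an isothermal column at temperature t_k."""
        return p0 * math.exp(-M_AIR * G0 * h_m / (R_GAS * t_k))

    print(round(pressure(5500)))  # ~53,000 Pa: roughly half of sea-level pressure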
https://en.wikipedia.org/wiki/Debt-to-capital%20ratio
A company's debt-to-capital ratio or D/C ratio is the ratio of its total debt to its total capital, its debt and equity combined. The ratio measures a company's capital structure, financial solvency, and degree of leverage, at a particular point in time. The data to calculate the ratio are found on the balance sheet. Practitioners use different definitions of debt: Any interest-bearing liability. All liabilities, including accounts payable and deferred income. Long-term debt and its associated currently due portion (measures capital structure). Companies alter their D/C ratio by issuing more shares, buying back shares, issuing additional debt, or retiring debt. Definition and Details A measurement of a company's financial leverage, calculated as the company's debt divided by its total capital. Debt includes all short-term and long-term obligations. Total capital includes the company's debt and shareholders' equity, which includes common stock, preferred stock, minority interest and net debt. Calculated as: Debt-To-Capital Ratio = Debt / (Shareholder's Equity + Debt) Companies can finance their operations through either debt or equity. The debt-to-capital ratio gives users an idea of a company's financial structure, or how it is financing its operations, along with some insight into its financial strength. Because this is a non-GAAP measure, in practice, there are many variations of this ratio. Therefore, it is important to pay close attention when reading what is or isn't included in the ratio on a company's financial statements. Interpretation The higher the debt-to-capital ratio, the more debt the company has compared to its equity. This tells investors whether a company is more prone to using debt financing or equity financing. A company with a high debt-to-capital ratio, compared to a general or industry average, may show weak financial strength because the cost of these debts may we
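The computation itself is a one-liner; the sketch below uses purely illustrative figures:

    def debt_to_capital(total_debt, shareholders_equity):
        """D/C ratio = debt / (debt + shareholders' equity)."""
        return total_debt / (total_debt + shareholders_equity)

    # A firm carrying $40M of debt against $60M of equity is 40% debt-financed.
    print(debt_to_capital(40e6, 60e6))  # 0.4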
https://en.wikipedia.org/wiki/Ahmad%20al-Buni
Sharaf al-Din or Shihab al-Din or Muḥyi al-Din Abu al-Abbas Aḥmad ibn Ali ibn Yusuf al-Qurashi al-Sufi, better known as Ahmad al-Buni, was a mathematician and philosopher and a well-known Sufi. Very little is known about him. His writings deal with the esoteric value of letters and topics relating to mathematics, sihr (sorcery) and spirituality. Born in Buna (present-day Annaba, Algeria), al-Buni lived in Egypt and learned from many eminent Sufi masters of his time. A contemporary of Ibn Arabi, he is best known for writing one of the most important books of his era: the Shams al-Ma'arif, a book that is still regarded as the foremost occult text on talismans and divination. Contributions Theurgy Instead of sihr (sorcery), this kind of magic was called Ilm al-Hikmah (Knowledge of the Wisdom), Ilm al-simiyah (Study of the Divine Names) and Ruhaniyat (Spirituality). Most of the so-called mujarrabât ("time-tested methods") books on sorcery in the Muslim world are simplified excerpts from the Shams al-ma`ârif. The book remains the seminal work on theurgy and esoteric arts to this day. Mathematics and science In c. 1200, Ahmad al-Buni showed how to construct magic squares using a simple bordering technique, but he may not have discovered the method himself. Al-Buni wrote about Latin squares and constructed, for example, 4 x 4 Latin squares using letters from one of the 99 names of Allah. His works on traditional healing remain a point of reference among Yoruba Muslim healers in Nigeria and other areas of the Muslim world. Influence His work is said to have influenced the Hurufis and the New Lettrist International. Denis MacEoin in a 1985 article in Studia Iranica said that Al-Buni may also have indirectly influenced the late Shi'i movement of Babism. MacEoin said that Babis made widespread use of talismans and magical letters. Writings Shams al-Maʿārif al-Kubrā (The Great Sun of Gnosis), Cairo, 1928. Sharḥ Ism Allāh al-aʿẓam fī al-rūḥānī, printed in 1357
https://en.wikipedia.org/wiki/Stinespring%20dilation%20theorem
In mathematics, Stinespring's dilation theorem, also called Stinespring's factorization theorem, named after W. Forrest Stinespring, is a result from operator theory that represents any completely positive map on a C*-algebra A as a composition of two completely positive maps each of which has a special form: A *-representation of A on some auxiliary Hilbert space K followed by An operator map of the form T ↦ V*TV. Moreover, Stinespring's theorem is a structure theorem for completely positive maps from a C*-algebra into the algebra of bounded operators on a Hilbert space. Completely positive maps are shown to be simple modifications of *-representations (also called *-homomorphisms). Formulation In the case of a unital C*-algebra, the result is as follows: Theorem. Let A be a unital C*-algebra, H be a Hilbert space, and B(H) be the bounded operators on H. For every completely positive map Φ : A → B(H), there exists a Hilbert space K and a unital *-homomorphism π : A → B(K) such that Φ(a) = V*π(a)V, where V : H → K is a bounded operator. Furthermore, we have ‖Φ(1)‖ = ‖V‖². Informally, one can say that every completely positive map Φ can be "lifted" up to a map of the form V*(·)V. The converse of the theorem is true trivially. So Stinespring's result classifies completely positive maps. Sketch of proof We now briefly sketch the proof. Let K₀ = A ⊗ H. For a, b ∈ A and g, h ∈ H, define ⟨a ⊗ g, b ⊗ h⟩ := ⟨g, Φ(a*b)h⟩ and extend by semi-linearity to all of K₀. This is a Hermitian sesquilinear form because Φ is compatible with the * operation. Complete positivity of Φ is then used to show that this sesquilinear form is in fact positive semidefinite. Since positive semidefinite Hermitian sesquilinear forms satisfy the Cauchy–Schwarz inequality, the subset {x ∈ K₀ : ⟨x, x⟩ = 0} is a subspace. We can remove degeneracy by considering the quotient space K₀ / {x : ⟨x, x⟩ = 0}. The completion of this quotient space is then a Hilbert space, also denoted by K. Next define π(a)(b ⊗ h) = ab ⊗ h and Vh = 1 ⊗ h. One can check that π and V have the desired properties. Notice that V is just the natural algebraic embedding of H into K. One can verify that V*(a ⊗ h) = Φ(a)h holds. In particular V*V = Φ(1) holds so that V is an isometry
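In finite dimensions the construction can be checked numerically. The sketch below is illustrative only: it takes a completely positive map given in the (Heisenberg-picture) Kraus form Φ(a) = Σᵢ Kᵢ*aKᵢ, stacks the Kraus operators into V, and uses the representation π(a) = I ⊗ a on the dilation space:

    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 2, 3   # system dimension, number of Kraus operators

    kraus = [rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)) for _ in range(m)]
    V = np.vstack(kraus)                 # V : C^n -> C^m (x) C^n, Kraus blocks stacked

    def phi(a):
        """Completely positive map in Kraus form: Phi(a) = sum_i K_i^* a K_i."""
        return sum(k.conj().T @ a @ k for k in kraus)

    def pi(a):
        """*-representation on the dilation space: pi(a) = I_m (x) a."""
        return np.kron(np.eye(m), a)

    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    print(np.allclose(phi(a), V.conj().T @ pi(a) @ V))   # True: Phi(a) = V* pi(a) V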
https://en.wikipedia.org/wiki/Ramp%20function
The ramp function is a unary real function, whose graph is shaped like a ramp. It can be expressed by numerous definitions, for example "0 for negative inputs, output equals input for non-negative inputs". The term "ramp" can also be used for other functions obtained by scaling and shifting, and the function in this article is the unit ramp function (slope 1, starting at 0). In mathematics, the ramp function is also known as the positive part. In machine learning, it is commonly known as a ReLU activation function or a rectifier in analogy to half-wave rectification in electrical engineering. In statistics (when used as a likelihood function) it is known as a tobit model. This function has numerous applications in mathematics and engineering, and goes by various names, depending on the context. There are differentiable variants of the ramp function. Definitions The ramp function R(x) may be defined analytically in several ways. Possible definitions are: A piecewise function: R(x) = x for x ≥ 0, and R(x) = 0 otherwise. The max function: R(x) = max(x, 0). The mean of an independent variable and its absolute value (a straight line with unity gradient and its modulus): R(x) = (x + |x|)/2; this can be derived by noting the definition max(a, b) = (a + b + |a − b|)/2, for which a = x and b = 0. The Heaviside step function multiplied by a straight line with unity gradient: R(x) = x·H(x). The convolution of the Heaviside step function with itself: R(x) = (H ∗ H)(x). The integral of the Heaviside step function: R(x) = ∫ from −∞ to x of H(ξ) dξ. Macaulay brackets: R(x) = ⟨x⟩. The positive part of the identity function: R(x) = x⁺ = max(x, 0). Applications The ramp function has numerous applications in engineering, such as in the theory of digital signal processing. In finance, the payoff of a call option is a ramp (shifted by strike price). Horizontally flipping a ramp yields a put option, while vertically flipping (taking the negative) corresponds to selling or being "short" an option. In finance, the shape is widely called a "hockey stick", due to the shape being similar to an ice hockey stick. In statistics, hinge functions of multivariate adaptive regression
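The equivalence of these definitions is easy to verify numerically; the sketch below compares four of them on a grid of sample points:

    import numpy as np

    x = np.linspace(-3.0, 3.0, 7)

    ramp_piecewise = np.where(x >= 0, x, 0.0)   # piecewise definition
    ramp_max       = np.maximum(x, 0.0)         # max(x, 0)
    ramp_mean      = (x + np.abs(x)) / 2.0      # (x + |x|) / 2
    ramp_heavi     = x * np.heaviside(x, 0.5)   # x * H(x)

    print(np.allclose(ramp_piecewise, ramp_max) and
          np.allclose(ramp_max, ramp_mean) and
          np.allclose(ramp_mean, ramp_heavi))   # True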
https://en.wikipedia.org/wiki/African%20swine%20fever%20virus
African swine fever virus (ASFV) is a large, double-stranded DNA virus in the Asfarviridae family. It is the causative agent of African swine fever (ASF). The virus causes a hemorrhagic fever with high mortality rates in domestic pigs; some isolates can cause death of animals as quickly as a week after infection. It persistently infects its natural hosts, warthogs, bushpigs, and soft ticks of the genus Ornithodoros, which likely act as a vector, with no disease signs. It does not cause disease in humans. ASFV is endemic to sub-Saharan Africa and exists in the wild through a cycle of infection between ticks and wild pigs, bushpigs, and warthogs. The disease was first described after European settlers brought pigs into areas endemic with ASFV, and as such, is an example of an emerging infectious disease. ASFV replicates in the cytoplasm of infected cells. It is the only virus with a double-stranded DNA genome known to be transmitted by arthropods. Virology ASFV is a large (175–215 nm), icosahedral, double-stranded DNA virus with a linear genome of 189 kilobases containing more than 180 genes. The number of genes differs slightly among different isolates of the virus. ASFV has similarities to the other large DNA viruses, e.g., poxvirus, iridovirus, and mimivirus. In common with other viral hemorrhagic fevers, the main target cells for replication are those of monocyte, macrophage lineage. Entry of the virus into the host cell is receptor-mediated, but the precise mechanism of endocytosis is presently unclear. The virus encodes enzymes required for replication and transcription of its genome, including elements of a base excision repair system, structural proteins, and many proteins that are not essential for replication in cells, but instead have roles in virus survival and transmission in its hosts. Virus replication takes place in perinuclear factory areas. It is a highly orchestrated process with at least four stages of transcription—immediate-early, early, in
https://en.wikipedia.org/wiki/Fritz%20Carlson
Fritz David Carlson (23 July 1888 – 28 November 1952) was a Swedish mathematician. After the death of Torsten Carleman, he headed the Mittag-Leffler Institute. Carlson's contributions to analysis include Carlson's theorem, the Pólya–Carlson theorem on rational functions, and Carlson's inequality. In number theory, his results include Carlson's theorem on Dirichlet series. Hans Rådström, Germund Dahlquist, and Tord Ganelius were among his students. Notes External links 1888 births 1952 deaths 20th-century Swedish mathematicians Academic staff of the KTH Royal Institute of Technology Mathematical analysts Directors of the Mittag-Leffler Institute People from Vimmerby Municipality
https://en.wikipedia.org/wiki/Kathleen%20Antonelli
Kathleen Rita Antonelli ( McNulty; formerly Mauchly; 12 February 1921 – 20 April 2006), known as Kay McNulty, was an Irish computer programmer and one of the six original programmers of the ENIAC, one of the first general-purpose electronic digital computers. The other five ENIAC programmers were Betty Holberton, Ruth Teitelbaum, Frances Spence, Marlyn Meltzer, and Jean Bartik. Early life and education She was born Kathleen Rita McNulty in Feymore, part of the small village of Creeslough in what was then a Gaeltacht area (Irish-speaking region) of County Donegal in Ulster, the northern province in Ireland, on February 12, 1921, during the Irish War of Independence. She was the third of six children of James and Anne (née Nelis) McNulty. On the night of her birth, her father, an Irish Republican Army training officer, was arrested and imprisoned in Derry Gaol for two years as he was a suspected member of the IRA. On his release, the family emigrated to the United States in October 1924 and settled in the Chestnut Hill section of Philadelphia, Pennsylvania, where he found work as a stonemason. At the time, Kathleen McNulty was unable to speak any English, only Irish; she would remember prayers in Irish for the rest of her life. She attended parochial grade school in Chestnut Hill (1927–1933) and J. W. Hallahan Catholic Girls High School (1933–1938) in Philadelphia. In high school, she had taken a year of algebra, a year of plane geometry, a second year of algebra, and a year of trigonometry and solid geometry. After graduating high school, she enrolled in Chestnut Hill College for Women. During her studies, she took every mathematics course offered, including spherical trigonometry, differential calculus, projective geometry, partial differential equations, and statistics. She graduated with a degree in mathematics in June 1942, one of only a few mathematics majors out of a class of 92 women. During her third year of college, McNulty was looking for relevant job
https://en.wikipedia.org/wiki/Flag%20of%20Australia
The flag of Australia, also known as the Australian Blue Ensign, is based on the British Blue Ensign—a blue field with the Union Jack in the upper hoist quarter—augmented with a large white seven-pointed star (the Commonwealth Star) and a representation of the Southern Cross constellation, made up of five white stars (one small five-pointed star and four larger seven-pointed stars). Australia also has a number of other official flags representing its people and core functions of government. Its original design (with a six-pointed Commonwealth Star) was chosen in 1901 from entries in a competition held following Federation, and was first flown in Melbourne on 3 September 1901, the date proclaimed in 1996 as Australian National Flag Day. A slightly different design was approved by King Edward VII in 1903. The current seven-pointed Commonwealth Star version was introduced by a proclamation dated 8 December 1908. The dimensions were formally gazetted in 1934, and in 1954 the flag became recognised by, and legally defined in, the Flags Act 1953 as the Australian National Flag. Design Devices The Australian flag uses three prominent symbols: the Southern Cross, the Union Jack and the Commonwealth Star. In its original usage as the flag of the United Kingdom of Great Britain and Ireland, the Union Flag combined three heraldic crosses which represent the constituent countries of the United Kingdom (as constituted in 1801): The red St George's Cross of England The white diagonal St Andrew's Cross of Scotland The red diagonal St Patrick's Cross of Ireland The Union Flag acknowledges the history of British settlement in Australia. Historically, it was included in the design as a demonstration of loyalty to the British Empire. The Commonwealth Star, also known as the Federation Star, originally had six points, representing the six federating colonies. In 1908, a seventh point was added to symbolise the Territory of Papua, and any future territories. Another rationale f