https://en.wikipedia.org/wiki/Hartley%20function
The Hartley function is a measure of uncertainty, introduced by Ralph Hartley in 1928. If a sample from a finite set A is picked uniformly at random, the information revealed after the outcome is known is given by the Hartley function H_0(A) = log_b(|A|), where |A| denotes the cardinality of A. If the base of the logarithm is 2, then the unit of uncertainty is the shannon (more commonly known as bit). If it is the natural logarithm, then the unit is the nat. Hartley used a base-ten logarithm, and with this base, the unit of information is called the hartley (aka ban or dit) in his honor. It is also known as the Hartley entropy or max-entropy. Hartley function, Shannon entropy, and Rényi entropy The Hartley function coincides with the Shannon entropy (as well as with the Rényi entropies of all orders) in the case of a uniform probability distribution. It is a special case of the Rényi entropy, namely the Rényi entropy of order zero: H_0(A) = (1/(1 − 0)) · log(|A|) = log(|A|). But it can also be viewed as a primitive construction, since, as emphasized by Kolmogorov and Rényi, the Hartley function can be defined without introducing any notions of probability (see Uncertainty and information by George J. Klir, p. 423). Characterization of the Hartley function The Hartley function only depends on the number of elements in a set, and hence can be viewed as a function on natural numbers. Rényi showed that the Hartley function in base 2 is the only function f mapping natural numbers to real numbers that satisfies f(mn) = f(m) + f(n) (additivity), f(m) ≤ f(m + 1) (monotonicity), and f(2) = 1 (normalization). Condition 1 says that the uncertainty of the Cartesian product of two finite sets A and B is the sum of the uncertainties of A and B. Condition 2 says that a larger set has larger uncertainty. Derivation of the Hartley function We want to show that the Hartley function, log2(n), is the only function mapping natural numbers to real numbers that satisfies f(mn) = f(m) + f(n) (additivity), f(m) ≤ f(m + 1) (monotonicity), and f(2) = 1 (normalization). Let f be a function on positive integers that satisfies the above three properties. From the additive proper
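A minimal Python sketch of the Hartley function described above, showing how the logarithm base selects the unit (shannons, nats, or hartleys); the function name and the 16-element example are our own illustrations, not from the source:

```python
import math

def hartley(cardinality: int, base: float = 2.0) -> float:
    """Hartley function H_0(A) = log_b(|A|) for a finite set of the given size."""
    if cardinality < 1:
        raise ValueError("cardinality must be a positive integer")
    return math.log(cardinality) / math.log(base)

# Uncertainty of a uniform pick from a 16-element set, in three units:
print(hartley(16, 2))        # 4.0 shannons (bits)
print(hartley(16, math.e))   # ~2.77 nats
print(hartley(16, 10))       # ~1.20 hartleys (bans/dits)
```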
https://en.wikipedia.org/wiki/Mouse%20%28set%20theory%29
In set theory, a mouse is a small model of (a fragment of) Zermelo–Fraenkel set theory with desirable properties. The exact definition depends on the context. In most cases, there is a technical definition of "premouse" and an added condition of iterability (referring to the existence of wellfounded iterated ultrapowers): a mouse is then an iterable premouse. The notion of mouse generalizes the concept of a level of Gödel's constructible hierarchy while being able to incorporate large cardinals. Mice are important ingredients of the construction of core models. The concept was isolated by Ronald Jensen in the 1970s and has been used since then in core model constructions of many authors.
https://en.wikipedia.org/wiki/Core%20model
In set theory, the core model is a definable inner model of the universe of all sets. Even though set theorists refer to "the core model", it is not a uniquely identified mathematical object. Rather, it is a class of inner models that under the right set-theoretic assumptions have very special properties, most notably covering properties. Intuitively, the core model is "the largest canonical inner model there is" (Ernest Schimmerling and John R. Steel) and is typically associated with a large cardinal notion. If Φ is a large cardinal notion, then the phrase "core model below Φ" refers to the definable inner model that exhibits the special properties under the assumption that there does not exist a cardinal satisfying Φ. The core model program seeks to analyze large cardinal axioms by determining the core models below them. History The first core model was Kurt Gödel's constructible universe L. Ronald Jensen proved the covering lemma for L in the 1970s under the assumption of the non-existence of zero sharp, establishing that L is the "core model below zero sharp". The work of Solovay isolated another core model L[U], for U an ultrafilter on a measurable cardinal (and its associated "sharp", zero dagger). Together with Tony Dodd, Jensen constructed the Dodd–Jensen core model ("the core model below a measurable cardinal") and proved the covering lemma for it and a generalized covering lemma for L[U]. Mitchell used coherent sequences of measures to develop core models containing multiple or higher-order measurables. Still later, the Steel core model used extenders and iteration trees to construct a core model below a Woodin cardinal. Construction of core models Core models are constructed by transfinite recursion from small fragments of the core model called mice. An important ingredient of the construction is the comparison lemma that allows giving a wellordering of the relevant mice. At the level of strong cardinals and above, one constructs an intermediate count
https://en.wikipedia.org/wiki/Digital%20rhetoric
Digital rhetoric can be generally defined as communication that exists in the digital sphere. As such, digital rhetoric can be expressed in many different forms, including text, images, videos, and software. Due to the increasingly mediated nature of our contemporary society, there are no longer clear distinctions between digital and non-digital environments. This has expanded the scope of digital rhetoric to account for the increased fluidity with which humans interact with technology. The field of digital rhetoric has not yet become well-established. Digital rhetoric largely draws its theory and practices from the tradition of rhetoric as both an analytical tool and a production guide. As a whole, it can be structured as a type of meta-discipline. Due to evolving study, digital rhetoric has held various meanings for different scholars over time. Similarly, digital rhetoric can take on a variety of meanings based on what is being analyzed, which depends on the concept, the forms or objects of study, or the rhetorical approach. Digital rhetoric can also be analyzed through the lenses of different social movements, an approach that expands our understanding of digital rhetoric's reach and influence. The term “digital rhetoric” is distinguished from the term “rhetoric” in part because the definition of the latter has long been debated among scholars. Only a few scholars, such as Elizabeth Losh and Ian Bogost, have formulated explicit definitions of digital rhetoric. One of the most straightforward definitions of “digital rhetoric” is that it is the application of rhetorical theory (Eyman, 13). Definition Evolving definition of 'digital rhetoric' The following subsections detail the evolving definition of 'digital rhetoric' as a term since its creation in 1989. Early definitions (1989–2015) The term was coined by rhetorician Richard A. Lanham in a lecture he delivered in 1989 and first formally put into words in his 1993 essay collection, The Electronic
https://en.wikipedia.org/wiki/Personal%20Genome%20Project
The Personal Genome Project (PGP) is a long-term, large cohort study which aims to sequence and publicize the complete genomes and medical records of 100,000 volunteers, in order to enable research into personal genomics and personalized medicine. It was initiated by Harvard University's George M. Church in 2005. As of November 2017, more than 10,000 volunteers had joined the project. Volunteers were accepted initially if they were permanent residents of the US and were able to submit tissue and/or genetic samples. Later the project was expanded to other countries. The Study The Project was initially launched in the US in 2005 and later extended to Canada (2012), the United Kingdom (2013), Austria (2014), Korea (2015) and China (2017). The project allowed participants to publish the genotype (the full DNA sequence of all 46 chromosomes) of the volunteers, along with extensive information about their phenotype: medical records, various measurements, MRI images, etc. All data were placed within the public domain and made available over the Internet so that researchers could test various hypotheses about the relationships among genotype, environment and phenotype. Participants could decide what data they were comfortable publishing publicly and could choose to upload additional data or remove existing data at their own convenience. An important part of the project was the exploration of the resulting risks to the participants, such as possible discrimination by insurers and employers if the genome shows a predisposition for certain diseases. The PGP is establishing an international network of sites, including the United States (Harvard PGP), Canada (University of Toronto / Hospital for Sick Kids), and other countries that adhere to certain "conforming implementation" criteria such as no promise of anonymity and data return. The Harvard Medical School Institutional Review Board requested that the first set of volunteers include the principal investigator George Church
https://en.wikipedia.org/wiki/Multi%20categories%20security
Multi categories security (MCS) is an access control method in Security-Enhanced Linux that uses categories attached to objects (files) and granted to subjects (processes, ...) at the operating system level. The implementation in Fedora Core 5 is advisory because there is nothing stopping a process from increasing its access. The eventual aim is to make MCS a hierarchical mandatory access control system. Currently, MCS controls access to files and to ptrace or kill processes. It has not yet been decided what level of control MCS should have over access to directories and other file system objects. It is still evolving. MCS access controls are applied after the Domain-Type access controls and after regular DAC (Unix permissions). In the default policy of Fedora Core 5, it is possible to manage up to 256 categories (c0 to c255). It is possible to recompile the policy with a much larger number of categories if required. As part of the Multi-Level Security (MLS) development work, applications such as the CUPS print server will understand the MLS sensitivity labels; CUPS will use them to control printing and to label the printed pages according to their sensitivity level. The MCS data is stored and manipulated in the same way as MLS data; therefore any program which is modified for MCS support will also be expected to support MLS. This will increase the number of applications supporting MLS and therefore make it easier to run MLS (which is one of the reasons for developing MCS). Note that MCS is not a subset of MLS; the Bell–LaPadula model is not applied. If a process has a clearance that dominates the classification of a file, then it gets both read and write access. For example, in a commercial environment you might use categories to map to data from different departments. So you could have c0 for HR data and c1 for Financial data. If a user is running with categories c0 and c1, then they can read HR data and write it to a file labeled for Financial data. In a
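A rough Python sketch of the category-dominance behavior described above: a process whose category set dominates (here, is a superset of) a file's categories gets both read and write access, with no Bell–LaPadula read-up/write-down restrictions. The function and variable names are illustrative and not part of any real SELinux API:

```python
def mcs_dominates(process_cats: set[str], file_cats: set[str]) -> bool:
    """MCS-style check: access is granted when the process's category set
    contains every category on the file (no Bell-LaPadula rules apply)."""
    return file_cats <= process_cats

hr_file = {"c0"}           # HR data
finance_file = {"c1"}      # Financial data
user_cats = {"c0", "c1"}   # a user running with categories c0 and c1

# The user can read HR data and also write to a Financial-labeled file:
print(mcs_dominates(user_cats, hr_file))       # True
print(mcs_dominates(user_cats, finance_file))  # True
```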
https://en.wikipedia.org/wiki/Relative%20viscosity
Relative viscosity (η_r) (a synonym of "viscosity ratio") is the ratio of the viscosity of a solution (η) to the viscosity of the solvent used (η_s): η_r = η / η_s. The significance of relative viscosity is that it allows the effect a polymer has on a solution's viscosity, such as increasing it, to be analyzed. Lead Liquids possess an amount of internal friction that presents itself when stirred in the form of resistance. This resistance arises from the different layers of the liquid reacting to one another as they are stirred. This can be seen in liquids like syrup, which has a higher viscosity than water and exhibits more internal friction when stirred. The ratio of a solution's viscosity to that of its solvent is known as relative viscosity.
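The ratio itself is a one-line computation; a small Python sketch with made-up illustrative values (the numbers are hypothetical, not from the source):

```python
def relative_viscosity(eta_solution: float, eta_solvent: float) -> float:
    """Relative viscosity (viscosity ratio): eta_r = eta_solution / eta_solvent."""
    return eta_solution / eta_solvent

# Hypothetical polymer solution at 1.8 mPa*s in a solvent at 1.0 mPa*s:
print(relative_viscosity(1.8, 1.0))  # 1.8 (dimensionless)
```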
https://en.wikipedia.org/wiki/Herbert%20Wilf
Herbert Saul Wilf (June 13, 1931 – January 7, 2012) was an American mathematician, specializing in combinatorics and graph theory. He was the Thomas A. Scott Professor of Mathematics in Combinatorial Analysis and Computing at the University of Pennsylvania. He wrote numerous books and research papers. Together with Neil Calkin he founded The Electronic Journal of Combinatorics in 1994 and was its editor-in-chief until 2001. Biography Wilf was the author of numerous papers and books, and was adviser and mentor to many students and colleagues. His collaborators include Doron Zeilberger and Donald Knuth. One of Wilf's former students is Richard Garfield, the creator of the collectible card game Magic: The Gathering. He also served as a thesis advisor for E. Roy Weintraub in the late 1960s. Wilf died of a progressive neuromuscular disease in 2012. Awards In 1998, Wilf and Zeilberger received the Leroy P. Steele Prize for Seminal Contribution to Research for their joint paper, "Rational functions certify combinatorial identities" (Journal of the American Mathematical Society, 3 (1990) 147–158). The prize citation reads: "New mathematical ideas can have an impact on experts in a field, on people outside the field, and on how the field develops after the idea has been introduced. The remarkably simple idea of the work of Wilf and Zeilberger has already changed a part of mathematics for the experts, for the high-level users outside the area, and the area itself." Their work has been translated into computer packages that have simplified hypergeometric summation. In 2002, Wilf was awarded the Euler Medal by the Institute of Combinatorics and its Applications. Selected publications 1971: (editor with Frank Harary) Mathematical Aspects of Electrical Networks Analysis, SIAM-AMS Proceedings, Volume 3, American Mathematical Society 1998: (with N. J. Calkin) "The Number of Independent Sets in a Grid Graph", SIAM Journal on Discrete Mathematics Books A=B (
https://en.wikipedia.org/wiki/Pseudogap
In condensed matter physics, a pseudogap describes a state where the Fermi surface of a material possesses a partial energy gap, for example, a band structure state where the Fermi surface is gapped only at certain points. The term pseudogap was coined by Nevill Mott in 1968 to indicate a minimum in the density of states at the Fermi level, N(EF), resulting from Coulomb repulsion between electrons in the same atom, a band gap in a disordered material or a combination of these. In the modern context pseudogap is a term from the field of high-temperature superconductivity which refers to an energy range (normally near the Fermi level) which has very few states associated with it. This is very similar to a true 'gap', which is an energy range that contains no allowed states. Such gaps open up, for example, when electrons interact with the lattice. The pseudogap phenomenon is observed in a region of the phase diagram generic to cuprate high-temperature superconductors, existing in underdoped specimens at temperatures above the superconducting transition temperature. Only certain electrons 'see' this gap. The gap, which should be associated with an insulating state, only exists for electrons traveling parallel to the copper-oxygen bonds. Electrons traveling at 45° to this bond can move freely throughout the crystal. The Fermi surface therefore consists of Fermi arcs forming pockets centered on the corner of the Brillouin zone. In the pseudogap phase these arcs gradually disappear as the temperature is lowered until only four points on the diagonals of the Brillouin zone remain ungapped. On one hand, this could indicate a completely new electronic phase which consumes available states, leaving only a few to pair up and superconduct. On the other hand, the similarity between this partial gap and that in the superconducting state could indicate that the pseudogap results from preformed Cooper pairs. Recently a pseudogap state has also been reported in strongly di
https://en.wikipedia.org/wiki/Luca%20Turin
Luca Turin (born 20 November 1953) is a biophysicist and writer with a long-standing interest in bioelectronics, the sense of smell, perfumery, and the fragrance industry. Early life and education Turin was born in Beirut, Lebanon on 20 November 1953 into an Italian-Argentinian family, and raised in France, Italy and Switzerland. His father, Duccio Turin, was a UN diplomat and chief architect of the Palestinian refugee camps, and his mother, Adela Turin (born Mandelli), is an art historian, designer, and award-winning children's author. Turin studied Physiology and Biophysics at University College London and earned his PhD in 1978. He worked at the CNRS from 1982-1992, and served as lecturer in Biophysics at University College London from 1992-2000. Career After leaving the CNRS, Turin first held a visiting research position at the National Institutes of Health in North Carolina before moving back to London, where he became a lecturer in biophysics at University College London. In 2001 Turin was hired as CTO of start-up company Flexitral, based in Chantilly, Virginia, to pursue rational odorant design based on his theories. In April 2010 he described this role in the past tense, and the company's domain name appears to have been surrendered. In 2010, Turin was based at MIT, working on a project to develop an electronic nose using natural receptors, financed by DARPA. In 2014 he moved to the Institute of Theoretical Physics at the University of Ulm where he was a Visiting Professor. He is a Stavros Niarchos Researcher in the neurobiology division at the Biomedical Sciences Research Center Alexander Fleming in Greece. In 2021 he moved to the University of Buckingham, UK as Professor of Physiology in the Medical School. Vibration theory of olfaction A major prediction of Turin's vibration theory of olfaction is the isotope effect: that the normal and deuterated versions of a compound should smell different due to unique vibration frequencies, despite having the
https://en.wikipedia.org/wiki/Nucleolus%20organizer%20region
Nucleolus organizer regions (NORs) are chromosomal regions crucial for the formation of the nucleolus. In humans, the NORs are located on the short arms of the acrocentric chromosomes 13, 14, 15, 21 and 22, corresponding to the genes RNR1, RNR2, RNR3, RNR4, and RNR5 respectively. These regions code for 5.8S, 18S, and 28S ribosomal RNA. The NORs are "sandwiched" between the repetitive, heterochromatic DNA sequences of the centromeres and telomeres. The exact sequence of these regions was not included in the human reference genome as of 2016, nor in GRCh38.p10, released January 6, 2017. On 28 February 2019, GRCh38.p13 was released, which added the NOR sequences for the short arms of chromosomes 13, 14, 15, 21, and 22. However, it is known that NORs contain tandem copies of ribosomal DNA (rDNA) genes. Some of the flanking sequences proximal and distal to NORs have been reported. The NORs of a loris have been reported to be highly variable. There are also DNA sequences related to rDNA on other chromosomes that may be involved in nucleoli formation. Visualization Barbara McClintock first described the "nucleolar-organizing body" in Zea mays in 1934. In karyotype analysis, a silver stain can be used to identify the NOR. NORs can also be seen in nucleoli using silver stain, and this is being used to investigate cancerous changes. NORs can also be seen using antibodies directed against the protein UBF, which binds to NOR DNA. Molecular biology In addition to UBF, NORs also bind the ATRX protein, treacle, sirtuin-7 and other proteins. UBF has been identified as a mitotic "bookmark" of expressed rDNA, which allows transcription to resume quickly after mitosis. The distal flanking junction (DJ) of the NORs has been shown to associate with the periphery of nucleoli. rDNA operons in Escherichia coli have been found to cluster near each other, similar to a eukaryotic nucleolus. See also Cell nucleus Nucleoid
https://en.wikipedia.org/wiki/Virus%20latency
Virus latency (or viral latency) is the ability of a pathogenic virus to lie dormant (latent) within a cell, denoted as the lysogenic part of the viral life cycle. A latent viral infection is a type of persistent viral infection which is distinguished from a chronic viral infection. Latency is the phase in certain viruses' life cycles in which, after initial infection, proliferation of virus particles ceases. However, the viral genome is not eradicated. The virus can reactivate and begin producing large amounts of viral progeny (the lytic part of the viral life cycle) without the host becoming reinfected by new outside virus, and it stays within the host indefinitely. Virus latency is not to be confused with clinical latency during the incubation period, when a virus is not dormant. Mechanisms Episomal latency Episomal latency refers to the use of genetic episomes during latency. In this latency type, viral genes are stabilized, floating in the cytoplasm or nucleus as distinct objects, either as linear or lariat structures. Episomal latency is more vulnerable to ribozymes or host foreign gene degradation than proviral latency (see below). Herpesviridae One example is the herpes virus family, Herpesviridae, all of which establish latent infection. Herpes viruses include the chickenpox virus and the herpes simplex viruses (HSV-1, HSV-2), all of which establish episomal latency in neurons and leave linear genetic material floating in the cytoplasm. Epstein-Barr virus The Gammaherpesvirinae subfamily is associated with episomal latency established in cells of the immune system, such as B-cells in the case of Epstein–Barr virus. Epstein–Barr virus lytic reactivation (which can be due to chemotherapy or radiation) can result in genome instability and cancer. Herpes simplex virus In the case of herpes simplex (HSV), the virus has been shown to fuse with DNA in neurons, such as those of nerve ganglia, and HSV reactivates upon even minor chromatin loosening with stress, a
https://en.wikipedia.org/wiki/Massbus
The Massbus is a high-performance computer input/output bus designed in the 1970s by Digital Equipment Corporation (DEC). The architecture development was sponsored by Gordon Bell and John Levy was the principal architect. The bus was used by Digital to interconnect its highest-performance computers with magnetic disk and magnetic tape storage equipment. The use of a common bus was intended to allow a single controller design to handle multiple peripheral models, and allowed the PDP-10, PDP-11, and VAX computer families to share a common set of peripherals. At the time there were multiple operating systems for each of the 16-bit, 32-bit, and 36-bit computer lines. The 18-bit PDP-15/40 connected to Massbus peripherals via a PDP-11 front end. An engineering goal was to reduce the need for a new driver per peripheral per operating system per computer family. Also, a major technical goal was to place any magnetic technology changes (data separators) into the storage device rather than in the CPU-attached controller. Thus the CPU I/O or memory bus to Massbus adapter needed no changes for multiple generations of storage technology. A business objective was to provide a subsystem entry price well below that of IBM storage subsystems which used large and expensive controllers unique to each storage technology and processor architecture and were optimized for connecting large numbers of storage devices. The first Massbus device was the RP04, based on Sperry Univac Information Storage Systems's (ISS) clone of the IBM 3330. Subsequently, DEC offered the RP05 and RP06, based on Memorex's 3330 clone. Memorex modified the IBM compatible interface to DEC requirements and moved the data separator electronics into the drive. DEC designed the rest which was mounted in the "bustle" attached to the drive. This set the pattern for future improvements of disk technology to double density 3330, CDC SMD drives, and then "Winchester" technology. Drives were supplied by ISS/Univa
https://en.wikipedia.org/wiki/Future%20interests%20%28actuarial%20science%29
Future interests is the subset of actuarial math that divides enjoyment of property -- usually the right to an income stream either from an annuity, a trust, royalties, or rents -- based usually on the future survival of one or more persons (natural humans, not juridical persons such as corporations).
https://en.wikipedia.org/wiki/Synchronous%20Backplane%20Interconnect
The Synchronous Backplane Interconnect (SBI) was the internal processor-memory bus used by early VAX computers manufactured by the Digital Equipment Corporation of Maynard, Massachusetts. The bus was implemented using Schottky TTL logic levels and allowed multiprocessor operation.
https://en.wikipedia.org/wiki/Bond%20graph
A bond graph is a graphical representation of a physical dynamic system. It allows the conversion of the system into a state-space representation. It is similar to a block diagram or signal-flow graph, with the major difference that the arcs in bond graphs represent bi-directional exchange of physical energy, while those in block diagrams and signal-flow graphs represent uni-directional flow of information. Bond graphs are multi-energy domain (e.g. mechanical, electrical, hydraulic, etc.) and domain neutral. This means a bond graph can incorporate multiple domains seamlessly. The bond graph is composed of the "bonds" which link together "single-port", "double-port" and "multi-port" elements (see below for details). Each bond represents the instantaneous flow of energy (dE/dt), or power. The flow in each bond is denoted by a pair of variables called power variables, akin to conjugate variables, whose product is the instantaneous power of the bond. The power variables are broken into two parts: flow and effort. For example, for the bond of an electrical system, the flow is the current, while the effort is the voltage. By multiplying current and voltage in this example you can get the instantaneous power of the bond. A bond has two other features described briefly here, and discussed in more detail below. One is the "half-arrow" sign convention. This defines the assumed direction of positive energy flow. As with electrical circuit diagrams and free-body diagrams, the choice of positive direction is arbitrary, with the caveat that the analyst must be consistent throughout with the chosen definition. The other feature is the "causality". This is a vertical bar placed on only one end of the bond. It is not arbitrary. As described below, there are rules for assigning the proper causality to a given port, and rules for the precedence among ports. Causality explains the mathematical relationship between effort and flow. The positions of the causalities show which of the power va
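A small illustrative Python sketch of the power-variable convention just described: each bond carries an effort and a flow whose product is the bond's instantaneous power. The class and the example values are hypothetical, purely to make the convention concrete:

```python
from dataclasses import dataclass

@dataclass
class Bond:
    effort: float  # e.g. voltage (electrical), force (mechanical), pressure (hydraulic)
    flow: float    # e.g. current (electrical), velocity (mechanical), volume flow rate

    @property
    def power(self) -> float:
        """Instantaneous power carried by the bond: P = effort * flow."""
        return self.effort * self.flow

# Electrical bond: 12 V effort and 2 A flow carry 24 W of instantaneous power.
print(Bond(effort=12.0, flow=2.0).power)  # 24.0
```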
https://en.wikipedia.org/wiki/Portability%20testing
Portability testing is the process of determining the degree of ease or difficulty with which a software component or application can be effectively and efficiently transferred from one hardware, software or other operational or usage environment to another. The test results, defined by the individual needs of the system, provide a measure of how easily the component or application can be integrated into the environment; these results are then compared to the software system's non-functional requirement of portability for correctness. The levels of correctness are usually measured by the cost to adapt the software to the new environment compared to the cost of redevelopment. Use cases When multiple subsystems share components of a larger system, portability testing can be used to help prevent propagation of errors throughout the system. Changing or upgrading to a newer system, adapting to a new interface, or interfacing a new system in an existing environment are all problems that software systems with longevity will face sooner or later, and properly testing the environment for portability can save on overall cost throughout the life of the system. A general guideline for portability testing is that it should be done if the software system is designed to move from one hardware platform, operating system, or web browser to another. Examples Software designed to run on Mac OS X and Microsoft Windows operating systems. Applications developed to be compatible with Google Android and Apple iOS phones. Video games or other graphics-intensive software intended to work with the OpenGL and DirectX APIs. Software that should be compatible with Google Chrome and Mozilla Firefox browsers. Attributes There are four testing attributes included in portability testing. The ISO 9126 (1991) standard breaks down portability testing attributes as Installability, Compatibility, Adaptability and Replaceability. The ISO 29119 (2013) standard describes Portability with t
https://en.wikipedia.org/wiki/Causal%20perturbation%20theory
Causal perturbation theory is a mathematically rigorous approach to renormalization theory, which makes it possible to put the theoretical setup of perturbative quantum field theory on a sound mathematical basis. It goes back to a seminal work by Henri Epstein and Vladimir Jurko Glaser. Overview When developing quantum electrodynamics in the 1940s, Shin'ichiro Tomonaga, Julian Schwinger, Richard Feynman, and Freeman Dyson discovered that, in perturbative calculations, problems with divergent integrals abounded. The divergences appeared in calculations involving Feynman diagrams with closed loops of virtual particles. It is an important observation that in perturbative quantum field theory, time-ordered products of distributions arise in a natural way and may lead to ultraviolet divergences in the corresponding calculations. From the generalized functions point of view, the problem of divergences is rooted in the fact that the theory of distributions is a purely linear theory, in the sense that the product of two distributions cannot consistently be defined (in general), as was proved by Laurent Schwartz in the 1950s. Epstein and Glaser solved this problem for a special class of distributions that fulfill a causality condition, which itself is a basic requirement in axiomatic quantum field theory. In their original work, Epstein and Glaser studied only theories involving scalar (spinless) particles. Since then, the causal approach has been applied also to a wide range of gauge theories, which represent the most important quantum field theories in modern physics.
https://en.wikipedia.org/wiki/Ternary%20relation
In mathematics, a ternary relation or triadic relation is a finitary relation in which the number of places in the relation is three. Ternary relations may also be referred to as 3-adic, 3-ary, 3-dimensional, or 3-place. Just as a binary relation is formally defined as a set of pairs, i.e. a subset of the Cartesian product of some sets A and B, so a ternary relation is a set of triples, forming a subset of the Cartesian product of three sets A, B and C. An example of a ternary relation in elementary geometry can be given on triples of points, where a triple is in the relation if the three points are collinear. Another geometric example can be obtained by considering triples consisting of two points and a line, where a triple is in the ternary relation if the two points determine (are incident with) the line. Examples Binary functions A function f in two variables, mapping two values from sets A and B, respectively, to a value in C, associates to every pair (a, b) in A × B an element f(a, b) in C. Therefore, its graph consists of pairs of the form ((a, b), f(a, b)). Such pairs in which the first element is itself a pair are often identified with triples. This makes the graph of f a ternary relation between A, B and C, consisting of all triples (a, b, c) satisfying a ∈ A, b ∈ B, and c = f(a, b). Cyclic orders Given any set A whose elements are arranged on a circle, one can define a ternary relation R on A, i.e. a subset of A³ = A × A × A, by stipulating that (a, b, c) ∈ R holds if and only if the elements a, b and c are pairwise different and when going from a to c in a clockwise direction one passes through b. For example, if A = {1, 2, ..., 12} represents the hours on a clock face, then (8, 12, 4) ∈ R holds and (12, 8, 4) ∈ R does not hold. Betweenness relations Ternary equivalence relation Congruence relation The ordinary congruence of arithmetic, a ≡ b (mod m), which holds for three integers a, b, and m if and only if m divides a − b, formally may be considered as a ternary relation. However, usually, this instead is considered as a family of binary relations between the a and
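A quick executable check (Python) of the cyclic-order example above; the helper names and the modular-arithmetic formulation are ours, chosen to match the clockwise description:

```python
def cyclic(a: int, b: int, c: int) -> bool:
    """Ternary relation on clock hours 1..12: True iff a, b, c are pairwise
    different and b is passed when going clockwise from a to c."""
    if len({a, b, c}) < 3:
        return False

    def clockwise_steps(x: int, y: int) -> int:
        # Number of clockwise steps from hour x to hour y on a 12-hour face.
        return (y - x) % 12

    return clockwise_steps(a, b) < clockwise_steps(a, c)

print(cyclic(8, 12, 4))   # True: going clockwise from 8 to 4 passes 12
print(cyclic(12, 8, 4))   # False: going clockwise from 12 to 4 does not pass 8
```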
https://en.wikipedia.org/wiki/Log%20probability
In probability theory and computer science, a log probability is simply a logarithm of a probability. The use of log probabilities means representing probabilities on a logarithmic scale (−∞, 0], instead of the standard unit interval [0, 1]. Since the probabilities of independent events multiply, and logarithms convert multiplication to addition, log probabilities of independent events add. Log probabilities are thus practical for computations, and have an intuitive interpretation in terms of information theory: the negative of the average log probability is the information entropy of an event. Similarly, likelihoods are often transformed to the log scale, and the corresponding log-likelihood can be interpreted as the degree to which an event supports a statistical model. The log probability is widely used in implementations of computations with probability, and is studied as a concept in its own right in some applications of information theory, such as natural language processing. Motivation Representing probabilities in this way has several practical advantages: Speed. Since multiplication is more expensive than addition, taking the product of a high number of probabilities is often faster if they are represented in log form. (The conversion to log form is expensive, but is only incurred once.) Multiplication arises from calculating the probability that multiple independent events occur: the probability that all independent events of interest occur is the product of all these events' probabilities. Accuracy. The use of log probabilities improves numerical stability when the probabilities are very small, because of the way in which computers approximate real numbers. Simplicity. Many probability distributions have an exponential form. Taking the log of these distributions eliminates the exponential function, unwrapping the exponent. For example, the log probability of the normal distribution's probability density function is −((x − μ)²/(2σ²)) − log(σ√(2π)) instead of (1/(σ√(2π))) · exp(−((x − μ)²/(2σ²))). Log probabilities make some mat
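A brief Python sketch of the accuracy point: the direct product of many small probabilities underflows to 0.0 in double precision, while the equivalent sum of log probabilities stays comfortably in range. The numbers are arbitrary illustrations:

```python
import math

probs = [1e-5] * 100  # 100 independent events, each with probability 1e-5

# Direct product underflows: 1e-500 is far below the smallest positive double.
product = 1.0
for p in probs:
    product *= p
print(product)  # 0.0 (underflow)

# Summing log probabilities represents the same quantity without underflow.
log_product = sum(math.log(p) for p in probs)
print(log_product)  # about -1151.29, i.e. 100 * ln(1e-5)
```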
https://en.wikipedia.org/wiki/National%20Information%20Assurance%20Partnership
The National Information Assurance Partnership (NIAP) is a United States government initiative to meet the security testing needs of both information technology consumers and producers that is operated by the National Security Agency (NSA), and was originally a joint effort between NSA and the National Institute of Standards and Technology (NIST). Purpose The long-term goal of NIAP is to help increase the level of trust consumers have in their information systems and networks through the use of cost-effective security testing, evaluation, and validation programs. In meeting this goal, NIAP seeks to: Promote the development and use of evaluated IT products and systems Champion the development and use of national and international standards for IT security (Common Criteria) Foster research and development in IT security requirements definition, test methods, tools, techniques, and assurance metrics Support a framework for international recognition and acceptance of IT security testing and evaluation results Facilitate the development and growth of a commercial security testing industry within the U.S. Services Common Criteria Evaluation and Validation Scheme (CCEVS) Product / System Configuration Guidance Product and Protection Profile Evaluation Consistency Instruction Manuals NIAP Validation Body The principal objective of the NIAP Validation Body is to ensure the provision of competent IT security evaluation and validation services for both government and industry. The Validation Body has the ultimate responsibility for the operation of the CCEVS in accordance with its policies and procedures and, where appropriate, interprets and amends those policies and procedures. The NSA is responsible for providing sufficient resources to the Validation Body so that it may carry out its responsibilities. The Validation Body is led by a Director and Deputy Director selected by NSA management. The Director of the Validation Body reports to the NIAP Director for administrat
https://en.wikipedia.org/wiki/CPU%20Wars
CPU Wars is an underground comic strip by Charles Andres that circulated around Digital Equipment Corporation and other computer manufacturers starting in 1977. It described a hypothetical invasion of Digital's slightly disguised Maynard, Massachusetts ex-woolen mill headquarters (now located in Barnyard, Mass) by troops from IPM, the Impossible to Program Machine Corporation, in a rather blunt-edged parody of IBM. The humor hinged on the differences in style and culture between the invading forces of IPM and the laid-back employees of the Human Equipment Corporation. For example, even at gunpoint, the employees were unable to lead the invading forces to their leaders, because they had no specific leaders as a result of their corporation's use of matrix management. The comic was drawn by a DEC employee, initially anonymous and later self-revealed to be Charles Andres. A compendium of the strips was finally published in 1980. The most notable trace of the comic is the phrase Eat flaming death, supposedly derived from a famously turgid line in a World War II-era anti-Nazi propaganda comic that ran “Eat flaming death, non-Aryan mongrels!” or something of the sort (however, it is also reported that on the Firesign Theatre's 1975 album In The Next World, You're On Your Own a character won the right to scream “Eat flaming death, fascist media pigs” in the middle of Oscar night on a game show; this may have been an influence). The phrase is used in humorously overblown expressions of hostility: “Eat flaming death, EBCDIC users!”
https://en.wikipedia.org/wiki/Apotome%20%28mathematics%29
In the historical study of mathematics, an apotome is a line segment formed from a longer line segment by breaking it into two parts, one of which is commensurable only in power to the whole; the other part is the apotome. In this definition, two line segments are said to be "commensurable only in power" when the ratio of their lengths is an irrational number but the ratio of their squared lengths is rational. Translated into modern algebraic language, an apotome can be interpreted as a quadratic irrational number formed by subtracting one square root of a rational number from another. This concept of the apotome appears in Euclid's Elements beginning in book X, where Euclid defines two special kinds of apotomes. In an apotome of the first kind, the whole is rational, while in an apotome of the second kind, the part subtracted from it is rational; both kinds of apotomes also satisfy an additional condition. Euclid Proposition XIII.6 states that, if a rational line segment is split into two pieces in the golden ratio, then both pieces may be represented as apotomes.
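As a worked illustration of the golden-ratio proposition (the arithmetic is ours, under the definitions above): splitting a segment of length 1 in the golden ratio gives pieces 1/φ and 1 − 1/φ, and each piece is a difference of square roots of rational numbers, hence an apotome:

```latex
\frac{1}{\varphi} = \frac{\sqrt{5}-1}{2} = \sqrt{\tfrac{5}{4}} - \sqrt{\tfrac{1}{4}},
\qquad
1 - \frac{1}{\varphi} = \frac{3-\sqrt{5}}{2} = \sqrt{\tfrac{9}{4}} - \sqrt{\tfrac{5}{4}}.
```

In each difference the two segments are commensurable only in power: the ratio of lengths (for the first piece, √5) is irrational, while the ratio of squared lengths (5) is rational.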
https://en.wikipedia.org/wiki/Ronald%20Jensen
Ronald Björn Jensen (born April 1, 1936) is an American mathematician who lives in Germany, primarily known for his work in mathematical logic and set theory. Career Jensen completed a BA in economics at American University in 1959, and a Ph.D. in mathematics at the University of Bonn in 1964. His supervisor was Gisbert Hasenjaeger. Jensen taught at Rockefeller University, 1969–71, and the University of California, Berkeley, 1971–73. The balance of his academic career was spent in Europe at the University of Bonn, the University of Oslo, the University of Freiburg, the University of Oxford, and the Humboldt-Universität zu Berlin, from which he retired in 2001. He now resides in Berlin. Jensen was honored by the Association for Symbolic Logic as the first Gödel Lecturer in 1990. In 2015, the European Set Theory Society awarded him and John R. Steel the Hausdorff Medal for their paper "K without the measurable". Results Jensen's better-known results include the: Axiomatic set theory NFU, a variant of New Foundations (NF) where extensionality is weakened to allow several sets with no elements, and the proof of NFU's consistency relative to Peano arithmetic; Fine structure theory of the constructible universe L. This work led to his being awarded in 2003 the Leroy P. Steele Prize for Seminal Contribution to Research of the American Mathematical Society for his 1972 paper titled "The fine structure of the constructible hierarchy"; Definitions and proofs of various infinitary combinatorial principles in L, including diamond , square, and morass; Jensen's covering theorem for L; General theory of core models and the construction of the Dodd–Jensen core model; Consistency of CH plus Suslin's hypothesis. Technique of coding the universe by a real. Selected publications Articles Ronald Jensen, 1969, « On the Consistency of a Slight(?) Modification of Quine's NF », Synthese 19: 250–263. With discussion by Quine. The fine structure of the constructible hierarchy,
https://en.wikipedia.org/wiki/Replicon%20%28genetics%29
A replicon is a region of an organism's genome that is independently replicated from a single origin of replication. A bacterial chromosome contains a single origin, and therefore the whole bacterial chromosome is a replicon. The chromosomes of archaea and eukaryotes can have multiple origins of replication, and so their chromosomes may consist of several replicons. The concept of the replicon was formulated in 1963 by François Jacob, Sydney Brenner, and François Cuzin as a part of their replicon model for replication initiation. According to the replicon model, two components control replication initiation: the replicator and the initiator. The replicator is the entire DNA sequence (including, but not limited to, the origin of replication) required to direct the initiation of DNA replication. The initiator is the protein that recognizes the replicator and activates replication initiation. Sometimes in bacteriology, the term "replicon" is only used to refer to chromosomes containing a single origin of replication, and therefore excludes the genomes of archaea and eukaryotes, which can have several origins. Prokaryotes For most prokaryotic chromosomes, the replicon is the entire chromosome. One notable exception comes from archaea, where two Sulfolobus species have been shown to contain three replicons. Examples of bacterial species that have been found to possess multiple replicons include Rhodobacter sphaeroides (two), Vibrio cholerae, and Burkholderia multivorans (three). These "secondary" (or tertiary) chromosomes are often described as molecules that are intermediate between a true chromosome and a plasmid and are sometimes called "chromids". Various Azospirillum species possess seven replicons; A. lipoferum, for instance, has one bacterial chromosome, five chromids, and one plasmid. Plasmids and bacteriophages are usually replicated as single replicons, but large plasmids in Gram-negative bacteria have been shown to carry several replicons. Eukaryotes For eukar
https://en.wikipedia.org/wiki/Cervical%20conization
Cervical conization (CPT codes 57520 (Cold Knife) and 57522 (Loop Excision)) refers to an excision of a cone-shaped sample of tissue from the mucous membrane of the cervix. Conization may be used either for diagnostic purposes as part of a biopsy or for therapeutic purposes to remove pre-cancerous cells. Types include: Cold knife conization (CKC): usually outpatient, occasionally inpatient. Loop electrical excision procedure (LEEP): usually outpatient. Conization of the cervix is a common treatment for dysplasia following abnormal results from a pap smear. Side effects Cervical conization carries a risk of approximately 30% on average that subsequent pregnancies will end in preterm birth, due to cervical incompetence. See also Cervicectomy
https://en.wikipedia.org/wiki/Ferroics
In physics, ferroics is the generic name given to the study of ferromagnets, ferroelectrics, and ferroelastics. Overview The basis of ferroics is to understand the large changes in physical characteristics that occur over a very narrow temperature range. The changes in physical characteristics occur when phase transitions take place around some critical temperature value, normally denoted by T_C. Above this critical temperature, the crystal is in a nonferroic state and does not exhibit the physical characteristic of interest. Upon cooling the material down below T_C, it undergoes a spontaneous phase transition. Such a phase transition typically results in only a small deviation from the nonferroic crystal structure, but in altering the shape of the unit cell the point symmetry of the material is reduced. This breaking of symmetry is physically what allows the formation of the ferroic phase. In ferroelectrics, upon lowering the temperature below T_C, a spontaneous dipole moment is induced along an axis of the unit cell. Although individual dipole moments can sometimes be small, the combined effect of the unit cells gives rise to an electric field over the bulk substance that is not insignificant. An important point about ferroelectrics is that they cannot exist in a centrosymmetric crystal. A centrosymmetric crystal is one where a lattice point (x, y, z) can be mapped onto a lattice point (−x, −y, −z). Ferromagnet is a term that most people are familiar with, and, as with ferroelectrics, the spontaneous magnetization of a ferromagnet can be attributed to a breaking of point symmetry in switching from the paramagnetic to the ferromagnetic phase. In this case, T_C is normally known as the Curie temperature. In ferroelastic crystals, in going from the nonferroic (or prototypic) phase to the ferroic phase, a spontaneous strain is induced. An example of a ferroelastic phase transition is when the crystal structure spontaneously changes from a tetragonal structure (a square prism shape) to a mono
https://en.wikipedia.org/wiki/Fringe%20shift
In interferometry experiments such as the Michelson–Morley experiment, a fringe shift is the behavior of a pattern of “fringes” when the phase relationship between the component sources changes. A fringe pattern can be created in a number of ways, but the stable fringe pattern found in Michelson-type interferometers is caused by the separation of the original source into two separate beams and then recombining them at differing angles of incidence on a viewing surface. The interaction of the waves on a viewing surface alternates between constructive interference and destructive interference, causing alternating lines of dark and light. In the example of a Michelson interferometer, a single fringe represents one wavelength of the source light and is measured from the center of one bright line to the center of the next. The physical width of a fringe is governed by the difference in the angles of incidence of the component beams of light, but regardless of a fringe's physical width, it still represents a single wavelength of light. Historical Context In the 1887 Michelson–Morley experiment, the round-trip distance that the two beams traveled down the precisely equal arms was expected to be made unequal because of the now-deprecated idea that light is constrained to travel as a mechanical wave at the speed c only in the rest frame of the luminiferous aether. The Earth's presumed motion through that frame was believed to cause a local aether "wind" in the moving frame of the interferometer, just as a car passing through still air creates an apparent wind for those inside. It is crucial to avoid the historian's fallacy and note that these experimenters did not expect that a mechanical wave would travel at varying speeds within a homogeneous isotropic medium of aether. Waves have been studied since antiquity and mathematically at least since Jean le Rond d'Alembert in the 1700s. Our modern understanding of the constancy of light, however, grants the additional, new, "n
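The counting rule implied above (one fringe per wavelength of optical path difference) can be summarized compactly; this formula is our restatement, not quoted from the source: a change Δd in the optical path difference between the two beams produces a shift of

```latex
N = \frac{\Delta d}{\lambda}
```

fringes, where λ is the wavelength of the source light.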
https://en.wikipedia.org/wiki/Phillips%20Code
The Phillips Code is a brevity code (shorthand) created in 1879 by Walter P. Phillips (then of the Associated Press) for the rapid transmission of press reports by telegraph. Overview It was created in 1879 by Walter P. Phillips. It defined hundreds of abbreviations and initialisms for commonly used words that news authors and copy desk staff would commonly use. There were subcodes for commodities and stocks called the Market Code, a Baseball Supplement, and single-letter codes for Option Months. The last official edition was published in 1925, but there was also a Market supplement last published in 1909 that was separate. The code consists of a dictionary of common words or phrases and their associated abbreviations. Extremely common terms are represented by a single letter (C: See; Y: Year); those less frequently used gain successively longer abbreviations (Ab: About; Abb: Abbreviate; Abty: Ability; Acmpd: Accompanied). Later, The Evans Basic English Code expanded the 1,760 abbreviations in the Phillips Code to 3,848 abbreviations. Examples of use Using the Phillips Code, this ten-word telegraphic transmission: ABBG LG WORDS CAN SAVE XB AMTS MON AVOG FAPIB expands to this: Abbreviating long words can save exorbitant amounts of money, avoiding filing a petition in bankruptcy. In 1910, an article explaining the basic structure and purpose of the Phillips Code appeared in various US newspapers and magazines. One example given is: T tri o HKT ft mu o SW on Ms roof garden, nw in pg, etc., which the article translates as: The trial of Harry K. Thaw for the murder of Stanford White on Madison Square Roof Garden, now in progress, etc. Notable codes The terms POTUS and SCOTUS originated in the code. SCOTUS appeared in the very first edition of 1879 and POTUS was in use by 1895, and was officially included in the 1923 edition. These abbreviations entered common parlance when news gathering services, in particular, the Associated Press, adopted the terminology.
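A toy Python sketch of how such a dictionary-based brevity code expands text, using only abbreviations quoted in this article; the tiny table and word-by-word approach are illustrative, not the full code (which also compressed phrases and inflected forms):

```python
# A few Phillips Code abbreviations mentioned above (a tiny illustrative subset).
PHILLIPS = {
    "C": "see",
    "Y": "year",
    "AB": "about",
    "ABB": "abbreviate",
    "ABTY": "ability",
    "ACMPD": "accompanied",
    "POTUS": "President of the United States",
    "SCOTUS": "Supreme Court of the United States",
}

def expand(message: str) -> str:
    """Expand a telegraphic message word by word, leaving unknown words as-is."""
    return " ".join(PHILLIPS.get(word.upper(), word) for word in message.split())

print(expand("SCOTUS ACMPD POTUS"))
# -> Supreme Court of the United States accompanied President of the United States
```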
https://en.wikipedia.org/wiki/Cantabrian%20labarum
The Cantabrian labarum (Cantabrian: lábaru cántabru) is a modern interpretation of the ancient military standard known by the Romans as the Cantabrum. It consists of a purple cloth on which there is what would be called in heraldry a "saltire voided" made up of curved lines, with knobs at the end of each line. The name and design of the flag rest on a theory, advocated by several authors, of a relationship between the genesis of the labarum and the military standard called the Cantabrum, thereby identifying both as the same thing, and on the relationship the Codex Theodosianus allegedly established between the Labarum and the Cantabrarii, the school of Roman soldiers in charge of carrying the Cantabrum. Additionally, according to the definition of the Royal Academy of the Spanish language, the labarum is the Roman standard (a military ceremonial flag) on which, under Emperor Constantine's rule, the cross and the Monogram of Christ (Chi-Rho) were drawn. By association of ideas, labarum can refer just to the monogram itself, or even just the cross. Etymologically, the word comes from (p)lab-, which means "to speak" in a number of Celtic languages, many of which have derivatives. For example, in Welsh means "speech", "language", "voice". Ancient Cornish and Breton have lavar, "word", and ancient Irish has : "language", "speech". Legal status The plenary session of the Parliament of Cantabria, at its meeting of March 14, 2016, approved a resolution as a result of the processing of the non-legislative proposal No. 9L/4300-0056 relative to the recognition of the Lábaro. "The Parliament of Cantabria: 1. Recognizes the lábaro as a representative and identity symbol of the Cantabrian people and the values they represent. 2. Urges the institutions and civil society of Cantabria to actively promote and participate in their knowledge and dissemination as an iconographic expression of the identity of the Cantabrian people. Keeping the official character of the flag of the Community of
https://en.wikipedia.org/wiki/Image%20sensor
An image sensor or imager is a sensor that detects and conveys information used to form an image. It does so by converting the variable attenuation of light waves (as they pass through or reflect off objects) into signals, small bursts of current that convey the information. The waves can be light or other electromagnetic radiation. Image sensors are used in electronic imaging devices of both analog and digital types, which include digital cameras, camera modules, camera phones, optical mouse devices, medical imaging equipment, night vision equipment such as thermal imaging devices, radar, sonar, and others. As technology changes, electronic and digital imaging tends to replace chemical and analog imaging. The two main types of electronic image sensors are the charge-coupled device (CCD) and the active-pixel sensor (CMOS sensor). Both CCD and CMOS sensors are based on metal–oxide–semiconductor (MOS) technology, with CCDs based on MOS capacitors and CMOS sensors based on MOSFET (MOS field-effect transistor) amplifiers. Analog sensors for invisible radiation tend to involve vacuum tubes of various kinds, while digital sensors include flat-panel detectors. CCD vs. CMOS sensors The two main types of digital image sensors are the charge-coupled device (CCD) and the active-pixel sensor (CMOS sensor), fabricated in complementary MOS (CMOS) or N-type MOS (NMOS or Live MOS) technologies. Both CCD and CMOS sensors are based on the MOS technology, with MOS capacitors being the building blocks of a CCD, and MOSFET amplifiers being the building blocks of a CMOS sensor. Cameras integrated in small consumer products generally use CMOS sensors, which are usually cheaper and have lower power consumption in battery powered devices than CCDs. CCD sensors are used for high end broadcast quality video cameras, and CMOS sensors dominate in still photography and consumer goods where overall cost is a major concern. Both types of sensor accomplish the same task of capturing light and c
https://en.wikipedia.org/wiki/WormBook
WormBook is an open access, comprehensive collection of original, peer-reviewed chapters covering topics related to the biology of the nematode worm Caenorhabditis elegans (C. elegans). WormBook also includes WormMethods, an up-to-date collection of methods and protocols for C. elegans researchers. WormBook is the online text companion to WormBase, the C. elegans model organism database. Capitalizing on the World Wide Web, WormBook links in-text references (e.g. genes, alleles, proteins, literature citations) with primary biological databases such as WormBase and PubMed. C. elegans was the first multicellular organism to have its genome sequenced and is a model organism for studying developmental genetics and neurobiology. Contents The content of WormBook is categorized into the sections listed below, each filled with a variety of relevant chapters. These sections include: Genetics and genomics Molecular biology Biochemistry Cell Biology Signal transduction Developmental biology Post-embryonic development Sex-determination systems The germ line Neurobiology and behavior Evolution and ecology Disease models and drug discovery WormMethods
https://en.wikipedia.org/wiki/Bioconversion
Bioconversion, also known as biotransformation, is the conversion of organic materials, such as plant or animal waste, into usable products or energy sources by biological processes or agents, such as certain microorganisms, some detritivores, or enzymes. One example is the industrial production of cortisone, in which one step is the bioconversion of progesterone to 11-alpha-Hydroxyprogesterone by Rhizopus nigricans. Another example is the bioconversion of glycerol to 1,3-propanediol, which has been a subject of scientific research for many decades. In the US, the Bioconversion Science and Technology group performs multidisciplinary R&D for the Department of Energy's (DOE) relevant applications of bioprocessing, especially with biomass. Bioprocessing combines the disciplines of chemical engineering, microbiology and biochemistry. The group's primary role is investigation of the use of microorganisms, microbial consortia and microbial enzymes in bioenergy research. New cellulosic ethanol conversion processes have enabled the variety and volume of feedstock that can be bioconverted to expand rapidly. Feedstock now includes materials derived from plant or animal waste such as paper, auto-fluff, tires, fabric, construction materials, municipal solid waste (MSW), sludge, sewage, etc. Three different processes for bioconversion 1 - Enzymatic hydrolysis - a single source of feedstock, switchgrass for example, is mixed with strong enzymes which convert a portion of the cellulosic material into sugars which can then be fermented into ethanol. Genencor and Novozymes are two companies that have received United States government Department of Energy funding for research into reducing the cost of cellulase, a key enzyme in the production of cellulosic ethanol by this process. 2 - Synthesis
https://en.wikipedia.org/wiki/Geniculate%20ganglion
The geniculate ganglion (from Latin genu, for "knee") is a collection of pseudounipolar sensory neurons of the facial nerve located in the facial canal of the head. It receives fibers from the facial nerve. It sends fibers that supply the lacrimal glands, submandibular glands, sublingual glands, tongue, palate, pharynx, external auditory meatus, stapedius muscle, posterior belly of the digastric muscle, stylohyoid muscle, and muscles of facial expression. The geniculate ganglion is one of several ganglia of the head and neck. Like the others, it is a bilaterally distributed structure, with each side of the face having a geniculate ganglion. Structure The geniculate ganglion is located close to the internal auditory meatus. It is covered superiorly by the petrous part of the temporal bone (which is sometimes absent over the ganglion). The geniculate ganglion receives fibers from the motor, sensory, and parasympathetic components of the facial nerve. It contains special sensory neuronal cell bodies for taste, from fibers coming up from the tongue through the chorda tympani and from fibers coming up from the roof of the palate through the greater petrosal nerve. Sensory and parasympathetic inputs are carried into the geniculate ganglion via the nervus intermedius. Motor fibers are carried via the facial nerve proper. The greater petrosal nerve, which carries preganglionic parasympathetic fibers, emerges from the anterior aspect of the ganglion. The motor fibers of the facial nerve proper and parasympathetic fibers to the submandibular and pterygopalatine ganglia do not synapse in the geniculate ganglion. The afferent fibers carrying pain, temperature, and touch from the posterior auricular nerve, as well as those carrying special sensory (taste) fibers from the tongue (via the chorda tympani), do not synapse in the geniculate ganglion. Instead, the cells of the geniculate ganglion relay the signal to the appropriate brainstem nucleus, much like the Dorsal root gan
https://en.wikipedia.org/wiki/Indumentum
In biology, an indumentum (Latin, literally: "garment") is a covering of trichomes (fine "hairs") on a plant or of bristles (rarely scales) of an insect. In plants, indumentum types include: pubescent hirsute pilose lanate villous tomentose stellate scabrous scurfy The indumentum on plants can have a wide variety of functions, including anchorage in climbing plants (e.g., Galium aparine), transpiration control, water absorption (Tillandsia), the reflection of solar radiation, increased water-repellency (e.g., in the aquatic fern Salvinia), protection against insect predation, and the trapping of insects (Drosera, Nepenthes, Stylosanthes). On insects, an indumentum can be pollen-related (as on bees), serve a sensory role (like whiskers), or have varied other uses, including adhesion and the delivery of poison. See also Glossary of botanical terms
https://en.wikipedia.org/wiki/The%20Quantum%20Rose
The Quantum Rose is a science fiction novel by Catherine Asaro which tells the story of Kamoj Argali and Skolian Prince Havyrl Valdoria. The book is set in her Saga of the Skolian Empire. It won the 2001 Nebula Award for Best Novel and the 2001 Affaire de Coeur Award for Best Science Fiction. The first third of the novel appeared as a three-part serialization in Analog magazine in the 1999 May, June and July/August issues. Tor Books published the full novel in 2000. Plot summary The Quantum Rose is a retelling of the Beauty and the Beast folktale in a science fiction setting. In the novel, Kamoj Argali, the governor of an impoverished province on the backward planet Balumil, is betrothed to Jax Ironbridge, ruler of a wealthy neighboring province, an arrangement made for political purposes to save her province from starvation and death. Havyrl (Vyrl) Lionstar, a prince of the Ruby Dynasty, comes to Balumil as part of a governmental plan to deal with the aftermath of an interstellar war. Masked and enigmatic, he has a reputation as a monster among Kamoj's people. Lionstar interferes with Kamoj's culture and destabilizes their government by pushing her into marriage with himself. In the traditional fairy tale, Belle must save her father from the prince transformed into a beast; in The Quantum Rose, Kamoj must save her province from the prince in exile. The book deals with themes about the physical and emotional scars left on the survivors of a war with no clear victor. As such, it is also a story of healing for the characters Kamoj and Lionstar. The second half of The Quantum Rose involves Lionstar's return to his home world with Kamoj, where he becomes the central figure in a planet-wide act of civil disobedience designed to eject an occupying military force that has taken control of his planet. Both the world Balumil in the first half of the novel and the world Lyshriol in the second half fall into the lost colony genre of literature in science ficti
https://en.wikipedia.org/wiki/Plaintext-aware%20encryption
Plaintext-awareness is a notion of security for public-key encryption. A cryptosystem is plaintext-aware if it is difficult for any efficient algorithm to come up with a valid ciphertext without being aware of the corresponding plaintext. From a lay point of view, this is a strange property. Normally, a ciphertext is computed by encrypting a plaintext. If a ciphertext is created this way, its creator would be aware, in some sense, of the plaintext. However, many cryptosystems are not plaintext-aware. As an example, consider the RSA cryptosystem without padding. In the RSA cryptosystem, plaintexts and ciphertexts are both values modulo N (the modulus). Therefore, RSA is not plaintext-aware: one way of generating a ciphertext without knowing the plaintext is to simply choose a random number modulo N. In fact, plaintext-awareness is a very strong property. Any cryptosystem that is semantically secure and is plaintext-aware is actually secure against a chosen-ciphertext attack, since any adversary that chooses ciphertexts would already know the plaintexts associated with them. History The concept of plaintext-aware encryption was developed by Mihir Bellare and Phillip Rogaway in their paper on optimal asymmetric encryption, as a method to prove that a cryptosystem is chosen-ciphertext secure. Further research Limited research on plaintext-aware encryption has been done since Bellare and Rogaway's paper. Although several papers have applied the plaintext-aware technique in proving encryption schemes are chosen-ciphertext secure, only three papers revisit the concept of plaintext-aware encryption itself, all focused on the definition given by Bellare and Rogaway, which inherently requires random oracles. Plaintext-aware encryption is known to exist when a public-key infrastructure is assumed. Also, it has been shown that weaker forms of plaintext-awareness exist under the knowledge of exponent assumption, a non-standard assumption about Diffie-Hellman trip
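To make the unpadded-RSA example concrete, here is a minimal Python sketch (the toy parameters are my own choices, far too small for real use) of how a valid ciphertext can be minted without any knowledge of a corresponding plaintext:

```python
# Illustrative sketch, not a formal definition: textbook RSA without padding
# is not plaintext-aware, because anyone can produce a valid ciphertext
# without knowing its plaintext by sampling a random residue modulo N.
import secrets

# Hypothetical toy parameters for demonstration only.
p, q = 61, 53
N = p * q          # public modulus
e = 17             # public exponent

def encrypt(m: int) -> int:
    """Honest encryption: the encryptor clearly 'knows' the plaintext m."""
    return pow(m, e, N)

# An adversary who has never chosen a plaintext can still produce a
# perfectly valid ciphertext: every residue modulo N is one.
forged_ciphertext = secrets.randbelow(N)
print("valid ciphertext created without knowing its plaintext:", forged_ciphertext)
```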
https://en.wikipedia.org/wiki/Kochen%E2%80%93Specker%20theorem
In quantum mechanics, the Kochen–Specker (KS) theorem, also known as the Bell–Kochen–Specker theorem, is a "no-go" theorem proved by John S. Bell in 1966 and by Simon B. Kochen and Ernst Specker in 1967. It places certain constraints on the permissible types of hidden-variable theories, which try to explain the predictions of quantum mechanics in a context-independent way. The version of the theorem proved by Kochen and Specker also gave an explicit example for this constraint in terms of a finite number of state vectors. The theorem is a complement to Bell's theorem (to be distinguished from the (Bell–)Kochen–Specker theorem of this article). While Bell's theorem established nonlocality to be a feature of any hidden variable theory that recovers the predictions of quantum mechanics, the KS theorem established contextuality to be an inevitable feature of such theories. The theorem proves that there is a contradiction between two basic assumptions of the hidden-variable theories intended to reproduce the results of quantum mechanics: that all hidden variables corresponding to quantum-mechanical observables have definite values at any given time, and that the values of those variables are intrinsic and independent of the device used to measure them. The contradiction is caused by the fact that quantum-mechanical observables need not be commutative. It turns out to be impossible to simultaneously embed all the commuting subalgebras of the algebra of these observables in one commutative algebra, assumed to represent the classical structure of the hidden-variables theory, if the Hilbert space dimension is at least three. The Kochen–Specker theorem excludes hidden-variable theories that assume that elements of physical reality can all be consistently represented simultaneously by the quantum mechanical Hilbert space formalism disregarding the context of a particular framework (technically a projective decomposition of the identity operator) related to the experiment or
https://en.wikipedia.org/wiki/Critical%20resolved%20shear%20stress
In materials science, critical resolved shear stress (CRSS) is the component of shear stress, resolved in the direction of slip, necessary to initiate slip in a grain. Resolved shear stress (RSS) is the shear component of an applied tensile or compressive stress resolved along a slip plane that is other than perpendicular or parallel to the stress axis. The RSS is related to the applied stress by a geometrical factor, m, typically the Schmid factor: τRSS = σapp · m = σapp · cos(φ) · cos(λ), where σapp is the magnitude of the applied tensile stress, φ is the angle between the normal of the slip plane and the direction of the applied force, and λ is the angle between the slip direction and the direction of the applied force. The Schmid factor is most applicable to FCC single-crystal metals, but for polycrystal metals the Taylor factor has been shown to be more accurate. The CRSS is the value of resolved shear stress at which yielding of the grain occurs, marking the onset of plastic deformation. CRSS, therefore, is a material property and is not dependent on the applied load or grain orientation. The CRSS is related to the observed yield strength of the material by the maximum value of the Schmid factor: σy = τCRSS / mmax. CRSS is a constant for crystal families. Hexagonal close-packed crystals, for example, have three main families - basal, prismatic, and pyramidal - with different values for the critical resolved shear stress. Slip systems and resolved shear stress In crystalline metals, slip occurs in specific directions on crystallographic planes, and each combination of slip direction and slip plane will have its own Schmid factor. As an example, for a face-centered cubic (FCC) system the primary slip plane is {111} and primary slip directions exist within the <110> permutation families. The Schmid factor for an axial applied stress along a given direction, acting on a primary {111} slip plane with the shear resolved in a given <110> slip direction, can be calculated by quickly determining if any of the dot product between th
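The Schmid-factor relation above lends itself to a short numerical sketch; the load axis and the (111)[-101] slip system below are illustrative assumptions, not values taken from this article:

```python
# A minimal sketch of Schmid's law, tau_RSS = sigma * cos(phi) * cos(lambda),
# resolving a uniaxial stress onto one assumed FCC slip system.
import numpy as np

def schmid_factor(load_axis, slip_plane_normal, slip_direction):
    """Return m = cos(phi) * cos(lambda) for arbitrary (unnormalized) inputs."""
    load = np.asarray(load_axis, dtype=float)
    n = np.asarray(slip_plane_normal, dtype=float)
    d = np.asarray(slip_direction, dtype=float)
    cos_phi = np.dot(load, n) / (np.linalg.norm(load) * np.linalg.norm(n))
    cos_lam = np.dot(load, d) / (np.linalg.norm(load) * np.linalg.norm(d))
    return cos_phi * cos_lam

# Example: tensile load along [001], slip system (111)[-101] in an FCC crystal.
m = schmid_factor([0, 0, 1], [1, 1, 1], [-1, 0, 1])
print(f"Schmid factor: {m:.3f}")  # ~0.408; slip begins when sigma * m reaches the CRSS
```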
https://en.wikipedia.org/wiki/Hidden%20sector
In particle physics, the hidden sector, also known as the dark sector, is a hypothetical collection of yet-unobserved quantum fields and their corresponding hypothetical particles. The interactions between the hidden sector particles and the Standard Model particles are weak, indirect, and typically mediated through gravity or other new particles. Examples of new hypothetical mediating particles in this class of theories include the dark photon, sterile neutrino, and axion. In many cases, hidden sectors include a new gauge group that is independent of the Standard Model gauge group. Hidden sectors are commonly predicted by models from string theory. They may be relevant as a source of dark matter and supersymmetry breaking, and for solving the muon g−2 anomaly and the beryllium-8 decay anomaly. See also Fifth force Dark energy Dark matter Dark radiation Higgs sector
https://en.wikipedia.org/wiki/Little%20hierarchy%20problem
In particle physics, the little hierarchy problem in the Minimal Supersymmetric Standard Model (MSSM) is a refinement of the hierarchy problem. According to quantum field theory, the mass of the Higgs boson must be rather light for the electroweak theory to work. However, the loop corrections to the mass are naturally much greater; this is known as the hierarchy problem. New physical effects such as supersymmetry may in principle reduce the size of the loop corrections, making the theory natural. However, it is known from experiments that new physics such as superpartners does not occur at very low energy scales, so even if these new particles reduce the loop corrections, they do not reduce them enough to make the renormalized Higgs mass completely natural. The expected value of the Higgs mass is about 10 percent of the size of the loop corrections, which shows that a certain "little" amount of fine-tuning seems necessary. Particle physicists have different opinions as to whether the little hierarchy problem is serious. See also MSSM Higgs mass Naturalness mu problem
https://en.wikipedia.org/wiki/DGP%20model
The DGP model is a model of gravity proposed by Gia Dvali, Gregory Gabadadze, and Massimo Porrati in 2000. The model is popular among some model builders, but has resisted being embedded into string theory. Overview The DGP model assumes the existence of a 4+1-dimensional Minkowski space, within which ordinary 3+1-dimensional Minkowski space is embedded. The model assumes an action consisting of two terms: One term is the usual Einstein–Hilbert action, which involves only the 4-D spacetime dimensions. The other term is the equivalent of the Einstein–Hilbert action, as extended to all 5 dimensions. The 4-D term dominates at short distances, and the 5-D term dominates at long distances. The model was proposed in part in order to reproduce the cosmic acceleration of dark energy without any need for a small but non-zero vacuum energy density. But critics argue that this branch of the theory is unstable. However, the theory remains interesting because of Dvali's claim that the unusual structure of the graviton propagator makes non-perturbative effects important in a seemingly linear regime, such as the solar system. Because there is no four-dimensional, linearized effective theory that reproduces the DGP model for weak-field gravity, the theory avoids the vDVZ discontinuity that otherwise plagues attempts to write down a theory of massive gravity. In 2008, Fang et al. argued that recent cosmological observations (including measurements of baryon acoustic oscillations by the Sloan Digital Sky Survey, and measurements of the cosmic microwave background and type Ia supernovae) are in direct conflict with the DGP cosmology unless a cosmological constant or some other form of dark energy is added. However, this negates the appeal of the DGP cosmology, which accelerates without needing to add dark energy. See also Kaluza–Klein theory Randall–Sundrum model Large extra dimension
https://en.wikipedia.org/wiki/Johannes%20van%20der%20Corput
Johannes Gaultherus van der Corput (4 September 1890 – 16 September 1975) was a Dutch mathematician, working in the field of analytic number theory. He was appointed professor at the University of Fribourg (Switzerland) in 1922, at the University of Groningen in 1923, and at the University of Amsterdam in 1946. He was one of the founders of the Mathematisch Centrum in Amsterdam, of which he also was the first director. From 1953 on he worked in the United States at the University of California, Berkeley, and the University of Wisconsin–Madison. He introduced the van der Corput lemma, a technique for creating an upper bound on the measure of a set drawn from harmonic analysis, and the van der Corput theorem on equidistribution modulo 1. He became a member of the Royal Netherlands Academy of Arts and Sciences in 1929, and a foreign member in 1953. He was a Plenary Speaker of the ICM in 1936 in Oslo. See also van der Corput inequality van der Corput lemma (harmonic analysis) van der Corput's method van der Corput sequence van der Corput's theorem
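As an illustration of the van der Corput sequence named in the list above, the following is the standard base-b radical-inverse construction (an illustrative sketch, not code from any source connected to van der Corput):

```python
# The base-b van der Corput sequence: mirror the base-b digits of n
# about the radix point to get a low-discrepancy point in [0, 1).
def van_der_corput(n: int, base: int = 2) -> float:
    """Return the n-th element of the base-`base` van der Corput sequence."""
    q, denom = 0.0, 1
    while n:
        n, digit = divmod(n, base)
        denom *= base
        q += digit / denom
    return q

# First few base-2 terms: 0, 0.5, 0.25, 0.75, 0.125, ...
print([van_der_corput(i) for i in range(5)])
```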
https://en.wikipedia.org/wiki/Stygofauna
Stygofauna are any fauna that live in groundwater systems or aquifers, such as caves, fissures and vugs. Stygofauna and troglofauna are the two types of subterranean fauna (based on life history). Both are associated with subterranean environments – stygofauna are associated with water, and troglofauna with caves and spaces above the water table. Stygofauna can live within freshwater aquifers and within the pore spaces of limestone, calcrete or laterite, whilst larger animals can be found in cave waters and wells. Stygofaunal animals, like troglofauna, are divided into three groups based on their life history - stygophiles, stygoxenes, and stygobites. Stygophiles inhabit both surface and subterranean aquatic environments, but are not necessarily restricted to either. Stygoxenes are like stygophiles, except that their presence in subterranean waters is only accidental or occasional. Stygophiles and stygoxenes may live for part of their lives in caves, but do not complete their life cycle in them. Stygobites are obligate, or strictly subterranean, aquatic animals and complete their entire life in this environment. Extensive research of stygofauna has been undertaken in countries with ready access to caves and wells such as France, Slovenia, the US and, more recently, Australia. Many species of stygofauna, particularly obligate stygobites, are endemic to specific regions or even individual caves. This makes them an important focus for the conservation of groundwater systems. Diet and lifecycle Stygofauna have adapted to the limited food supply and are extremely energy efficient. Stygofauna feed on plankton, bacteria, and plants found in streams. To survive in an environment where food is scarce and oxygen levels are low, stygofauna often have very low metabolism. As a result, stygofauna may live longer than other terrestrial species. For example, the crayfish Orconectes australis from Shelta Cave in Alabama has been estimated to reproduce at 100 years and live
https://en.wikipedia.org/wiki/Subvocal%20recognition
Subvocal recognition (SVR) is the process of taking subvocalization and converting the detected results to a digital output, aural or text-based. Concept A set of electrodes are attached to the skin of the throat and, without opening the mouth or uttering a sound, the words are recognized by a computer. Subvocal speech recognition deals with electromyograms that are different for each speaker. Therefore, consistency can be thrown off just by the positioning of an electrode. To improve accuracy, researchers in this field are relying on statistical models that get better at pattern-matching the more times a subject "speaks" through the electrodes, but even then there are lapses. At Carnegie Mellon University, researchers found that the same "speaker" with accuracy rates of 94% one day can see that rate drop to 48% a day later; between two different speakers it drops even more. The technology has relevant applications where audible speech is impossible: for astronauts, underwater Navy SEALs, fighter pilots and emergency workers charging into loud, harsh environments. At Worcester Polytechnic Institute in Massachusetts, research is underway to use subvocal information as a control source for computer music instruments. Research and patents With a grant from the U.S. Army, research into synthetic telepathy using subvocalization is taking place at the University of California, Irvine under lead scientist Mike D'Zmura. NASA's Ames Research Laboratory in Mountain View, California, under the supervision of Charles Jorgensen is conducting subvocalization research. The Brain Computer Interface R&D program at Wadsworth Center under the New York State Department of Health has confirmed the existing ability to decipher consonants and vowels from imagined speech, which allows for brain-based communication using imagined speech. US Patents on silent communication technologies include: US Patent 6587729 "Apparatus for audibly communicating speech using the radio frequency
https://en.wikipedia.org/wiki/Altruism%20%28ethics%29
In ethical philosophy, altruism (also called the ethic of altruism, moralistic altruism, and ethical altruism) is an ethical doctrine that holds that the moral value of an individual's actions depends solely on the impact of those actions on other individuals, regardless of the consequences for the actor. James Fieser states the altruist dictum as: "An action is morally right if the consequences of that action are more favorable than unfavorable to everyone except the agent." Auguste Comte's version of altruism calls for living for the sake of others. One who holds to either of these ethics is known as an "altruist". Overview The word "altruism" was coined by Auguste Comte, the French founder of positivism, in order to describe the ethical doctrine he supported. He believed that individuals had a moral obligation to renounce self-interest and live for others. Comte says that: [The] social point of view cannot tolerate the notion of rights, for such notion rests on individualism. We are born under a load of obligations of every kind, to our predecessors, to our successors, to our contemporaries. After our birth these obligations increase or accumulate, for it is some time before we can return any service.... This ["to live for others"], the definitive formula of human morality, gives a direct sanction exclusively to our instincts of benevolence, the common source of happiness and duty. [Man must serve] Humanity, whose we are entirely. The Catholic Encyclopedia says that for Comte's altruism, "The first principle of morality... is the regulative supremacy of social sympathy over the self-regarding instincts." Author Gabriel Moran (professor in the department of Humanities and the Social Sciences, New York University) says "The law and duty of life in altruism [for Comte] was summed up in the phrase: Live for others." Various philosophers define the doctrine in various ways, but all definitions generally revolve around a moral obligat
https://en.wikipedia.org/wiki/Impossible%20world
In philosophical logic, the concept of an impossible world (sometimes called a non-normal world) is used to model certain phenomena that cannot be adequately handled using ordinary possible worlds. An impossible world, w, is the same sort of thing as a possible world (whatever that may be), except that it is in some sense "impossible." Depending on the context, this may mean that some contradictions, statements of the form A ∧ ¬A, are true at w, or that the normal laws of logic, metaphysics, and mathematics fail to hold at w, or both. Impossible worlds are controversial objects in philosophy, logic, and semantics. They have been around since the advent of possible world semantics for modal logic, as well as world-based semantics for non-classical logics, but have yet to find the ubiquitous acceptance that their possible counterparts have found in all walks of philosophy. Argument from ways Possible worlds Possible worlds are often regarded with suspicion, which is why their proponents have struggled to find arguments in their favor. An often-cited argument is called the argument from ways. It defines possible worlds as "ways how things could have been" and relies for its premises and inferences on assumptions from natural language, for example: (1) Hillary Clinton could have won the 2016 US election. (2) So there are other ways how things could have been. (3) Possible worlds are ways how things could have been. (4) So there are other possible worlds. The central step of this argument happens at (2) where the plausible (1) is interpreted in a way that involves quantification over "ways". Many philosophers, following Willard Van Orman Quine, hold that quantification entails ontological commitments, in this case, a commitment to the existence of possible worlds. Quine himself restricted his method to scientific theories, but others have applied it also to natural language, for example, Amie L. Thomasson in her paper entitled Ontology Made Easy. The strength of the argum
https://en.wikipedia.org/wiki/POVM
In functional analysis and quantum information science, a positive operator-valued measure (POVM) is a measure whose values are positive semi-definite operators on a Hilbert space. POVMs are a generalization of projection-valued measures (PVM) and, correspondingly, quantum measurements described by POVMs are a generalization of quantum measurement described by PVMs (called projective measurements). In rough analogy, a POVM is to a PVM what a mixed state is to a pure state. Mixed states are needed to specify the state of a subsystem of a larger system (see purification of quantum state); analogously, POVMs are necessary to describe the effect on a subsystem of a projective measurement performed on a larger system. POVMs are the most general kind of measurement in quantum mechanics, and can also be used in quantum field theory. They are extensively used in the field of quantum information. Definition In the simplest case, of a POVM with a finite number of elements acting on a finite-dimensional Hilbert space, a POVM is a set of positive semi-definite Hermitian matrices $\{F_i\}$ on a Hilbert space that sum to the identity matrix, $\sum_{i=1}^n F_i = I$. In quantum mechanics, the POVM element $F_i$ is associated with the measurement outcome $i$, such that the probability of obtaining it when making a measurement on the quantum state $\rho$ is given by $\text{Prob}(i) = \operatorname{tr}(\rho F_i)$, where $\operatorname{tr}$ is the trace operator. When the quantum state being measured is a pure state $|\psi\rangle$ this formula reduces to $\text{Prob}(i) = \langle\psi|F_i|\psi\rangle$. The simplest case of a POVM generalises the simplest case of a PVM, which is a set of orthogonal projectors $\{\pi_i\}$ that sum to the identity matrix: $\sum_{i=1}^N \pi_i = I$, with $\pi_i \pi_j = \delta_{ij} \pi_i$. The probability formulas for a PVM are the same as for the POVM. An important difference is that the elements of a POVM are not necessarily orthogonal. As a consequence, the number of elements of the POVM can be larger than the dimension of the Hilbert space they act in. On the other hand, the number of elements of the PVM is at most the dimension of the Hilbert space. In general, POVMs can also be def
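A small numerical sketch of these definitions: the three-element "trine" qubit POVM below is a standard textbook example (not one given in this article). It checks that the elements sum to the identity and that the outcome probabilities tr(ρFᵢ) sum to 1, while the POVM has more elements than the Hilbert-space dimension, which no PVM can:

```python
# Illustrative check: a three-element qubit POVM (the symmetric "trine")
# with 3 elements acting on a 2-dimensional Hilbert space.
import numpy as np

def trine_povm():
    """Three rank-1 elements (2/3)|phi_k><phi_k| at 120-degree spacing."""
    elements = []
    for k in range(3):
        theta = 2 * np.pi * k / 3
        phi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
        elements.append(2 / 3 * np.outer(phi, phi))
    return elements

povm = trine_povm()
print(np.allclose(sum(povm), np.eye(2)))       # True: elements sum to the identity

rho = np.array([[1, 0], [0, 0]])               # pure state |0><0|
probs = [np.trace(rho @ F).real for F in povm]
print(probs, sum(probs))                       # probabilities sum to 1
```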
https://en.wikipedia.org/wiki/Interaction-free%20measurement
In physics, interaction-free measurement is a type of measurement in quantum mechanics that detects the position, presence, or state of an object without an interaction occurring between it and the measuring device. Examples include the Renninger negative-result experiment, the Elitzur–Vaidman bomb-testing problem, and certain double-cavity optical systems, such as Hardy's paradox. In quantum computation, such measurements are referred to as counterfactual quantum computation, an idea introduced by physicists Graeme Mitchison and Richard Jozsa. Examples include Keith Bowden's counterfactual mirror array, which describes a digital computer that could be counterfactually interrogated to calculate whether a light beam would fail to pass through a maze. Initially proposed as thought experiments, interaction-free measurements have been experimentally demonstrated in various configurations. Interaction-free measurements have also been proposed as a way to reduce sample damage in electron microscopy. Counterfactual quantum communication In 2012 the idea of counterfactual quantum communication was proposed and demonstrated. Its first achievement was reported in 2017. According to contemporary conceptions of counterfactual quantum communication, information can thereby be exchanged without any physical particle / matter / energy being transferred between the parties, without quantum teleportation and without the information being the absence of a signal. In 2020 research suggested that this is based on some form of relation between the properties of modular angular momentum, with a massless current of modular angular momentum crossing the "transmission channel"; the proposed explanation relies not on "spooky action at a distance" but on the properties of a particle being able to "travel locally through regions from which the particle itself is excluded". See also Counterfactual quantum computation Counterfactual definiteness
https://en.wikipedia.org/wiki/American%20Cryptogram%20Association
The American Cryptogram Association (ACA) is an American non-profit organization devoted to the hobby of cryptography, with an emphasis on types of codes, ciphers, and cryptograms that can be solved either with pencil and paper, or with computers, but not computer-only systems. History The ACA was formed on September 1, 1930. Initially the primary interest was in monoalphabetic substitution ciphers (also known as "single alphabet" or "Aristocrat" puzzles), but this has since extended to dozens of different systems, such as Playfair, autokey, transposition, and Vigenère ciphers. Since some of its members had belonged to the “National Puzzlers' League”, some of the NPL terminology ("nom," "Krewe," etc.) is also used in the ACA. Publications and activities The association has a collection of books and articles on cryptography and related subjects in the library at Kent State University. An annual convention takes place in late August or early September. Recent conventions have been held in Bletchley Park and Fort Lauderdale, Florida. There is also a regular journal called “The Cryptogram”, which first appeared in February, 1932, and has grown to a 28-page bimonthly periodical which includes articles and challenge ciphers. Notable members H. O. Yardley, who used the nom BOZO, first Vice President in 1933. Helen Fouché Gaines, member since 1933, who used the nom PICCOLA, editor of the 1939 book Elementary Cryptanalysis. Rosario Candela, who used the nom ISKANDER, member since June 1934. David Kahn, who used the noms DAKON, ISHCABIBEL and more recently Kahn D. Will Shortz, New York Times Puzzle Editor, who uses the nom WILLz. David Shulman, lexicographer and cryptographer, member since 1933, who used the nom Ab Struse. James Gillogly, who uses the nom SCRYER. Gelett Burgess, American artist, art critic, poet, author and humorist, used the nom TWO O'CLOCK.
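As an illustration of the monoalphabetic substitution ("Aristocrat") ciphers with which the ACA began, here is a toy Python sketch; the key and message are my own examples, not ACA material:

```python
# An Aristocrat-style monoalphabetic substitution: each plaintext letter maps
# to one fixed ciphertext letter, and word breaks are preserved.
import string

PLAIN = string.ascii_uppercase
KEY = "QWERTYUIOPASDFGHJKLZXCVBNM"  # an arbitrary permutation of the alphabet

def encipher(text: str) -> str:
    table = str.maketrans(PLAIN, KEY)
    return text.upper().translate(table)

print(encipher("ATTACK AT DAWN"))  # QZZQEA QZ RQVF
```

Solvers attack such puzzles by frequency analysis and pattern words, which is what keeps them tractable with pencil and paper.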
https://en.wikipedia.org/wiki/Mazur%E2%80%93Ulam%20theorem
In mathematics, the Mazur–Ulam theorem states that if X and Y are normed spaces over R and the mapping f : X → Y is a surjective isometry, then f is affine. It was proved by Stanisław Mazur and Stanisław Ulam in response to a question raised by Stefan Banach. For strictly convex spaces the result is true, and easy, even for isometries which are not necessarily surjective. In this case, for any x and y in X, and for any t in [0, 1], write z = tx + (1 − t)y and denote the closed ball of radius r around a point v by B(v, r). Then z is the unique element of B(x, (1 − t)‖x − y‖) ∩ B(y, t‖x − y‖), so, since f is injective, f(z) is the unique element of B(f(x), (1 − t)‖x − y‖) ∩ B(f(y), t‖x − y‖) and therefore is equal to tf(x) + (1 − t)f(y). Therefore f is an affine map. This argument fails in the general case, because in a normed space which is not strictly convex two tangent balls may meet in some flat convex region of their boundary, not just a single point. See also Aleksandrov–Rassias problem
https://en.wikipedia.org/wiki/192%20%28number%29
192 (one hundred [and] ninety-two) is the natural number following 191 and preceding 193. In mathematics 192 has the prime factorization 2⁶ · 3. Because it has so many small prime factors, it is the smallest number with 14 divisors, namely 1, 2, 3, 4, 6, 8, 12, 16, 24, 32, 48, 64, 96, and 192 itself. Because its only prime factors are 2 and 3, it is a 3-smooth number. 192 is the sum of ten consecutive primes (5 + 7 + 11 + 13 + 17 + 19 + 23 + 29 + 31 + 37). 192 is a Leyland number of the second kind. See also 192 (disambiguation)
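A brute-force check of the divisor claim (an illustrative sketch; the same fact follows by hand from the factorization, since d(2⁶·3) = 7·2 = 14):

```python
# Verify that 192 is the smallest positive integer with exactly 14 divisors.
def num_divisors(n: int) -> int:
    return sum(1 for d in range(1, n + 1) if n % d == 0)

smallest = next(n for n in range(1, 1000) if num_divisors(n) == 14)
print(smallest)  # 192
```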
https://en.wikipedia.org/wiki/Maldivian%20writing%20systems
Several Dhivehi scripts have been used by Maldivians during their history. The early Dhivehi scripts fell into the abugida category, while the more recent Thaana has characteristics of both an abugida and a true alphabet. An ancient form of Nagari script, as well as the Arabic and Devanagari scripts, have also been extensively used in the Maldives, but with a more restricted function. Latin was official only during a very brief period of the islands' history. The first Dhivehi script likely appeared in association with the expansion of Buddhism throughout South Asia. This was over two millennia ago, in the Mauryan period, during emperor Ashoka's time. Manuscripts used by Maldivian Buddhist monks were probably written in a script that slowly evolved into a characteristic Dhivehi form. Few of those ancient documents have been discovered and the early forms of the Maldivian script are only found etched on a few coral rocks and copper plates. Ancient scripts (Evēla Akuru) Dhivehi Akuru "island letters" is a script formerly used to write the Dhivehi language. Unlike the modern Thaana script, Dhivehi Akuru has its origins in the Brahmi script and thus was written from left to right. Dhivehi Akuru was separated into two variants, a more recent and an ancient one, christened "Dives Akuru" and "Evēla Akuru" respectively by Harry Charles Purvis Bell in the early 20th century. Bell was British and studied Maldivian epigraphy when he retired from the colonial government service in Colombo. Bell wrote a monograph on the archaeology, history and epigraphy of the Maldives. He was the first modern scholar to study these ancient writings, and he undertook extensive and serious research on the available epigraphy. The division that Bell made based on the differences he perceived between the two types of Dhivehi scripts is convenient for the study of old Dhivehi documents. Dives Akuru developed from Brahmi. The oldest attested inscription bears a clear resemblance to South
https://en.wikipedia.org/wiki/Ovi%20%28magazine%29
Ovi (meaning "door" in English) is a multilingual non-profit daily publication that carries articles about ideas and opinion. It is based in Helsinki. History and profile Launched in December 2004 by two immigrants to Finland, Asa Butcher and Thanos Kalamidas, Ovi carries contributions on society, politics and culture in a number of different languages. In 2006 Ovi was chosen as a Uranus.fi Success Story. Since 4 September 2006 the site has had daily updates and continues to cover various global issues, including discrimination, inequality, poverty, human rights and children's rights. In January 2007, Ovi came second in Newropeans Magazine's Grands Prix 2006 awards. They were nominated as one of the three finalists in its 'Citizenship - Information' section. The awards recognise people active in the democratisation of the EU. A registered jury of around 1,000 people voted online, awarding Ovi 29% of the vote. See also List of Finnish magazines
https://en.wikipedia.org/wiki/Call%E2%80%93Exner%20bodies
Call–Exner bodies are small, eosinophilic, fluid-filled, punched-out spaces between granulosa cells, giving a follicle-like appearance. The granulosa cells are usually arranged haphazardly around the space. They are pathognomonic for granulosa cell tumors. Histologically, these tumors consist of monotonous islands of granulosa cells with "coffee-bean" nuclei. The same nuclear-groove appearance is noted in Brenner tumour, an epithelial-stromal ovarian tumor distinguishable by nests of transitional epithelial cells (urothelial) with longitudinal nuclear grooves (coffee-bean nuclei) in abundant fibrous stroma. Call–Exner bodies are composed of membrane-packaged secretions of granulosa cells, are related to the formation of liquor folliculi, and are seen among closely arranged granulosa cells. They are named for Emma Louise Call (1847–1937), an American physician, and Sigmund Exner (1846–1926), an Austrian physiologist.
https://en.wikipedia.org/wiki/Rule%2030
Rule 30 is an elementary cellular automaton introduced by Stephen Wolfram in 1983. Using Wolfram's classification scheme, Rule 30 is a Class III rule, displaying aperiodic, chaotic behaviour. This rule is of particular interest because it produces complex, seemingly random patterns from simple, well-defined rules. Because of this, Wolfram believes that Rule 30, and cellular automata in general, are the key to understanding how simple rules produce complex structures and behaviour in nature. For instance, a pattern resembling Rule 30 appears on the shell of the widespread cone snail species Conus textile. Rule 30 has also been used as a random number generator in Mathematica, and has also been proposed as a possible stream cipher for use in cryptography. Rule 30 is so named because 30 is the smallest Wolfram code which describes its rule set (as described below). The mirror image, complement, and mirror complement of Rule 30 have Wolfram codes 86, 135, and 149, respectively. Rule set In all of Wolfram's elementary cellular automata, an infinite one-dimensional array of cellular automaton cells with only two states is considered, with each cell in some initial state. At discrete time intervals, every cell spontaneously changes state based on its current state and the state of its two neighbors. For Rule 30, the rule set which governs the next state of the automaton is: The corresponding formula is [left_cell XOR (central_cell OR right_cell)]. It is called Rule 30 because the binary number 00011110₂ equals 30. The following diagram shows the pattern created, with cells colored based on the previous state of their neighborhood. Darker colors represent "1" and lighter colors represent "0". Time increases down the vertical axis. Structure and properties The following pattern emerges from an initial state in which a single cell with state 1 (shown as black) is surrounded by cells with state 0 (white). [Figure: Rule 30 cellular automaton.] Here, the vertical axis represents time and an
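The rule set above translates directly into a few lines of Python; this sketch assumes periodic (wrap-around) boundaries rather than the infinite array described above, and prints the familiar triangular pattern as text:

```python
# Rule 30: each new cell is left XOR (center OR right), per the formula above,
# starting from a single live cell. Periodic boundaries are an assumption
# made here for simplicity.
def rule30_row(row):
    n = len(row)
    return [row[(i - 1) % n] ^ (row[i] | row[(i + 1) % n]) for i in range(n)]

width, steps = 31, 15
row = [0] * width
row[width // 2] = 1          # single cell with state 1 in the middle
for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    row = rule30_row(row)
```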
https://en.wikipedia.org/wiki/Bird%20hybrid
A bird hybrid is a bird that has two different species as parents. The resulting bird can present with any combination of characteristics from the parent species, from totally identical to completely different. Usually, the bird hybrid shows intermediate characteristics between the two species. A "successful" hybrid is one demonstrated to produce fertile offspring. According to the most recent estimates, about 16% of all wild bird species have been known to hybridize with one another; this number increases to 22% when captive hybrids are taken into account. Several bird species hybridize with multiple other species. For example, the mallard (Anas platyrhynchos) is known to interbreed with at least 40 different species. The ecological and evolutionary consequences of multispecies hybridization remain to be determined. In the wild, some of the most frequently reported hybrids are waterfowl, gulls, hummingbirds, and birds-of-paradise. Mallards, whether of wild or domestic origin, hybridize with other ducks so often that multiple duck species are at risk of extinction because of it. In gulls, Western × Glaucous-winged Gulls (known as "Olympic Gulls") are particularly common; these hybrids are fertile and may be more evolutionarily fit than either parent species. At least twenty different hummingbird hybrid combinations have been reported, and intergeneric hybrids are not uncommon within the family. Wood-warblers are known to hybridize as well, and an unusual three-species warbler hybrid was discovered in May 2018. Hybridization in shorebirds is unusual but reliably recorded. Numerous gamebird, domestic fowl and duck hybrids are known. Captive songbird hybrids are sometimes called mules. Numerous hybrid macaws exist in aviculture and occasionally occur in the wild. Some of these hybrid parrots are fertile with both the parent species and other hybrids. The scientific literature on hybridization in birds has been collected at the Avian Hybrids Project. The reality of
https://en.wikipedia.org/wiki/Lee%20Hwa%20Chung%20theorem
The Lee Hwa Chung theorem is a theorem in symplectic topology. The statement is as follows. Let M be a symplectic manifold with symplectic form ω. Let α be a differential k-form on M which is invariant for all Hamiltonian vector fields. Then: If k is odd, α = 0. If k is even, α = c · ω^(k/2) (the (k/2)-fold wedge power of ω), where c is a constant.
https://en.wikipedia.org/wiki/Inner%20model%20theory
In set theory, inner model theory is the study of certain models of ZFC or some fragment or strengthening thereof. Ordinarily these models are transitive subsets or subclasses of the von Neumann universe V, or sometimes of a generic extension of V. Inner model theory studies the relationships of these models to determinacy, large cardinals, and descriptive set theory. Despite the name, it is considered more a branch of set theory than of model theory. Examples The class of all sets is an inner model containing all other inner models. The first non-trivial example of an inner model was the constructible universe L developed by Kurt Gödel. Every model M of ZF has an inner model LM satisfying the axiom of constructibility, and this will be the smallest inner model of M containing all the ordinals of M. Regardless of the properties of the original model, LM will satisfy the generalized continuum hypothesis and combinatorial axioms such as the diamond principle ◊. HOD, the class of sets that are hereditarily ordinal definable, forms an inner model, which satisfies ZFC. The sets that are hereditarily definable over a countable sequence of ordinals also form an inner model, used in Solovay's theorem. L(R), the smallest inner model containing all real numbers and all ordinals. L[U], the class constructed relative to a normal, non-principal, κ-complete ultrafilter U over an ordinal κ (see zero dagger). Consistency results One important use of inner models is the proof of consistency results. If it can be shown that every model of an axiom A has an inner model satisfying axiom B, then if A is consistent, B must also be consistent. This analysis is most useful when A is an axiom independent of ZFC, for example a large cardinal axiom; it is one of the tools used to rank axioms by consistency strength.
https://en.wikipedia.org/wiki/Angular%20momentum%20operator
In quantum mechanics, the angular momentum operator is one of several related operators analogous to classical angular momentum. The angular momentum operator plays a central role in the theory of atomic and molecular physics and other quantum problems involving rotational symmetry. Such an operator is applied to a mathematical representation of the physical state of a system and yields an angular momentum value if the state has a definite value for it. In both classical and quantum mechanical systems, angular momentum (together with linear momentum and energy) is one of the three fundamental properties of motion. There are several angular momentum operators: total angular momentum (usually denoted J), orbital angular momentum (usually denoted L), and spin angular momentum (spin for short, usually denoted S). The term angular momentum operator can (confusingly) refer to either the total or the orbital angular momentum. Total angular momentum is always conserved; see Noether's theorem. Overview In quantum mechanics, angular momentum can refer to one of three different, but related things. Orbital angular momentum The classical definition of angular momentum is $\mathbf{L} = \mathbf{r} \times \mathbf{p}$. The quantum-mechanical counterparts of these objects share the same relationship: $\mathbf{L} = \mathbf{r} \times \mathbf{p}$, where r is the quantum position operator, p is the quantum momentum operator, × is cross product, and L is the orbital angular momentum operator. L (just like p and r) is a vector operator (a vector whose components are operators), i.e. $\mathbf{L} = (L_x, L_y, L_z)$, where Lx, Ly, Lz are three different quantum-mechanical operators. In the special case of a single particle with no electric charge and no spin, the orbital angular momentum operator can be written in the position basis as: $\mathbf{L} = -i\hbar\,(\mathbf{r} \times \nabla)$, where ∇ is the vector differential operator, del. Spin angular momentum There is another type of angular momentum, called spin angular momentum (more often shortened to spin), represented by the spin operator $\mathbf{S} = (S_x, S_y, S_z)$. Spin is often depicted as a particle literally spinning a
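As standard background not spelled out in the excerpt above, the components of any angular momentum operator satisfy the commutation relation [Jx, Jy] = iħJz (and cyclic permutations). A quick numerical check for the spin-1/2 case, using the Pauli matrices and taking ħ = 1:

```python
# Check [Sx, Sy] = i*hbar*Sz for the spin-1/2 operators S = (hbar/2) * sigma.
import numpy as np

hbar = 1.0
sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = hbar / 2 * np.array([[0, -1j], [1j, 0]])
sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)

commutator = sx @ sy - sy @ sx
print(np.allclose(commutator, 1j * hbar * sz))  # True
```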
https://en.wikipedia.org/wiki/Phosphatidylinositol%203%2C4-bisphosphate
Phosphatidylinositol (3,4)-bisphosphate (PtdIns(3,4)P2) is a minor phospholipid component of cell membranes, yet an important second messenger. The generation of PtdIns(3,4)P2 at the plasma membrane activates a number of important cell signaling pathways. Of all the phospholipids found within the membrane, inositol phospholipids make up less than 10%. Phosphoinositides (PIs), also known as phosphatidylinositol phosphates, are synthesized in the cell's endoplasmic reticulum by the protein phosphatidylinositol synthase (PIS). PIs are highly compartmentalized; their main components include a glycerol backbone, two fatty acid chains enriched with stearic acid and arachidonic acid, and an inositol ring whose phosphate groups' regulation differs between organelles depending on the specific PI and PIP kinases and PIP phosphatases present in the organelle. These kinases and phosphatases conduct phosphorylation and dephosphorylation at the 3′, 4′, and 5′ positions of the inositol sugar head group, producing differing phosphoinositides, including PtdIns(3,4)P2.[1] PI kinases catalyze the binding of phosphate groups while PI phosphatases remove phosphate groups at the three positions on the PI inositol ring, giving seven different combinations of PIs. PtdIns(3,4)P2 is dephosphorylated by the phosphatase INPP4B on the 4′ position of the inositol ring and by the TPTE (transmembrane phosphatases with tensin homology) family of phosphatases on the 3′ position of the inositol ring. The PH domain in a number of proteins binds to PtdIns(3,4)P2, including the PH domain in PKB. The generation of PtdIns(3,4)P2 at the plasma membrane upon the activation of class I PI 3-kinases and SHIP phosphatases causes these proteins to translocate to the plasma membrane, thereby affecting their activity. Class I and II phosphoinositide 3-kinases (PI3Ks) synthesize PtdIns(3,4)P2 by phosphorylating the 3′-OH position of the phosphoinositide PI4P. Phosphatases SHIP1 and SH2-containing inositol 5′-polyphosphata
https://en.wikipedia.org/wiki/Microsoft%20Software%20Assurance
Microsoft Software Assurance (SA) is a Microsoft maintenance program aimed at business users who use Microsoft Windows, Microsoft Office, and other server and desktop applications. The core premise behind SA is to give users the ability to spread payments over several years, while offering "free" upgrades to newer versions during that time period. Overview Microsoft differentiates between a license and Software Assurance. Customers may purchase (depending on the program) a license without Software Assurance, Software Assurance only (but only to be used in combination with an existing license), or both a license and Software Assurance together. The three possibilities are not always available, depending on the program (single license or volume license). Features The full list of benefits, effective March 2006, is as follows: Free upgrades: Subscribers may upgrade to newer versions of their Microsoft software Access to exclusive software products: Windows Fundamentals for Legacy PCs, Windows Vista Enterprise Edition, Windows 7 Enterprise Edition, Windows 8 Enterprise Edition and Microsoft Desktop Optimization Pack are only available to Software Assurance customers Training: Free training from Microsoft and access to Microsoft E-Learning, a series of interactive online training tools for users. This training can only be taken at a Microsoft Certified Partner for Learning Solutions and can only be redeemed for training that is categorized as Microsoft Official Curriculum. Home use: Employees of a company with SA can use an additional copy of Microsoft software at home Access to source code for larger companies (1,500+ desktops) 24x7 telephone and web support Additional error reporting tools Free licenses for additional servers provisioned as "cold backups" of live servers Access to Microsoft TechNet managed newsgroups Access to Microsoft TechNet downloads for 1 user Extended Hotfix support: Typically Microsoft charges for non-security hotfixes after mainstream support for
https://en.wikipedia.org/wiki/Tetrahedral%20molecular%20geometry
In a tetrahedral molecular geometry, a central atom is located at the center with four substituents that are located at the corners of a tetrahedron. The bond angles are cos⁻¹(−1/3) = 109.4712206...° ≈ 109.5° when all four substituents are the same, as in methane (CH₄) as well as its heavier analogues. Methane and other perfectly symmetrical tetrahedral molecules belong to point group Td, but most tetrahedral molecules have lower symmetry. Tetrahedral molecules can be chiral. Tetrahedral bond angle The bond angle for a symmetric tetrahedral molecule such as CH₄ may be calculated using the dot product of two vectors. As shown in the diagram, the molecule can be inscribed in a cube with the tetravalent atom (e.g. carbon) at the cube centre which is the origin of coordinates, O. The four monovalent atoms (e.g. hydrogens) are at four corners of the cube (A, B, C, D) chosen so that no two atoms are at adjacent corners linked by only one cube edge. If the edge length of the cube is chosen as 2 units, then the two bonds OA and OB correspond to the vectors a = (1, –1, 1) and b = (1, 1, –1), and the bond angle θ is the angle between these two vectors. This angle may be calculated from the dot product of the two vectors, defined as a • b = ||a|| ||b|| cos θ where ||a|| denotes the length of vector a. As shown in the diagram, the dot product here is –1 and the length of each vector is √3, so that cos θ = –1/3 and the tetrahedral bond angle θ = arccos(–1/3) ≃ 109.47°. Examples Main group chemistry Aside from virtually all saturated organic compounds, most compounds of Si, Ge, and Sn are tetrahedral. Often tetrahedral molecules feature multiple bonding to the outer ligands, as in xenon tetroxide (XeO₄), the perchlorate ion (ClO₄⁻), the sulfate ion (SO₄²⁻), and the phosphate ion (PO₄³⁻). Thiazyl trifluoride (NSF₃) is tetrahedral, featuring a sulfur-to-nitrogen triple bond. Other molecules have a tetrahedral arrangement of electron pairs around a central atom; for example ammonia (NH₃) with the nitrogen at
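The dot-product calculation just described can be reproduced numerically; this short sketch simply restates the article's own vectors a = (1, −1, 1) and b = (1, 1, −1):

```python
# Tetrahedral bond angle from the dot product of two C-H bond vectors
# of a molecule inscribed in a cube of edge length 2.
import math

a = (1, -1, 1)   # bond O->A
b = (1, 1, -1)   # bond O->B

dot = sum(x * y for x, y in zip(a, b))        # -1
norm = math.sqrt(sum(x * x for x in a))       # sqrt(3), same for b
theta = math.degrees(math.acos(dot / (norm * norm)))
print(f"{theta:.4f} degrees")                 # 109.4712
```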
https://en.wikipedia.org/wiki/Wine%20Olympics
A Wine Olympics was organized by the French food and wine magazine Gault-Millau in 1979. A total of 330 wines from 33 countries were evaluated by 62 experts from ten nationalities. The 1976 contestant Trefethen Vineyards Chardonnay from Napa Valley won the Chardonnay tasting and was judged best in the world. Gran Coronas Mas La Plana 1970 from Spain received first place in the Cabernet Sauvignon blend category. In the Pinot noir competition, the 1975 Eyrie Vineyards Reserve from Oregon placed in the top ten, and the 1975 HMR Pinot Noir from Paso Robles placed third. Tyrrell's Pinot Noir 1976 from Australia was selected for the Gault-Millau World Dozen and placed first. The New York Times noted the limitations of this competition, in that it "did not pretend to include all or even most of the world's best wines." See also Wine competition Blind tasting of wine 1976 Judgment of Paris
https://en.wikipedia.org/wiki/Molecular%20tagging%20velocimetry
Molecular tagging velocimetry (MTV) is a specific form of flow velocimetry, a technique for determining the velocity of currents in fluids such as air and water. In its simplest form, a single "write" laser beam is shot once through the sample space. Along its path an optically induced chemical process is initiated, resulting in the creation of a new chemical species or in changing the internal energy state of an existing one, so that the molecules struck by the laser beam can be distinguished from the rest of the fluid. Such molecules are said to be "tagged". This line of tagged molecules is now transported by the fluid flow. To obtain velocity information, images at two instances in time are obtained and analyzed (often by correlation of the image intensities) to determine the displacement. If the flow is three-dimensional or turbulent the line will not only be displaced, it will also be deformed. Description There are three optical ways via which these tagged molecules can be visualized: fluorescence, phosphorescence and laser-induced fluorescence (LIF). In all three cases molecules relax to a lower state and their excess energy is released as photons. In fluorescence this energy decay occurs very rapidly at atmospheric pressure, thus making "direct" fluorescence impractical for tagging. In phosphorescence the decay is slower, because the transition is quantum-mechanically forbidden. In some "writing" schemes, the tagged molecule ends up in an excited state. If the molecule relaxes through phosphorescence, lasting long enough to see the line displacement, this can be used to track the written line and no additional visualization step is needed. If during tagging the molecule did not reach a phosphorescing state, or relaxed before the molecule was "read", a second step is needed. The tagged molecule is then excited using a second laser beam, employing a wavelength such that it specifically excites the tagged molecule. The molecule will fluoresce and th
https://en.wikipedia.org/wiki/Daedalian%20Opus
Daedalian Opus is a puzzle game for the Game Boy and was released in July 1990. Gameplay The game is essentially a series of 36 jigsaw puzzles with pentominoes that must be assembled into a specific shape. The puzzles start off with rectangular shapes and simple solutions, but the puzzles quickly grow more complex, with odder shapes like a rocket ship, a gun, and even enlarged versions of some of the pentominoes themselves. Each level is timed, and once the timer is started it cannot be stopped until the level is finished. The player starts the game with only three pentomino pieces, and at the completion of each early level, a new piece is awarded. At the final level, the player is given the 2x2 square O tetromino and must complete an 8x8 square puzzle. After completing each level, the player is given a password to access that level at a later time. Each password is a common English four-letter word, so that by guessing common four-letter words, players could potentially access levels they had not actually reached by playing the game. Development and ports The name of the game was inspired by Daedalus, the mythical character of Greek legend who created the labyrinth. A faithful fan version was later coded for the MSX computer system by Karoshi Corporation in 2006 for the game development contest MSXdev'06. The game has been ported to different platforms, such as PC and GP2X.
https://en.wikipedia.org/wiki/Radio-controlled%20boat
A radio-controlled boat is a boat or ship model controlled remotely with radio control equipment. Type Fun sport Electric sport boats are the most common type of boat amongst casual hobbyists. Hobby-quality boat speeds generally start at around 20 mph and go up from there; with the latest in lithium polymer and brushless motor technology, these boats can be just as fast as or faster than their internal-combustion counterparts. Ready-to-run speedboats from AquaCraft, ProBoat and OffshoreElectrics can reach speeds over 40 mph out of the box and with modifications can reach well into the 50-60 mph range. These types of boats are referred to as "hobby grade" and can be found only at hobby shops and retailers. "Toy grade" boats, which are obtained through mass consumer retailers, are generally much slower, and their maximum speeds are usually less than 15 mph. Scale Scale boats are replicas of full-size boats, built to scale. They can be small enough to fit into the palm of a hand, or large, trailer-transported models weighing hundreds of pounds. More often than not they are a miniaturized version of a prototype, built using plans and/or photos, although there are variants that utilize freelance designs. An offshoot of this style of marine RC is radio-controlled submarines. Sailboats Sailboats use the power of the wind acting on sails to propel the boat. Model sailboats are typically controlled via a multi-channel radio transmitter in the hands of the operator with a corresponding receiver in the boat. By changing the position of the two joysticks on the transmitter, signals are sent over two separate channels on a single radio frequency (assigned to the individual boat/operator). On the boat, the radio receiver is connected to two battery-powered electric motors or servos. Signals from the radio transmitter are interpreted by the radio receiver and translated into instructions to change the position of the servos. One servo controls the position of both m
https://en.wikipedia.org/wiki/Overlayer
An overlayer is a layer of adatoms adsorbed onto a surface, for instance onto the surface of a single crystal. On single crystals Adsorbed species on single crystal surfaces are frequently found to exhibit long-range ordering; that is to say that the adsorbed species form a well-defined overlayer structure. Each particular structure may only exist over a limited coverage range of the adsorbate, and in some adsorbate/substrate systems a whole progression of adsorbate structures is formed as the surface coverage is gradually increased. The periodicity of the overlayer (which often is larger than that of the substrate unit cell) can be determined by low-energy electron diffraction (LEED), because there will be additional diffraction beams associated with the overlayer. Types There are two types of overlayers: commensurate and incommensurate. In the former the substrate-adsorbate interaction tends to dominate over any lateral adsorbate-adsorbate interaction, while in the latter the adsorbate-adsorbate interactions are of similar magnitude to those between adsorbate and substrate. Notation An overlayer on a substrate can be notated in either Wood's notation or matrix notation. Wood's notation Wood's notation takes the form M(hkl) (|b1|/|a1| × |b2|/|a2|) Rφ° – A, where M is the chemical symbol of the substrate, A is the chemical symbol of the overlayer, (hkl) are the Miller indices of the surface plane, Rφ° gives the rotation of the overlayer lattice vectors relative to those of the substrate, and the vector magnitudes shown are those of the substrate primitive vectors (a1, a2) and of the overlayer primitive vectors (b1, b2). This notation can only describe commensurate overlayers however, while matrix notation can describe both. Matrix notation Matrix notation differs from Wood's notation in the second term, which is replaced by the matrix G that describes the overlayer primitive vectors in terms of the substrate primitive vectors: b1 = G11 a1 + G12 a2 and b2 = G21 a1 + G22 a2, hence matrix notation has the form M(hkl) G – A. See also Surface reconstruction Superstructure LE
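For commensurate overlayers the two notations can be converted into one another. A minimal sketch, assuming the common convention in which the Wood scale factors n and m act along the two substrate vectors respectively and the same rotation Rφ applies to both (the function name is invented for illustration):

```python
import numpy as np

def wood_to_matrix(a1, a2, n, m, phi_deg=0.0):
    """Convert a commensurate (n x m)R(phi) overlayer description into
    matrix notation: returns G such that b_i = sum_j G_ij a_j."""
    phi = np.radians(phi_deg)
    rot = np.array([[np.cos(phi), -np.sin(phi)],
                    [np.sin(phi),  np.cos(phi)]])
    A = np.array([a1, a2], dtype=float)             # substrate vectors (rows)
    B = np.array([n * rot @ A[0], m * rot @ A[1]])  # overlayer vectors (rows)
    return B @ np.linalg.inv(A)

# (sqrt(3) x sqrt(3))R30 overlayer on a hexagonal substrate:
a1 = [1.0, 0.0]
a2 = [-0.5, np.sqrt(3) / 2]
G = wood_to_matrix(a1, a2, np.sqrt(3), np.sqrt(3), 30.0)
print(np.round(G))   # [[ 2.  1.] [-1.  1.]]  (determinant 3 = area ratio)
```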
https://en.wikipedia.org/wiki/Glossary%20of%20Unified%20Modeling%20Language%20terms
Glossary of Unified Modeling Language (UML) terms provides a compilation of terminology used in all versions of UML, along with their definitions. Any notable distinctions that may exist between versions are noted with the individual entries they apply to. A Abstract - An indicator applied to a classifier (e.g., actor, class, use case) or to some features of a classifier (e.g., a class's operations) showing that the feature is incomplete and is intended not to be instantiated, but to be specialized by other definitions. Abstract class - A class that does not provide a complete declaration, perhaps because it has no implementation method identified for an operation. By declaring a class as abstract, one intends to prohibit direct instantiation of the class. An abstract class cannot directly instantiate objects; it must be inherited from before it can be used. Abstract data type Abstract operation - Unlike attributes, class operations can be abstract, meaning that there is no provided implementation. Generally, a class containing an abstract operation should be marked as an abstract class. An Operation must have a method supplied in some specialized Class before it can be used. Abstraction - The process of picking out common features of objects and procedural entities, and deriving the essential characteristics that distinguish them from other kinds of entities. Action - An action is the fundamental unit of behaviour specification and represents some transformation or processing in the modeled system, such as invoking a method of a class or a sub-activity. Action sequence - Action state - Action steps - Activation - The time during which an object has a method executing. It is often indicated by a thin box or bar superimposed on the Object's lifeline in a Sequence Diagram. Activity diagram - A diagram that describes procedural logic, business processes or workflows. An activity diagram contains a number of Activities connected by Control Flows and Object Flo
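The "abstract class" and "abstract operation" entries map directly onto constructs in object-oriented languages. A minimal Python sketch (the class names are invented for illustration):

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    """Abstract class: declaring it abstract prohibits direct instantiation."""

    @abstractmethod
    def area(self) -> float:
        """Abstract operation: no implementation is provided here; a
        specialized class must supply a method before it can be used."""

class Circle(Shape):
    """Concrete specialization supplying the missing method."""
    def __init__(self, radius: float):
        self.radius = radius

    def area(self) -> float:
        return 3.141592653589793 * self.radius ** 2

# Shape() would raise TypeError; the specialized class can be instantiated:
print(Circle(2.0).area())  # 12.566...
```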
https://en.wikipedia.org/wiki/Blotto%20%28biology%29
In biology, BLOTTO is a blocking reagent made from nonfat dry milk, phosphate buffered saline, and sodium azide. Its name is an almost-acronym of bovine lacto transfer technique optimizer. It constitutes an inexpensive source of nonspecific protein (milk casein) which blocks protein binding sites in a variety of experimental paradigms, notably Southern blots, Western blots, and ELISA. Its use was first reported in 1984 by Johnson and Elder's lab at Scripps. Prior to 1984, partially purified proteins such as bovine serum albumin, ovalbumin, or gelatin from various species had been used as blocking reagents but had the disadvantage of being expensive.
https://en.wikipedia.org/wiki/Invention%20of%20radio
The invention of radio communication was preceded by many decades of establishing theoretical underpinnings, discovery and experimental investigation of radio waves, and engineering and technical developments related to their transmission and detection. These developments allowed Guglielmo Marconi to turn radio waves into a wireless communication system. The idea that the wires needed for the electrical telegraph could be eliminated, creating a wireless telegraph, had been around for a while before the establishment of radio-based communication. Inventors attempted to build systems based on electric conduction, electromagnetic induction, or on other theoretical ideas. Several inventors/experimenters came across the phenomenon of radio waves before their existence was proven; at the time the phenomenon was written off as electromagnetic induction. The discovery of electromagnetic waves, including radio waves, by Heinrich Rudolf Hertz in the 1880s came after theoretical development on the connection between electricity and magnetism that started in the early 1800s. This work culminated in a theory of electromagnetic radiation developed by James Clerk Maxwell by 1873, which Hertz demonstrated experimentally. Hertz considered electromagnetic waves to be of little practical value. Other experimenters, such as Oliver Lodge and Jagadish Chandra Bose, explored the physical properties of electromagnetic waves, and they developed electric devices and methods to improve the transmission and detection of electromagnetic waves. But they apparently did not see the value in developing a communication system based on electromagnetic waves. In the mid-1890s, building on techniques physicists were using to study electromagnetic waves, Guglielmo Marconi developed the first apparatus for long-distance radio communication. On 23 December 1900, the Canadian inventor Reginald A. Fessenden became the first person to send audio (wireless telephony) by means of electromagnetic waves, successfully transmitt
https://en.wikipedia.org/wiki/Optical%20lattice
An optical lattice is formed by the interference of counter-propagating laser beams, creating a spatially periodic polarization pattern. The resulting periodic potential may trap neutral atoms via the Stark shift. Atoms are cooled and congregate at the potential minima (located at the intensity minima for blue-detuned lattices, and at the intensity maxima for red-detuned lattices). The resulting arrangement of trapped atoms resembles a crystal lattice and can be used for quantum simulation. Atoms trapped in the optical lattice may move due to quantum tunneling, even if the potential well depth of the lattice points exceeds the kinetic energy of the atoms, which is similar to electrons in a conductor. However, a superfluid–Mott insulator transition may occur if the interaction energy between the atoms becomes larger than the hopping energy when the well depth is very large. In the Mott insulator phase, atoms will be trapped in the potential minima and cannot move freely, which is similar to electrons in an insulator. In the case of fermionic atoms, if the well depth is further increased the atoms are predicted to form an antiferromagnetic, i.e. Néel, state at sufficiently low temperatures. Parameters There are two important parameters of an optical lattice: the potential well depth and the periodicity. Control of potential depth The potential experienced by the atoms is related to the intensity of the laser used to generate the optical lattice. The potential depth of the optical lattice can be tuned in real time by changing the power of the laser, which is normally controlled by an acousto-optic modulator (AOM). The AOM is tuned to deflect a variable amount of the laser power into the optical lattice. Active power stabilization of the lattice laser can be accomplished by feedback of a photodiode signal to the AOM. Control of periodicity The periodicity of the optical lattice can be tuned by changing the wavelength of the laser or by changing the relative angle between the two laser beams.
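For two beams of wavelength λ crossing at an angle θ, the interference fringe period is d = λ/(2 sin(θ/2)), so counter-propagating beams (θ = 180°) give the familiar d = λ/2, while shallower angles give larger lattice spacings. A short illustrative sketch (the 1064 nm wavelength is just an example; V0 is treated as a free parameter set by the laser intensity):

```python
import numpy as np

def lattice_period(wavelength, angle_deg):
    """Period d = lambda / (2 sin(theta/2)) of the interference pattern
    formed by two beams crossing at angle theta."""
    theta = np.radians(angle_deg)
    return wavelength / (2 * np.sin(theta / 2))

def lattice_potential(x, depth, period):
    """Standing-wave potential V(x) = -V0 cos^2(pi x / d); for a
    red-detuned lattice the minima sit at the intensity maxima."""
    return -depth * np.cos(np.pi * x / period) ** 2

# Counter-propagating 1064 nm beams: the period is half the wavelength.
print(lattice_period(1064e-9, 180.0))   # 5.32e-07 m
# Crossing at 90 degrees stretches the period by a factor sqrt(2).
print(lattice_period(1064e-9, 90.0))    # ~7.52e-07 m
```

This is why changing the relative angle between the beams tunes the periodicity, while the depth is controlled separately through the laser power.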
https://en.wikipedia.org/wiki/Inverse%20tangent%20integral
The inverse tangent integral is a special function, defined by Ti2(x) = ∫0^x (arctan t)/t dt. Equivalently, it can be defined by a power series, or in terms of the dilogarithm, a closely related special function. Definition The inverse tangent integral is defined by Ti2(x) = ∫0^x (arctan t)/t dt. The arctangent is taken to be the principal branch; that is, −π/2 < arctan(t) < π/2 for all real t. Its power series representation is Ti2(x) = x − x^3/3^2 + x^5/5^2 − x^7/7^2 + ⋯, which is absolutely convergent for |x| ≤ 1. The inverse tangent integral is closely related to the dilogarithm Li2 and can be expressed simply in terms of it: Ti2(x) = (Li2(ix) − Li2(−ix))/(2i). That is, Ti2(x) = Im(Li2(ix)) for all real x. Properties The inverse tangent integral is an odd function: Ti2(−x) = −Ti2(x). The values of Ti2(x) and Ti2(1/x) are related by the identity Ti2(x) − Ti2(1/x) = (π/2) ln x, valid for all x > 0 (or, more generally, for Re(x) > 0). This can be proven by differentiating and using the identity arctan(x) + arctan(1/x) = π/2. The special value Ti2(1) is Catalan's constant G. Generalizations Similar to the polylogarithm Lin(x) = Σk≥1 x^k/k^n, the function Tin(x) = Σk≥0 (−1)^k x^(2k+1)/(2k+1)^n is defined analogously. This satisfies the recurrence relation Tin+1(x) = ∫0^x Tin(t)/t dt. Relation to other special functions The inverse tangent integral is related to the Legendre chi function χ2 by Ti2(x) = −i χ2(ix). Note that χ2(x) can be expressed as ∫0^x (artanh t)/t dt, similar to the inverse tangent integral but with the inverse hyperbolic tangent instead. The inverse tangent integral can also be written in terms of the Lerch transcendent: Ti2(x) = (x/4) Φ(−x^2, 2, 1/2). History The notation Ti2 and Tin is due to Lewin. Spence (1809) studied the function, using a different notation. The function was also studied by Ramanujan.
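The defining integral and the identities above are easy to check numerically; a small sketch using the mpmath library (the helper name ti2 is invented):

```python
from mpmath import mp, quad, atan, catalan, log, pi, mpf

mp.dps = 30  # working precision in decimal digits

def ti2(x):
    """Inverse tangent integral by quadrature; the integrand arctan(t)/t
    extends continuously to t = 0 with value 1."""
    return quad(lambda t: atan(t) / t if t else mpf(1), [0, x])

print(ti2(1))                      # 0.9159655941... = Catalan's constant G
print(+catalan)                    # agrees to working precision
# Inversion identity Ti2(x) - Ti2(1/x) = (pi/2) ln x, checked at x = 3:
print(ti2(3) - ti2(mpf(1) / 3))
print(pi / 2 * log(3))
```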
https://en.wikipedia.org/wiki/Ayazi%20syndrome
Ayazi syndrome (or Xq21 deletion syndrome) is a syndrome characterized by choroideremia, congenital deafness and obesity. Signs and symptoms The presentation for this condition is as follows: Mental retardation Deafness at birth Obesity Choroideremia Impaired vision Progressive degeneration of the choroid Genetics Ayazi syndrome's inheritance pattern is described as X-linked recessive. Genes known to be deleted are CHM and POU3F4, both located on the Xq21 locus.
https://en.wikipedia.org/wiki/Asynchronous%20system
The primary focus of this article is asynchronous control in digital electronic systems. In a synchronous system, operations (instructions, calculations, logic, etc.) are coordinated by one or more centralized clock signals. An asynchronous system, in contrast, has no global clock. Asynchronous systems do not depend on strict arrival times of signals or messages for reliable operation. Coordination is achieved using an event-driven architecture triggered by network packet arrival, changes (transitions) of signals, handshake protocols, and other methods. Modularity Asynchronous systems – much like object-oriented software – are typically constructed out of modular 'hardware objects', each with well-defined communication interfaces. These modules may operate at variable speeds, whether due to data-dependent processing, dynamic voltage scaling, or process variation. The modules can then be combined to form a correct working system, without reference to a global clock signal. Typically, low power consumption is obtained since components are activated only on demand. Furthermore, several asynchronous styles have been shown to accommodate clocked interfaces, and thereby support mixed-timing design. Hence, asynchronous systems match well the need for correct-by-construction methodologies in assembling large-scale heterogeneous and scalable systems. Design styles There is a large spectrum of asynchronous design styles, with tradeoffs between robustness and performance (and other parameters such as power). The choice of design style depends on the application target: reliability/ease-of-design vs. speed. The most robust designs use 'delay-insensitive circuits', whose operation is correct regardless of gate and wire delays; however, only limited useful systems can be designed with this style. Slightly less robust, but much more useful, are quasi-delay-insensitive circuits (also known as speed-independent circuits), such as delay-insensitive minterm synthesis, which opera
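The handshake protocols mentioned above can be illustrated in software. Below is a toy four-phase request/acknowledge handshake between a sender and a receiver, written with Python threads (all names are illustrative; real asynchronous hardware signals transitions on wires rather than calling a library):

```python
import threading, queue

channel = queue.Queue(maxsize=1)   # one-place buffer between the modules
req = threading.Event()            # "request" wire
ack = threading.Event()            # "acknowledge" wire

def sender(data):
    for item in data:
        channel.put(item)
        req.set()                  # raise request
        ack.wait(); ack.clear()    # wait for acknowledge, then reset (4-phase)

def receiver(n, out):
    for _ in range(n):
        req.wait(); req.clear()    # wait for request, then reset
        out.append(channel.get())
        ack.set()                  # raise acknowledge

results = []
t1 = threading.Thread(target=sender, args=([1, 2, 3],))
t2 = threading.Thread(target=receiver, args=(3, results))
t2.start(); t1.start(); t1.join(); t2.join()
print(results)   # [1, 2, 3]
```

Each transfer completes only when both sides have observed the other's signal, so the two modules synchronize locally without any shared clock.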
https://en.wikipedia.org/wiki/Gas%20torus
A gas torus is a toroidal cloud of gas or plasma that encircles a planet. In the Solar System, gas tori tend to be produced by the interaction of a satellite's atmosphere with the magnetic field of a planet. The most famous example of this is the Io plasma torus, which is produced by the ionization of roughly 1 ton per second of oxygen and sulfur from the tenuous atmosphere of Jupiter's volcanic moon Io. Before being ionized, these particles are part of a neutral torus, also centered on the orbit of Io. Energetic particle observations also suggest the presence of a neutral torus around the orbit of Jupiter's moon Europa although such a torus would be merged with the outer portions of an Io torus. Other examples include the largely neutral torus of oxygen and hydrogen produced by Saturn's moon Enceladus. The Enceladus and Io tori differ in that particles in the Io torus are predominantly ionized while in the Enceladus torus, the neutral density is much greater than the ion density. After the Voyager encounters, the possibility of a torus of nitrogen produced by Saturn's moon Titan was proposed. Subsequent observations by the Cassini spacecraft showed no clear evidence of such a torus. While neutral nitrogen could not be measured, the ions near the orbit of Titan were primarily hydrogen or water group (O+, OH+, H2O+ and H3O+) from the Enceladus torus. Trace amounts of nitrogen ions were detected but at levels consistent with an Enceladus source. A fictional gas torus is the setting for Larry Niven's novels The Integral Trees and The Smoke Ring, in which a gas giant in orbit around a neutron star generates a gas torus of sufficient density and free oxygen to support life (including humans).
https://en.wikipedia.org/wiki/Internet%20Fibre%20Channel%20Protocol
Internet Fibre Channel Protocol (iFCP) is a gateway-to-gateway network protocol standard that provides Fibre Channel fabric functionality to Fibre Channel devices over an IP network. It has been officially ratified by the Internet Engineering Task Force. It most commonly operates at 1 Gbit/s, 2 Gbit/s, 4 Gbit/s, 8 Gbit/s, and 10 Gbit/s. Technical overview The iFCP protocol enables the implementation of Fibre Channel functionality over an IP network, within which the Fibre Channel switching and routing infrastructure is replaced by IP components and technology. Congestion control, error detection and recovery are provided through the use of TCP (Transmission Control Protocol). The primary objective of iFCP is to allow existing Fibre Channel devices to be networked and interconnected over an IP-based network at wire speeds. The address translation method and the protocol defined by iFCP permit Fibre Channel storage devices and host adapters to be attached to an IP-based fabric using transparent gateways. The main function of the iFCP protocol layer is to transport Fibre Channel frame images between Fibre Channel ports attached both locally and remotely. When transporting frames to a remote Fibre Channel port, iFCP encapsulates the Fibre Channel frames that make up each Fibre Channel information unit and routes them over a predetermined TCP connection across the IP network. See also Fibre Channel over Ethernet (FCoE) Fibre Channel over IP (FCIP) Internet SCSI (iSCSI)
https://en.wikipedia.org/wiki/History%20of%20electrical%20engineering
This article details the history of electrical engineering. The first substantial practical use of electricity was electromagnetism. Ancient developments Long before any knowledge of electricity existed, people were aware of shocks from electric fish. Ancient Egyptian texts dating from 2750 BCE referred to these fish as the "Thunderer of the Nile", and described them as the "protectors" of all other fish. Electric fish were again reported millennia later by ancient Greek, Roman and Arabic naturalists and physicians. Several ancient writers, such as Pliny the Elder and Scribonius Largus, attested to the numbing effect of electric shocks delivered by electric catfish and electric rays, and knew that such shocks could travel along conducting objects. Patients with ailments such as gout or headache were directed to touch electric fish in the hope that the powerful jolt might cure them. Possibly the earliest approach to discovering the identity of lightning with electricity from any other source is to be attributed to the Arabs, who before the 15th century applied the Arabic word for lightning, ra‘ad, to the electric ray. Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers. Thales of Miletus, an ancient Greek philosopher, writing at around 600 BCE, described a form of static electricity, noting that rubbing fur on various substances, such as amber, would cause a particular attraction between the two. He noted that amber buttons could attract light objects such as hair, and that if the amber was rubbed for long enough a spark could even be made to jump. At around 450 BCE Democritus, a later Greek philosopher, developed an atomic theory that was similar to modern atomic theory. His mentor, Leucippus, is credited with this same theory. The hypothesis of Leucippus and Democritus held everything to be composed of atoms. But these atoms,
https://en.wikipedia.org/wiki/Certified%20ethical%20hacker
Certified Ethical Hacker (CEH) is a qualification given by EC-Council and obtained by demonstrating knowledge of assessing the security of computer systems by looking for weaknesses and vulnerabilities in target systems, using the same knowledge and tools as a malicious hacker, but in a lawful and legitimate manner to assess the security posture of a target system. This knowledge is assessed by answering multiple-choice questions regarding various ethical hacking techniques and tools. The code for the CEH exam is 312-50. This certification has now been made a baseline with a progression to the CEH (Practical), launched in March 2018, a test of penetration testing skills in a lab environment where the candidate must demonstrate the ability to apply techniques and use penetration testing tools to compromise various simulated systems within a virtual environment. Ethical hackers are employed by organizations to penetrate networks and computer systems with the purpose of finding and fixing security vulnerabilities. The EC-Council offers another certification, known as Certified Network Defense Architect (CNDA). This certification is designed for United States Government agencies and is available only to members of selected agencies, including some private government contractors, primarily in compliance with DOD Directive 8570.01-M. It is also ANSI accredited and is recognized as GCHQ Certified Training (GCT). Examination Certification is achieved by taking the CEH examination after having either attended training at an Accredited Training Center (ATC) or completed it through EC-Council's learning portal, iClass. If a candidate opts to self-study, an application must be filled out and proof submitted of two years of relevant information security work experience. Those without the required two years of information security related work experience can request consideration of their educational background. The current version of the CEH is V12, released in September 2022. The exa
https://en.wikipedia.org/wiki/United%20States%20Physics%20Olympiad
The United States Physics Olympiad (USAPhO) is a high school physics competition run by the American Association of Physics Teachers and the American Institute of Physics to select the team to represent the United States at the International Physics Olympiad (IPhO). The team is selected through a series of exams testing their problem-solving abilities. The top 20 finalists are invited to a rigorous study camp at the University of Maryland to prepare for the IPhO. History The International Physics Olympiad began in 1967 among Eastern European countries; many western countries soon joined in the 1970s. In 1986, the American Association of Physics Teachers, led by Jack Wilson, organized the United States Physics Team for the first time. The 1986 team was made up of 20 talented high school physics students nominated by their teachers. Five students were selected for the International Physics Olympiad after a rigorous preparation at the University of Maryland. At the 1986 London IPhO, the team brought home three bronze medals. Since then, the US Physics Team has regularly placed in the top ten nations at the international competition. It has accumulated 66 Gold Medals, 48 Silver Medals, 29 Bronze Medals, and 11 Honorable Mentions at IPhO as of 2019. Academic directors Tengiz Bibilashvili (2021–present) JiaJia Dong (2018–2021) Paul Stanley (2008–2018) Robert Shurtz (2006–2008) Mary Mogge (1998–2006) Ed Neuenschwander (1996–1998) Larry D. Kirkpatrick (1988–1996) Alumni Alexandr Wang (2014) Sherry Gong (2006) Natalia Toro (1999) Chris Hirata (1996) Rhiju Das (1995) Steven Gubser (1989) Physics team selection The procedure to select the U.S. Physics Team consists of two exams: the "F=ma" and the "USA Physics Olympiad" (USAPhO). Approximately the top 20 finishers are invited to the U.S. Physics Team training camp. (In some previous years, there was also an intermediate, quarterfinal exam.) F = ma exam Approximately 6,000 students take this first exam, which
https://en.wikipedia.org/wiki/The%20Talking%20Stone
"The Talking Stone" is a science fiction mystery short story by American writer Isaac Asimov, which first appeared in the October 1955 issue of The Magazine of Fantasy and Science Fiction and was reprinted in the 1968 collection Asimov's Mysteries. "The Talking Stone" was the second of Asimov's Wendell Urth stories. Plot summary Larry Verdansky, a repair technician assigned alone on Station Five, is interested in "siliconies", the silicon-based life forms found on some asteroids. The creatures typically grow to a maximum size of by absorbing gamma rays from radioactive ores. Some are telepathic. When the space freighter Robert Q appears at the station with a giant of a "silicony" in diameter, Verdansky deduces that the crew has found an incredibly rich source of uranium. Verdansky contacts the authorities, but before a patrol ship can reach her, the Robert Q is hit by a meteor, killing the three human crew members. The silicony itself is fatally injured from the explosive decompression. When questioned, the dying silicony states that the coordinates of its home are written on "the asteroid". Dr. Wendell Urth deduces that the silicony meant that the numbers were actually engraved on the hull of the Robert Q, disguised as serial and registration numbers, since the ship fit the definition of an asteroid (a small body orbiting the Sun) the ship's crew had read to it from an ancient astronomy book.
https://en.wikipedia.org/wiki/BitTorrent%20protocol%20encryption
Protocol encryption (PE), message stream encryption (MSE) or protocol header encryption (PHE) are related features of some peer-to-peer file-sharing clients, including BitTorrent clients. They attempt to enhance privacy and confidentiality. In addition, they attempt to make traffic harder to identify by third parties including internet service providers (ISPs). However, encryption will not protect one from DMCA notices when sharing infringing content, as one is still uploading material and monitoring firms can simply connect to the swarm. MSE/PE is implemented in BitComet, BitTornado, Deluge, Flashget, KTorrent, libtorrent (used by various BitTorrent clients, including qBittorrent), Mainline, μTorrent, qBittorrent, rTorrent, Transmission, Tixati and Vuze. PHE was implemented in old versions of BitComet. Similar protocol obfuscation is supported in up-to-date versions of some other (non-BitTorrent) systems including eMule. Purpose As of January 2005, BitTorrent traffic made up more than a third of total residential internet traffic, although this dropped to less than 20% as of 2009. Some ISPs deal with this traffic by increasing their capacity whilst others use specialised systems to slow peer-to-peer traffic to cut costs. Obfuscation and encryption make traffic harder to detect and therefore harder to throttle. These systems were designed initially to provide anonymity or confidentiality, but became required in countries where Internet Service Providers were granted the power to throttle BitTorrent users and even ban those they believed were guilty of illegal file sharing. History Early approach Protocol header encryption (PHE) was conceived by RnySmile and first implemented in BitComet version 0.60 on 8 September 2005. Some software like IPP2P claims BitComet traffic is detectable even with PHE. PHE is detectable because only part of the stream is encrypted. Since there are no open specifications of this protocol implementation, the only possibility to suppo
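For a sense of the mechanism: MSE negotiates a shared secret with a Diffie-Hellman exchange and then obfuscates the stream with the RC4 cipher, discarding an initial portion of the keystream (commonly stated as the first 1024 bytes). The sketch below shows only the RC4 obfuscation stage; the key and message are made up, and the handshake and header formats are omitted, so this is not a working MSE implementation:

```python
def rc4(key: bytes, drop: int = 1024):
    """Generator for an RC4 keystream, discarding the first `drop` bytes."""
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = count = 0
    while True:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        count += 1
        if count > drop:
            yield S[(S[i] + S[j]) % 256]

def obfuscate(data: bytes, key: bytes) -> bytes:
    ks = rc4(key)
    return bytes(b ^ next(ks) for b in data)

msg = b"BitTorrent handshake bytes..."
enc = obfuscate(msg, b"shared-secret-from-DH")
print(obfuscate(enc, b"shared-secret-from-DH") == msg)  # True (XOR is symmetric)
```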
https://en.wikipedia.org/wiki/Bigraph
A bigraph can be modelled as the superposition of a graph (the link graph) and a set of trees (the place graph). Each node of the bigraph is part of a graph and also part of some tree that describes how the nodes are nested. Bigraphs can be conveniently and formally displayed as diagrams. They have applications in the modelling of distributed systems for ubiquitous computing and can be used to describe mobile interactions. They have also been used by Robin Milner in an attempt to subsume Calculus of Communicating Systems (CCS) and π-calculus. They have been studied in the context of category theory. Anatomy of a bigraph Aside from nodes and (hyper-)edges, a bigraph may have associated with it one or more regions which are roots in the place forest, and zero or more holes in the place graph, into which other bigraph regions may be inserted. Similarly, to nodes we may assign controls that define identities and an arity (the number of ports for a given node to which link-graph edges may connect). These controls are drawn from a bigraph signature. In the link graph we define inner and outer names, which define the connection points at which coincident names may be fused to form a single link. Foundations A bigraph is a 5-tuple (V, E, ctrl, prnt, link) : ⟨k, X⟩ → ⟨m, Y⟩, where V is a set of nodes, E is a set of edges, ctrl is the control map that assigns controls to nodes, prnt is the parent map that defines the nesting of nodes, and link is the link map that defines the link structure. The notation ⟨k, X⟩ → ⟨m, Y⟩ indicates that the bigraph has k holes (sites) and a set X of inner names, and m regions with a set Y of outer names. These are respectively known as the inner and outer interfaces of the bigraph. Formally speaking, each bigraph is an arrow in a symmetric partial monoidal category (usually abbreviated spm-category) in which the objects are these interfaces. As a result, the composition of bigraphs is definable in terms of the composition of arrows in the category. Extensions and variants Directed Bigraphs Directed Bigr
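As a concrete (and deliberately toy) reading of the 5-tuple, a bigraph can be stored as plain maps. The sketch below invents its own naming scheme for sites, regions, and ports, and implements none of the categorical structure:

```python
from dataclasses import dataclass

@dataclass
class Bigraph:
    """Toy concrete bigraph (V, E, ctrl, prnt, link): <k, X> -> <m, Y>.
    Field names follow the 5-tuple in the text; this is an illustrative
    data structure, not an implementation of bigraph theory."""
    nodes: set   # V
    edges: set   # E
    ctrl: dict   # V -> control (name, arity)
    prnt: dict   # sites + nodes -> nodes + regions (the place forest)
    link: dict   # inner names + ports -> edges + outer names

# One node v0 of control ("K", 1) sitting in region 0, containing one site,
# with v0's single port linked to the outer name "y":
b = Bigraph(
    nodes={"v0"},
    edges=set(),
    ctrl={"v0": ("K", 1)},
    prnt={"site0": "v0", "v0": "region0"},
    link={("v0", 0): "y"},
)
print(b.ctrl["v0"])   # ('K', 1)
```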
https://en.wikipedia.org/wiki/Archimedes%27s%20cattle%20problem
Archimedes's cattle problem (or the problema bovinum or problema Archimedis) is a problem in Diophantine analysis, the study of polynomial equations with integer solutions. Attributed to Archimedes, the problem involves computing the number of cattle in a herd of the sun god from a given set of restrictions. The problem was discovered by Gotthold Ephraim Lessing in a Greek manuscript containing a poem of 44 lines, in the Herzog August Library in Wolfenbüttel, Germany in 1773. The problem remained unsolved for a number of years, due partly to the difficulty of computing the huge numbers involved in the solution. The general solution was found in 1880 by Carl Ernst August Amthor (1845–1916), headmaster of the Gymnasium zum Heiligen Kreuz (Gymnasium of the Holy Cross) in Dresden, Germany. Using logarithmic tables, he calculated the leading digits of the smallest solution, showing that it is about 7.76 × 10^206544 cattle, far more than could fit in the observable universe. The decimal form is too long for humans to calculate exactly, but multiple-precision arithmetic packages on computers can write it out explicitly. History In 1769, Gotthold Ephraim Lessing was appointed librarian of the Herzog August Library in Wolfenbüttel, Germany, which contained many Greek and Latin manuscripts. A few years later, Lessing published translations of some of the manuscripts with commentaries. Among them was a Greek poem of forty-four lines, containing an arithmetical problem which asks the reader to find the number of cattle in the herd of the god of the sun. It is now generally credited to Archimedes. Problem The problem, as translated into English by Ivor Thomas, states: If thou art diligent and wise, O stranger, compute the number of cattle of the Sun, who once upon a time grazed on the fields of the Thrinacian isle of Sicily, divided into four herds of different colours, one milk white, another a glossy black, a third yellow and the last dappled. In each herd were bulls, mighty in number according to these proportions: Understand, stranger, that the white bulls were equal to a ha
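The computational core of the problem reduces essentially to the Pell equation x^2 − 4729494·y^2 = 1 (with an additional divisibility condition on y), whose fundamental solution is enormous. A generic continued-fraction Pell solver (a standard algorithm, not Amthor's hand method) can be sketched as follows:

```python
import math

def pell_fundamental(D):
    """Fundamental solution of x^2 - D*y^2 = 1 via the continued
    fraction expansion of sqrt(D); D must be a positive non-square."""
    a0 = math.isqrt(D)
    if a0 * a0 == D:
        raise ValueError("D must not be a perfect square")
    m, d, a = 0, 1, a0
    p_prev, p = 1, a0          # convergent numerators
    q_prev, q = 0, 1           # convergent denominators
    while p * p - D * q * q != 1:
        m = d * a - m
        d = (D - m * m) // d
        a = (a0 + m) // d
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
    return p, q

print(pell_fundamental(2))    # (3, 2):  3^2 - 2*2^2 = 1
print(pell_fundamental(61))   # (1766319049, 226153980), a classically hard case
```

Python's arbitrary-precision integers mean pell_fundamental(4729494) runs as well, producing a fundamental solution dozens of digits long, from which the full cattle count is assembled.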
https://en.wikipedia.org/wiki/Floating%20body%20effect
The floating body effect is the dependence of the body potential of a transistor realized in silicon on insulator (SOI) technology on the history of its biasing and on carrier recombination processes. The transistor's body forms a capacitor against the insulated substrate. The charge accumulates on this capacitor and may cause adverse effects, for example, opening of parasitic transistors in the structure and causing off-state leakages, resulting in higher current consumption and, in the case of DRAM, loss of information from the memory cells. It also causes the history effect, the dependence of the threshold voltage of the transistor on its previous states. In analog devices, the floating body effect is known as the kink effect. One countermeasure to the floating body effect involves the use of fully depleted (FD) devices. The insulator layer in FD devices is significantly thinner than the channel depletion width. The charge, and thus also the body potential of the transistors, is therefore fixed. However, the short-channel effect is worsened in FD devices, the body may still charge up if both source and drain are high, and the architecture is unsuitable for some analog devices that require contact with the body. Hybrid trench isolation is another approach. While the floating body effect presents a problem in SOI DRAM chips, it is exploited as the underlying principle for Z-RAM and T-RAM technologies. For this reason, the effect is sometimes called the Cinderella effect in the context of these technologies, because it transforms a disadvantage into an advantage. AMD and Hynix licensed Z-RAM, but as of 2008 had not put it into production. Another similar technology (and Z-RAM competitor) developed at Toshiba and refined at Intel is the Floating Body Cell (FBC).
https://en.wikipedia.org/wiki/Location%20identifier
A location identifier is a symbolic representation for the name and the location of an airport, navigation aid, or weather station, and is used for staffed air traffic control facilities in air traffic control, telecommunications, computer programming, weather reports, and related services. ICAO location indicator The International Civil Aviation Organization establishes sets of four-letter location indicators which are published in ICAO Publication 7910. These are used by air traffic control agencies to identify airports and by weather agencies to produce METAR weather reports. The first letter indicates the region; for example, K for the contiguous United States, C for Canada, E for northern Europe, R for the Asian Far East, and Y for Australia. Examples of ICAO location indicators are RPLL for Manila Ninoy Aquino Airport and KCEF for Westover Joint Air Reserve Base. IATA identifier The International Air Transport Association uses sets of three-letter IATA identifiers which are used for airline operations, baggage routing, and ticketing. There is no specific organization scheme to IATA identifiers; typically they take on the abbreviation of the airport or city, such as MNL for Manila Ninoy Aquino Airport. In the United States, the IATA identifier usually equals the FAA identifier, but this is not always the case. A prominent example is Sawyer International Airport in Marquette, Michigan, which uses the FAA identifier SAW and the IATA identifier MQT. FAA identifier The Federal Aviation Administration location identifier (FAA LID) is a three- to five-character alphanumeric code identifying aviation-related facilities inside the United States, though some codes are reserved for, and managed by, other entities. For nearly all major airports, the assigned identifiers are alphabetic three-letter codes, such as ORD for Chicago O’Hare International Airport. Minor airfields are typically assigned a mix of alphanumeric characters, such as 8N2 for Skydive Chica
https://en.wikipedia.org/wiki/Patrick%20Flanagan
Patrick Flanagan (October 11, 1944 – December 19, 2019) was an American New Age author and inventor. Flanagan wrote books focused on Egyptian sacred geometry and Pyramidology. In 1958, at the age of 14, while living in Bellaire, Texas, Flanagan invented the neurophone, an electronic device that he claimed transmits sound through the body’s nervous system directly to the brain. It was patented in the United States in 1968 (Patent #3,393,279). The invention earned him a profile in Life magazine, which called him a "unique, mature and inquisitive scientist." Pyramid power During the 1970s, Flanagan was a proponent of pyramid power. He wrote several books and promoted it with lectures and seminars. According to Flanagan, pyramids with the exact relative dimensions of Egyptian pyramids act as "an effective resonator of randomly polarized microwave signals which can be converted into electrical energy." One of his first books, Pyramid Power, was featured in the lyrics of The Alan Parsons Project album Pyramid.
https://en.wikipedia.org/wiki/Right-to-left%20shunt
A right-to-left shunt is a cardiac shunt which allows blood to flow from the right heart to the left heart. This terminology is used both for the abnormal state in humans and for normal physiological shunts in reptiles. Clinical significance A right-to-left shunt occurs when: there is an opening or passage between the atria, ventricles, and/or great vessels; and right heart pressure is higher than left heart pressure and/or the shunt has a one-way valvular opening. Small physiological, or "normal", shunts are seen due to the return of deoxygenated bronchial artery blood and coronary blood (through the Thebesian veins) to the left side of the heart. Causes Congenital defects can lead to right-to-left shunting immediately after birth: Persistent truncus arteriosus (minimal cyanosis) Transposition of great vessels Tricuspid atresia Tetralogy of Fallot Total anomalous pulmonary venous return A mnemonic to remember the conditions associated with right-to-left shunting involves the numbers 1-5, as follows: 1 Combination Vessel: Persistent truncus arteriosus (minimal cyanosis) 2 Vessels involved: Transposition of great vessels 3 Leaflets: Tricuspid atresia 4 Tetra- prefix: Tetralogy of Fallot 5 Words: Total anomalous pulmonary venous return A mainstem intubation with an endotracheal tube can lead to right-to-left shunting. This occurs when the tip of the endotracheal tube is placed beyond the carina. In this way only one lung is oxygenated, and oxygen-poor blood from the non-ventilated lung dilutes the oxygen level of blood returning from the lungs in the left ventricle. Eisenmenger syndrome An uncorrected left-to-right shunt can progress to a right-to-left shunt; this process is termed Eisenmenger syndrome. This is seen in ventricular septal defect, atrial septal defect, and patent ductus arteriosus, and can manifest as late as adult life. This switch in blood flow direction is precipitated by pulmonary hypertension due to increased pulmonary blood flow in
https://en.wikipedia.org/wiki/Edy
Edy, provided by Rakuten, Inc. in Japan, is a prepaid rechargeable contactless smart card. While the name derives from euro, dollar, and yen, it works with yen only. History Edy was launched on January 18, 2001, by BitWallet, with financing primarily from Sony, in addition to other companies including NTT Docomo and the Sumitomo Group. NTT Docomo's i-mode mobile payment service Osaifu-Keitai, which launched on 10 July 2004, included support for BitWallet's Edy. By 2005, over a million payments had been made with the service. On 18 April 2006, Intel announced a five billion yen (approx. US$45 million, or 35 million euros as of May 20, 2006) investment in bitWallet, aimed at furthering its usage on computers. On 1 June 2012, Rakuten acquired Edy, changing the official name to RakutenEdy and the parent company from bitWallet to RakutenEdy Inc. The three-oval blue-tone logo was changed to the Rakuten logo and the font of the word 'Edy' was altered. Mobile phones Edy can be used on Osaifu-Keitai-featured cellphones. Makers of these phones include major cell phone carriers such as docomo, au and SoftBank. The phones can be used physically like an Edy card, and online Edy features can be accessed from the phones as well, such as the ability to charge an Edy account.
https://en.wikipedia.org/wiki/Edward%20Flatau
Edward Flatau (27 December 1868, Płock – 7 June 1932, Warsaw) was a Polish neurologist and psychiatrist. He was a co-founder of modern Polish neurology, an authority on the physiology and pathology of meningitis, a co-founder of the medical journals Neurologia Polska and Warszawskie Czasopismo Lekarskie, and a member of the Polish Academy of Learning. His name in medicine is linked to Redlich-Flatau syndrome, Flatau-Sterling torsion dystonia (type 1), Flatau-Schilder disease, and Flatau's law. His publications greatly influenced the developing field of neurology. He published a human brain atlas (1894), wrote a fundamental book on migraine (1912), established the localization principle of long fibers in the spinal cord (1893), and with Sterling published an early paper (1911) on progressive torsion spasm in children, suggesting that the disease has a genetic component. Life He was born in 1868 in Płock, the son of Anna and Ludwik Flatau, of an assimilated Jewish family. In 1886, he graduated from high school (gymnasium) in Płock (now Marshal Stanisław Małachowski High School, Płock, also known as "Małachowianka"). From 1886, Flatau attended medical school at the University of Moscow, where he graduated eximia cum laude. In Moscow, he was greatly influenced by the psychiatrist Sergei Sergeievich Korsakoff (1854–1900) and the neurologist Alexis Jakovlevich Kozhevnikof (1836–1902). Flatau became a medical doctor in 1892. He spent the years 1893–1899 in Berlin in the laboratories of Emanuel Mendel (1839–1907) and in the University of Berlin under Wilhelm von Waldeyer-Hartz (1836–1921). During that time, he worked together with Alfred Goldscheider (1858–1935), Ernst Viktor von Leyden (1832–1910), Hermann Oppenheim, Louis Jacobsohn, Ernst Remak, and Hugo Liepmann. Though he was offered a professorship of neurology in Buenos Aires, he returned to Poland and in 1899 settled in Warsaw. He was married twice. He had two daughters, Anna and Joanna Flatau. His first
https://en.wikipedia.org/wiki/Finnix
Finnix is a Debian-based Live CD operating system, developed by Ryan Finnie and intended for system administrators for tasks such as filesystem recovery, network monitoring and OS installation. Finnix is a relatively small distribution, with an ISO download size of approximately 100 MiB, and is available for the x86 and PowerPC architectures, and paravirtualized (User Mode Linux and Xen) systems. Finnix can be run off a bootable CD, a USB flash drive, a hard drive, or network boot (PXE). History Finnix development first began in 1999, making it one of the oldest Linux distributions released with the intent of being run completely from a bootable CD (the other Live CD around at the time was the Linuxcare Bootable Business Card CD, first released in 1999). Finnix 0.01 was based on Red Hat Linux 6.0, and was created to help with administration and recovery of other Linux workstations around Finnie's office. The first public release of Finnix was 0.03, and was released in early 2000, based on an updated Red Hat Linux 6.1. Despite its 300 MiB ISO size and requirement of 32 MiB RAM (which, given RAM prices and lack of high-speed Internet proliferation at the time, was prohibitive for many), Finnix enjoyed moderate success, with over 10,000 downloads. After version 0.03, development ceased, and Finnix was left unmaintained until 2005. On 23 October 2005, Finnix 86.0 was released. Earlier unreleased versions (84, and 85.0 through 85.3) were "Knoppix remasters", with support for Linux LVM and dm-crypt being the main reason for creation. However, 86.0 was a departure from Knoppix, and was derived directly from the Debian "testing" tree. Usage Finnix is released as a small bootable CD ISO. A user can download the ISO, burn the image to CD, and boot into a text mode Linux environment. Finnix requires at least 32 MiB RAM to run properly, but can use more if present. Most hardware devices are detected and dealt with automatically, such as hard drives, network cards and U
https://en.wikipedia.org/wiki/Supplicant%20%28computer%29
In computer networking, a supplicant is an entity at one end of a point-to-point LAN segment that seeks to be authenticated by an authenticator attached to the other end of that link. The IEEE 802.1X standard uses the term "supplicant" to refer either to hardware or to software. In practice, a supplicant is a software application installed on an end-user's computer. The user invokes the supplicant and submits credentials to connect the computer to a secure network. If the authentication succeeds, the authenticator typically allows the computer to connect to the network. A supplicant, in some contexts, refers to a user or to a client in a network environment seeking to access network resources secured by the IEEE 802.1X authentication mechanism. But saying "user" or "client" over-generalizes; in reality, the interaction takes place through a personal computer, an Internet Protocol (IP) phone, or similar network device. Each of these must run supplicant software that initiates or reacts to IEEE 802.1X authentication requests for association. Overview Businesses, campuses, governments and other organizations in need of security may use IEEE 802.1X authentication to regulate users' access to their network infrastructure. To enable this, client devices must act as supplicants in order to gain access. In businesses, for example, it is very common for employees to receive a new computer with all the settings needed for IEEE 802.1X authentication already configured, in particular for connecting wirelessly to the network. Access For a supplicant-capable device to gain access to the secured resources on a network, certain preconditions must be met and supporting infrastructure must be in place. The network with which the supplicant interacts must have a RADIUS server (also known as an authentication server) and a Dynamic Host Configuration Protocol (DHCP) server if aut
https://en.wikipedia.org/wiki/Vital%20theory
According to the vital force theory, the conduction of water up the xylem vessel is a result of the vital activity of the living cells in the xylem tissue. These living cells are involved in the ascent of sap. The relay pump theory and the pulsation theory support the active theory of the ascent of sap. Emil Godlewski (senior) (1884) proposed the relay pump or clambering force theory (through xylem parenchyma), and Jagadish Chandra Bose (1923) proposed the pulsation theory (due to pulsatory activities of the innermost cortical cells just outside the endodermis). Jagadish Chandra Bose suggested a mechanism for the ascent of sap in 1927. His theory was based on measurements made with a galvanometer and electric probes. He found electrical 'pulsations' or oscillations in electric potentials, and came to believe these were coupled with rhythmic movements in the telegraph plant Codariocalyx motorius (then Desmodium). On the basis of this, Bose theorized that regular wave-like 'pulsations' in cell electric potential and turgor pressure were an endogenous form of cell signaling. According to him, the living cells in the inner lining of the xylem tissue pump water by contractive and expulsive movements similar to the animal heart circulating blood. This mechanism has not been well supported, and in spite of some ongoing debate, the evidence overwhelmingly supports the cohesion-tension theory for the ascent of sap. See also Cohesion-tension theory
https://en.wikipedia.org/wiki/Peduncle%20%28botany%29
In botany, a peduncle is a stalk supporting an inflorescence or a solitary flower, or, after fecundation, an infructescence or a solitary fruit. The peduncle sometimes has bracts (a type of cataphyll) at its nodes. The main axis of an inflorescence above the peduncle is the rachis. There are no flowers on the peduncle but there are flowers on the rachis. When a peduncle arises from the ground level, either from a compressed aerial stem or from a subterranean stem (rhizome, tuber, bulb, corm), with few or no bracts except the part near the rachis or receptacle, it is referred to as a scape. The acorns of the pedunculate oak are borne on a long peduncle, hence the name of the tree. See also Pedicel (botany) Scape (botany)
https://en.wikipedia.org/wiki/Ascent%20of%20sap
The ascent of sap in the xylem tissue of plants is the upward movement of water and minerals from the root to the aerial parts of the plant. The conducting cells in xylem are typically non-living and include, in various groups of plants, vessel members and tracheids. Both of these cell types have thick, lignified secondary cell walls and are dead at maturity. Although several mechanisms have been proposed to explain how sap moves through the xylem, the cohesion-tension mechanism has the most support. Although cohesion-tension has received criticism due to the apparent existence of large negative pressures in some living plants, experimental and observational data favor this mechanism. Theories of sap ascent One early theory that has recently been revisited is the one presented by Jagadish Chandra Bose in 1923. In his experiment, he used a galvanometer (with an electric probe and copper wire) inserted into the cortex of the Desmodium plant. After analyzing the findings of his experiment, he saw that there were rhythmic electric oscillations. He concluded that plants move sap through pulses, like a heartbeat. Many scientists discredited his work and claimed that his findings were not credible. These scientists believed that the oscillations he recorded were an action potential across the cell wall. Modern-day scientists hypothesized that the oscillations measured in Bose's initial experiment were a stress response due to the presence of sodium in the water. The results of this modern-day experiment showed that there were no rhythmic electric oscillations present in the plant. Despite not being able to replicate the oscillations that Bose recorded, the study's authors suggest that the presence of sodium played a role in his findings. Furthermore, plants do not have a pulse or heartbeat. An alternative theory based on the behavior of thin films has been developed by Henri Gouin, a French professor of fluid dynamics. The theory is intended
https://en.wikipedia.org/wiki/Potty%20parity
Potty parity is equal or equitable provision of public toilet facilities for females and males within a public space. Definition of parity Parity may be defined in various ways in relation to facilities in a building. The simplest is as equal floorspace for male and female washrooms. Since men's and boys' bathrooms include urinals, which take up less space than stalls, this still results in more facilities for males. An alternative parity is by number of fixtures within washrooms. However, since females on average spend more time in washrooms, more males than females are able to use facilities per unit time. More recent parity regulations therefore require more fixtures for females, to ensure that the average time spent waiting to use the toilet is the same for females as for males, or to equalise throughputs of male and female toilets. The lack of diaper-changing stations for babies in men's restrooms has been listed as a potty parity issue by fathers. Some jurisdictions have considered legislation mandating diaper-changing stations in men's restrooms. Sex differences Women and girls often spend more time in washrooms than men and boys, for both physiological and cultural reasons. The requirement to use a cubicle rather than a urinal means urination takes longer, and hand washing must be done more thoroughly. Females also make more visits to washrooms. Urinary tract infections and incontinence are more common in females. Pregnancy, menstruation, breastfeeding, and diaper-changing increase usage. The elderly, who are disproportionately female, take longer and more frequent bathroom visits. A variety of female urinals and personal funnels have been invented to make it easier for females to urinate standing up. None has become widespread enough to affect policy formation on potty parity. John F. Banzhaf III, a law professor at George Washington University, calls himself the "father of potty parity." Banzhaf argues that to ignore potty parity; that is, to have merely equ
https://en.wikipedia.org/wiki/GFP-cDNA
The GFP-cDNA project documents the localisation of proteins to subcellular compartments of the eukaryotic cell using fluorescence microscopy. Experimental data are complemented with bioinformatic analyses and published online in a database. A search function allows the finding of proteins containing features or motifs of particular interest. The project is a collaboration of the research groups of Rainer Pepperkok at the European Molecular Biology Laboratory (EMBL) and Stefan Wiemann at the German Cancer Research Centre (DKFZ). What kinds of experiments are performed? The cDNAs of newly identified open reading frames (ORFs) are tagged with Green Fluorescent Protein (GFP) and expressed in eukaryotic cells. Subsequently, the subcellular localisation of the fusion proteins is recorded by fluorescence microscopy. Steps: 1. Large-scale cloning Any large-scale manipulation of ORFs requires cloning technologies which are free of restriction enzymes. In this respect, those that utilise recombination cloning (Gateway from Invitrogen or Creator from BD Biosciences) have proved to be the most suitable. This cloning technology is based on recombination mechanisms used by phages to integrate their DNA into the host genome. It allows the ORFs to be rapidly and conveniently shuttled between functionally useful vectors without the need for conventional restriction cloning. In the GFP-cDNA project the ORFs are transferred into CFP/YFP expression vectors. For the localisation analysis both N- and C-terminal fusions are generated. This maximises the possibility of correctly ascertaining the localisation, since the presence of GFP may mask targeting signals that may be present at one end of the native protein. N-Terminal Fluorescent Fusions Insert the gene of interest into the MCS upstream of the fluorescent protein gene, and express the gene as a fusion to the N-terminus of the fluorescent protein. C-Terminal Fluorescent Fusions Insert the gene of interest into the MCS downstream o
https://en.wikipedia.org/wiki/Driver%20circuit
In electronics, a driver is a circuit or component used to control another circuit or component, such as a high-power transistor, a liquid crystal display (LCD), a stepper motor, or SRAM memory. Drivers are usually used to regulate the current flowing through a circuit or to control other components or devices in the circuit. The term is often used, for example, for a specialized integrated circuit that controls high-power switches in switched-mode power converters. An amplifier can also be considered a driver for loudspeakers, as can a voltage regulator that keeps an attached component operating within a broad range of input voltages. Typically the driver stage(s) of a circuit requires different characteristics from other circuit stages. For example, in a transistor power amplifier circuit, the driver circuit typically requires current gain, often the ability to discharge the following transistor bases rapidly, and low output impedance to avoid or minimize distortion. In SRAM memory, driver circuits are used to rapidly discharge the necessary bit lines from a precharge level to the write margin or below. See also Hitachi HD44780 LCD controller
https://en.wikipedia.org/wiki/Plotkin%20bound
In the mathematics of coding theory, the Plotkin bound, named after Morris Plotkin, is a limit (or bound) on the maximum possible number of codewords in binary codes of given length n and given minimum distance d. Statement of the bound A code is considered "binary" if the codewords use symbols from the binary alphabet {0, 1}. In particular, if all codewords have a fixed length n, then the binary code has length n. Equivalently, in this case the codewords can be considered elements of the vector space F2^n over the finite field F2. Let C be such a code and let d be the minimum distance of C, i.e. d = min{d(x, y) : x, y in C, x ≠ y}, where d(x, y) is the Hamming distance between x and y. The expression A2(n, d) represents the maximum number of possible codewords in a binary code of length n and minimum distance d. The Plotkin bound places a limit on this expression. Theorem (Plotkin bound): i) If d is even and 2d > n, then A2(n, d) ≤ 2⌊d/(2d − n)⌋. ii) If d is odd and 2d + 1 > n, then A2(n, d) ≤ 2⌊(d + 1)/(2d + 1 − n)⌋. iii) If d is even, then A2(2d, d) ≤ 4d. iv) If d is odd, then A2(2d + 1, d) ≤ 4d + 4. Here ⌊ ⌋ denotes the floor function. Proof of case i Let d(x, y) be the Hamming distance of x and y, and let M be the number of elements in C (thus, M is equal to A2(n, d)). The bound is proved by bounding the quantity S = Σ d(x, y), summed over all ordered pairs of distinct codewords (x, y), in two different ways. On the one hand, there are M choices for x and for each such choice, there are M − 1 choices for y. Since by definition d(x, y) ≥ d for all x and y (x ≠ y), it follows that S ≥ M(M − 1)d. On the other hand, let A be an M × n matrix whose rows are the elements of C. Let z_i be the number of zeros contained in the i-th column of A. This means that the i-th column contains M − z_i ones. Each choice of a zero and a one in the same column contributes exactly 2 (because d(x, y) = d(y, x)) to the sum, and therefore S = Σ_i 2 z_i (M − z_i). The quantity on the right is maximized if and only if z_i = M/2 holds for all i (at this point of the proof we ignore the fact that the z_i are integers), and then S ≤ (1/2) n M². Combining the upper and lower bounds for S that we have just derived, M(M − 1)d ≤ (1/2) n M², which, given that 2d > n, is equivalent to M ≤ 2d/(2d − n). If M is even, it follows directly that M ≤ 2⌊d/(2d − n)⌋; if M is odd, the sharper column estimate z_i(M − z_i) ≤ (M² − 1)/4 leads to the same conclusion. This completes the proof of the bound. See also Singleton bound Hamming bound Elias-Bassalygo bound Gilbert-Varshamov bound Johnson bou
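The bound is easy to exercise numerically. In the sketch below, the greedy "lexicode" construction (keep a word if it is far enough from everything kept so far) gives only a lower bound on A2(n, d), while the Plotkin formula gives the upper bound; the helper names are invented:

```python
from itertools import product

def plotkin_bound(n, d):
    """Upper bound on A2(n, d) in the regime 2d > n (d even) or
    2d + 1 > n (d odd), following the case analysis above."""
    if d % 2 == 0 and 2 * d > n:
        return 2 * (d // (2 * d - n))
    if d % 2 == 1 and 2 * d + 1 > n:
        return 2 * ((d + 1) // (2 * d + 1 - n))
    raise ValueError("parameters outside the Plotkin regime")

def greedy_code(n, d):
    """Greedy lexicographic code: keep a word if its Hamming distance
    to every previously kept word is at least d."""
    code = []
    for w in product((0, 1), repeat=n):
        if all(sum(a != b for a, b in zip(w, c)) >= d for c in code):
            code.append(w)
    return code

n, d = 6, 4
print(len(greedy_code(n, d)), "<=", plotkin_bound(n, d))  # 4 <= 4
```

For n = 6, d = 4 the greedy code already meets the Plotkin bound, so A2(6, 4) = 4.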
https://en.wikipedia.org/wiki/Sipgate
Sipgate, stylised as sipgate, is a European VoIP and mobile telephony operator. Company Sipgate was founded in 2004 and became one of Germany's largest VoIP service providers for consumers and small businesses. Through its network, which used the SIP protocol, it allowed making low-cost national and international calls and provided customers with an incoming geographical phone number. Customers were expected to use client software or SIP-compliant hardware (a VoIP phone or ATA) to access its services. Since 2011, Sipgate's network has been using the open-source project Yate for the core of its softswitch infrastructure. Sipgate is among the sponsors of the Kamailio World Conference & Exhibition. In January 2013, the firm entered the German mobile phone market as a full MVNO. Sipgate's German mobile phone services run over the Telefónica Germany network. Products sipgate team Introduced in 2009, the product is a hosted business phone system (PBX) providing online management of phone services for 1 to 250 users. All billing, end-user management, call management, etc. are handled through an online portal. A mobile solution was released in Germany in early 2013 that can be integrated with the 'Team' business VoIP service. SIM cards can be used as extensions in the Team web telephone system or used individually with mobile and landline numbers. sipgate trunking In Germany, SIP trunking services connect customers' third-party VoIP PBXs via broadband with the public telephone network. SIP trunking can be combined with the team product. sipgate basic and sipgate basic plus The basic residential VoIP service was released in Germany and the UK in January 2004. basic accounts receive one free UK or German geographic 'landline' phone number and a voicemail box. With a suitable fax-enabled VoIP adapter, faxes may also be sent from conventional fax machines. On 6 October 2014, the firm released an open API, sipgate io. Discontinued products Smartphone apps Sipgate pro