https://en.wikipedia.org/wiki/Collaboration%20Data%20Objects
Collaboration Data Objects (CDO), previously known as OLE Messaging or Active Messaging, is an application programming interface included with Microsoft Windows and Microsoft Exchange Server products. The library allows developers to access the Global Address List and other server objects, in addition to the contents of mailboxes and public folders. Overview CDO is a technology for building messaging or collaboration applications. CDO can be used separately or in connection with Outlook Object Model to gain more access over Outlook. CDO is not a part of Outlook Object Model and it doesn't provide any event-based functionality, nor can Outlook objects be manipulated using CDO. Starting with Exchange 2007, neither the Messaging API (MAPI) client libraries nor CDO 1.2.1 are provided as a part of the base product installation. They are available as downloads. Versions CDONTS: available on Windows NT 4.0 by installing the Option Pack, or Exchange Server. CDOSYS: available on Windows 2000 and onwards by installing the SMTP service in Internet Information Server (IIS). See also Collaboration Data Objects for Windows NT Server MAPI ActiveX References External links j-XChange - Pure and Open Source (LGPL v3) Java implementation of the Collaboration Data Objects (CDO 1.21) for accessing Microsoft Exchange Server in a platform independent manner. Overview of CDO - Overview of CDO at MSDN CDOSYS protocol Downloads ExchangeMapiCdo.EXE download ExchangeCdo.MSI download Microsoft application programming interfaces Email
https://en.wikipedia.org/wiki/Cyclic%20symmetry%20in%20three%20dimensions
In three-dimensional geometry, there are four infinite series of point groups in three dimensions (n≥1) with n-fold rotational or reflectional symmetry about one axis (by an angle of 360°/n) that does not change the object. They are the finite symmetry groups on a cone. For n = ∞ they correspond to four frieze groups. Schönflies notation is used. The terms horizontal (h) and vertical (v) imply the existence and direction of reflections with respect to a vertical axis of symmetry. Also shown are Coxeter notation in brackets and, in parentheses, orbifold notation. Types Chiral: Cn, [n]+, (nn) of order n - n-fold rotational symmetry - acro-n-gonal group (abstract group Zn); for n=1: no symmetry (trivial group). Achiral: Cnh, [n+,2], (n*) of order 2n - prismatic symmetry or ortho-n-gonal group (abstract group Zn × Dih1); for n=1 this is denoted by Cs (1*) and called reflection symmetry, also bilateral symmetry. It has reflection symmetry with respect to a plane perpendicular to the n-fold rotation axis. Cnv, [n], (*nn) of order 2n - pyramidal symmetry or full acro-n-gonal group (abstract group Dihn); in biology C2v is called biradial symmetry. For n=1 we have again Cs (1*). It has vertical mirror planes. This is the symmetry group for a regular n-sided pyramid. S2n, [2+,2n+], (n×) of order 2n - gyro-n-gonal group (not to be confused with symmetric groups, for which the same notation is used; abstract group Z2n); it has a 2n-fold rotoreflection axis, also called a 2n-fold improper rotation axis, i.e., the symmetry group contains a combination of a reflection in the horizontal plane and a rotation by an angle 180°/n. Thus, like Dnd, it contains a number of improper rotations without containing the corresponding rotations. For n=1 we have S2 (1×), also denoted by Ci; this is inversion symmetry. C2h, [2,2+], (2*) and C2v, [2], (*22) of order 4 are two of the three 3D symmetry group types with the Klein four-group as abstract group. C2v applies e.g. for a rectangular tile wit
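The group orders quoted above can be tabulated for a given n in a trivial sketch (purely illustrative; the Cs and Ci special cases for n=1 are ignored here):

```python
# Orders of the four axial point-group series for a given n:
# Cn has order n; Cnh, Cnv and S2n each have order 2n.
def axial_group_orders(n):
    return {"Cn": n, "Cnh": 2 * n, "Cnv": 2 * n, "S2n": 2 * n}

print(axial_group_orders(4))  # -> {'Cn': 4, 'Cnh': 8, 'Cnv': 8, 'S2n': 8}
```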
https://en.wikipedia.org/wiki/General%20Instrument%20AY-3-8910
The AY-3-8910 is a 3-voice programmable sound generator (PSG) designed by General Instrument in 1978, initially for use with their 16-bit CP1610 or one of the PIC1650 series of 8-bit microcomputers. The AY-3-8910 and its variants were used in many arcade games—Konami's Gyruss contains five—and pinball machines as well as being the sound chip in the Intellivision and Vectrex video game consoles, and the Amstrad CPC, Oric-1, Colour Genie, Elektor TV Games Computer, MSX, and later ZX Spectrum home computers. It was also used in the Mockingboard and Cricket sound cards for the Apple II and the Speech/Sound Cartridge for the TRS-80 Color Computer. After General Instrument's spinoff of Microchip Technology in 1987, the chip was sold for a few years under the Microchip brand. It was also manufactured under license by Yamaha (with a selectable clock divider pin and a double-resolution and double-rate volume envelope table) as the YM2149F; the Atari ST uses this version. It produces very similar results to the Texas Instruments SN76489 and was on the market for a similar period. The chips are no longer made, but functionally-identical clones are still in active production. An unofficial VHDL description is freely available for use with FPGAs. Description The AY-3-8910 was essentially a state machine, with the state being set up in a series of sixteen 8-bit registers. These were programmed over an 8-bit bus that was used both for addressing and data by toggling one of the external pins. For instance, a typical setup cycle would put the bus into "address mode" to select a register, and then switch to "data mode" to set the contents of that register. This bus was implemented natively on GI's own CPUs, but it had to be recreated in glue logic or with the help of an additional interface adapter such as the MOS Technology 6522 when the chip was used with the much more common MOS Technology 6502 or Zilog Z80 CPUs. Six registers controlled the pitches produced in the three pri
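The two-phase address/data bus cycle described above can be sketched as a toy model (a hypothetical simplification: real register semantics, pin signalling via BDIR/BC1, and per-register bit masks are glossed over):

```python
# Toy model of the AY-3-8910 setup cycle: the same 8-bit bus first
# latches a register address, then carries data for that register.
class PSG:
    def __init__(self):
        self.registers = [0] * 16   # sixteen 8-bit state registers
        self.latched = 0            # register selected in "address mode"

    def latch_address(self, value):
        # "address mode": the bus value selects one of the 16 registers
        self.latched = value & 0x0F

    def write_data(self, value):
        # "data mode": the bus value is stored in the latched register
        self.registers[self.latched] = value & 0xFF

psg = PSG()
psg.latch_address(0)      # select register 0
psg.write_data(0xFE)      # set its contents
print(psg.registers[0])   # -> 254
```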
https://en.wikipedia.org/wiki/Move%20by%20nature
In game theory a move by nature is a decision or move in an extensive form game made by a player who has no strategic interests in the outcome. The effect is to add a player, 'Nature', whose practical role is to act as a random number generator. For instance, if a game of Poker requires a dealer to choose which cards a player is dealt, the dealer plays the role of the Nature player. Fig. 1 shows a signaling game which begins with a move by nature. Moves by nature are an integral part of games of incomplete information. References Game theory
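The dealer example can be sketched in a few lines: Nature is just a seeded random draw over outcomes (the type names and probabilities below are illustrative, not from the article):

```python
import random

# "Nature" as a non-strategic player: its move is a chance draw,
# e.g. picking the sender's private type at the start of a signaling game.
def nature_move(rng, outcomes):
    """outcomes: list of (label, probability); returns one label by chance."""
    labels, probs = zip(*outcomes)
    return rng.choices(labels, weights=probs, k=1)[0]

rng = random.Random(0)  # seeded so the "random" move is reproducible
draw = nature_move(rng, [("strong", 0.3), ("weak", 0.7)])
print(draw)
```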
https://en.wikipedia.org/wiki/Etch%20pit%20density
The etch pit density (EPD) is a measure of the quality of semiconductor wafers. Etching An etch solution is applied to the surface of the wafer, where the etch rate is increased at dislocations of the crystal, resulting in pits. For GaAs one typically uses molten KOH at 450 degrees Celsius for about 40 minutes in a zirconium crucible. The density of the pits can be determined by optical contrast microscopy. Silicon wafers usually have a very low density of < 100 cm⁻², while semi-insulating GaAs wafers have a density on the order of 10⁵ cm⁻². Germanium detectors High-purity germanium detectors require the Ge crystals to be grown with a controlled range of dislocation density to reduce impurities. The etch pit density requirement is typically within the range 10³ to 10⁴ cm⁻². Standards The etch pit density can be determined according to DIN 50454-1 and ASTM F 1404. References Semiconductors
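The density itself is simply the pit count per inspected area; a minimal sketch (the field size and pit count are invented values, chosen only to land on the GaAs order of magnitude quoted above):

```python
# Etch pit density = number of pits counted / inspected area.
def etch_pit_density(pit_count, area_cm2):
    """Return pits per cm^2 for a counted microscope field."""
    return pit_count / area_cm2

# Hypothetical GaAs wafer field: 2500 pits counted over 0.025 cm^2
epd = etch_pit_density(2500, 0.025)
print(epd)  # -> 100000.0, i.e. the ~1e5 cm^-2 order quoted for GaAs
```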
https://en.wikipedia.org/wiki/Emergenesis
In psychology, a trait (or phenotype) is called emergenic if it is the result of a specific combination of several interacting genes (rather than of a simple sum of several independent genes). Emergenic traits will not run in families, but identical twins will share them. Traits such as "leadership", "genius" or certain mental illnesses may be emergenic. Although one may expect epigenetics to play a significant role in the phenotypic manifestation of twins reared apart, the concordance displayed between them can be attributed to emergenesis. References Genetics Emergence
https://en.wikipedia.org/wiki/Gold%20leaf
Gold leaf is gold that has been hammered into thin sheets (usually around 0.1 µm thick) by a process known as goldbeating, for use in gilding. Gold leaf is a type of metal leaf, but the term metal leaf is rarely used when referring to gold leaf. The term metal leaf is normally used for thin sheets of metal of any color that do not contain any real gold. Gold leaf is available in a wide variety of karats and shades. The most commonly used gold is 22-karat yellow gold. Pure gold is 24 karat. Real, yellow gold leaf is approximately 91.7% pure (i.e. 22-karat) gold. Traditional water gilding is the most difficult and highly regarded form of gold leafing. It has remained virtually unchanged for hundreds of years and is still done by hand. History 5,000 years ago, Egyptian artisans recognized the extraordinary durability and malleability of gold and became the first goldbeaters and gilders. They pounded gold using a round stone to create the thinnest leaf possible. Except for the introduction of a cast-iron hammer and a few other innovations, the tools and techniques have remained virtually unchanged for thousands of years. Gold-leaf forging is a traditional handicraft in Nanjing (China), produced as early as the Three Kingdoms (220–280 AD) and Two Jins (266–420) dynasties; it was used in Buddha-statue manufacturing and construction. It was widely used in the gilding of Buddha statues and idols and in the construction industry during the Eastern Wu (222–280) and Eastern Jin (266–420) dynasties. During the Qing dynasty (1644–1912), the technology developed, and Nanjing gold leaf was sold overseas. It retains traditional smelting, hand-beating and other techniques, and the gold leaf is pure, uniform and soft. On May 20, 2006, it was included in the first batch of national intangible cultural heritage representative items. Modern gold-leaf artists combine ancient traditional crafts with modern technology to make traditional gold leaf. Forging skills are more sophisticated.
https://en.wikipedia.org/wiki/Bauschinger%20effect
The Bauschinger effect refers to a property of materials where the material's stress/strain characteristics change as a result of the microscopic stress distribution of the material. For example, an increase in tensile yield strength occurs at the expense of compressive yield strength. The effect is named after German engineer Johann Bauschinger. While more tensile cold working increases the tensile yield strength, the local initial compressive yield strength after tensile cold working is actually reduced. The greater the tensile cold working, the lower the compressive yield strength. It is a general phenomenon found in most polycrystalline metals. Based on the cold work structure, two types of mechanisms are generally used to explain the Bauschinger effect: Local back stresses may be present in the material, which assist the movement of dislocations in the reverse direction. The pile-up of dislocations at grain boundaries and Orowan loops around strong precipitates are two main sources of these back stresses. When the strain direction is reversed, dislocations of the opposite sign can be produced from the same source that produced the slip-causing dislocations in the initial direction. Dislocations with opposite signs can attract and annihilate each other. Since strain hardening is related to an increased dislocation density, reducing the number of dislocations reduces strength. The net result is that the yield strength for strain in the opposite direction is less than it would be if the strain had continued in the initial direction. Mechanism of action Severe unidirectional cold working results in the accumulation of dislocations at barriers to dislocation movement. When stresses are applied in the reverse direction, dislocation motion is aided by the back stresses that had built up at those barriers, and the barriers opposing motion in the reverse direction are not likely to be as strong as those in the forward direction. Hence
https://en.wikipedia.org/wiki/Source%20Input%20Format
Source Input Format (SIF), defined in MPEG-1, is a video format that was developed to allow the storage and transmission of digital video. The 625/50 SIF format (PAL/SECAM) has a resolution of 352 × 288 active pixels (half of PAL's 704 × 576) and a refresh rate of 25 frames per second. The 525/59.94 SIF format (NTSC) has a resolution of 352 × 240 active pixels (half of NTSC's 704 × 480) and a refresh rate of 29.97 frames per second. When compared to the CCIR 601 specification, which defines the appropriate parameters for digital encoding of TV signals, SIF can be seen as being reduced by half in all of height, width, frame rate, and chrominance. SIF video is known as a constrained parameters bitstream. On square-pixel displays (e.g., computer screens and many modern televisions) SIF images should be rescaled so that the picture covers a 4:3 area, in order to avoid a "stretched" look. So the computer industry has defined "square-pixel SIF" to be 320 × 240 active pixels (QVGA) or 384 × 288 active pixels, with a refresh rate of whatever the computer is capable of supporting. To achieve that, the SIF content needs to be "expanded" horizontally by 12:11 for PAL (PAR = DAR : SAR = 4:3 : 11:9 = 12:11) and "reduced" horizontally by 10:11 for NTSC (PAR = DAR : SAR = 4:3 : 22:15 = 10:11). See also Common Intermediate Format (CIF) References Graphics standards MPEG
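The horizontal 12:11 and 10:11 factors follow from PAR = DAR / SAR; a small sketch of that arithmetic, assuming the usual SIF frame sizes of 352×288 (PAL) and 352×240 (NTSC) and a 4:3 display aspect ratio:

```python
from fractions import Fraction

# Pixel aspect ratio = display aspect ratio / storage aspect ratio,
# where SAR is simply frame width / frame height.
def pixel_aspect_ratio(width, height, dar=Fraction(4, 3)):
    sar = Fraction(width, height)
    return dar / sar

print(pixel_aspect_ratio(352, 288))  # PAL SIF  -> 12/11 (expand)
print(pixel_aspect_ratio(352, 240))  # NTSC SIF -> 10/11 (reduce)
```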
https://en.wikipedia.org/wiki/Hofstadter%27s%20butterfly
In condensed matter physics, Hofstadter's butterfly is a graph of the spectral properties of non-interacting two-dimensional electrons in a perpendicular magnetic field in a lattice. The fractal, self-similar nature of the spectrum was discovered in the 1976 Ph.D. work of Douglas Hofstadter and is one of the early examples of modern scientific data visualization. The name reflects the fact that, as Hofstadter wrote, "the large gaps [in the graph] form a very striking pattern somewhat resembling a butterfly." The Hofstadter butterfly plays an important role in the theory of the integer quantum Hall effect and the theory of topological quantum numbers. History The first mathematical description of electrons on a 2D lattice, acted on by a perpendicular homogeneous magnetic field, was studied by Rudolf Peierls and his student P. G. Harper in the 1950s. Hofstadter first described the structure in 1976 in an article on the energy levels of Bloch electrons in perpendicular magnetic fields. It gives a graphical representation of the spectrum of Harper's equation at different frequencies. One key aspect of the mathematical structure of this spectrum – the splitting of energy bands for a specific value of the magnetic field, along a single dimension (energy) – had been previously mentioned in passing by Soviet physicist Mark Azbel in 1964 (in a paper cited by Hofstadter), but Hofstadter greatly expanded upon that work by plotting all values of the magnetic field against all energy values, creating the two-dimensional plot that first revealed the spectrum's uniquely recursive geometric properties. Written while Hofstadter was at the University of Oregon, his paper was influential in directing further research. It predicted on theoretical grounds that the allowed energy level values of an electron in a two-dimensional square lattice, as a function of a magnetic field applied perpendicularly to the system, formed what is now known as a fractal set. That is, the distributio
https://en.wikipedia.org/wiki/Context-sensitive%20help
Context-sensitive help is a kind of online help that is obtained from a specific point in the state of the software, providing help for the situation that is associated with that state. Context-sensitive help, as opposed to general online help or online manuals, does not need to be readable as a whole. Each topic is supposed to describe extensively one state, situation, or feature of the software. Context-sensitive help can be implemented using tooltips, which either provide a terse description of a GUI widget or display a complete topic from the help file. Other commonly used ways to access context-sensitive help start by clicking a button. One way uses a per-widget button that displays the help immediately. Another way changes the pointer shape to a question mark; after the user clicks a widget, the help appears. Context-sensitive help is most used in, but is not limited to, GUI environments. Examples include Apple's System 7 Balloon help, Microsoft's WinHelp, OS/2's INF Help and Sun's JavaHelp. An example of context-sensitive help familiar to most Wikipedia users is the tooltips that show previews of links within articles, helping readers to determine whether following a link is worthwhile before changing pages. A similar topic is embedded help, which can be thought of as a "deeper" context-sensitive help. It generally goes beyond basic explanations or manual clicks by either detecting a user's need for help or offering a guided explanation in situ. Embedded help is not to be confused with a software wizard. See also Balloon Help Tooltip AnswerDash Notes References Describes the connection between where the help is needed and how the help is provided. Online help
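The core idea — help keyed to the widget or state the user is in, rather than a manual browsed as a whole — can be shown in a toy sketch (the widget ids and topic strings are invented for illustration):

```python
# Toy context-sensitive help registry: each widget id maps directly
# to one help topic, so invoking help from a widget lands on its topic.
HELP_TOPICS = {
    "save_button": "Saves the current document to disk.",
    "zoom_slider": "Adjusts the magnification of the page.",
}

def context_help(widget_id):
    """Return the topic for the widget the user asked about."""
    return HELP_TOPICS.get(widget_id, "No help available for this control.")

print(context_help("zoom_slider"))
```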
https://en.wikipedia.org/wiki/Google%20Summer%20of%20Code
The Google Summer of Code, often abbreviated to GSoC, is an international annual program in which Google awards stipends to contributors who successfully complete a free and open-source software coding project during the summer. The program is open to anyone aged 18 or over, no longer just students and recent graduates. It was first held from May to August 2005. Participants get paid to write software, with the amount of their stipend depending on the purchasing power parity of the country where they are located. Project ideas are listed by host organizations involved in open-source software development, though students can also propose their own project ideas. The idea for the Summer of Code came directly from Google's founders, Sergey Brin and Larry Page. From 2007 until 2009 Leslie Hawthorn, who has been involved in the project since 2006, was the program manager. From 2010 until 2015, Carol Smith was the program manager. In 2016, Stephanie Taylor took over management of the program. Overview Each year, the program follows a timeline. First, open-source organizations apply to participate. If accepted, each organization provides a list of initial project ideas and invites contributors to their development communities. Contributors who meet the eligibility criteria then submit up to 3 proposals that detail the software-coding projects that interest them. These applications are then evaluated by the corresponding mentoring organization, with mentors and organizational administrators reviewing the applications and deciding how many "slots" to request from Google, and which proposals to accept. Google allocates slots to each organization, taking into account organizational capacity, mentoring history, and the number of applications the organization has received. Finally, organizations select the top proposals to fill their slots and Google verifies eligibility before announcing accepted contributors. In the event of a single contributor being selected by more th
https://en.wikipedia.org/wiki/I%2A
i* (pronounced "i star") or the i* framework is a modeling language suitable for an early phase of system modeling in order to understand the problem domain. The i* modeling language makes it possible to model both as-is and to-be situations. The name i* refers to the notion of distributed intentionality which underlies the framework. It is an approach originally developed for modelling and reasoning about organizational environments and their information systems, composed of heterogeneous actors with different, often competing, goals that depend on each other to undertake their tasks and achieve these goals. It covers both actor-oriented and goal modeling. i* models answer the questions WHO and WHY, not WHAT. In contrast, the UML Use case approach covers only functional goals, with actors directly involved in operations (typically with software). The KAOS approach covers goals of all types but is less concerned with the intentionality of actors. Elements The model describes dependencies among actors. There are four elements to describe them: goal, soft goal, task and resource. The central concept in i* is in fact that of the intentional actor. Organizational actors are viewed as having intentional properties such as goals, beliefs, abilities, and commitments (the concept of distributed intentionality). Actors depend on each other for goals to be achieved, tasks to be performed and resources to be furnished. By depending on others, an actor may be able to achieve goals that are difficult or impossible to achieve on its own; on the other hand, an actor becomes vulnerable if the depended-on actors do not deliver. Actors are strategic in the sense that they are concerned about opportunities and vulnerabilities, and seek rearrangements of their environments that would better serve their interests by restructuring intentional relationships. Models The i* framework consists of two main modeling components: Strategic Dependency model (SD) An SD model describes a network of dependency relationshi
https://en.wikipedia.org/wiki/Term%20indexing
In computer science, a term index is a data structure to facilitate fast lookup of terms and clauses in a logic program, deductive database, or automated theorem prover. Overview Many operations in automatic theorem provers require search in huge collections of terms and clauses. Such operations typically fall into the following scheme. Given a collection L of terms (clauses) and a query term (clause) q, find in L some/all terms t related to q according to a certain retrieval condition. Most interesting retrieval conditions are formulated as existence of a substitution that relates in a special way the query and the retrieved objects. Here is a list of retrieval conditions frequently used in provers: term t is unifiable with term q, i.e., there exists a substitution θ such that tθ = qθ; term t is an instance of q, i.e., there exists a substitution θ such that t = qθ; term t is a generalisation of q, i.e., there exists a substitution θ such that tθ = q; clause t subsumes clause q, i.e., there exists a substitution θ such that tθ is a subset/submultiset of q; clause t is subsumed by q, i.e., there exists a substitution θ such that qθ is a subset/submultiset of t. More often than not, we are actually interested in finding the appropriate substitutions θ explicitly, together with the retrieved terms t, rather than just in establishing existence of such substitutions. Very often the sizes of term sets to be searched are large, the retrieval calls are frequent and the retrieval condition test is rather complex. In such situations linear search in L, when the retrieval condition is tested on every term from L, becomes prohibitively costly. To overcome this problem, special data structures, called indexes, are designed in order to support fast retrieval. Such data structures, together with the accompanying algorithms for index maintenance and retrieval, are called term indexing techniques. Classic indexing techniques discrimination trees substitution trees path indexing Substitution trees
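One of the retrieval conditions above — a term is an instance of another when some substitution for the second term's variables yields the first — can be sketched with a simple one-way matcher. The tuple encoding of terms and the "upper-case string means variable" convention are assumptions of this sketch, not part of any standard prover:

```python
# One-way matching: find a substitution mapping the pattern's variables
# onto subterms of `term`, so that the pattern instantiates to `term`.
# Terms are tuples like ("f", ("g", "X"), "a"); upper-case strings are
# variables, everything else is a function symbol or constant.
def match(pattern, term, subst=None):
    """Return a substitution dict, or None if no match exists."""
    subst = dict(subst or {})
    if isinstance(pattern, str) and pattern[:1].isupper():  # a variable
        if pattern in subst:                 # already bound: must agree
            return subst if subst[pattern] == term else None
        subst[pattern] = term
        return subst
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and len(pattern) == len(term):   # same symbol arity: recurse
        for p, t in zip(pattern, term):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

# f(g(b), a) is an instance of f(X, a), with X bound to g(b)
print(match(("f", "X", "a"), ("f", ("g", "b"), "a")))
```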
https://en.wikipedia.org/wiki/Epithelial%E2%80%93mesenchymal%20transition
The epithelial–mesenchymal transition (EMT) is a process by which epithelial cells lose their cell polarity and cell–cell adhesion, and gain migratory and invasive properties to become mesenchymal stem cells; these are multipotent stromal cells that can differentiate into a variety of cell types. EMT is essential for numerous developmental processes including mesoderm formation and neural tube formation. EMT has also been shown to occur in wound healing, in organ fibrosis and in the initiation of metastasis in cancer progression. Introduction Epithelial–mesenchymal transition was first recognized as a feature of embryogenesis by Betty Hay in the 1980s. EMT, and its reverse process, MET (mesenchymal–epithelial transition), are critical for development of many tissues and organs in the developing embryo, and numerous embryonic events such as gastrulation, neural crest formation, heart valve formation, secondary palate development, and myogenesis. Epithelial and mesenchymal cells differ in phenotype as well as function, though both share inherent plasticity. Epithelial cells are closely connected to each other by tight junctions, gap junctions and adherens junctions, have an apico-basal polarity, polarization of the actin cytoskeleton and are bound by a basal lamina at their basal surface. Mesenchymal cells, on the other hand, lack this polarization, have a spindle-shaped morphology and interact with each other only through focal points. Epithelial cells express high levels of E-cadherin, whereas mesenchymal cells express high levels of N-cadherin, fibronectin and vimentin. Thus, EMT entails profound morphological and phenotypic changes to a cell. Based on the biological context, EMT has been categorized into 3 types: developmental (Type I), fibrosis and wound healing (Type II), and cancer (Type III). Inducers Loss of E-cadherin is considered to be a fundamental event in EMT. Many transcription factors (TFs) that can repress E-cadherin directly or indirectly can be consi
https://en.wikipedia.org/wiki/Xubuntu
Xubuntu is a Canonical Ltd.–recognized, community-maintained derivative of the Ubuntu operating system. The name Xubuntu is a portmanteau of Xfce and Ubuntu, as it uses the Xfce desktop environment instead of Ubuntu's customized GNOME desktop. Xubuntu seeks to provide "a light, stable and configurable desktop environment with conservative workflows" using Xfce components. Xubuntu is intended for both new and experienced Linux users. Rather than explicitly targeting low-powered machines, it attempts to provide "extra responsiveness and speed" on existing hardware. History Xubuntu was originally intended to be released at the same time as Ubuntu 5.10 Breezy Badger, 13 October 2005, but the work was not complete by that date. Instead, the Xubuntu name was used for the xubuntu-desktop metapackage available through the Synaptic Package Manager, which installed the Xfce desktop. The first official Xubuntu release, led by Jani Monoses, appeared on 1 June 2006, as part of the Ubuntu 6.06 Dapper Drake line, which also included Kubuntu and Edubuntu. Cody A.W. Somerville developed a comprehensive strategy for the Xubuntu project named the Xubuntu Strategy Document. This document was approved by the Ubuntu Community Council in 2008. In February 2009 Mark Shuttleworth agreed that an official LXDE version of Ubuntu, Lubuntu, would be developed. The LXDE desktop uses the Openbox window manager and, like Xubuntu, is intended to be a low-system-requirement, low-RAM environment for netbooks, mobile devices and older PCs, competing with Xubuntu in that niche. In November 2009, Cody A.W. Somerville stepped down as the project leader and made a call for nominations to help find a successor. Lionel Le Folgoc was confirmed by the Xubuntu community as the new project leader on 10 January 2010 and requested the formation of an official Xubuntu council. Discussions regarding the future of Xubuntu's governance and the role a council might play in it were still ongoing. In Ma
https://en.wikipedia.org/wiki/Hyperion%20%28computer%29
The Hyperion is an early portable computer that vied with the Compaq Portable to be the first portable IBM PC compatible. It was marketed by Infotech Cie of Ottawa, a subsidiary of Bytec Management Corp., which acquired the designer and manufacturer, Dynalogic Corporation, in January 1983. In 1984, the design was licensed by Commodore International in a move that was forecast as a "radical shift of position" and a signal that Commodore would soon dominate the PC compatible market. Despite computers being "hand-assembled from kits" provided by Bytec and displayed alongside the Commodore 900 at a German trade show as their forthcoming first portable computer, it was never sold by Commodore, and some analysts downplayed the pact. The Hyperion was shipped in January 1983 at C$4995, two months ahead of the Compaq Portable. Brand name The name "Hyperion" was invented by Taylor-Sprules Corporation in Toronto. They also designed the retail packaging, all marketing materials and the tradeshow exhibit at Comdex in Atlantic City where the Hyperion was first introduced in 1982. Two prototypes were shown. The amber graphics screens and a built-in modem were notable features that attracted comment at the show. Design The machine featured 256 KB RAM, dual 360 KB 5.25" floppy disk drives, a graphics card compatible with both CGA and HGC, a video-out jack, a built-in 7-inch amber CRT, 300 bit/s modem, and an acoustic coupler. It included a version of MS-DOS called H-DOS and bundled word processor, database, and modem software. While the Hyperion weighed just eighteen pounds (8.2 kg), or about 2/3 the weight of the Compaq, it was not as reliable or as IBM compatible and was discontinued within two years. One significant difference from the IBM system was the use of a Zilog Z80-SIO chip instead of a National Semiconductor 8250 for serial communications. Interface H-DOS was remarkable and is of historical significance because it featured a simple menu system. The through keys benea
https://en.wikipedia.org/wiki/Logarithm%20of%20a%20matrix
In mathematics, a logarithm of a matrix is another matrix such that the matrix exponential of the latter matrix equals the original matrix. It is thus a generalization of the scalar logarithm and in some sense an inverse function of the matrix exponential. Not all matrices have a logarithm, and those matrices that do have a logarithm may have more than one logarithm. The study of logarithms of matrices leads to Lie theory since when a matrix has a logarithm then it is an element of a Lie group and the logarithm is the corresponding element of the vector space of the Lie algebra. Definition The exponential of a matrix A is defined by e^A = Σ_{n≥0} A^n/n!. Given a matrix B, another matrix A is said to be a matrix logarithm of B if e^A = B. Because the exponential function is not bijective for complex numbers (e.g. e^z = e^(z + 2πi) for any z), numbers can have multiple complex logarithms, and as a consequence of this, some matrices may have more than one logarithm, as explained below. If the matrix logarithm of B exists and is unique, then it is written as log B, in which case e^(log B) = B. Power series expression If B is sufficiently close to the identity matrix, then a logarithm of B may be computed by means of the following power series: log B = Σ_{k≥1} (−1)^(k+1) (B − I)^k / k = (B − I) − (B − I)^2/2 + (B − I)^3/3 − …. Specifically, if ‖B − I‖ < 1, then the preceding series converges and e^(log B) = B. Example: Logarithm of rotations in the plane The rotations in the plane give a simple example. A rotation of angle α around the origin is represented by the 2×2 matrix A = ((cos α, −sin α), (sin α, cos α)). For any integer n, the matrix B = (α + 2πn) ((0, −1), (1, 0)) is a logarithm of A, since e^B = A. Thus, the matrix A has infinitely many logarithms. This corresponds to the fact that the rotation angle is only determined up to multiples of 2π. In the language of Lie theory, the rotation matrices A are elements of the Lie group SO(2). The corresponding logarithms B are elements of the Lie algebra so(2), which consists of all skew-symmetric matrices. The matrix ((0, −1), (1, 0)) is a generator of the Lie algebra so(2). Existence The question of whether a matrix has a logarithm has the easiest answer when con
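The rotation example can be checked numerically: exponentiating α times the so(2) generator with the defining power series should reproduce the rotation matrix. A pure-Python sketch with a truncated series (30 terms is far more than needed for small α):

```python
import math

# Exponentiate B = alpha * J, where J = ((0, -1), (1, 0)) is the
# so(2) generator, using e^B = I + B + B^2/2! + ... (truncated).
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(B, terms=30):
    result = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starts at I
    power = [[1.0, 0.0], [0.0, 1.0]]    # B^n, starts at I
    fact = 1.0                          # n!
    for n in range(1, terms):
        power = mat_mul(power, B)
        fact *= n
        result = [[result[i][j] + power[i][j] / fact for j in range(2)]
                  for i in range(2)]
    return result

alpha = 0.75
B = [[0.0, -alpha], [alpha, 0.0]]       # alpha times the generator
R = mat_exp(B)                          # should be the rotation by alpha
print(R[0][0], math.cos(alpha))         # both approximately cos(alpha)
```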
https://en.wikipedia.org/wiki/Geomechanics
Geomechanics (from the Greek prefix geo- meaning "earth"; and "mechanics") is the study of the mechanical state of the Earth's crust and the processes occurring in it under the influence of natural physical factors. It involves the study of the mechanics of soil and rock. Background The two main disciplines of geomechanics are soil mechanics and rock mechanics. The former deals with soil behaviour from a small scale to a landslide scale. The latter deals with issues in geosciences related to rock mass characterization and rock mass mechanics, as applied to petroleum, mining and civil engineering problems such as borehole stability, tunnel design, rock breakage, slope stability, foundations, and rock drilling. Many aspects of geomechanics overlap with parts of geotechnical engineering, engineering geology, and geological engineering. Modern developments relate to seismology, continuum mechanics, discontinuum mechanics, and transport phenomena. Reservoir Geomechanics In the petroleum industry geomechanics is used to: predict pore pressure establish the integrity of the cap rock evaluate reservoir properties determine in-situ rock stress evaluate the wellbore stability calculate the optimal trajectory of the borehole predict and control sand occurrence in the well analyze the validity of drilling on depression characterize fractured reservoirs increase the efficiency of the development of fractured reservoirs evaluate hydraulic fracture stability evaluate the effect of liquid and steam injection into the reservoir analyze surface subsidence evaluate shear deformation and casing collapse To put into practice the geomechanics capabilities mentioned above, it is necessary to create a Geomechanical Model of the Earth (GEM) which consists of six key components that can be both calculated and estimated using field data: Vertical stress, σv (often called geostatic pressure) Maximum horizontal stress, σHmax Minimum horizontal stress, σHmin Stres
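The first GEM component, vertical (geostatic) stress, is commonly estimated by summing the weight of the overburden layers, σv = Σ ρ·g·Δz. A minimal sketch (the layer densities and thicknesses below are invented illustration values, not field data):

```python
# Vertical stress from overburden: sum rho * g * thickness over layers.
G = 9.81  # gravitational acceleration, m/s^2

def vertical_stress(layers):
    """layers: list of (density kg/m^3, thickness m); returns stress in Pa."""
    return sum(rho * G * dz for rho, dz in layers)

# e.g. 1 km of sediments (2300 kg/m^3) over 1 km of denser rock (2600 kg/m^3)
sv = vertical_stress([(2300.0, 1000.0), (2600.0, 1000.0)])
print(sv / 1e6)  # roughly 48 MPa of overburden at 2 km depth
```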
https://en.wikipedia.org/wiki/Conditioner%20%28chemistry%29
In chemistry and materials science, a conditioner is a substance or process that improves the quality of a given material. Conditioning agents used in skincare products are also known as moisturizers, and usually are composed of various oils and lubricants. One method of their use is as a coating of the substrate to alter the feel and appearance. For cosmetic products, this effect is a temporary one but can help to protect skin and hair from further damage. In cosmetic products the types of conditioning agents used are as follows: Emollients, usually oils, fats, waxes or silicones, which are hydrophobic molecules of natural or synthetic origin that coat the skin or hair and provide an occlusive surface that helps prevent further loss of moisture as well as providing slip and lubricity Humectants, typically polyols or glycols, that can hydrogen bond with water in the skin and hair and reduce water loss Cationic surfactants or polymers that are substantive to the slightly negatively-charged skin and hair and provide a film on the hair that limits further damage Fatty alcohols which are amphiphilic and provide a hydrophobic coating to skin and hair as well as building a lamellar structure in the cosmetic product that builds viscosity as well as improving product stability See also Chemical conditioning References Materials science Cosmetics chemicals
https://en.wikipedia.org/wiki/Standard%20test%20image
A standard test image is a digital image file used across different institutions to test image processing and image compression algorithms. By using the same standard test images, different labs are able to compare results, both visually and quantitatively. The images are in many cases chosen to represent natural or typical images that a class of processing techniques would need to deal with. Other test images are chosen because they present a range of challenges to image reconstruction algorithms, such as the reproduction of fine detail and textures, sharp transitions and edges, and uniform regions. Historical origins Test images as transmission system calibration material probably date back to the original Paris to Lyon fax link. Analogue fax equipment (and photographic equipment for the printing trade) was the largest user group of standardized calibration images until the coming of television and digital image transmission systems. Common test image resolutions The standard resolution of the images is usually 512×512 or 720×576. Most of these images are available as TIFF files from the University of Southern California's Signal and Image Processing Institute. Kodak has released 768×512 images, available as PNGs, that were originally distributed on Photo CD at higher resolution and are widely used for comparing image compression techniques. See also Carole Hersee FERET database (DARPA/NIST face recognition database) Lenna List of common 3D test models References External links The USC-SIPI Image Database — A large collection of standard test images Computer Vision website — A large collection of links to various test images Vision @ Reading — University of Reading's set of popular test images CIPR still images — Some sets of test images at Rensselaer Polytechnic Institute (including the Kodak set) True-color Kodak test images — The Kodak set in PNG format TESTIMAGES — Large collection of sample images designed for analysis and
https://en.wikipedia.org/wiki/Circles%20of%20Apollonius
The circles of Apollonius are any of several sets of circles associated with Apollonius of Perga, a renowned Greek geometer. Most of these circles are found in planar Euclidean geometry, but analogs have been defined on other surfaces; for example, counterparts on the surface of a sphere can be defined through stereographic projection. The main uses of this term are fivefold: Apollonius showed that a circle can be defined as the set of points in a plane that have a specified ratio of distances to two fixed points, known as foci. This Apollonian circle is the basis of the Apollonius pursuit problem. It is a particular case of the first family described in #2. The Apollonian circles are two families of mutually orthogonal circles. The first family consists of the circles with all possible distance ratios to two fixed foci (the same circles as in #1), whereas the second family consists of all possible circles that pass through both foci. These circles form the basis of bipolar coordinates. The circles of Apollonius of a triangle are three circles, each of which passes through one vertex of the triangle and maintains a constant ratio of distances to the other two. The isodynamic points and Lemoine line of a triangle can be found using these circles of Apollonius. Apollonius' problem is to construct circles that are simultaneously tangent to three specified circles. The solutions to this problem are sometimes called the circles of Apollonius. The Apollonian gasket—one of the first fractals ever described—is a set of mutually tangent circles, formed by solving Apollonius' problem iteratively. Apollonius' definition of a circle A circle is usually defined as the set of points P at a given distance r (the circle's radius) from a given point (the circle's center). However, there are other, equivalent definitions of a circle. Apollonius discovered that a circle could be defined as the set of points P that have a given ratio of distances k = d1/d2 to two given points
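For sense 1, the circle can be constructed directly: the two points that divide the segment between the foci internally and externally in the ratio k are the endpoints of a diameter of the Apollonius circle. A small Python sketch (the function name and the sample foci and ratio are illustrative assumptions):

```python
import math

def apollonius_circle(a, b, k):
    """Center and radius of the locus of points P with |PA| / |PB| = k,
    for foci a, b given as (x, y) tuples and a ratio k > 0, k != 1."""
    (ax, ay), (bx, by) = a, b
    ti = k / (k + 1)  # internal division of AB in ratio k
    te = k / (k - 1)  # external division of AB in ratio k
    p_int = (ax + ti * (bx - ax), ay + ti * (by - ay))
    p_ext = (ax + te * (bx - ax), ay + te * (by - ay))
    center = ((p_int[0] + p_ext[0]) / 2, (p_int[1] + p_ext[1]) / 2)
    radius = math.dist(p_int, p_ext) / 2
    return center, radius

center, radius = apollonius_circle((0, 0), (4, 0), 3)
print(center, radius)  # (4.5, 0.0) 1.5
```

Any point on the returned circle has distance ratio k to the foci; for instance (4.5, 1.5) lies at distance √22.5 from (0, 0) and √2.5 from (4, 0), a ratio of exactly 3.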
https://en.wikipedia.org/wiki/GCD%20domain
In mathematics, a GCD domain is an integral domain R with the property that any two elements have a greatest common divisor (GCD); i.e., there is a unique minimal principal ideal containing the ideal generated by two given elements. Equivalently, any two elements of R have a least common multiple (LCM). A GCD domain generalizes a unique factorization domain (UFD) to a non-Noetherian setting in the following sense: an integral domain is a UFD if and only if it is a GCD domain satisfying the ascending chain condition on principal ideals (and in particular if it is Noetherian). GCD domains appear in the following chain of class inclusions: Properties Every irreducible element of a GCD domain is prime. A GCD domain is integrally closed, and every nonzero element is primal. In other words, every GCD domain is a Schreier domain. For every pair of elements x, y of a GCD domain R, a GCD d of x and y and an LCM m of x and y can be chosen such that dm = xy, or stated differently, if x and y are nonzero elements and d is any GCD of x and y, then xy/d is an LCM of x and y, and vice versa. It follows that the operations of GCD and LCM make the quotient R/~ into a distributive lattice, where "~" denotes the equivalence relation of being associate elements. The equivalence between the existence of GCDs and the existence of LCMs is not a corollary of the similar result on complete lattices, as the quotient R/~ need not be a complete lattice for a GCD domain R. If R is a GCD domain, then the polynomial ring R[X1,...,Xn] is also a GCD domain. R is a GCD domain if and only if finite intersections of its principal ideals are principal. In particular, (a) ∩ (b) = (m), where m is the LCM of a and b. For a polynomial in X over a GCD domain, one can define its content as the GCD of all its coefficients. Then the content of a product of polynomials is the product of their contents, as expressed by Gauss's lemma, which is valid over GCD domains. Examples A unique factorization domain is a GCD domain. Am
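The property that xy/d is an LCM of x and y whenever d is a GCD can be checked concretely in the integers, the most familiar GCD domain, where it reads gcd(x, y) · lcm(x, y) = |xy|. An illustrative Python check (the function name is chosen here, not from the source):

```python
import math

def lcm_via_gcd(x, y):
    # In Z: if d = gcd(x, y), then |x * y| / d is the (positive) lcm.
    return abs(x * y) // math.gcd(x, y)

x, y = 12, 18
d = math.gcd(x, y)
m = lcm_via_gcd(x, y)
print(d, m, d * m == abs(x * y))  # 6 36 True
```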
https://en.wikipedia.org/wiki/SBCS
SBCS, or single-byte character set, is used to refer to character encodings that use exactly one byte for each graphic character. An SBCS can accommodate a maximum of 256 symbols, and is useful for scripts that do not have many symbols or accented letters such as the Latin, Greek and Cyrillic scripts used mainly for European languages. Examples of SBCS encodings include ISO/IEC 646, the various ISO 8859 encodings, and the various Microsoft/IBM code pages. The term SBCS is commonly contrasted against the terms DBCS (double-byte character set) and TBCS (triple-byte character set), as well as MBCS (multi-byte character set). The multi-byte character sets are used to accommodate languages with scripts that have large numbers of characters and symbols, predominantly Asian languages such as Chinese, Japanese, and Korean. These are sometimes referred to by the acronym CJK. In these computing systems, SBCSs are traditionally associated with half-width characters, so-called because such SBCS characters would traditionally occupy half the width of a DBCS character on a fixed-width computer terminal or text screen. See also DBCS TBCS MBCS Variable-width encoding References Character encoding
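The one-byte-per-character property is easy to observe in practice. The sketch below uses Python's built-in codecs: Latin-1 (ISO 8859-1) assigns a character to every one of the 256 byte values, so each character encodes to exactly one byte, while a multi-byte encoding such as UTF-8 may need more.

```python
text = "café"  # 'é' (U+00E9) is in Latin-1's 256-character repertoire

raw = text.encode("latin-1")
print(len(text), len(raw))            # 4 4 -> one byte per character
print(raw.decode("latin-1") == text)  # True -> lossless round trip

# A multi-byte encoding needs two bytes for 'é':
print(len(text.encode("utf-8")))      # 5

# An SBCS can hold at most 256 distinct characters:
print(len({bytes([b]).decode("latin-1") for b in range(256)}))  # 256
```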
https://en.wikipedia.org/wiki/Portable%20storage%20device
A portable storage device (PSD) is a compact plug-and-play mass storage device designed to hold a large volume of digital data of any kind. This is slightly different from a portable media player, which is designed to store only the music and video files that its internal reader software can play. Most modern PSDs are dedicated solid-state drives (SSDs) that are connected to a computer and powered via USB ports. Some PSDs, usually those from before the wide adoption of SSDs, are hard disk drives modified via the installation of a disk enclosure, and require an additional AC adapter because the power required to operate the drive typically exceeds what can be provided by the USB port. PSDs, while much bigger and heavier than ultracompact flash drives such as USB flash drives and memory cards, offer significantly greater storage capacity, yet are still convenient enough for carrying around when travelling or as a readily accessible offline backup storage option, especially in situations where online storage alternatives such as network-attached storage and cloud storage are unavailable, unreliable or unsafe. Photography One type of data that may be stored is digital photographs (RAW data), transferred from a digital camera. Many PSDs will connect directly to a camera and copy the images, or they may provide a slot for a memory card to plug in, with or without a card reader device. Some early models allow the user to review the images on a colour screen, though only on models at the top end of the market. See also Computer storage Mass Storage Digital Class (MSDC) References Computer storage media
https://en.wikipedia.org/wiki/Electronic%20symbol
An electronic symbol is a pictogram used to represent various electrical and electronic devices or functions, such as wires, batteries, resistors, and transistors, in a schematic diagram of an electrical or electronic circuit. These symbols are largely standardized internationally today, but may vary from country to country or by engineering discipline, based on traditional conventions. Standards for symbols The graphic symbols used for electrical components in circuit diagrams are covered by national and international standards, in particular: IEC 60617 (also known as BS 3939). There is also IEC 61131-3 – for ladder-logic symbols. JIC JIC (Joint Industrial Council) symbols as approved and adopted by the NMTBA (National Machine Tool Builders Association). They have been extracted from the Appendix of the NMTBA Specification EGPl-1967. ANSI Y32.2-1975 (also known as IEEE Std 315-1975 or CSA Z99-1975). IEEE Std 91/91a: graphic symbols for logic functions (used in digital electronics). It is referenced in ANSI Y32.2/IEEE Std 315. Australian Standard AS 1102 (based on a slightly modified version of IEC 60617; withdrawn without replacement with a recommendation to use IEC 60617). The number of standards leads to confusion and errors. Symbol usage is sometimes unique to engineering disciplines, and national or local variations to international standards exist. For example, lighting and power symbols used as part of architectural drawings may be different from symbols for devices used in electronics. Common electronic symbols Symbols shown are typical examples, not a complete list. Traces Grounds The shorthand for ground is GND. Optionally, the triangle in the middle symbol may be filled in. Sources Resistors It is very common for potentiometer and rheostat symbols to be used for many types of variable resistors, including trimmers. Capacitors Diodes Optionally, the triangle in these symbols may be filled in. Note: The words anode and cathode typically
https://en.wikipedia.org/wiki/Intel%20Turbo%20Memory
Intel Turbo Memory is a technology introduced by Intel Corporation that uses NAND flash memory modules to reduce the time it takes for a computer to power up, access programs, and write data to the hard drive. During development, the technology was codenamed Robson. It is supported by most of the Core 2 Mobile chipset series, but not by the newer Core i Series mobile chipsets. Overview The technology was publicly introduced on October 24, 2005, at the Intel Developer Forum (IDF) in Taiwan when a laptop that booted up almost immediately was demonstrated. The technology attempts to decrease hard drive usage by moving frequently accessed data over to the flash memory. Flash memory can be accessed faster than hard drives and requires less power to operate, thereby allowing laptops to operate faster while also being more power efficient. The Turbo Memory cache connects to a motherboard via a mini-PCIe interface. It is designed to leverage features introduced in Windows Vista, namely ReadyBoost (a supplementation of RAM-based disk caching by dedicated files on flash drives, except on the 512 MB version) and/or ReadyDrive (a non-volatile caching solution, i.e. an implementation of a hybrid drive, as long as the main storage isn't already one); as ReadyBoost is backed by temporary files on generic storage volumes, it is unofficially possible to dedicate this space to general-purpose storage. Turbo Memory is not compatible with previous versions of Windows (the only driver for Windows 2000 and XP is a no-op driver that merely acknowledges the device's existence); Linux support is limited to a third-party experimental MTD driver that only supports 2 GB modules. Availability Intel Turbo Memory was made available on May 9, 2007, on Intel's Santa Rosa platform and their Crestline (GM965) chipsets. Intel Turbo Memory 2.0 was introduced on July 15, 2008, on Intel's Montevina platform and their Cantiga (GM47) chipsets. It is available in 1, 2, and 4 GB modules. It is supported in the In
https://en.wikipedia.org/wiki/Plane%20partition
In mathematics and especially in combinatorics, a plane partition is a two-dimensional array of nonnegative integers π(i, j) (with positive integer indices i and j) that is nonincreasing in both indices. This means that π(i, j) ≥ π(i, j + 1) and π(i, j) ≥ π(i + 1, j) for all i and j. Moreover, only finitely many of the π(i, j) may be nonzero. Plane partitions are a generalization of partitions of an integer. A plane partition may be represented visually by the placement of a stack of π(i, j) unit cubes above the point (i, j) in the plane, giving a three-dimensional solid as shown in the picture. The image has matrix form Plane partitions are also often described by the positions of the unit cubes. From this point of view, a plane partition can be defined as a finite subset T of positive integer lattice points (i, j, k), such that if (r, s, t) lies in T and if (i, j, k) satisfies i ≤ r, j ≤ s, and k ≤ t, then (i, j, k) also lies in T. The sum of a plane partition is n = Σ π(i, j), taken over all i and j. The sum describes the number of cubes of which the plane partition consists. Much interest in plane partitions concerns the enumeration of plane partitions in various classes. The number of plane partitions with sum n is denoted by PL(n). For example, there are six plane partitions with sum 3, so PL(3) = 6. Plane partitions may be classified by how symmetric they are. Many symmetric classes of plane partitions are enumerated by simple product formulas. Generating function of plane partitions The generating function for PL(n) is Σ PL(n) x^n = Π (1 − x^k)^(−k), the product taken over all k ≥ 1. It is sometimes referred to as the MacMahon function, as it was discovered by Percy A. MacMahon. This formula may be viewed as the 2-dimensional analogue of Euler's product formula for the number of integer partitions of n. There is no analogous formula known for partitions in higher dimensions (i.e., for solid partitions). The asymptotics for plane partitions were first calculated by E. M. Wright. One obtains, for large n, that Evaluating numerically yields Plane partitions in a box Around 1896, MacMahon set up the generating function of plane pa
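The MacMahon generating function gives a direct way to tabulate PL(n): multiply the truncated series for each factor 1/(1 − x^k), taken k times, up to the desired degree. A short illustrative Python sketch (the function name is an assumption):

```python
def plane_partition_counts(n_max):
    """Return [PL(0), ..., PL(n_max)] from the MacMahon function
    prod_{k >= 1} (1 - x^k)^(-k), truncated at degree n_max."""
    coeffs = [1] + [0] * n_max
    for k in range(1, n_max + 1):
        # Multiplying by 1/(1 - x^k) is the prefix recurrence
        # c[i] += c[i - k]; repeating it k times gives (1 - x^k)^(-k).
        for _ in range(k):
            for i in range(k, n_max + 1):
                coeffs[i] += coeffs[i - k]
    return coeffs

print(plane_partition_counts(6))  # [1, 1, 3, 6, 13, 24, 48]
```

Note that PL(3) = 6 matches the count quoted in the article.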
https://en.wikipedia.org/wiki/Quantum%20calculus
Quantum calculus, sometimes called calculus without limits, is equivalent to traditional infinitesimal calculus without the notion of limits. It defines "q-calculus" and "h-calculus", where h ostensibly stands for Planck's constant while q stands for quantum. The two parameters are related by the formula q = e^(iħ), where ħ = h/2π is the reduced Planck constant. Differentiation In the q-calculus and h-calculus, differentials of functions are defined as d_q(f(x)) = f(qx) − f(x) and d_h(f(x)) = f(x + h) − f(x) respectively. Derivatives of functions are then defined as fractions by the q-derivative D_q(f(x)) = d_q(f(x))/d_q(x) = (f(qx) − f(x))/((q − 1)x) and by D_h(f(x)) = d_h(f(x))/d_h(x) = (f(x + h) − f(x))/h. In the limit, as h goes to 0, or equivalently as q goes to 1, these expressions take on the form of the derivative of classical calculus. Integration q-integral A function F(x) is a q-antiderivative of f(x) if DqF(x) = f(x). The q-antiderivative (or q-integral) is denoted by ∫ f(x) d_q(x) and an expression for F(x) can be found from the formula F(x) = (1 − q) Σ_{j≥0} x q^j f(x q^j), which is called the Jackson integral of f(x). For 0 ≤ |q| < 1, the series converges to a function F(x) on an interval (0,A] if |f(x)x^α| is bounded on the interval for some 0 ≤ α < 1. The q-integral is a Riemann–Stieltjes integral with respect to a step function having infinitely many points of increase at the points q^j, with the jump at the point q^j being q^j. If we call this step function g_q(t) then dg_q(t) = d_q(t). h-integral A function F(x) is an h-antiderivative of f(x) if DhF(x) = f(x). The h-antiderivative (or h-integral) is denoted by ∫ f(x) d_h(x). If a and b differ by an integer multiple of h then the definite integral is given by a Riemann sum of f(x) on the interval [a, b] partitioned into subintervals of width h. Example The derivative of the function x^n (for some positive integer n) in the classical calculus is nx^(n−1). The corresponding expressions in q-calculus and h-calculus are D_q(x^n) = [n]_q x^(n−1), with the q-bracket [n]_q = (q^n − 1)/(q − 1), and D_h(x^n) = n x^(n−1) + (n(n−1)/2) x^(n−2) h + ⋯ + h^(n−1), respectively. The expression [n]_q x^(n−1) is then the q-calculus analogue of the simple power rule for positive integral powers. In this sense, the function x^n is still nice in the q-calculus, but rather ugly in the h-calculus – the h-calculus a
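The q-power rule D_q x^n = [n]_q x^(n−1), with [n]_q = (q^n − 1)/(q − 1), is easy to verify numerically. A small Python sketch (all names are illustrative):

```python
def q_derivative(f, x, q):
    """D_q f(x) = (f(q x) - f(x)) / ((q - 1) x), the q-derivative."""
    return (f(q * x) - f(x)) / ((q - 1) * x)

def q_bracket(n, q):
    """The q-bracket [n]_q = (q**n - 1) / (q - 1) = 1 + q + ... + q**(n-1)."""
    return (q**n - 1) / (q - 1)

n, q, x = 4, 2.0, 3.0
lhs = q_derivative(lambda t: t**n, x, q)   # D_q x^4 at x = 3
rhs = q_bracket(n, q) * x**(n - 1)         # [4]_q * x^3
print(lhs, rhs)  # 405.0 405.0
```

As q approaches 1, [n]_q approaches n and the classical power rule is recovered.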
https://en.wikipedia.org/wiki/Bay%20%28architecture%29
In architecture, a bay is the space between architectural elements, or a recess or compartment. The term bay comes from Old French baie, meaning an opening or hole. Examples The spaces between posts, columns, or buttresses in the length of a building, the divisions in the width being called aisles. This meaning also applies to overhead vaults (between ribs), in a building using a vaulted structural system. For example, the Gothic architecture period's Chartres Cathedral has a nave (main interior space) that is "seven bays long." Similarly in timber framing a bay is the space between posts in the transverse direction of the building and aisles run longitudinally. Where there are no columns or other divisions but there are regularly spaced windows, each window in a wall is counted as a bay. For example, Mulberry Fields in Maryland, US, a Georgian-style building, is described as "5 bay by 2 bay", meaning "5 windows at the front and 2 windows at the sides". A recess in a wall, such as a bay window. A division of space such as an animal stall, sick bay, or bay platform. The space between joists or rafters, a joist bay or rafter bay. East Asia The Japanese ken and Korean kan are both bays themselves and measurements based upon their number and standard placement. Under the Joseon, Koreans were allocated a set number of bays in their residential architecture based upon their class. See also Architectural elements References Architectural elements Windows Arches and vaults Building engineering
https://en.wikipedia.org/wiki/Canon%20T90
The Canon T90, introduced in 1986, was the top of the line in Canon's T series of 35 mm Single-lens reflex (SLR) cameras. It is the last professional-level manual-focus camera from Canon, and the last professional camera to use the Canon FD lens mount. Although it was overtaken by the autofocus revolution and Canon's new, incompatible EOS (Electro-Optical System) after only a year in production, the T90 pioneered many concepts seen in high-end Canon cameras up to the present day, particularly the user interface, industrial design, and the high level of automation. Due to its ruggedness, the T90 was nicknamed "the tank" by Japanese photojournalists. Many still rate it highly even 30+ years after its introduction. Design Previous Canon cameras had been wholly in-house design projects. For the T90, Canon brought in German industrial designer Luigi Colani in a collaboration with Canon's own designers. The final design was composed from Colani's ideas by Kunihisa Ito of ODS Co. Ltd., incorporating Colani's distinctive "bio-form" curvaceous shapes. Canon considered Colani's contribution important enough to present him with the first production T90 body, engraved with his name. Computer-aided design techniques were introduced to Canon for the T90, as was the use of computer-controlled (CNC) milling machines to make the molding dies for the shell. Much work went into human factors engineering to create an ergonomic user interface for the camera. The form of previous cameras was largely dictated by the required locations of mechanical controls on the body, such as the film advance lever, rewind crank, shutter speed dial, shutter release, etc. On the T90, the film transport control is no longer required, while the others are no longer mechanically linked. This gave the designers more freedom to shape the camera to make it easier to control and hold, and to place controls in a way that suited the user rather than a mechanical design. The T90 introduced features s
https://en.wikipedia.org/wiki/Agglutination%20%28biology%29
Agglutination is the clumping of particles. The word agglutination comes from the Latin agglutinare (glueing to). Agglutination is a reaction in which particles (such as red blood cells or bacteria) suspended in a liquid collect into clumps, usually as a response to a specific antibody. This occurs in biology in two main examples: The clumping of cells such as bacteria or red blood cells in the presence of an antibody or complement. The antibody or other molecule binds multiple particles and joins them, creating a large complex. This increases the efficacy of microbial elimination by phagocytosis, as large clumps of bacteria can be eliminated in one pass, versus the elimination of single microbial antigens. When people are given blood transfusions of the wrong blood group, the antibodies react with the incorrectly transfused blood group and as a result, the erythrocytes clump up and stick together, causing them to agglutinate. The coalescing of small particles that are suspended in a solution; these larger masses are then (usually) precipitated. In immunohematology Hemagglutination Hemagglutination is the process by which red blood cells agglutinate, meaning clump or clog. The agglutinin involved in hemagglutination is called hemagglutinin. In cross-matching, donor red blood cells and the recipient's serum or plasma are incubated together. If agglutination occurs, this indicates that the donor and recipient blood types are incompatible. When a person produces antibodies against their own red blood cells, as in cold agglutinin disease and other autoimmune conditions, the cells may agglutinate spontaneously. This is called autoagglutination and it can interfere with laboratory tests such as blood typing and the complete blood count. Leukoagglutination Leukoagglutination occurs when the particles involved are white blood cells. An example is the PH-L form of phytohaemagglutinin. In microbiology Agglutination is commonly used as a method of identifying specific bac
https://en.wikipedia.org/wiki/Hemagglutination
Hemagglutination, or haemagglutination, is a specific form of agglutination that involves red blood cells (RBCs). It has two common uses in the laboratory: blood typing and the quantification of virus dilutions in a haemagglutination assay. Blood typing Blood type can be determined by using antibodies that bind to the A or B blood group antigens in a sample of blood. For example, if antibodies that bind the A blood group are added and agglutination occurs, the blood is either type A or type AB. To determine between type A or type AB, antibodies that bind the B group are added and if agglutination does not occur, the blood is type A. If agglutination does not occur with either antibodies that bind to type A or type B antigens, then neither antigen is present on the blood cells, which means the blood is type O. In blood grouping, the patient's serum is tested against RBCs of known blood groups and also the patient's RBCs are tested against known serum types. In this way the patient's blood group is confirmed from both RBCs and serum. A direct Coombs test is also done on the patient's blood sample in case there are any confounding antibodies. Viral hemagglutination assay Many viruses attach to molecules present on the surface of RBCs. A consequence of this is that at certain concentrations, a viral suspension may bind together (agglutinate) the RBCs, thus preventing them from settling out of suspension. Since agglutination is not linked to infectivity, attenuated viruses can therefore be used in assays while an additional assay such as a plaque assay must be used to determine infectivity. By serially diluting a virus suspension into an assay tray (a series of wells of uniform volume) and adding a standard amount of blood cells, an estimation of the number of virus particles can be made. While less accurate than a plaque assay, it is cheaper and quicker (taking just 30 minutes). This assay may be modified to include the addition of an antiserum. By using a standa
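The forward-typing logic described above is a simple decision table: which of the two antisera (anti-A, anti-B) agglutinate the sample determines the ABO group. A minimal Python sketch of that logic (the function name is an assumption):

```python
def abo_type(clumps_with_anti_a, clumps_with_anti_b):
    """Forward ABO grouping from agglutination with anti-A and anti-B sera."""
    if clumps_with_anti_a and clumps_with_anti_b:
        return "AB"   # both antigens present
    if clumps_with_anti_a:
        return "A"
    if clumps_with_anti_b:
        return "B"
    return "O"        # neither antigen present

print(abo_type(True, False))   # A
print(abo_type(False, False))  # O
```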
https://en.wikipedia.org/wiki/Gaussian%20polar%20coordinates
In the theory of Lorentzian manifolds, spherically symmetric spacetimes admit a family of nested round spheres. In each of these spheres, every point can be carried to any other by an appropriate rotation about the center of symmetry. There are several different types of coordinate chart which are adapted to this family of nested spheres, each introducing a different kind of distortion. The best known alternative is the Schwarzschild chart, which correctly represents distances within each sphere, but (in general) distorts radial distances and angles. Another popular choice is the isotropic chart, which correctly represents angles (but in general distorts both radial and transverse distances). A third choice is the Gaussian polar chart, which correctly represents radial distances, but distorts transverse distances and angles. There are other possible charts; the article on spherically symmetric spacetime describes a coordinate system with intuitively appealing features for studying infalling matter. In all cases, the nested geometric spheres are represented by coordinate spheres, so we can say that their roundness is correctly represented. Definition In a Gaussian polar chart (on a static spherically symmetric spacetime), the metric (aka line element) takes the form ds² = −f(r)² dt² + dr² + g(r)² (dθ² + sin²θ dφ²). Depending on context, it may be appropriate to regard f and g as undetermined functions of the radial coordinate r. Alternatively, we can plug in specific functions (possibly depending on some parameters) to obtain a Gaussian polar coordinate chart on a specific Lorentzian spacetime. Applications Gaussian charts are often less convenient than Schwarzschild or isotropic charts. However, they have found occasional application in the theory of static spherically symmetric perfect fluids. See also Static spacetime Static spherically symmetric perfect fluids Schwarzschild coordinates Isotropic coordinates Frame fields in general relativity for more about frame fields and coframe fields. Coordina
https://en.wikipedia.org/wiki/Chevalley%E2%80%93Warning%20theorem
In number theory, the Chevalley–Warning theorem implies that certain polynomial equations in sufficiently many variables over a finite field have solutions. It was proved by Ewald Warning (1935), and a slightly weaker form of the theorem, known as Chevalley's theorem, was proved by Claude Chevalley (1935). Chevalley's theorem implied Artin's and Dickson's conjecture that finite fields are quasi-algebraically closed fields. Statement of the theorems Let F be a finite field and f_1, ..., f_r be a set of polynomials in F[x_1, ..., x_n] such that the number of variables satisfies n > d_1 + ⋯ + d_r, where d_j is the total degree of f_j. The theorems are statements about the solutions of the following system of polynomial equations f_j(x_1, ..., x_n) = 0 for j = 1, ..., r. The Chevalley–Warning theorem states that the number of common solutions is divisible by the characteristic p of F. Or in other words, the cardinality of the vanishing set of f_1, ..., f_r is 0 modulo p. The Chevalley theorem states that if the system has the trivial solution (0, ..., 0), that is, if the polynomials have no constant terms, then the system also has a non-trivial solution. Chevalley's theorem is an immediate consequence of the Chevalley–Warning theorem since p is at least 2. Both theorems are best possible in the sense that, given any n, the list f_j = x_j^(q−1) for j = 1, ..., n has total degree n(q − 1) and only the trivial solution. Alternatively, using just one polynomial, we can take f1 to be the degree n polynomial given by the norm of x1a1 + ... + xnan where the elements a form a basis of the finite field of order pn. Warning proved another theorem, known as Warning's second theorem, which states that if the system of polynomial equations has the trivial solution, then it has at least q^(n−d) solutions, where q is the size of the finite field and d = d_1 + ⋯ + d_r. Chevalley's theorem also follows directly from this. Proof of Warning's theorem Remark: If i < q − 1 then the sum of x^i over all x in F vanishes, so the sum over F^n of any polynomial in x_1, ..., x_n of degree less than n(q − 1) also vanishes. The total number of common solutions modulo p of f_1, ..., f_r is equal to the sum over all x in F^n of the product over j of (1 − f_j(x)^(q−1)), because each term is 1 for a solution and 0 otherwise. If the sum of the degrees of the polynomials is less
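The divisibility statement is easy to confirm by brute force for a small case. The sketch below (all names and the sample polynomial are illustrative assumptions) counts the zeros of a single degree-2 polynomial in n = 3 > 2 variables over F_5 and checks that the count is divisible by p = 5:

```python
from itertools import product

def count_common_zeros(p, polys, n):
    """Number of points of F_p^n at which every polynomial vanishes."""
    return sum(
        all(f(*x) % p == 0 for f in polys)
        for x in product(range(p), repeat=n)
    )

p = 5
f = lambda x, y, z: x * y + z * z + 1   # total degree 2 < 3 variables
N = count_common_zeros(p, [f], 3)
print(N, N % p)  # 30 0
```

Here the polynomial has a constant term, so Chevalley's theorem does not apply, but the Chevalley–Warning divisibility still holds: 30 ≡ 0 (mod 5).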
https://en.wikipedia.org/wiki/Cognitive%20robotics
Cognitive Robotics or Cognitive Technology is a subfield of robotics concerned with endowing a robot with intelligent behavior by providing it with a processing architecture that will allow it to learn and reason about how to behave in response to complex goals in a complex world. Cognitive robotics may be considered the engineering branch of embodied cognitive science and embodied embedded cognition, consisting of Robotic Process Automation, Artificial Intelligence, Machine Learning, Deep Learning, Optical Character Recognition, Image Processing, Process Mining, Analytics, Software Development and System Integration. Core issues While traditional cognitive modeling approaches have assumed symbolic coding schemes as a means for depicting the world, translating the world into these kinds of symbolic representations has proven to be problematic if not untenable. Perception and action and the notion of symbolic representation are therefore core issues to be addressed in cognitive robotics. Starting point Cognitive robotics views human or animal cognition as a starting point for the development of robotic information processing, as opposed to more traditional Artificial Intelligence techniques. Target robotic cognitive capabilities include perception processing, attention allocation, anticipation, planning, complex motor coordination, reasoning about other agents and perhaps even about their own mental states. Robotic cognition embodies the behavior of intelligent agents in the physical world (or a virtual world, in the case of simulated cognitive robotics). Ultimately the robot must be able to act in the real world. Learning techniques Motor Babble A preliminary robot learning technique called motor babbling involves correlating pseudo-random complex motor movements by the robot with resulting visual and/or auditory feedback such that the robot may begin to expect a pattern of sensory feedback given a pattern of motor output. Desired sensory feedback may then b
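The motor-babbling idea can be caricatured in a few lines: issue random commands, record the sensed outcome of each, then reuse the memory to pick a command for a desired outcome. This toy sketch is purely illustrative (the linear "plant" and all names are assumptions, not from the source):

```python
import random

def plant(command):
    """Stand-in for the robot's body and environment: an unknown,
    fixed mapping from a motor command to a sensory outcome."""
    return 3 * command + 1

random.seed(0)
memory = {}
for _ in range(50):                  # babbling phase: random commands
    c = random.randint(-10, 10)
    memory[c] = plant(c)             # remember command -> outcome

def act_for(goal):
    """Exploit the memory: the command whose outcome was closest to goal."""
    return min(memory, key=lambda c: abs(memory[c] - goal))

best = act_for(10)
print(best, memory[best])
```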
https://en.wikipedia.org/wiki/Golden%20Telecom
Golden Telecom is an internet services provider in Russia and the Commonwealth of Independent States (CIS). It was acquired by VimpelCom in 2007. History Golden Telecom was founded in 1996 by Global TeleSystems ("GTS"), the owner of EBONE, one of Europe's leading broadband optical and IP network service providers (a Tier 1 network). GTS had its IPO on the NASDAQ in 1998. In 2000, a new management team, including Robert A. Schriesheim as CFO, was brought in to help restructure and refocus the company. GTS was a pan-European communications services provider, backed by Alan B. Slifka and affiliates of George Soros and Soros Private Equity, with revenues of over $1 billion and operations in 20 countries in Europe. In October 1999, GTS carried out an IPO on the NASDAQ of Golden Telecom, which held its communications assets in Russia, with GTS retaining a 65% interest. GTS was essentially a portfolio of communications assets in Western and Eastern Europe which went through an IPO in 1998 and subsequently experienced challenges with an over-levered balance sheet as Europe went through deregulation and from the overall fall-out of the dot-com boom and bust, which impacted its customer base. In 2001, to facilitate the sale of GTS, Schriesheim helped lead it through a pre-arranged filing under Chapter 11 of the United States Bankruptcy Code (U.S.B.C.) and, in prearranged proceedings, a petition for surseance (moratorium), offering a composition, in the Netherlands to restructure its debt of more than $2 billion. All such proceedings were approved, confirmed and completed by March 31, 2002, as part of the sale of the company. David A. Stewart served as Senior Vice President, Chief Financial Officer and Treasurer at that time and left Golden Telecom in 2004. Golden Telecom was acquired by VimpelCom in 2007. List of acquisitions by Golden Telecom July 2000 — IT Infoart Stars July 2000 — search engine aport.ru April 2001 – internet service
https://en.wikipedia.org/wiki/Run-time%20algorithm%20specialization
In computer science, run-time algorithm specialization is a methodology for creating efficient algorithms for costly computation tasks of certain kinds. The methodology originates in the field of automated theorem proving and, more specifically, in the Vampire theorem prover project. The idea is inspired by the use of partial evaluation in optimising program translation. Many core operations in theorem provers exhibit the following pattern. Suppose that we need to execute some algorithm alg(A, B) in a situation where a value of A is fixed for potentially many different values of B. In order to do this efficiently, we can try to find a specialization of alg for every fixed A, i.e., such an algorithm alg_A, that executing alg_A(B) is equivalent to executing alg(A, B). The specialized algorithm may be more efficient than the generic one, since it can exploit some particular properties of the fixed value A. Typically, alg_A(B) can avoid some operations that alg(A, B) would have to perform, if they are known to be redundant for this particular parameter A. In particular, we can often identify some tests that are true or false for A, unroll loops and recursion, etc. Difference from partial evaluation The key difference between run-time specialization and partial evaluation is that the values of A on which alg is specialised are not known statically, so the specialization takes place at run-time. There is also an important technical difference. Partial evaluation is applied to algorithms explicitly represented as codes in some programming language. At run-time, we do not need any concrete representation of alg. We only have to imagine alg when we program the specialization procedure. All we need is a concrete representation of the specialized version alg_A. This also means that we cannot use any universal methods for specializing algorithms, which is usually the case with partial evaluation. Instead, we have to program a specialization procedure for every particular algorithm alg. An important advantage of doing so is that we can use some
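The pattern can be illustrated with a small invented example (not from the Vampire project): a wildcard string matcher where the pattern plays the role of the fixed parameter A. The specialization, built at run time, precomputes which positions actually need checking, so the wildcard test disappears from the inner loop.

```python
def matches(pattern, s):
    # Generic algorithm alg(A, B): '?' in the pattern matches anything.
    return len(pattern) == len(s) and all(
        p == '?' or p == c for p, c in zip(pattern, s))

def specialize(pattern):
    # Build alg_A at run time: only non-wildcard positions remain,
    # so the per-call work no longer inspects the pattern at all.
    checks = [(i, p) for i, p in enumerate(pattern) if p != '?']
    n = len(pattern)
    def specialized(s):
        return len(s) == n and all(s[i] == p for i, p in checks)
    return specialized

m = specialize("a?c")
print(m("abc"), m("axc"), m("abd"))  # True True False
```

Here `specialized` is a closure rather than generated code; real systems may instead emit an interpreted instruction sequence or machine code for the specialized version.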
https://en.wikipedia.org/wiki/Noetherian%20topological%20space
In mathematics, a Noetherian topological space, named for Emmy Noether, is a topological space in which closed subsets satisfy the descending chain condition. Equivalently, we could say that the open subsets satisfy the ascending chain condition, since they are the complements of the closed subsets. The Noetherian property of a topological space can also be seen as a strong compactness condition, namely that every open subset of such a space is compact, and in fact it is equivalent to the seemingly stronger statement that every subset is compact. Definition A topological space X is called Noetherian if it satisfies the descending chain condition for closed subsets: for any sequence Y_1 ⊇ Y_2 ⊇ ⋯ of closed subsets Y_i of X, there is an integer m such that Y_m = Y_{m+1} = ⋯. Properties A topological space X is Noetherian if and only if every subspace of X is compact (i.e., X is hereditarily compact), and if and only if every open subset of X is compact. Every subspace of a Noetherian space is Noetherian. The continuous image of a Noetherian space is Noetherian. A finite union of Noetherian subspaces of a topological space is Noetherian. Every Hausdorff Noetherian space is finite with the discrete topology. Proof: Every subset of X is compact in a Hausdorff space, hence closed. So X has the discrete topology, and being compact, it must be finite. Every Noetherian space X has a finite number of irreducible components. If the irreducible components are X_1, …, X_n, then X = X_1 ∪ ⋯ ∪ X_n, and none of the components X_i is contained in the union of the other components. From algebraic geometry Many examples of Noetherian topological spaces come from algebraic geometry, where for the Zariski topology an irreducible set has the intuitive property that any closed proper subset has smaller dimension. Since dimension can only 'jump down' a finite number of times, and algebraic sets are made up of finite unions of irreducible sets, descending chains of Zariski closed sets must eventually be constant. A more algebraic way to see
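The descending chain condition and its standard algebraic source can be written out explicitly; in LaTeX notation:

```latex
% Noetherian: every descending chain of closed sets stabilizes.
Y_1 \supseteq Y_2 \supseteq Y_3 \supseteq \cdots
\quad\Longrightarrow\quad
\exists\, m \;:\; Y_m = Y_{m+1} = Y_{m+2} = \cdots

% In the Zariski topology on affine n-space over a field k, closed
% sets are of the form V(I) for ideals I of k[x_1,\dots,x_n], and
% taking ideals reverses inclusions:
V(I_1) \supseteq V(I_2) \supseteq \cdots
\;\Longleftrightarrow\;
I(V(I_1)) \subseteq I(V(I_2)) \subseteq \cdots
% The ascending chain of ideals on the right stabilizes because
% k[x_1,\dots,x_n] is a Noetherian ring (Hilbert basis theorem),
% so the descending chain of closed sets on the left does too.
```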
https://en.wikipedia.org/wiki/Chief%20cell
In human anatomy, there are three types of chief cells: the gastric chief cell, the parathyroid chief cell, and the type 1 chief cells found in the carotid body. Cell types The gastric chief cell (also known as a zymogenic cell or peptic cell) is a cell in the stomach that releases pepsinogen and chymosin. Pepsinogen is activated into the digestive enzyme pepsin when it comes in contact with hydrochloric acid produced by gastric parietal cells. This type of cell also secretes gastric lipase enzymes, which help digest triglycerides into free fatty acids and di- and mono-glycerides. There is also evidence that the gastric chief cell secretes leptin in response to the presence of food in the stomach; leptin has been found in the pepsinogen granules of chief cells. Gastric pit cells are replaced every 2–4 days. This high rate of turnover protects the epithelial lining of the stomach from both the proteolytic action of pepsin and the acid produced by parietal cells. Gastric chief cells are much longer lived and are believed to differentiate from stem cells located higher in the gastric unit, in the isthmus. These stem cells differentiate into mucous neck cells in the isthmus and transition into chief cells as they migrate towards the base. Since the mucous neck cells do not divide as they become chief cells, this process is known as transdifferentiation. The gene Mist1 has been shown to regulate mucous neck cell to chief cell transdifferentiation and plays a role in the normal development of the chief cell organelles and structures. The parathyroid chief cell is the primary cell of the parathyroid gland. It produces and secretes parathyroid hormone (PTH) in response to low calcium levels. PTH plays an important role in regulating blood calcium levels by raising the amount of calcium in the blood. Parathyroid tissue seems to have a low turn-over rate. Histology Gastric chief cells are epithelial cells which are found within the gastric uni
https://en.wikipedia.org/wiki/Trusted%20Platform%20Module
Trusted Platform Module (TPM, also known as ISO/IEC 11889) is an international standard for a secure cryptoprocessor, a dedicated microcontroller designed to secure hardware through integrated cryptographic keys. The term can also refer to a chip conforming to the standard. One of Windows 11's system requirements is TPM 2.0. Microsoft has stated that this is to help increase security against firmware attacks. History Trusted Platform Module (TPM) was conceived by a computer industry consortium called Trusted Computing Group (TCG). It evolved into TPM Main Specification Version 1.2 which was standardized by International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) in 2009 as ISO/IEC 11889:2009. TPM Main Specification Version 1.2 was finalized on March 3, 2011, completing its revision. On April 9, 2014, the Trusted Computing Group announced a major upgrade to their specification entitled TPM Library Specification 2.0. The group continues work on the standard incorporating errata, algorithmic additions and new commands, with its most recent edition published as 2.0 in November 2019. This version became ISO/IEC 11889:2015. When a new revision is released it is divided into multiple parts by the Trusted Computing Group. Each part consists of a document that makes up the whole of the new TPM specification. Part 1 – Architecture (renamed from Design Principles) Part 2 – Structures of the TPM Part 3 – Commands Part 4 – Supporting Routines (added in TPM 2.0) Overview Trusted Platform Module provides A hardware random number generator Facilities for the secure generation of cryptographic keys for limited uses. Remote attestation: Creates a nearly unforgeable hash key summary of the hardware and software configuration. One could use the hash to verify that the hardware and software have not been changed. The software in charge of hashing the setup determines the extent of the summary. Binding: Encrypts data using t
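The "nearly unforgeable hash key summary" behind remote attestation rests on the PCR (Platform Configuration Register) extend operation. A hedged sketch of that operation using SHA-256 follows; a real TPM performs this in hardware, and the measured component names here are made up for illustration:

```python
import hashlib

def pcr_extend(pcr, measurement):
    # TPM-style extend: new PCR value = H(old PCR value || measurement).
    # PCRs can only be extended, never set directly, which is what
    # makes the final value hard to forge.
    return hashlib.sha256(pcr + measurement).digest()

pcr = bytes(32)  # SHA-256 PCRs start at all zeros
for component in [b"firmware", b"bootloader", b"kernel"]:
    pcr = pcr_extend(pcr, hashlib.sha256(component).digest())

# The result depends on every measurement *and* their order, so a
# remote verifier comparing it to an expected value can detect any
# change anywhere in the measured boot chain.
print(pcr.hex())
```

Changing any measured component, or reordering the measurements, yields an unrelated final digest.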
https://en.wikipedia.org/wiki/Image%20scaling
In computer graphics and digital imaging, image scaling refers to the resizing of a digital image. In video technology, the magnification of digital material is known as upscaling or resolution enhancement. When scaling a vector graphic image, the graphic primitives that make up the image can be scaled using geometric transformations, with no loss of image quality. When scaling a raster graphics image, a new image with a higher or lower number of pixels must be generated. In the case of decreasing the pixel number (scaling down) this usually results in a visible quality loss. From the standpoint of digital signal processing, the scaling of raster graphics is a two-dimensional example of sample-rate conversion, the conversion of a discrete signal from a sampling rate (in this case the local sampling rate) to another. Mathematical Image scaling can be interpreted as a form of image resampling or image reconstruction from the view of the Nyquist sampling theorem. According to the theorem, downsampling to a smaller image from a higher-resolution original can only be carried out after applying a suitable 2D anti-aliasing filter to prevent aliasing artifacts. The image is reduced to the information that can be carried by the smaller image. In the case of up sampling, a reconstruction filter takes the place of the anti-aliasing filter. A more sophisticated approach to upscaling treats the problem as an inverse problem, solving the question of generating a plausible image, which, when scaled down, would look like the input image. A variety of techniques have been applied for this, including optimization techniques with regularization terms and the use of machine learning from examples. Algorithms An image size can be changed in several ways. Nearest-neighbor interpolation One of the simpler ways of increasing image size is nearest-neighbor interpolation, replacing every pixel with the nearest pixel in the output; for upscaling this means multiple pixels of the sam
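Nearest-neighbor interpolation as described above can be written in a few lines of pure Python on a tiny raster (a real implementation would operate on image buffers, e.g. via a library, but the index mapping is the same):

```python
def scale_nearest(img, new_w, new_h):
    # img: list of rows, each row a list of pixel values.
    # Each output pixel copies the nearest source pixel.
    old_h, old_w = len(img), len(img[0])
    return [[img[(y * old_h) // new_h][(x * old_w) // new_w]
             for x in range(new_w)]
            for y in range(new_h)]

src = [[1, 2],
       [3, 4]]
# Upscaling 2x2 -> 4x4 duplicates each pixel into a 2x2 block:
print(scale_nearest(src, 4, 4))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

The blocky result shows why nearest-neighbor is fast but produces visible "jaggies" compared to the filtered methods discussed later.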
https://en.wikipedia.org/wiki/Concordance%20%28genetics%29
In genetics, concordance is the probability that a pair of individuals will both have a certain characteristic (phenotypic trait) given that one of the pair has the characteristic. Concordance can be measured with concordance rates, reflecting the odds of one person having the trait if the other does. Important clinical examples include the chance of offspring having a certain disease if the mother has it, if the father has it, or if both parents have it. Concordance among siblings is similarly of interest: what are the odds of a subsequent offspring having the disease if an older child does? In research, concordance is often discussed in the context of both members of a pair of twins. Twins are concordant when both have or both lack a given trait. The ideal example of concordance is that of identical twins, because the genome is the same, an equivalence that helps in discovering causation via deconfounding, regarding genetic effects versus epigenetic and environmental effects (nature versus nurture). In contrast, discordance occurs when a similar trait is not shared by the persons. Studies of twins have shown that genetic traits of monozygotic twins are fully concordant, whereas in dizygotic twins, half of genetic traits are concordant, while the other half are discordant. Discordant rates that are higher than concordant rates express the influence of the environment on twin traits. Studies A twin study compares the concordance rate of identical twins to that of fraternal twins. This can help suggest whether a disease or a certain trait has a genetic cause. Controversial uses of twin data have looked at concordance rates for homosexuality and intelligence. Other studies have involved looking at the genetic and environmental factors that can lead to increased LDL in women twins. Because identical twins are genetically virtually identical, it follows that a genetic pattern carried by one would very likely also be carried by the other. If a characteristic ident
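Pairwise concordance (the fraction of pairs with at least one affected member in which both members are affected) can be computed directly. Note that other definitions exist, such as probandwise concordance, and the data below are invented for illustration:

```python
def concordance_rate(pairs):
    # pairs: list of (bool, bool) tuples, trait presence in each twin.
    affected = [p for p in pairs if p[0] or p[1]]
    both = [p for p in affected if p[0] and p[1]]
    return len(both) / len(affected)

# Hypothetical data: monozygotic pairs show higher concordance
# than dizygotic pairs, suggesting a genetic contribution.
mono = [(True, True), (True, True), (True, False), (False, False)]
di   = [(True, True), (True, False), (False, True), (False, False)]

print(round(concordance_rate(mono), 3))  # 0.667
print(round(concordance_rate(di), 3))    # 0.333
```

A twin study then compares the two rates: a markedly higher monozygotic rate is consistent with (though not proof of) genetic causation.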
https://en.wikipedia.org/wiki/Basal%20cell
Basal cell may refer to: the epidermal cell in the stratum basale the airway basal cell, an epithelial cell in the respiratory epithelium
https://en.wikipedia.org/wiki/Nanophase%20material
Nanophase materials are materials that have grain sizes under 100 nanometres. They have different mechanical and optical properties compared to large-grained materials of the same chemical composition. Transparency and different transparent colours can be achieved with nanophase materials by varying the grain size. Nanophase materials Nanophase metals are usually many times harder, but more brittle, than regular metals. Nanophase copper is a superhard material. Nanophase aluminum. Nanophase iron is iron with a grain size in the nanometer range; nanocrystalline iron has a tensile strength of around 6 GPa, twice that of the best maraging steels. Nanophase ceramics are usually more ductile and less brittle than regular ceramics. Footnotes External links Creating Nanophase Materials. Scientific American (subscription required) Nanophase Materials, Michigan Tech Research on Nanophase Materials, Louisiana State University Materials
https://en.wikipedia.org/wiki/Thermophotovoltaic%20energy%20conversion
Thermophotovoltaic (TPV) energy conversion is a direct conversion process from heat to electricity via photons. A basic thermophotovoltaic system consists of a hot object emitting thermal radiation and a photovoltaic cell similar to a solar cell but tuned to the spectrum emitted by the hot object. As the emitters in TPV systems generally operate at much lower temperatures than the Sun, their efficiencies tend to be low. Offsetting this through the use of multi-junction cells based on non-silicon materials is common, but generally very expensive. This currently limits TPV to niche roles like spacecraft power and waste heat collection from larger systems like steam turbines. General concept PV Typical photovoltaics work by creating a p–n junction near the front surface of a thin semiconductor material. When photons above the bandgap energy of the material hit atoms within the bulk lower layer, below the junction, an electron is photoexcited and becomes free of its atom. The junction creates an electric field that accelerates the electron forward within the cell until it passes the junction and is free to move to the thin electrodes patterned on the surface. Connecting a wire from the front to the rear allows the electrons to flow back into the bulk and complete the circuit. Photons with less energy than the bandgap do not eject electrons. Photons with energy above the bandgap will eject higher-energy electrons, which tend to thermalize within the material and lose their extra energy as heat. If the cell's bandgap is raised, the electrons that are emitted will have higher energy when they reach the junction and thus result in a higher voltage, but this will reduce the number of electrons emitted, as more photons will be below the bandgap energy, and thus generate a lower current. As electrical power is the product of voltage and current, there is a sweet spot where the total output is maximized. Terrestrial solar radiation is typically characterized by a standard known as Air M
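The trade-off between emitter temperature and cell bandgap can be illustrated numerically. The sketch below integrates the Planck blackbody distribution to estimate what fraction of an emitter's radiated power is carried by photons above a given bandgap; the temperatures and bandgaps are illustrative assumptions, not values from the text:

```python
import math

def fraction_above(E_g_eV, T):
    # Fraction of blackbody radiant power in photons with E > E_g.
    # In units x = E/kT the spectrum is x^3 / (e^x - 1), whose
    # integral over (0, inf) is pi^4 / 15.
    kT = 8.617e-5 * T  # Boltzmann constant in eV/K

    def f(x):
        return x**3 / math.expm1(x)

    a = E_g_eV / kT
    # Trapezoid rule on [a, a+40]; the integrand decays like e^-x,
    # so the truncated tail is negligible.
    n, b = 4000, a + 40.0
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h / (math.pi**4 / 15)

# At a 1500 K emitter temperature, a low-bandgap cell (~0.6 eV,
# GaSb-like) captures far more of the spectrum than a silicon-like
# 1.1 eV cell, which is why TPV favors non-silicon materials.
print(fraction_above(0.6, 1500), fraction_above(1.1, 1500))
```

This counts only power above the bandgap; a full efficiency estimate would also account for thermalization losses and photon recycling.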
https://en.wikipedia.org/wiki/Horizontal%20blanking%20interval
Horizontal blanking interval refers to a part of the process of displaying images on a computer monitor or television screen via raster scanning. CRT screens display images by moving beams of electrons very quickly across the screen. Once the beam of the monitor has reached the edge of the screen, it is switched off, and the deflection circuit voltages (or currents) are returned to the values they had for the other edge of the screen; this would have the effect of retracing the screen in the opposite direction, so the beam is turned off during this time. This part of the line display process is the horizontal blank. In detail, the horizontal blanking interval consists of: front porch – blank while still moving right, past the end of the scanline, sync pulse – blank while rapidly moving left; in terms of amplitude, "blacker than black". back porch – blank while moving right again, before the start of the next scanline. Colorburst occurs during the back porch, and unblanking happens at the end of the back porch. In the NTSC television standard, horizontal blanking occupies about 10.9 μs out of every 63.6 μs scan line (17.2%). In PAL, it occupies about 12 μs out of every 64 μs scan line (18.8%). Some modern monitors and video cards support reduced blanking, standardized with Coordinated Video Timings. In the PAL television standard, the blanking level corresponds to the black level, whilst other standards, most notably NTSC, set the black level slightly above the blanking level on a pedestal. HBlank effects Some graphics systems can count horizontal blanks and change how the display is generated during this blank time in the signal; this is called a raster effect, of which an example is raster bars. In video games, the horizontal blanking interval was used to create some notable effects. Some methods of parallax scrolling use a raster effect to simulate depth in consoles that do not natively support multiple background layers or do not support enough background layers to achieve the desired
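The blanking percentages follow directly from the line timings; a quick arithmetic check, with the timing values assumed from commonly cited NTSC and PAL figures:

```python
# Assumed line timings in microseconds (commonly cited values):
ntsc_line, ntsc_blank = 63.556, 10.9
pal_line, pal_blank = 64.0, 12.0

# Blanking as a percentage of the whole scan line:
print(round(100 * ntsc_blank / ntsc_line, 1))  # 17.2
print(round(100 * pal_blank / pal_line, 1))    # 18.8
```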
https://en.wikipedia.org/wiki/Constraint-based%20Routing%20Label%20Distribution%20Protocol
Constraint-based Routing Label Distribution Protocol (CR-LDP) is a control protocol used in some computer networks. As of February 2003, the IETF MPLS working group deprecated CR-LDP and decided to focus purely on RSVP-TE. It is an extension of the Label Distribution Protocol (LDP), one of the protocols in the Multiprotocol Label Switching architecture. CR-LDP extends LDP's capabilities, for example by allowing paths to be set up beyond what the routing protocol alone provides. For instance, a label-switched path can be set up based on explicit route constraints, quality of service constraints, and other constraints. Constraint-based routing (CR) is a mechanism used to meet traffic engineering requirements. These requirements are met by extending LDP for support of constraint-based routed label-switched paths (CR-LSPs). Other uses for CR-LSPs include MPLS-based virtual private networks. CR-LDP is almost the same as basic LDP in packet structure, but it contains some extra TLVs (type-length-value objects) which set up the constraint-based LSP. References MPLS networking Network protocols
https://en.wikipedia.org/wiki/Scalar%20processor
Scalar processors are a class of computer processors that process only one data item at a time. Typical data items include integers and floating point numbers. Classification A scalar processor is classified as a single instruction, single data (SISD) processor in Flynn's taxonomy. The Intel 486 is an example of a scalar processor. It is to be contrasted with a vector processor, where a single instruction operates simultaneously on multiple data items (and thus is referred to as a single instruction, multiple data (SIMD) processor). The difference is analogous to the difference between scalar and vector arithmetic. The term scalar in computing dates to the 1970s and 1980s, when vector processors were first introduced. It was originally used to distinguish the older designs from the new vector processors. Superscalar processor A superscalar processor (such as the Intel P5) may execute more than one instruction during a clock cycle by simultaneously dispatching multiple instructions to redundant functional units on the processor. Each functional unit is not a separate CPU core but an execution resource within a single CPU, such as an arithmetic logic unit, a bit shifter, or a multiplier. The Cortex-M7, like many consumer CPUs today, is a superscalar processor. Scalar data type A scalar data type, or just scalar, is any non-composite value. Generally, all basic primitive data types are considered scalar: The boolean data type (bool) Numeric types (int, the floating point types float and double) Character types (char and string) See also Instruction pipeline Parallel computing References Central processing unit
https://en.wikipedia.org/wiki/Kuramoto%20model
The Kuramoto model (or Kuramoto–Daido model), first proposed by Yoshiki Kuramoto, is a mathematical model used in describing synchronization. More specifically, it is a model for the behavior of a large set of coupled oscillators. Its formulation was motivated by the behavior of systems of chemical and biological oscillators, and it has found widespread applications in areas such as neuroscience and oscillating flame dynamics. Kuramoto was quite surprised when the behavior of some physical systems, namely coupled arrays of Josephson junctions, followed his model. The model makes several assumptions, including that there is weak coupling, that the oscillators are identical or nearly identical, and that interactions depend sinusoidally on the phase difference between each pair of objects. Definition In the most popular version of the Kuramoto model, each of the N oscillators is considered to have its own intrinsic natural frequency ω_i, and each is coupled equally to all other oscillators. Surprisingly, this fully nonlinear model can be solved exactly in the limit of infinite oscillators, N → ∞; alternatively, using self-consistency arguments one may obtain steady-state solutions of the order parameter. The most popular form of the model has the following governing equations: dθ_i/dt = ω_i + (K/N) Σ_{j=1..N} sin(θ_j − θ_i), i = 1, …, N, where the system is composed of N limit-cycle oscillators, with phases θ_i and coupling constant K. Noise can be added to the system. In that case, the original equation is altered to dθ_i/dt = ω_i + ζ_i + (K/N) Σ_{j=1..N} sin(θ_j − θ_i), where ζ_i is the fluctuation and a function of time. If we consider the noise to be white noise, then ⟨ζ_i(t)⟩ = 0 and ⟨ζ_i(t) ζ_j(t′)⟩ = 2D δ_{ij} δ(t − t′), with D denoting the strength of noise. Transformation The transformation that allows this model to be solved exactly (at least in the N → ∞ limit) is as follows: Define the "order" parameters r and ψ as r e^{iψ} = (1/N) Σ_{j=1..N} e^{iθ_j}. Here r represents the phase-coherence of the population of oscillators and ψ indicates the average phase. Multiplying this equation with e^{−iθ_i} and only considering the imaginary part gives dθ_i/dt = ω_i + K r sin(ψ − θ_i). Thus the oscillators' equations are no
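The mean-field form dθ_i/dt = ω_i + K r sin(ψ − θ_i) is straightforward to integrate numerically. Below is a forward-Euler sketch with arbitrarily chosen parameters (Gaussian frequencies, noise omitted): below the critical coupling the order parameter r stays near zero, while above it the population synchronizes and r grows toward one.

```python
import cmath
import math
import random

def simulate(N=500, K=2.0, dt=0.01, steps=2000, seed=1):
    # Returns the final order parameter r after integrating the
    # mean-field Kuramoto equations with forward Euler.
    rng = random.Random(seed)
    omega = [rng.gauss(0.0, 0.5) for _ in range(N)]      # natural freqs
    theta = [rng.uniform(0.0, 2 * math.pi) for _ in range(N)]
    for _ in range(steps):
        z = sum(cmath.exp(1j * t) for t in theta) / N    # z = r e^{i psi}
        r, psi = abs(z), cmath.phase(z)
        theta = [t + dt * (w + K * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return abs(sum(cmath.exp(1j * t) for t in theta) / N)

# Weak coupling: incoherent (r small). Strong coupling: synchronized.
print(simulate(K=0.1) < 0.3 < simulate(K=2.0))
```

The threshold between the two regimes is the critical coupling K_c, which for a unimodal frequency distribution g is K_c = 2 / (π g(0)).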
https://en.wikipedia.org/wiki/Functional%20group%20%28ecology%29
A functional group is a set of species, or a collection of organisms, that share similar characteristics within a community. Ideally, the species perform equivalent roles because of the demands of their environment, rather than because of a common ancestor or evolutionary relationship; this can produce analogous structures rather than homologous ones. More specifically, these organisms have similar effects on external factors of the system they inhabit. Because most of these organisms share an ecological niche, it is reasonable to assume they require similar structures in order to achieve the greatest fitness: the ability to reproduce successfully and to sustain life, for instance by avoiding the same predators and exploiting the same food sources. Scientific investigation Rather than resting on a body of theory, functional groups are directly observed and determined by researchers. It is important that this information is witnessed first-hand in order to serve as usable evidence. Behavior and overall contribution to other organisms are common key points to look for. Researchers use the corresponding observed traits to link genetic profiles to one another. Although the life-forms themselves differ, variables based on overall function and performance are interchangeable. These groups occupy equivalent positions in energy flow, holding key positions within food chains and within relationships in their environment(s). What is an ecosystem and why is that important? An ecosystem is the biological organization encompassing the environmental factors, abiotic and biotic, that interact simultaneously. Whether producer or consumer, every organism occupies a critical position in the ongoing survival of its surroundings. Accordingly, a functional group shares a very specific role within any given e
https://en.wikipedia.org/wiki/Plesiomorphy%20and%20symplesiomorphy
In phylogenetics, a plesiomorphy ("near form") and symplesiomorphy are synonyms for an ancestral character shared by all members of a clade, which does not distinguish the clade from other clades. Plesiomorphy, symplesiomorphy, apomorphy, and synapomorphy all denote a trait shared between species because they share an ancestral species. Apomorphic and synapomorphic characteristics convey much information about evolutionary clades and can be used to define taxa. However, plesiomorphic and symplesiomorphic characteristics cannot. The term symplesiomorphy was introduced in 1950 by German entomologist Willi Hennig. Examples A backbone is a plesiomorphic trait shared by birds and mammals, and does not help in placing an animal in one or the other of these two clades. Birds and mammals share this trait because both clades are descended from the same far distant ancestor. Other clades, e.g. snakes, lizards, turtles, fish and frogs, all have backbones, yet none of them are birds or mammals. Being a hexapod is a plesiomorphic trait shared by ants and beetles, and does not help in placing an animal in one or the other of these two clades. Ants and beetles share this trait because both clades are descended from the same far distant ancestor. Other clades, e.g. bugs, flies, bees, aphids, and many more, are all hexapods, yet none of them are ants or beetles. Elytra are a synapomorphy for placing any living species into the beetle clade, but elytra are plesiomorphic between clades of beetles; e.g. they do not distinguish the dung beetles from the horned beetles. The metapleural gland is a synapomorphy for placing any living species into the ant clade. Feathers are a synapomorphy for placing any living species into the bird clade; hair is a synapomorphy for placing any living species into the mammal clade. Note that some mammal species have lost their hair, so the absence of hair does not exclude a species from being a mammal. Another mammalian synapomorphy is milk. All mam
https://en.wikipedia.org/wiki/Polyvinylpolypyrrolidone
Polyvinylpolypyrrolidone (polyvinyl polypyrrolidone, PVPP, crospovidone, crospolividone, or E1202) is a highly cross-linked modification of polyvinylpyrrolidone (PVP). The cross-linked form of PVP is used as a disintegrant (see also excipients) in pharmaceutical tablets. PVPP is a highly cross-linked version of PVP, making it insoluble in water, though it still absorbs water and swells very rapidly generating a swelling force. This property makes it useful as a disintegrant in tablets. PVPP can be used as a drug, taken as a tablet or suspension to absorb compounds (so-called endotoxins) that cause diarrhea. (Cf. bone char, charcoal.) It is also used as a fining to extract impurities (via agglomeration followed by filtration). It is used in winemaking. Using the same principle it is used to remove polyphenols in beer production and thus clear beers with stable foam are produced. One such commercial product is called Polyclar. PVPP forms bonds similar to peptidic bonds in protein (especially, like proline residues) and that is why it can precipitate tannins the same way as proteins do. PVPP has E number code E1202 and is used as a stabiliser. See also Polyethylene glycol-polyvinyl alcohol References Vinyl polymers Food additives Pyrrolidones Excipients E-number additives
https://en.wikipedia.org/wiki/Rejuvenation
Rejuvenation is a medical discipline focused on the practical reversal of the aging process. Rejuvenation is distinct from life extension. Life extension strategies often study the causes of aging and try to oppose those causes in order to slow aging. Rejuvenation is the reversal of aging and thus requires a different strategy, namely repair of the damage that is associated with aging or replacement of damaged tissue with new tissue. Rejuvenation can be a means of life extension, but most life extension strategies do not involve rejuvenation. Historical and cultural background Various myths tell the stories about the quest for rejuvenation. It was believed that magic or intervention of a supernatural power can bring back youth and many mythical adventurers set out on a journey to do that, for themselves, their relatives or some authority that sent them anonymously. An ancient Chinese emperor actually sent out ships of young men and women to find a pearl that would rejuvenate him. This led to a myth among modern Chinese that Japan was founded by these people. In some religions, people were to be rejuvenated after death prior to placing them in heaven. The stories continued well into the 16th century. The Spanish explorer Juan Ponce de León led an expedition around the Caribbean islands and into Florida to find the Fountain of Youth. Led by the rumors, the expedition continued the search and many perished. The Fountain was nowhere to be found as locals were unaware of its exact location. Since the emergence of philosophy, sages and self-proclaimed wizards always made enormous efforts to find the secret of youth, both for themselves and for their noble patrons and sponsors. It was widely believed that some potions may restore the youth. Another commonly cited approach was attempting to transfer the essence of youth from young people to old. Some examples of this approach were sleeping with virgins or children (sometimes literally sleeping, not necessarily having
https://en.wikipedia.org/wiki/Ball%20detent
A ball detent is a simple mechanical arrangement used to hold a moving part in a temporarily fixed position relative to another part. Usually the moving parts slide with respect to each other, or one part rotates within the other. The ball is a single, usually metal, sphere that slides within a bored cylinder against the pressure of a spring, which pushes the ball against the other part of the mechanism, which carries the detent - this can be as simple as a hole of smaller diameter than the ball. When the hole is in line with the cylinder, the ball is partially pushed into the hole under spring pressure, holding the parts at that position. Additional force applied to the moving parts will compress the spring, depressing the ball back into its cylinder and allowing the parts to move to another position. Applications Ball detents are commonly found in the selector mechanism of a gearbox, holding the selector rods in the correct position to engage the desired gear. Other applications include clutches that slip at a preset torque, and calibrated ball detent mechanisms are typically found in a torque wrench. Ball detents are one of the mechanisms often used in folding knives to prevent unwanted opening of the blade when carrying. Ball detents were used in the Curta mechanical calculator to enforce discrete values. Use in paintball markers The term "ball detent" is also used when referring to a mechanism in paintball markers designed to prevent the paintball from rolling out of the firing chamber before being fired. Some designs are similar to those outlined above, with a cartridge utilizing a ball bearing in a bore with spring pressure. The cartridge is installed perpendicular to the barrel bore axis, just ahead of where the ball rests before being fired. Other designs use elastic rubber protrusions that block the ball until it is pushed over them by the bolt. Some designs use precisely calibrated rings or "barrel sizers" that are selected to have a s
https://en.wikipedia.org/wiki/Scatchard%20equation
The Scatchard equation is an equation used in molecular biology to calculate the affinity and number of binding sites of a receptor for a ligand. It is named after the American chemist George Scatchard. Equation Throughout this article, [RL] denotes the concentration of a receptor-ligand complex, [R] the concentration of free receptor, and [L] the concentration of free ligand (so that the total concentrations of the receptor and ligand are [R]+[RL] and [L]+[RL], respectively). Let n be the number of binding sites for ligand on each receptor molecule, and let r̄ represent the average number of ligands bound to a receptor. Let Kd denote the dissociation constant between the ligand and receptor. The Scatchard equation is given by r̄/[L] = (n − r̄)/Kd. By plotting r̄/[L] versus r̄, the Scatchard plot shows that the slope equals −1/Kd while the x-intercept equals the number of ligand binding sites n. Derivation n=1 Ligand When each receptor has a single ligand binding site, the system is described by R + L ⇌ RL, with an on-rate (kon) and off-rate (koff) related to the dissociation constant through Kd=koff/kon. When the system equilibrates, kon[R][L] = koff[RL], so that the average number of ligands bound to each receptor is given by r̄ = [RL]/([R]+[RL]) = [L]/(Kd+[L]), which is the Scatchard equation for n=1. n=2 Ligands When each receptor has two ligand binding sites, the system is governed by R + L ⇌ RL and RL + L ⇌ RL2. At equilibrium, the average number of ligands bound to each receptor is given by r̄ = ([RL] + 2[RL2])/([R] + [RL] + [RL2]) = 2[L]/(Kd + [L]), which is equivalent to the Scatchard equation. General Case of n Ligands For a receptor with n binding sites that independently bind to the ligand, each binding site will have an average occupancy of [L]/(Kd + [L]). Hence, by considering all n binding sites, there will be r̄ = n[L]/(Kd + [L]) ligands bound to each receptor on average, from which the Scatchard equation follows. Problems with the method The Scatchard method is less used nowadays because of the availability of computer programs that directly fit parameters to binding data. Mathematically, the Scatchard equation is related to Eadie-Hofst
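The slope and intercept of the Scatchard plot can be recovered numerically. The sketch below is illustrative only: the values Kd = 2 and n = 4 and the ligand concentrations are made up, ideal occupancies are generated from r̄ = n[L]/(Kd + [L]), and an ordinary least-squares line is fitted to the resulting plot.

```python
# Scatchard analysis sketch on synthetic (invented) binding data.
Kd = 2.0   # dissociation constant (arbitrary units) -- assumed value
n = 4      # binding sites per receptor -- assumed value

L = [0.5 * i for i in range(1, 20)]      # free ligand concentrations
r = [n * x / (Kd + x) for x in L]        # occupancy r = n[L]/(Kd + [L])

# Scatchard plot: y = r/[L] versus x = r; slope = -1/Kd, x-intercept = n.
xs, ys = r, [ri / Li for ri, Li in zip(r, L)]

# Ordinary least-squares fit y = a*x + b.
m = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
a = (m * sxy - sx * sy) / (m * sxx - sx * sx)
b = (sy - a * sx) / m

print(round(-1 / a, 6))   # recovered Kd -> 2.0
print(round(-b / a, 6))   # x-intercept  -> n = 4.0
```

With ideal data the fit recovers the assumed parameters exactly; real binding data would scatter around the line, which is one reason direct nonlinear fitting is now preferred.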
https://en.wikipedia.org/wiki/Napoleon%27s%20problem
Napoleon's problem is a compass construction problem. In it, a circle and its center are given. The challenge is to divide the circle into four equal arcs using only a compass. Napoleon was known to be an amateur mathematician, but it is not known if he either created or solved the problem. Napoleon's friend the Italian mathematician Lorenzo Mascheroni introduced the limitation of using only a compass (no straight edge) into geometric constructions. The challenge above is actually easier than the real Napoleon's problem, which consists of finding the center of a given circle with compass alone. The following sections will describe solutions to three problems and proofs that they work. Georg Mohr's 1672 book "Euclides Danicus" anticipated Mascheroni's idea, though the book was only rediscovered in 1928. Dividing a given circle into four equal arcs given its centre Centred on any point X on circle C, draw an arc through O (the centre of C) which intersects C at points V and Y. Do the same centred on Y through O, intersecting C at X and Z. Note that the line segments OV, OX, OY, OZ, VX, XY, YZ have the same length, all distances being equal to the radius of the circle C. Now draw an arc centred on V which goes through Y and an arc centred on Z which goes through X; call where these two arcs intersect T. Note that the distances VY and XZ are √3 times the radius of the circle C. Put the compass radius equal to the distance OT (√2 times the radius of the circle C) and draw an arc centred on Z which intersects the circle C at U and W. UVWZ is a square and the arcs of C UV, VW, WZ, and ZU are each equal to a quarter of the circumference of C. Finding the centre of a given circle Let (C) be the circle, whose centre is to be found. Let A be a point on (C). A circle (C1) centered at A meets (C) at B and B'. Two circles (C2) centered at B and B', with radius AB, cross again at point C. A circle (C3) centered at C with radius AC meets (C1) at D and D'. Two cir
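The four-arcs construction can be checked numerically. The coordinate choice below is mine, not part of the problem: X is placed at angle 0 on a unit circle centred at O, after which the construction forces the angles of the other labelled points.

```python
import math

def pt(deg, rad=1.0):
    """Point on a circle of radius rad, centred at the origin O."""
    a = math.radians(deg)
    return (rad * math.cos(a), rad * math.sin(a))

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# With X at angle 0, the first two arcs place V, Y, Z at these angles.
X, V, Y, Z = pt(0), pt(60), pt(-60), pt(-120)

# Chords VY and XZ are sqrt(3) times the radius.
assert abs(dist(V, Y) - math.sqrt(3)) < 1e-12
assert abs(dist(X, Z) - math.sqrt(3)) < 1e-12

# T is an intersection of the arcs of radius sqrt(3) centred on V and Z;
# since V and Z are antipodal, T lies at distance sqrt(3 - 1) = sqrt(2) from O.
T = pt(-30, math.sqrt(2))
assert abs(dist(T, V) - math.sqrt(3)) < 1e-12
assert abs(dist(T, Z) - math.sqrt(3)) < 1e-12

# The arc centred on Z with radius OT = sqrt(2) meets the circle at U and W.
U, W = pt(-30), pt(150)
assert abs(dist(Z, U) - math.sqrt(2)) < 1e-12
assert abs(dist(Z, W) - math.sqrt(2)) < 1e-12

# UVWZ is a square: four equal sides and equal diagonals of length 2.
sides = [dist(U, V), dist(V, W), dist(W, Z), dist(Z, U)]
assert all(abs(s - math.sqrt(2)) < 1e-12 for s in sides)
assert abs(dist(U, W) - 2.0) < 1e-12 and abs(dist(V, Z) - 2.0) < 1e-12
print("construction verified")
```

The square's vertices sit 90° apart on the circle, so the arcs UV, VW, WZ, ZU are indeed quarter-circumferences.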
https://en.wikipedia.org/wiki/Restriction%20fragment
A restriction fragment is a DNA fragment resulting from the cutting of a DNA strand by a restriction enzyme (restriction endonuclease), a process called restriction. Each restriction enzyme is highly specific, recognising a particular short DNA sequence, or restriction site, and cutting both DNA strands at specific points within this site. Most restriction sites are palindromic (the sequence of nucleotides is the same on both strands when read in the 5' to 3' direction of each strand), and are four to eight nucleotides long. Many cuts are made by one restriction enzyme because of the chance repetition of these sequences in a long DNA molecule, yielding a set of restriction fragments. A particular DNA molecule will always yield the same set of restriction fragments when exposed to the same restriction enzyme. Restriction fragments can be analyzed using techniques such as gel electrophoresis or used in recombinant DNA technology. Applications In recombinant DNA technology, specific restriction endonucleases are used that will isolate a particular gene and cleave the sugar phosphate backbones at different points (retaining symmetry), so that the double-stranded restriction fragments have single-stranded ends. These short extensions, called sticky ends, can form hydrogen bonded base pairs with complementary sticky ends on any other DNA cut with the same enzyme (such as a bacterial plasmid). In agarose gel electrophoresis, the restriction fragments yield a band pattern characteristic of the original DNA molecule and restriction enzyme used; for example, the relatively small DNA molecules of viruses and plasmids can be identified simply by their restriction fragment patterns. If the nucleotide differences of two different alleles occur within the restriction site of a particular restriction enzyme, digestion of segments of DNA from individuals with different alleles for that particular gene with that enzyme would produce different fragments that will each yield di
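The deterministic cutting described above can be sketched in code. The recognition site GAATTC and its cut position (after the first G on the top strand, leaving 5' AATT sticky ends) are real properties of the enzyme EcoRI; the DNA sequence below is invented for illustration.

```python
# Sketch: digesting a linear DNA sequence with EcoRI (site GAATTC).
SITE = "GAATTC"
CUT_OFFSET = 1  # EcoRI cuts the top strand after the first base of the site

def digest(seq, site=SITE, offset=CUT_OFFSET):
    """Return the restriction fragments of a linear sequence."""
    cuts = []
    i = seq.find(site)
    while i != -1:
        cuts.append(i + offset)
        i = seq.find(site, i + 1)
    bounds = [0] + cuts + [len(seq)]
    return [seq[a:b] for a, b in zip(bounds, bounds[1:])]

dna = "TTGAATTCGGCATGAATTCAA"   # invented sequence with two EcoRI sites
frags = digest(dna)
print(frags)   # -> ['TTG', 'AATTCGGCATG', 'AATTCAA']
```

Running the same digest on the same sequence always gives the same fragment set, which is exactly the property that makes restriction fragment patterns reproducible in gel electrophoresis.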
https://en.wikipedia.org/wiki/Powered%20speakers
Powered speakers, also known as self-powered speakers and active speakers, are loudspeakers that have built-in amplifiers. Powered speakers are used in a range of settings, including in sound reinforcement systems (used at live music concerts), both for the main speakers facing the audience and the monitor speakers facing the performers; by DJs performing at dance events and raves; in private homes as part of hi-fi or home cinema audio systems and as computer speakers. They can be connected directly to a mixing console or other low-level audio signal source without the need for an external amplifier. Some active speakers designed for sound reinforcement system use have an onboard mixing console and microphone preamplifier, which enables microphones to be connected directly to the speaker. Active speakers have several advantages, the most obvious being their compactness and simplicity. Additionally, the amplifier(s) can be designed to closely match the optimal requirements of the speaker it will power, and the speaker designer is not required to include a passive crossover, decreasing production cost and possibly sound quality. Some also claim that the shorter distances between components can decrease external interference and increase fidelity, although this is highly dubious, and the reciprocal argument can also be made. Disadvantages include heavier loudspeaker enclosures; reduced reliability due to active electronic components within; and the need to supply both the audio signal and power to every unit separately, typically requiring two cables to be run to each speaker (as opposed to the single cable required with passive speakers and an external amplifier). Powered speakers are available with passive or active crossovers built into them. Since the early 2000s, powered speakers with active crossovers and other DSP have become common in sound reinforcement applications and in studio monitors. Home theater and add-on domestic/automotive subwoofers have used acti
https://en.wikipedia.org/wiki/List%20of%20World%20Series%20broadcasters
The following is a list of national American television and radio networks and announcers that have broadcast World Series games over the years, as well as local flagship radio stations that have aired them since 1982. Television Television coverage of the World Series began in 1947. Since that time, eight different men have called eight or more different World Series telecasts as either a play-by-play announcer or color commentator. They are (through 2023) Joe Buck (24), Tim McCarver (24), Curt Gowdy (12), Mel Allen (11), Vin Scully (11), Joe Garagiola (10), Tony Kubek (8), Al Michaels (8), and John Smoltz (8). 2020s Per the current broadcast agreement, the World Series will be televised by Fox through 2028. 2010s Notes 2010 – For the second consecutive year, World Series games had earlier start times in hopes of attracting younger viewers. First pitch was just before 8 p.m. EDT for Games 1–2, and 5, while Game 3 started at 7 p.m. EDT. Game 4, however, started at 8:22 p.m. EDT to accommodate Fox's football coverage of the game between the Tampa Bay Buccaneers and Arizona Cardinals. Many viewers in the New York City and Philadelphia markets were unable to watch Games 1 and 2 because News Corporation, Fox's parent company, pulled WNYW and WTXF from cable provider Cablevision on October 16 because of a carriage dispute. The agreement was reached just before Game 3. MLB International syndicated its own telecast of the series, with announcers Gary Thorne and Rick Sutcliffe, to various networks outside the U.S. ESPN America broadcast the series live in the UK and in Europe. Additionally, the American Forces Network and Canadian Forces Radio and Television carried the games to U.S. and Canadian service personnel stationed around the globe. Fox Deportes carried the Series in Spanish on American cable and satellite TV. The overall national Nielsen rating for the five games was 8.4, tied with the 2008 World Series for the event's lowest-ever TV rating. Game 4 was beaten
https://en.wikipedia.org/wiki/Carus%20Mathematical%20Monographs
The Carus Mathematical Monographs is a monograph series published by the Mathematical Association of America. Books in this series are intended to appeal to a wide range of readers in mathematics and science. Scope and audience While the books are intended to cover nontrivial material, the emphasis is on exposition and clear communication rather than novel results and a systematic Bourbaki-style presentation. The webpage for the series states: The exposition of mathematical subjects that the monographs contain are set forth in a manner comprehensible not only to teachers and students specializing in mathematics, but also to scientific workers in other fields. More generally, the monographs are intended for the wide circle of thoughtful people familiar with basic graduate or advanced undergraduate mathematics encountered in the study of mathematics itself or in the context of related disciplines who wish to extend their knowledge without prolonged and critical study of the mathematical journals and treatises. Many of the books in the series have become classics in the genre of general mathematical exposition. Series listing Calculus of Variations, by G. A. Bliss (out of print) Analytic Functions of a Complex Variable, by D. R. Curtiss (out of print) Mathematical Statistics, by H. L. Rietz (out of print) Projective Geometry, by J. W. Young (out of print) A History of Mathematics in America before 1900, by D. E. Smith and Jekuthiel Ginsburg (out of print) Fourier Series and Orthogonal Polynomials, by Dunham Jackson (out of print) Vectors and Matrices, by C. C. MacDuffee (out of print) Rings and Ideals, by N. H. McCoy (out of print) The Theory of Algebraic Numbers, second edition, by Harry Pollard and Harold G. Diamond The Arithmetic Theory of Quadratic Forms, by B. W. Jones (out of print) Irrational Numbers, by Ivan Niven Statistical Independence in Probability, Analysis and Number Theory, by Mark Kac A Primer of Real Functions, third edition, by Ralph P. Boas, Jr.
https://en.wikipedia.org/wiki/Maladaptation
In evolution, a maladaptation is a trait that is (or has become) more harmful than helpful, in contrast with an adaptation, which is more helpful than harmful. All organisms, from bacteria to humans, display maladaptive and adaptive traits. In animals (including humans), adaptive behaviors contrast with maladaptive ones. Like adaptation, maladaptation may be viewed as occurring over geological time, or within the lifetime of one individual or a group. It can also signify an adaptation that, whilst reasonable at the time, has become less and less suitable and more of a problem or hindrance in its own right, as time goes on. This is because it is possible for an adaptation to be poorly selected or become more of a dysfunction than a positive adaptation, over time. It can be noted that the concept of maladaptation, as initially discussed in a late 19th-century context, is based on a flawed view of evolutionary theory. It was believed that an inherent tendency for an organism's adaptations to degenerate would translate into maladaptations and soon become crippling if not "weeded out" (see also eugenics). In reality, the advantages conferred by any one adaptation are rarely decisive for survival on their own, but rather balanced against other synergistic and antagonistic adaptations, which consequently cannot change without affecting others. In other words, it is usually impossible to gain an advantageous adaptation without incurring "maladaptations". Consider a seemingly trivial example: it is apparently extremely hard for an animal to evolve the ability to breathe well in air and in water. Better adapting to one means being less able to do the other. Examples Neuroplasticity is defined as "the brain's ability to reorganize itself by forming new neural connections throughout life". Neuroplasticity is seen as an adaptation that helps humans to adapt to new stimuli, especially through motor functions in musically inclined people, as well as sev
https://en.wikipedia.org/wiki/Email%20hub
The term Mail Hub is used to denote an MTA (message transfer agent) or system of MTAs used to route email but not act as a mail server (having no end-user email store) since there is no MUA (mail user agent) access. Examples could include dedicated anti-spam appliances, anti-virus engines running on dedicated hardware, email gateways and so forth. DNS Based Mail Hub A first example of a Mail Hub consisting of a network of MTAs would be that of a typical small-to-medium size Internet service provider (ISP), or of a FOSS corporate mail system. This solution is well suited to developing-nation ISPs and NGOs, as well as any other low-budget but high-availability mail system, mostly because it avoids expensive network-level switches and hardware. A simple DNS MX record based Mail Hub cluster with parallelism and front-end failover and load balancing is illustrated in the following diagram: The servers would be all Linux x86 servers with low cost SATA or PATA hard disk storage. The front-end servers would most likely run Postfix with SpamAssassin and ClamAV. This RAIS server cluster would then overcome the problem of the Perl-based SpamAssassin being too CPU and memory hungry for low cost servers. The solution presented here is based on all GPL FOSS free software, but of course there are alternative configurations using other free or non-free software. References Mail Clustering, ISOC, 2005. Email
https://en.wikipedia.org/wiki/Antibiotic%20sensitivity%20testing
Antibiotic sensitivity testing or antibiotic susceptibility testing is the measurement of the susceptibility of bacteria to antibiotics. It is used because bacteria may have resistance to some antibiotics. Sensitivity testing results can allow a clinician to change the choice of antibiotics from empiric therapy, which is when an antibiotic is selected based on clinical suspicion about the site of an infection and common causative bacteria, to directed therapy, in which the choice of antibiotic is based on knowledge of the organism and its sensitivities. Sensitivity testing usually occurs in a medical laboratory, and uses culture methods that expose bacteria to antibiotics, or genetic methods that test to see if bacteria have genes that confer resistance. Culture methods often involve measuring the diameter of areas without bacterial growth, called zones of inhibition, around paper discs containing antibiotics on agar culture dishes that have been evenly inoculated with bacteria. The minimum inhibitory concentration, which is the lowest concentration of the antibiotic that stops the growth of bacteria, can be estimated from the size of the zone of inhibition. Antibiotic susceptibility testing has been needed since the discovery of the beta-lactam antibiotic penicillin. Initial methods were phenotypic, and involved culture or dilution. The Etest, an antibiotic impregnated strip, has been available since the 1980s, and genetic methods such as polymerase chain reaction (PCR) testing have been available since the early 2000s. Research is ongoing into improving current methods by making them faster or more accurate, as well as developing new methods for testing, such as microfluidics. Uses In clinical medicine, antibiotics are most frequently prescribed on the basis of a person's symptoms and medical guidelines. This method of antibiotic selection is called empiric therapy, and it is based on knowledge about what bacteria cause an infection, and to what antibiotics ba
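The dilution-based estimate of the minimum inhibitory concentration (MIC) described above can be sketched as a simple lookup over a two-fold dilution series. The concentrations and growth observations below are invented for illustration, not real assay data.

```python
# Reading an MIC off a doubling-dilution series (illustrative data only).
concentrations = [64, 32, 16, 8, 4, 2, 1, 0.5]   # antibiotic, e.g. mg/L
growth = [False, False, False, False, True, True, True, True]  # visible growth?

# The MIC is the lowest tested concentration at which no growth is observed.
mic = min(c for c, g in zip(concentrations, growth) if not g)
print(mic)  # -> 8
```

In practice the reported MIC is bounded by the dilution steps tested: here growth stops somewhere between 4 and 8, and 8 is the lowest concentration observed to inhibit it.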
https://en.wikipedia.org/wiki/4104
4104 (four thousand one hundred [and] four) is the natural number following 4103 and preceding 4105. It is the second positive integer which can be expressed as the sum of two positive cubes in two different ways. The first such number, 1729, is called the "Ramanujan–Hardy number". 4104 is the sum of 4096 + 8 (that is, 16³ + 2³), and also the sum of 3375 + 729 (that is, 15³ + 9³). See also Taxicab number 1729 External links MathWorld: Hardy–Ramanujan Number Integers
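A brute-force search confirms that 1729 and 4104 are the two smallest numbers with two distinct representations as a sum of two positive cubes:

```python
# Find numbers expressible as a sum of two positive cubes in two ways.
from collections import defaultdict

ways = defaultdict(list)
LIMIT = 17  # 16**3 + 2**3 = 4104, so cubes up to 16 suffice for both targets
for a in range(1, LIMIT + 1):
    for b in range(a, LIMIT + 1):   # a <= b avoids counting (a,b) and (b,a) twice
        ways[a**3 + b**3].append((a, b))

two_way = sorted(n for n, pairs in ways.items() if len(pairs) >= 2)
print(two_way[:2])   # -> [1729, 4104]
print(ways[4104])    # -> [(2, 16), (9, 15)]
```

Any number below 4104 can only use cubes of integers up to 16 (since 16³ = 4096), so the search range is sufficient to certify both results.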
https://en.wikipedia.org/wiki/Eurocodes
The Eurocodes are the ten European standards (EN; harmonised technical rules) specifying how structural design should be conducted within the European Union (EU). These were developed by the European Committee for Standardization upon the request of the European Commission. The purpose of the Eurocodes is to provide: a means to prove compliance with the requirements for mechanical strength and stability and safety in case of fire established by European Union law. a basis for construction and engineering contract specifications. a framework for creating harmonized technical specifications for building products (CE mark). Since March 2010, the Eurocodes have been mandatory for the specification of European public works and are intended to become the de facto standard for the private sector. The Eurocodes therefore replace the existing national building codes published by national standard bodies (e.g. BS 5950), although many countries had a period of co-existence. Additionally, each country is expected to issue a National Annex to the Eurocodes which will need referencing for a particular country (e.g. The UK National Annex). At present, take-up of Eurocodes is slow on private sector projects and existing national codes are still widely used by engineers. The motto of the Eurocodes is “Building the future”. The second generation of the Eurocodes (2G Eurocodes) is being prepared. History In 1975, the Commission of the European Community (presently the European Commission), decided on an action programme in the field of construction, based on article 95 of the Treaty. The objective of the programme was to eliminate technical obstacles to trade and to harmonise technical specifications. Within this action programme, the Commission took the initiative to establish a set of harmonised technical rules for the design of construction works which, in a first stage, would serve as an alternative to the national rules in force in the member states of the European Union (EU) an
https://en.wikipedia.org/wiki/Internet%20Authentication%20Service
Internet Authentication Service (IAS) is a component of Windows Server operating systems that provides centralized user authentication, authorization and accounting. Overview While Routing and Remote Access Service (RRAS) security is sufficient for small networks, larger companies often need a dedicated infrastructure for authentication. RADIUS is a standard for dedicated authentication servers. Windows 2000 Server and Windows Server 2003 include the Internet Authentication Service (IAS), an implementation of RADIUS server. IAS supports authentication for Windows-based clients, as well as for third-party clients that adhere to the RADIUS standard. IAS stores its authentication information in Active Directory, and can be managed with Remote Access Policies. IAS first showed up for Windows NT 4.0 in the Windows NT 4.0 Option Pack and in Microsoft Commercial Internet System (MCIS) 2.0 and 2.5. While IAS requires the use of an additional server component, it provides a number of advantages over the standard methods of RRAS authentication. These advantages include centralized authentication for users, auditing and accounting features, scalability, and seamless integration with the existing features of RRAS. In Windows Server 2008, Network Policy Server (NPS) replaces the Internet Authentication Service (IAS). NPS performs all of the functions of IAS in Windows Server 2003 for VPN and 802.1X-based wireless and wired connections and performs health evaluation and the granting of either unlimited or limited access for Network Access Protection clients. Logging By default, IAS logs to local files (%systemroot%\LogFiles\IAS\*) though it can be configured to log to SQL as well (or in place of). When logging to SQL, IAS appears to wrap the data into XML, then calls the stored procedure report_event, passing the XML data as text... the stored procedure can then unwrap the XML and save data as desired by the user. History The initial version of Internet Authentication Se
https://en.wikipedia.org/wiki/Lefschetz%20zeta%20function
In mathematics, the Lefschetz zeta-function is a tool used in topological periodic and fixed point theory, and dynamical systems. Given a continuous map f : X → X, the zeta-function is defined as the formal series ζ_f(t) = exp(Σ_{n≥1} L(f^n) t^n / n), where L(f^n) is the Lefschetz number of the n-th iterate of f. This zeta-function is of note in topological periodic point theory because it is a single invariant containing information about all iterates of f. Examples The identity map on X has Lefschetz zeta function ζ(t) = 1/(1 − t)^χ(X), where χ(X) is the Euler characteristic of X, i.e., the Lefschetz number of the identity map. For a less trivial example, let X be the unit circle, and let f be reflection in the x-axis, that is, f(x, y) = (x, −y). Then f has Lefschetz number 2, while f² is the identity map, which has Lefschetz number 0. Likewise, all odd iterates have Lefschetz number 2, while all even iterates have Lefschetz number 0. Therefore, the zeta function of f is ζ_f(t) = (1 + t)/(1 − t). Formula If f is a continuous map on a compact manifold X of dimension n (or more generally any compact polyhedron), the zeta function is given by the formula ζ_f(t) = ∏_{i=0}^{n} det(1 − t f_∗ | H_i(X, Q))^((−1)^(i+1)). Thus it is a rational function. The polynomials occurring in the numerator and denominator are essentially the characteristic polynomials of the map induced by f on the various homology spaces. Connections This generating function is essentially an algebraic form of the Artin–Mazur zeta function, which gives geometric information about the fixed and periodic points of f. See also Lefschetz fixed-point theorem Artin–Mazur zeta function Ruelle zeta function References Zeta and L-functions Dynamical systems Fixed points (mathematics)
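The reflection example can be checked numerically: with L(fⁿ) = 2 for odd n and 0 for even n, the defining series ζ_f(t) = exp(Σₙ L(fⁿ) tⁿ/n) should sum to (1 + t)/(1 − t) for |t| < 1. The truncation length below is an arbitrary choice sufficient for the tested values of t.

```python
import math

# Numeric check of the reflection-of-the-circle example: L(f^n) = 2 for
# odd n, 0 for even n, so zeta_f(t) should equal (1 + t)/(1 - t).
def zeta(t, terms=201):
    s = sum(2 * t**n / n for n in range(1, terms, 2))  # odd iterates only
    return math.exp(s)

for t in (0.1, 0.3, 0.5):
    assert abs(zeta(t) - (1 + t) / (1 - t)) < 1e-9
print("series matches (1 + t)/(1 - t)")
```

This mirrors the closed-form computation: Σ_{n odd} tⁿ/n = artanh(t), so the exponent is ln((1 + t)/(1 − t)).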
https://en.wikipedia.org/wiki/Hemicontinuity
In mathematics, the notion of the continuity of functions is not immediately extensible to set-valued functions between two sets A and B. The dual concepts of upper hemicontinuity and lower hemicontinuity facilitate such an extension. A set-valued function that has both properties is said to be continuous in an analogy to the property of the same name for single-valued functions. Roughly speaking, a function is upper hemicontinuous if when (1) a convergent sequence of points in the domain maps to a sequence of sets in the range which (2) contain another convergent sequence, then the image of the limiting point in the domain must contain the limit of the sequence in the range. Lower hemicontinuity essentially reverses this, saying if a sequence in the domain converges, given a point in the range of the limit, then you can find a sub-sequence whose image contains a convergent sequence to the given point. Upper hemicontinuity A set-valued function Γ : A → B is said to be upper hemicontinuous at the point a if, for any open V with Γ(a) ⊆ V, there exists a neighbourhood U of a such that for all x ∈ U, Γ(x) is a subset of V. Sequential characterization For a set-valued function Γ : A → B with closed values, if Γ is upper hemicontinuous at a then for all sequences (a_n) in A and all sequences (b_n) such that b_n ∈ Γ(a_n), if a_n → a and b_n → b, then b ∈ Γ(a). If B is compact, the converse is also true. Closed graph theorem The graph of a set-valued function Γ : A → B is the set defined by Gr(Γ) = {(a, b) ∈ A × B : b ∈ Γ(a)}. If Γ is an upper hemicontinuous set-valued function with closed domain (that is, the set of points a where Γ(a) is not the empty set is closed) and closed values (i.e. Γ(a) is closed for all a), then Gr(Γ) is closed. If B is compact, then the converse is also true. Lower hemicontinuity A set-valued function Γ : A → B is said to be lower hemicontinuous at the point a if for any open set V intersecting Γ(a) there exists a neighbourhood U of a such that Γ(x) intersects V for all x ∈ U (here "Γ(x) intersects V" means nonempty intersection, Γ(x) ∩ V ≠ ∅). Sequential characterization Γ is lower hemicontinuous at a if and only if for every sequence
https://en.wikipedia.org/wiki/Precordium
In anatomy, the precordium or praecordium is the portion of the body over the heart and lower chest. Defined anatomically, it is the area of the anterior chest wall over the heart. It is therefore usually on the left side, except in conditions like dextrocardia, where the individual's heart is on the right side. In such a case, the precordium is on the right side as well. The precordium is naturally a cardiac area of dullness. During examination of the chest, the percussion note will therefore be dull. In fact, this area only gives a resonant percussion note in hyperinflation, emphysema or tension pneumothorax. Precordial chest pain can be an indication of a variety of illnesses, including costochondritis and viral pericarditis. See also Precordial thump Precordial examination Commotio cordis Hyperdynamic precordium Precordial catch syndrome References Anatomy
https://en.wikipedia.org/wiki/Physical%20computing
Physical computing involves interactive systems that can sense and respond to the world around them. While this definition is broad enough to encompass systems such as smart automotive traffic control systems or factory automation processes, it is not commonly used to describe them. In a broader sense, physical computing is a creative framework for understanding human beings' relationship to the digital world. In practical use, the term most often describes handmade art, design or DIY hobby projects that use sensors and microcontrollers to translate analog input to a software system, and/or control electro-mechanical devices such as motors, servos, lighting or other hardware. Physical computing intersects the range of activities often referred to in academia and industry as electrical engineering, mechatronics, robotics, computer science, and especially embedded development. Examples Physical computing is used in a wide variety of domains and applications. Education The advantage of physicality in education and playfulness has been reflected in diverse informal learning environments. The Exploratorium, a pioneer in inquiry based learning, developed some of the earliest interactive exhibitry involving computers, and continues to include more and more examples of physical computing and tangible interfaces as associated technologies progress. Art In the art world, projects that implement physical computing include the work of Scott Snibbe, Daniel Rozin, Rafael Lozano-Hemmer, Jonah Brucker-Cohen, and Camille Utterback. Product design Physical computing practices also exist in the product and interaction design sphere, where hand-built embedded systems are sometimes used to rapidly prototype new digital product concepts in a cost-efficient way. Firms such as IDEO and Teague are known to approach product design in this way. Commercial applications Commercial implementations range from consumer devices such as the Sony Eyetoy or games such as Dance Dance Revolution
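The sense-map-actuate loop at the heart of most physical-computing projects can be sketched in a few lines. Everything below is a stand-in: read_sensor would poll a microcontroller's ADC and set_servo would drive a PWM output on real hardware; here recorded samples are replayed instead, and the range-mapping helper mirrors the behaviour of Arduino's map() function.

```python
# Illustrative physical-computing control loop (no real hardware I/O).
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map value from one range to another (like Arduino's map())."""
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def read_sensor(samples):
    # Stand-in for polling an ADC; replays pre-recorded readings.
    yield from samples

def control_loop(samples):
    angles = []
    for raw in read_sensor(samples):          # 10-bit ADC reading, 0..1023
        angle = scale(raw, 0, 1023, 0, 180)   # servo range in degrees
        angles.append(round(angle, 1))        # set_servo(angle) on real hardware
    return angles

print(control_loop([0, 511, 1023]))  # -> [0.0, 89.9, 180.0]
```

The same pattern (read analog input, rescale, drive an actuator) underlies sensor-driven art installations and rapid product prototypes alike.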
https://en.wikipedia.org/wiki/Sutton%27s%20law
Sutton's law states that when diagnosing, one should first consider the obvious. It suggests that one should first conduct those tests which could confirm (or rule out) the most likely diagnosis. It is taught in medical schools to suggest to medical students that they might best order tests in that sequence which is most likely to result in a quick diagnosis, hence treatment, while minimizing unnecessary costs. It is also applied in pharmacology: when choosing a drug to treat a specific disease, you want the drug to reach the disease. It is applicable to any process of diagnosis, e.g. debugging computer programs. Computer-aided diagnosis provides a statistical and quantitative approach. A more thorough analysis will consider the false positive rate of the test and the possibility that a less likely diagnosis might have more serious consequences. A competing principle is the idea of performing simple tests before more complex and expensive tests, moving from bedside tests to blood results and simple imaging such as ultrasound, and then to more complex imaging such as MRI, then specialty imaging. The law can also be applied in prioritizing tests when resources are limited, so that a test for a treatable condition is performed before a test for an equally probable but less treatable condition. The law is named after the bank robber Willie Sutton, who reputedly replied to a reporter's inquiry as to why he robbed banks by saying "because that's where the money is." In Sutton's 1976 book Where the Money Was, Sutton denies having said this, but added that "If anybody had asked me, I'd have probably said it. That's what almost anybody would say... it couldn't be more obvious." A similar idea is contained in the physician's adage, "When you hear hoofbeats, think horses, not zebras." See also Occam's razor References Adages Debugging Heuristics Medical diagnosis
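One simple way to make the cost-aware version of the law quantitative is to rank candidate tests by how likely each is to settle the diagnosis per unit cost. The test names, probabilities, and costs below are invented for illustration; real computer-aided diagnosis uses far richer models.

```python
# Sketch: ordering tests by probability-to-cost ratio (illustrative data).
tests = [
    # (name, probability the test confirms/rules out the diagnosis, cost)
    ("rare-disease panel", 0.05, 500),
    ("basic blood test",   0.60, 50),
    ("bedside exam",       0.40, 5),
]

# Highest expected payoff per unit cost first.
ordered = sorted(tests, key=lambda t: t[1] / t[2], reverse=True)
print([name for name, p, cost in ordered])
# -> ['bedside exam', 'basic blood test', 'rare-disease panel']
```

This reproduces both aspects of the heuristic: likely diagnoses come first, and cheap bedside tests precede expensive imaging even when they are somewhat less conclusive.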
https://en.wikipedia.org/wiki/Logical%20access%20control
In computers, logical access controls are tools and protocols used for identification, authentication, authorization, and accountability in computer information systems. Logical access is often needed for remote access of hardware and is often contrasted with the term "physical access", which refers to interactions (such as a lock and key) with hardware in the physical environment, where equipment is stored and used. Models Logical access controls enforce access control measures for systems, programs, processes, and information. The controls can be embedded within operating systems, applications, add-on security packages, or database and telecommunication management systems. The line between logical access and physical access can be blurred when physical access is controlled by software. For example, entry to a room may be controlled by a chip and PIN card and an electronic lock controlled by software. Only those in possession of an appropriate card, with an appropriate security level and with knowledge of the PIN, are permitted entry to the room on swiping the card into a card reader and entering the correct PIN code. Logical controls, also called logical access controls and technical controls, protect data and the systems, networks, and environments that protect them. In order to authenticate, authorize, or maintain accountability a variety of methodologies are used such as password protocols, devices coupled with protocols and software, encryption, firewalls, or other systems that can detect intruders and maintain security, reduce vulnerabilities and protect the data and systems from threats. Businesses, organizations and other entities use a wide spectrum of logical access controls to protect hardware from unauthorized remote access. These can include sophisticated password programs, advanced biometric security features, or any other setups that effectively identify and screen users at any administrative level. The particular logical access controls use
https://en.wikipedia.org/wiki/BlackDog
The BlackDog is a pocket-sized, self-contained computer with a built-in biometric fingerprint reader which was developed in 2005 by Realm Systems. It is plugged into and powered by the USB port of a host computer, using the host's peripheral devices for input and output. It is a mobile personal server which allows a user to use Linux, one's applications and data on any computer with a USB port. The host machine's monitor, keyboard, mouse, and Internet connection are used by the BlackDog for the duration of the session. As the system is self-contained and isolated from the host, requiring no additional installation, it is possible to make use of untrusted computers while still using a secure system. Various hardware iterations exist, and the original developer Realm Systems closed down in 2007, being picked up by the successor Inaura, Inc. Hardware history Original Black Dog & Project BlackDog Skills Contest Identified as the BlackDog, the Project BlackDog, or Original BlackDog, the first hardware version was touted as "unlike any other mobile computing device, BlackDog contains its own processor, memory and storage, and is completely powered by the USB port of a host computer with no external power adapter required." It was created in conjunction with Realm Systems' Project BlackDog Skills Contest (announced on Oct 27, 2005) which was supposed to raise interest, and create a developer community surrounding the product. The BlackDog was publicly available for purchase from the Project BlackDog website in September 2005 for those who wished to enter the contest or to experiment with the platform. Production ended in mid January 2006 when the contest closed. On 7 February 2006, the winners of the contest were announced for the categories: Security (Michael Chenetz), Entertainment (Michael King), Productivity (Terry Bayne) and "Dogpile" (Paul Chandler). On Feb 15, 2006, during the Open Source Business Conference, San Francisco, Terry Bayne was announced the grand prize wi
https://en.wikipedia.org/wiki/RealMagic
RealMagic (or ReelMagic), from Sigma Designs, was one of the first fully compliant MPEG playback boards on the market in the mid-1990s. RealMagic is a hardware-accelerated MPEG decoder that mixes its video stream into a computer video card's output through the video card's feature connector. It is also a SoundBlaster-compatible sound card. Successors Sigma Designs' RealMagic was superseded by the RealMagic Hollywood+, RealMagic XCard, and RealMagic NetStream2000 - 4000. Several software companies in 1993 promised to support the card, including Access, Interplay, and Sierra. Software written for RealMagic includes: Under a Killing Moon - Access Software Gabriel Knight Escape from Cybercity King's Quest VI - Sierra Online Dragon's Lair Police Quest IV - Sierra Online Return to Zork - Infocom Lord of the Rings - Interplay Entertainment Note: the above titles were on a REELMAGIC demo CD that came with the hardware. The CD also contained corporate promotion videos, training videos, news footage of John F. Kennedy and the Apollo Moon mission. Also included in the bundle was a complete version of The Horde - published by Crystal Dynamics (1994) Other software includes: The Psychotron (an interactive mystery movie) - Merit Software References Graphics cards
https://en.wikipedia.org/wiki/Integrated%20circuit%20design
Integrated circuit design, or IC design, is a sub-field of electronics engineering, encompassing the particular logic and circuit design techniques required to design integrated circuits, or ICs. ICs consist of miniaturized electronic components built into an electrical network on a monolithic semiconductor substrate by photolithography. IC design can be divided into the broad categories of digital and analog IC design. Digital IC design produces components such as microprocessors, FPGAs, memories (RAM, ROM, and flash) and digital ASICs. Digital design focuses on logical correctness, maximizing circuit density, and placing circuits so that clock and timing signals are routed efficiently. Analog IC design also has specializations in power IC design and RF IC design. Analog IC design is used in the design of op-amps, linear regulators, phase locked loops, oscillators and active filters. Analog design is more concerned with the physics of the semiconductor devices such as gain, matching, power dissipation, and resistance. Fidelity of analog signal amplification and filtering is usually critical, and as a result analog ICs use larger area active devices than digital designs and are usually less dense in circuitry. Modern ICs are enormously complicated. An average desktop computer chip, as of 2015, has over 1 billion transistors. The rules for what can and cannot be manufactured are also extremely complex. Common IC processes of 2015 have more than 500 rules. Furthermore, since the manufacturing process itself is not completely predictable, designers must account for its statistical nature. The complexity of modern IC design, as well as market pressure to produce designs rapidly, has led to the extensive use of automated design tools in the IC design process. In short, the design of an IC using EDA software is the design, test, and verification of the instructions that the IC is to carry out. Fundamentals Integrated circuit design involves the creation of ele
https://en.wikipedia.org/wiki/Towed%20array%20sonar
A towed array sonar is a system of hydrophones towed behind a submarine or a surface ship on a cable. Trailing the hydrophones behind the vessel, on a cable that can be kilometers long, keeps the array's sensors away from the ship's own noise sources, greatly improving its signal-to-noise ratio, and hence the effectiveness of detecting and tracking faint contacts, such as quiet, low noise-emitting submarine threats, or seismic signals. A towed array offers superior resolution and range compared with hull-mounted sonar. It also covers the baffles, the blind spot of hull-mounted sonar. However, effective use of the system limits a vessel's speed and care must be taken to protect the cable from damage. History During World War I, a towed sonar array known as the "Electric Eel" was developed by Harvey Hayes, a U.S. Navy physicist. This system is believed to be the first towed sonar array design. It employed two cables, each with a dozen hydrophones attached. The project was discontinued after the war. The U.S. Navy resumed development of towed array technology during the 1960s in response to the development of nuclear-powered submarines by the Soviet Union. Current use of towed arrays On surface ships, towed array cables are normally stored in drums, then spooled out behind the vessel when in use. U.S. Navy submarines typically store towed arrays inside an outboard tube, mounted along the vessel's hull, with an opening on the starboard tail. There is also equipment located in a ballast tank (free flood area) while the cabinet used to operate the system is inside the submarine. Hydrophones in a towed array system are placed at specific distances along the cable, the end elements far enough apart to gain a basic ability to triangulate on a sound source. Similarly, various elements are angled up or down giving an ability to triangulate an estimated vertical depth of target. Alternatively three or more arrays are used to aid in depth detection. On the first few hun
https://en.wikipedia.org/wiki/Session-based%20testing
Session-based testing is a software test method that aims to combine accountability and exploratory testing to provide rapid defect discovery, creative on-the-fly test design, management control and metrics reporting. The method can also be used in conjunction with scenario testing. Session-based testing was developed in 2000 by Jonathan and James Marcus Bach. Session-based testing can be used to introduce measurement and control to an immature test process and can form a foundation for significant improvements in productivity and error detection. Session-based testing can offer benefits when formal requirements are not present, incomplete, or changing rapidly. Elements of session-based testing Mission The mission in Session Based Test Management identifies the purpose of the session, helping to focus the session while still allowing for exploration of the system under test. According to Jon Bach, one of the co-founders of the methodology, the mission explains "what we are testing or what problems we are looking for." Charter A charter is a goal or agenda for a test session. Charters are created by the test team prior to the start of testing, but they may be added or changed at any time. Often charters are created from a specification, test plan, or by examining results from previous sessions. Session An uninterrupted period of time spent testing, ideally lasting one to two hours. Each session is focused on a charter, but testers can also explore new opportunities or issues during this time. The tester creates and executes tests based on ideas, heuristics or other frameworks to guide them, and records their progress. This might be through the use of written notes, video capture tools or whatever method is deemed appropriate by the tester. Session report The session report records the test session. Usually this includes: Charter. Area tested. Detailed notes on how testing was conducted. A list of any bugs found. A list of issues (open questions, product
https://en.wikipedia.org/wiki/Exploratory%20testing
Exploratory testing is an approach to software testing that is concisely described as simultaneous learning, test design and test execution. Cem Kaner, who coined the term in 1984, defines exploratory testing as "a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project." While the software is being tested, the tester learns things that, together with experience and creativity, generate good new tests to run. Exploratory testing is often thought of as a black box testing technique. Instead, those who have studied it consider it a test approach that can be applied to any test technique, at any stage in the development process. The key is not the test technique nor the item being tested or reviewed; the key is the cognitive engagement of the tester, and the tester's responsibility for managing his or her time. History Exploratory testing has always been performed by skilled testers. In the early 1990s, ad hoc was too often synonymous with sloppy and careless work. As a result, a group of test methodologists (now calling themselves the Context-Driven School) began using the term "exploratory" seeking to emphasize the dominant thought process involved in unscripted testing, and to begin to develop the practice into a teachable discipline. This new terminology was first published by Cem Kaner in his book Testing Computer Software and expanded upon in Lessons Learned in Software Testing. Exploratory testing can be as disciplined as any other intellectual activity. Description Exploratory testing seeks to find out how the software actually works, and to ask questions about how it will handle difficult and easy cases. The quality of the testing is dependent on the tester's skill of
https://en.wikipedia.org/wiki/Notarikon
Notarikon ( Noṭriqōn) is a Talmudic and Kabbalistic method of deriving a word, by using each of its initial (Hebrew: ) or final letters () to stand for another, to form a sentence or idea out of the words. Another variation uses the first and last letters, or the two middle letters of a word, in order to form another word. The word "notarikon" is borrowed from the Greek language (νοταρικόν), and was derived from the Latin word "notarius" meaning "shorthand writer." Notarikon is one of the three ancient methods used by the Kabbalists (the other two are gematria and temurah) to rearrange words and sentences. These methods were used in order to derive the esoteric substratum and deeper spiritual meaning of the words in the Bible. Notarikon was also used in alchemy. The term is mostly used in the context of Kabbalah. Common Hebrew abbreviations are described by ordinary linguistic terms. Usage in the Talmud Until the end of the Talmudic period, notarikon is understood in Judaism as a common method of Scripture interpretation by which the letters of individual words in the Bible text indicate the first letters of independent words. Usage in Kabbalah A common usage of notarikon in the practice of Kabbalah, is to form sacred names of God derived from religious or biblical verses. AGLA, an acronym for Atah Gibor Le-olam Adonai, translated, "You, O Lord, are mighty forever," is one of the most famous examples of notarikon. Dozens of examples are found in the Berit Menuchah, as is referenced in the following passage: The Sefer Gematriot of Judah ben Samuel of Regensburg is another book where many examples of notarikon for use on talismans are given from Biblical verses. See also AGLA, notarikon for Atah Gibor Le-olam Adonai Bible code, a purported set of secret messages encoded within the Torah. Biblical and Talmudic units of measurement Chol HaMoed, the intermediate days during Passover and Sukkot. Chronology of the Bible Counting of the Omer Gematria, Jewi
https://en.wikipedia.org/wiki/Cotton%20effect
The Cotton effect in physics is the characteristic change in optical rotatory dispersion and/or circular dichroism in the vicinity of an absorption band of a substance. In a wavelength region where the light is absorbed, the absolute magnitude of the optical rotation at first varies rapidly with wavelength, crosses zero at absorption maxima and then again varies rapidly with wavelength but in the opposite direction. This phenomenon was discovered in 1895 by the French physicist Aimé Cotton (1869–1951). The Cotton effect is called positive if the optical rotation first increases as the wavelength decreases (as first observed by Cotton), and negative if the rotation first decreases. A protein structure such as a beta sheet shows a negative Cotton effect. See also Cotton–Mouton effect References Polarization (waves) Atomic, molecular, and optical physics
https://en.wikipedia.org/wiki/Optical%20rotatory%20dispersion
Optical rotatory dispersion is the variation in the optical rotation of a substance with a change in the wavelength of light. Optical rotatory dispersion can be used to find the absolute configuration of metal complexes. For example, when plane-polarized white light from an overhead projector is passed through a cylinder of sucrose solution, a spiral rainbow is observed perpendicular to the cylinder. Principles of operation When white light passes through a polarizer, the extent of rotation of light depends on its wavelength. Short wavelengths are rotated more than longer wavelengths, per unit of distance. Because the wavelength of light determines its color, the variation of color with distance through the tube is observed. This dependence of specific rotation on wavelength is called optical rotatory dispersion. In all materials the rotation varies with wavelength. The variation is caused by two quite different phenomena. The first accounts in most cases for the majority of the variation in rotation and should not strictly be termed rotatory dispersion. It depends on the fact that optical activity is actually circular birefringence. In other words, a substance which is optically active transmits right circularly polarized light with a different velocity from left circularly polarized light. In addition to this pseudodispersion which depends on the material thickness, there is a true rotatory dispersion which depends on the variation with wavelength of the indices of refraction for right and left circularly polarized light. For wavelengths that are absorbed by the optically active sample, the two circularly polarized components will be absorbed to differing extents. This unequal absorption is known as circular dichroism. Circular dichroism causes incident linearly polarized light to become elliptically polarized. The two phenomena are closely related, just as are ordinary absorption and dispersion. If the entire optical rotatory dispersion spectrum is known,
https://en.wikipedia.org/wiki/Beijing%E2%80%93Shanghai%20high-speed%20railway
The Beijing–Shanghai high-speed railway (or Jinghu high-speed railway, from its name in Mandarin) is a high-speed railway that connects two major economic zones in the People's Republic of China: the Bohai Economic Rim and the Yangtze River Delta. Construction began on April 18, 2008, with the line opened to the public for commercial service on June 30, 2011. The long high-speed line is the world's longest high-speed line ever constructed in a single phase. The line is one of the busiest high speed railways in the world, transporting over 210 million passengers in 2019, more than the annual ridership of the entire TGV or Intercity Express network. It is also China's most profitable high speed rail line, reporting a ¥11.9 billion Yuan ($1.86 billion USD) net profit in 2019. The railway line was the first one designed for a maximum speed of in commercial operations. The non-stop train from Beijing South to Shanghai Hongqiao was expected to take 3 hours and 58 minutes, making it the fastest scheduled train in the world, compared to 9 hours and 49 minutes on the fastest trains running on the parallel conventional railway. However, at first trains were limited to a maximum speed of , with the fastest train taking 4 hours and 48 minutes to travel from Beijing South to Shanghai Hongqiao, with one stop at Nanjing South. On September 21, 2017, operation was restored with the introduction of China Standardized EMU. This reduced travel times between Beijing and Shanghai to about 4 hours 18 minutes on the fastest scheduled trains, attaining an average speed of over a journey of making those services the fastest in the world. The Beijing–Shanghai high-speed railway went public on Shanghai Stock Exchange () in 2020. Specifications The Beijing–Shanghai High-Speed Railway Co., Ltd. was in charge of construction. The project was expected to cost 220 billion yuan (about $32 billion). An estimated 220,000 passengers are expected to use the trains each day, which is double th
https://en.wikipedia.org/wiki/Layer%202%20MPLS%20VPN
A Layer 2 MPLS VPN is a term in computer networking. It is a method that Internet service providers use to segregate their network for their customers, to allow them to transmit data over an IP network. This is often sold as a service to businesses. Layer 2 VPNs are a type of Virtual Private Network (VPN) that uses MPLS labels to transport data. The communication occurs between routers that are known as Provider Edge routers (PEs), as they sit on the edge of the provider's network, next to the customer's network. Internet providers who have an existing Layer 2 network (such as ATM or Frame Relay) may choose to use these VPNs instead of the other common MPLS VPN, Layer 3. There is no one IETF standard for Layer 2 MPLS VPNs. Instead, two methodologies may be used. Both methods use a standard MPLS header to encapsulate data. However, they differ in their signaling protocols. Types of Layer 2 MPLS VPNs BGP-based The BGP-based type is based on a draft specification by Kireeti Kompella, from Juniper Networks. It uses the Border Gateway Protocol (BGP) as the mechanism for PE routers to communicate with each other about their customer connections. Each router connects to a central cloud, using BGP. This means that when new customers are added (usually to new routers), the existing routers will communicate with each other, via BGP, and automatically add the new customers to the service. LDP-based The second type is based on a draft specification by Chandan Mishra from Cisco Systems. This method is also known as a Layer 2 circuit. It uses the Label Distribution Protocol (LDP) to communicate between PE routers. In this case, every LDP-speaking router will exchange FECs (forwarding equivalence classes) and establish LSPs with every other LDP-speaking router on the network (or just the other PE router, in the case when LDP is tunnelled over RSVP-TE), which differs from the BGP-based methodology. The LDP-based style of layer 2 VPN defines new TLVs and parameters for L
https://en.wikipedia.org/wiki/Retene
Retene, methyl isopropyl phenanthrene or 1-methyl-7-isopropyl phenanthrene, C18H18, is a polycyclic aromatic hydrocarbon present in the coal tar fraction, boiling above 360 °C. It occurs naturally in the tars obtained by the distillation of resinous woods. It crystallizes in large plates, which melt at 98.5 °C and boil at 390 °C. It is readily soluble in warm ether and in hot glacial acetic acid. Sodium and boiling amyl alcohol reduce it to a tetrahydroretene, but if it is heated with phosphorus and hydriodic acid to 260 °C, a dodecahydride is formed. Chromic acid oxidizes it to retene quinone, phthalic acid and acetic acid. It forms a picrate that melts at 123–124 °C. Retene is derived by degradation of specific diterpenoids biologically produced by conifer trees. The presence of traces of retene in the air is an indicator of forest fires; it is a major product of pyrolysis of conifer trees. It is also present in effluents from wood pulp and paper mills. Retene, together with cadalene, simonellite and ip-iHMN, is a biomarker of vascular plants, which makes it useful for paleobotanic analysis of rock sediments. The ratio of retene/cadalene in sediments can reveal the ratio of the genus Pinaceae in the biosphere. Health effects A recent study has shown retene, which is a component of the Amazonian organic PM10, is cytotoxic to human lung cells. References Petroleum products Phenanthrenes Biomarkers Isopropyl compounds Polycyclic aromatic hydrocarbons
https://en.wikipedia.org/wiki/Ipfirewall
ipfirewall or ipfw is a FreeBSD IP, stateful firewall, packet filter and traffic accounting facility. Its ruleset logic is similar to many other packet filters except IPFilter. ipfw is authored and maintained by FreeBSD volunteer staff members. Its syntax enables use of sophisticated filtering capabilities and thus enables users to satisfy advanced requirements. It can either be used as a loadable kernel module or incorporated into the kernel; use as a loadable kernel module where possible is highly recommended. ipfw was the built-in firewall of Mac OS X until Mac OS X 10.7 Lion in 2011 when it was replaced with the OpenBSD project's PF. Like FreeBSD, ipfw is open source. It is used in many FreeBSD-based firewall products, including m0n0wall and FreeNAS. A port of an early version of ipfw was used since Linux 1.1 as the first implementation of firewall available for Linux, until it was replaced by ipchains. A modern port of ipfw and the dummynet traffic shaper is available for Linux (including a prebuilt package for OpenWrt) and Microsoft Windows. wipfw is a Windows port of an old (2001) version of ipfw. Alternative user interfaces for ipfw See also netfilter/iptables, a Linux-based descendant of ipchains NPF, a NetBSD packet filter PF, another widely deployed BSD firewall solution References External links ipfw section of the FreeBSD Handbook. The dummynet project - including versions for Linux, OpenWrt and Windows wipfw Windows port of an old (2001) version of ipfw Firewall software BSD software
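As a sketch of the rule syntax, a minimal stateful ruleset might look like the following. This is an illustrative config fragment, not a recommended configuration; the rule numbers and policy choices are assumptions, and it must be run as root on a FreeBSD system with ipfw available:

```shell
# Illustrative ipfw ruleset; rule numbers and policy are examples only.
ipfw -q flush                                  # start from an empty ruleset
ipfw add 100 allow ip from any to any via lo0  # never filter loopback traffic
ipfw add 200 check-state                       # match packets against dynamic rules
ipfw add 300 allow tcp from me to any out setup keep-state   # stateful outbound TCP
ipfw add 400 allow tcp from any to me dst-port 22 in setup keep-state  # inbound SSH
ipfw add 65000 deny ip from any to any         # default-deny everything else
```

The `keep-state` keyword creates a dynamic rule on the first packet of a connection, which `check-state` then matches for the rest of the flow, giving stateful filtering on top of the numbered static rules.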
https://en.wikipedia.org/wiki/Hooking
In computer programming, the term hooking covers a range of techniques used to alter or augment the behaviour of an operating system, of applications, or of other software components by intercepting function calls or messages or events passed between software components. Code that handles such intercepted function calls, events or messages is called a hook. Hook methods are of particular importance in the Template Method Pattern where common code in an abstract class can be augmented by custom code in a subclass. In this case each hook method is defined in the abstract class with an empty implementation which then allows a different implementation to be supplied in each concrete subclass. Hooking is used for many purposes, including debugging and extending functionality. Examples might include intercepting keyboard or mouse event messages before they reach an application, or intercepting operating system calls in order to monitor behavior or modify the function of an application or other component. It is also widely used in benchmarking programs, for example frame rate measuring in 3D games, where the output and input is done through hooking. Hooking can also be used by malicious code. For example, rootkits, pieces of software that try to make themselves invisible by faking the output of API calls that would otherwise reveal their existence, often use hooking techniques. Methods Typically hooks are inserted while software is already running, but hooking is a tactic that can also be employed prior to the application being started. Both these techniques are described in greater detail below. Source modification Hooking can be achieved by modifying the source of the executable or library before an application is running, through techniques of reverse engineering. This is typically used to intercept function calls to either monitor or replace them entirely. For example, by using a disassembler, the entry point of a function within a module can be found. It can t
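The Template Method hook described above can be sketched in a few lines of Python. The class and method names here are invented for illustration: the abstract class defines the algorithm skeleton plus an empty hook, and a subclass supplies custom code by overriding only the hook.

```python
class Report:
    def render(self):                  # template method: fixed skeleton
        parts = [self.header(), self.body()]
        extra = self.footer_hook()     # hook point in the common code
        if extra:
            parts.append(extra)
        return "\n".join(parts)

    def header(self):
        return "== report =="

    def body(self):
        return "(no data)"

    def footer_hook(self):             # empty default implementation
        return None

class SignedReport(Report):
    def footer_hook(self):             # custom code supplied by the subclass
        return "-- signed: QA team"

print(SignedReport().render())  # skeleton output augmented by the hook
```

The base algorithm in `render` is never modified; behaviour is altered purely by intercepting the call at the hook point, which is the essence of hooking in this pattern.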
https://en.wikipedia.org/wiki/Canopy%20%28biology%29
In biology, the canopy is the aboveground portion of a plant cropping or crop, formed by the collection of individual plant crowns. In forest ecology, canopy refers to the upper layer or habitat zone, formed by mature tree crowns and including other biological organisms (epiphytes, lianas, arboreal animals, etc.). The communities that inhabit the canopy layer are thought to be involved in maintaining forest diversity, resilience, and functioning. Shade trees normally have a dense canopy that blocks light from lower growing plants. Observation Early observations of canopies were made from the ground using binoculars or by examining fallen material. Researchers would sometimes erroneously rely on extrapolation by using more reachable samples taken from the understory. In some cases, they would use unconventional methods such as chairs suspended on vines or hot-air dirigibles, among others. Modern technology, including adapted mountaineering gear, has made canopy observation significantly easier and more accurate, allowed for longer and more collaborative work, and broadened the scope of canopy study. Structure Canopy structure is the organization or spatial arrangement (three-dimensional geometry) of a plant canopy. Leaf area index, leaf area per unit ground area, is a key measure used to understand and compare plant canopies. The canopy is taller than the understory layer. The canopy holds 90% of the animals in the rainforest. Canopies can cover vast distances and appear to be unbroken when observed from an airplane. However, despite overlapping tree branches, rainforest canopy trees rarely touch each other. Rather, they are usually separated by a few feet. Dominant and co-dominant canopy trees form the uneven canopy layer. Canopy trees are able to photosynthesize relatively rapidly with abundant light, so the canopy layer supports the majority of primary productivity in forests. The canopy layer provides protection from strong winds and storms while also intercepting sunlig
https://en.wikipedia.org/wiki/Active%20Directory%20Rights%20Management%20Services
Active Directory Rights Management Services (AD RMS, known as Rights Management Services or RMS before Windows Server 2008) is a server software for information rights management shipped with Windows Server. It uses encryption and a form of selective functionality denial for limiting access to documents such as corporate e-mails, Microsoft Word documents, and web pages, and the operations authorized users can perform on them. Companies can use this technology to encrypt information stored in such document formats, and through policies embedded in the documents, prevent the protected content from being decrypted except by specified people or groups, in certain environments, under certain conditions, and for certain periods of time. Specific operations like printing, copying, editing, forwarding, and deleting can be allowed or disallowed by content authors for individual pieces of content, and RMS administrators can deploy RMS templates that group these rights together into predefined rights that can be applied en masse. RMS debuted in Windows Server 2003, with client API libraries made available for Windows 2000 and later. The Rights Management Client is included in Windows Vista and later, is available for Windows XP, Windows 2000 or Windows Server 2003. In addition, there is an implementation of AD RMS in Office for Mac to use rights protection in OS X and some third-party products are available to use rights protection on Android, Blackberry OS, iOS and Windows RT. Attacks against policy enforcement capabilities In April 2016, an alleged attack on RMS implementations (including Azure RMS) was published and reported to Microsoft. The published code allows an authorized user that has been granted the right to view an RMS protected document to remove the protection and preserve the file formatting. This sort of manipulation requires that the user has been granted rights to decrypt the content to be able to view it. While Rights Management Services makes certain s
https://en.wikipedia.org/wiki/Iterative%20learning%20control
Iterative Learning Control (ILC) is a method of tracking control for systems that work in a repetitive mode. Examples of systems that operate in a repetitive manner include robot arm manipulators, chemical batch processes and reliability testing rigs. In each of these tasks the system is required to perform the same action over and over again with high precision. This action is represented by the objective of accurately tracking a chosen reference signal on a finite time interval. Repetition allows the system to improve tracking accuracy from repetition to repetition, in effect learning the required input needed to track the reference exactly. The learning process uses information from previous repetitions to improve the control signal, ultimately enabling a suitable control action to be found iteratively. The internal model principle yields conditions under which perfect tracking can be achieved but the design of the control algorithm still leaves many decisions to be made to suit the application. A typical, simple control law is of the form: u_{p+1} = u_p + K e_p, where u_p is the input to the system during the pth repetition, e_p is the tracking error during the pth repetition and K is a design parameter representing operations on e_p. Achieving perfect tracking through iteration is represented by the mathematical requirement of convergence of the input signals as p becomes large, whilst the rate of this convergence represents the desirable practical need for the learning process to be rapid. There is also the need to ensure good algorithm performance even in the presence of uncertainty about the details of process dynamics. The operation K is crucial to achieving design objectives and ranges from simple scalar gains to sophisticated optimization computations. References External links Southampton Sheffield Iterative Learning Control (SSILC) Control theory
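The learning update can be illustrated with a small simulation. The sketch below (Python) applies a scalar-gain update of the form u_{p+1} = u_p + K·e_p to an invented static plant y = a·u tracking a constant reference; the plant model, gains and horizon are assumptions made for illustration, not taken from any particular ILC design.

```python
a, K, N = 0.5, 0.9, 5          # plant gain, learning gain, finite horizon
r = [1.0] * N                  # reference to track on the finite time interval
u = [0.0] * N                  # input signal, improved from repetition to repetition

peak_errors = []
for p in range(20):            # repetitions (trials)
    y = [a * ui for ui in u]                      # run the plant over the interval
    e = [ri - yi for ri, yi in zip(r, y)]         # tracking error e_p
    peak_errors.append(max(abs(ei) for ei in e))
    u = [ui + K * ei for ui, ei in zip(u, e)]     # learning update u_{p+1} = u_p + K e_p

# for this plant the error contracts by |1 - a*K| each repetition
print(peak_errors[0], peak_errors[-1])
```

Because |1 - a·K| < 1 here, the peak tracking error shrinks geometrically from trial to trial, illustrating the convergence requirement described in the text.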
https://en.wikipedia.org/wiki/Filtered%20algebra
In mathematics, a filtered algebra is a generalization of the notion of a graded algebra. Examples appear in many branches of mathematics, especially in homological algebra and representation theory. A filtered algebra over the field k is an algebra (A, ·) over k that has an increasing sequence {0} ⊆ F_0 ⊆ F_1 ⊆ ... ⊆ F_i ⊆ ... ⊆ A of subspaces of A such that A = ∪_{i ≥ 0} F_i and that is compatible with the multiplication in the following sense: for all m, n ≥ 0, F_m · F_n ⊆ F_{m+n}. Associated graded algebra In general there is the following construction that produces a graded algebra out of a filtered algebra. If A is a filtered algebra, then the associated graded algebra G(A) is defined as follows: as a vector space G(A) = ⊕_{n ≥ 0} G_n, where G_0 = F_0 and G_n = F_n / F_{n−1} for n > 0. The multiplication (x + F_{m−1})(y + F_{n−1}) = xy + F_{m+n−1} is well-defined and endows G(A) with the structure of a graded algebra, with gradation {G_n}_{n ≥ 0}. Furthermore if A is associative then so is G(A). Also if A is unital, such that the unit lies in F_0, then G(A) will be unital as well. As algebras A and G(A) are distinct (with the exception of the trivial case that A is graded) but as vector spaces they are isomorphic. (One can prove by induction that F_n is isomorphic to G_0 ⊕ G_1 ⊕ ... ⊕ G_n as vector spaces.) Examples Any graded algebra graded by ℕ, for example A = ⊕_{n ≥ 0} A_n, has a filtration given by F_n = ⊕_{i=0}^{n} A_i. An example of a filtered algebra is the Clifford algebra Cl(V, q) of a vector space V endowed with a quadratic form q. The associated graded algebra is Λ(V), the exterior algebra of V. The symmetric algebra on the dual of an affine space is a filtered algebra of polynomials; on a vector space, one instead obtains a graded algebra. The universal enveloping algebra U(g) of a Lie algebra g is also naturally filtered. The PBW theorem states that the associated graded algebra is simply Sym(g). Scalar differential operators on a manifold M form a filtered algebra where the filtration is given by the degree of differential operators. The associated graded algebra is the commutative algebra of smooth functions on the cotangent bundle T*M which are polynomial along the fibers of the projection π : T*M → M. The group algebra of a group with a length function is a filtered algebra. See also Filtration (math
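The compatibility condition F_m · F_n ⊆ F_{m+n} can be checked concretely on the simplest kind of example, the degree filtration of the polynomial algebra k[x], where F_n is the subspace of polynomials of degree at most n. The following illustrative sketch (polynomials represented as plain coefficient lists) multiplies an element of F_2 by an element of F_3 and confirms the product lies in F_5:

```python
# Degree filtration on the polynomial algebra k[x]: F_n is the subspace of
# polynomials of degree <= n.  Multiplying elements of F_m and F_n lands in
# F_{m+n}, which is the compatibility condition in the definition above.
# Polynomials are coefficient lists [c0, c1, ...]; purely illustrative code.

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def degree(p):
    """Degree of a nonzero polynomial (index of its last nonzero coefficient)."""
    return max(i for i, c in enumerate(p) if c != 0)

p = [1, 2, 3]        # 1 + 2x + 3x^2, an element of F_2
q = [0, 0, 0, 5]     # 5x^3, an element of F_3
r = poly_mul(p, q)
print(degree(r))     # 5: the product lies in F_{2+3} = F_5
```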
https://en.wikipedia.org/wiki/Kali%20%28software%29
Kali is an IPX network emulator for DOS and Windows, enabling legacy multiplayer games to work over a modern TCP/IP network such as the Internet. Later versions of the software also functioned as a server browser for games that natively supported TCP/IP. Versions were also created for OS/2 and Mac, but neither version was well polished. Today, Kali's network is still operational but development has largely ceased. Kali also features an Internet Game Browser for TCP/IP native games, a buddy system, a chat system, and supports 400+ games including Doom 3, many of the Command & Conquer games, the Mechwarrior 2 series, Unreal Tournament 2004, Battlefield Vietnam, Counter-Strike: Condition Zero, and Master of Orion II. The Kali software is free to download, and once had a time-based cap for unregistered versions. For a one-time $20 fee, the time restriction was removed. However, as of January 2023, Kali.net offers the download and a registration code generator on the website, so registration is currently free. History The original MS-DOS version of Kali was created by Scott Coleman, Alex Markovich and Jay Cotton in the spring of 1995. It was the successor to a program called iDOOM (later Frag) that Cotton wrote so he could play id Software's DOS game DOOM over the Internet. After the release of Descent, Coleman, Markovich and Cotton wrote a new program to allow Descent, or any other game which supported LAN play using the IPX protocol, to be played over the Internet; this new program was named Kali. In the summer of 1995, Coleman went off to work for Interplay Productions, Markovich left the project and Cotton formed a new company, Kali Inc., to develop and market Kali. Cotton and his team developed the first Windows version (Kali95) and all subsequent versions. Initially Kali appealed only to hardcore computer tinkerers, due to the difficulty of getting TCP/IP running on MS-DOS. Kali95 took advantage of the greater network support of Windows 95, allowing Kali to achi
https://en.wikipedia.org/wiki/Glomeromycota
Glomeromycota (often referred to as glomeromycetes, as they include only one class, Glomeromycetes) are one of eight currently recognized divisions within the kingdom Fungi, with approximately 230 described species. Members of the Glomeromycota form arbuscular mycorrhizas (AMs) with the thalli of bryophytes and the roots of vascular land plants. Not all species have been shown to form AMs, and one, Geosiphon pyriformis, is known not to do so. Instead, it forms an endocytobiotic association with Nostoc cyanobacteria. The majority of evidence shows that the Glomeromycota are dependent on land plants (Nostoc in the case of Geosiphon) for carbon and energy, but there is recent circumstantial evidence that some species may be able to lead an independent existence. The arbuscular mycorrhizal species are terrestrial and widely distributed in soils worldwide where they form symbioses with the roots of the majority of plant species (>80%). They can also be found in wetlands, including salt-marshes, and associated with epiphytic plants. According to multigene phylogenetic analyses, this taxon is located as a member of the phylum Mucoromycota. Currently the phylum name Glomeromycota may be invalid, and the subphylum Glomeromycotina or the class Glomeromycetes is preferable to describe this taxon. Reproduction The Glomeromycota have generally coenocytic (occasionally sparsely septate) mycelia and reproduce asexually through blastic development of the hyphal tip to produce spores (Glomerospores) with diameters of 80–500 μm. In some, complex spores form within a terminal saccule. Recently it was shown that Glomus species contain 51 genes encoding all the tools necessary for meiosis. Based on these and related findings, it was suggested that Glomus species may have a cryptic sexual cycle. Colonization New colonization of AM fungi largely depends on the amount of inoculum present in the soil. Although pre-existing hyphae and infected root fragments have been shown to succes
https://en.wikipedia.org/wiki/Satplan
Satplan (better known as Planning as Satisfiability) is a method for automated planning. It converts the planning problem instance into an instance of the Boolean satisfiability problem, which is then solved using a method for establishing satisfiability such as the DPLL algorithm or WalkSAT. Given a problem instance in planning, with a given initial state, a given set of actions, a goal, and a horizon length, a formula is generated so that the formula is satisfiable if and only if there is a plan with the given horizon length. This is similar to simulation of Turing machines with the satisfiability problem in the proof of Cook's theorem. A plan can be found by testing the satisfiability of the formulas for different horizon lengths. The simplest way of doing this is to go through horizon lengths sequentially, 0, 1, 2, and so on. See also Graphplan References H. A. Kautz and B. Selman (1992). Planning as satisfiability. In Proceedings of the Tenth European Conference on Artificial Intelligence (ECAI'92), pages 359–363. H. A. Kautz and B. Selman (1996). Pushing the envelope: planning, propositional logic, and stochastic search. In Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI'96), pages 1194–1201. J. Rintanen (2009). Planning and SAT. In A. Biere, H. van Maaren, M. Heule and Toby Walsh, Eds., Handbook of Satisfiability, pages 483–504, IOS Press. Automated planning and scheduling
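As an illustration of the encoding, the following sketch (the miniature domain and all names are invented for this example) compiles a one-fluent, one-action planning instance with horizon length 1 into CNF clauses and checks satisfiability by exhaustive search, standing in for a real SAT procedure such as DPLL or WalkSAT:

```python
# Toy "planning as satisfiability" encoding, brute-forced instead of handed
# to a real SAT solver.  One fluent (a light), one action (toggle),
# horizon 1; everything here is an illustrative miniature.
from itertools import product

# Variables: 0 = light@0, 1 = light@1, 2 = toggle@0.
# A literal is a (variable, wanted_truth_value) pair; a clause is a list
# of literals, at least one of which must hold.
clauses = [
    [(0, False)],                      # initial state: light off at t=0
    [(1, True)],                       # goal: light on at t=1
    # effect axiom, toggle@0 -> (light@1 <-> not light@0), in CNF:
    [(2, False), (0, False), (1, False)],
    [(2, False), (0, True), (1, True)],
    # frame axiom, not toggle@0 -> (light@1 <-> light@0), in CNF:
    [(2, True), (0, False), (1, True)],
    [(2, True), (0, True), (1, False)],
]

def satisfiable(clauses, n_vars=3):
    """Return a satisfying assignment by exhaustive search, or None."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[v] == pol for v, pol in clause) for clause in clauses):
            return bits
    return None

model = satisfiable(clauses)
print(model)  # the plan is read off the action variables: model[2] is True
```

A plan with this horizon exists exactly when the formula is satisfiable, and the satisfying assignment directly encodes which actions to take at each time step.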
https://en.wikipedia.org/wiki/Grid-leak%20detector
A grid leak detector is an electronic circuit that demodulates an amplitude modulated alternating current and amplifies the recovered modulating voltage. The circuit utilizes the non-linear cathode to control grid conduction characteristic and the amplification factor of a vacuum tube. Invented by Lee De Forest around 1912, it was used as the detector (demodulator) in the first vacuum tube radio receivers until the 1930s. History Early applications of triode tubes (Audions) as detectors usually did not include a resistor in the grid circuit. First use of a resistance in the grid circuit of a vacuum tube detector circuit may have been by Sewall Cabot in 1906. Cabot wrote that he made a pencil mark to discharge the grid condenser, after finding that touching the grid terminal of the tube would cause the detector to resume operation after having stopped. Edwin H. Armstrong, in 1915, describes the use of "a resistance of several hundred thousand ohms placed across the grid condenser" for the purpose of discharging the grid condenser. The heyday for grid leak detectors was the 1920s, when battery operated, multiple dial tuned radio frequency receivers using low amplification factor triodes with directly heated cathodes were the contemporary technology. The Zenith Models 11, 12, and 14 are examples of these kinds of radios. After screen-grid tubes became available for new designs in 1927, most manufacturers switched to plate detectors, and later to diode detectors. The grid leak detector has been popular for many years with amateur radio operators and shortwave listeners who construct their own receivers. Functional overview The stage performs two functions: Detection: The control grid and cathode operate as a diode. At small radio frequency signal (carrier) amplitudes, square-law detection takes place due to non-linear curvature of the grid current versus grid voltage characteristic. Detection transitions at larger carrier amplitudes to linear detection behavior
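The square-law behavior at small amplitudes can be illustrated numerically. The sketch below (all signal parameters and the quadratic coefficients are illustrative, not taken from any particular tube) passes an AM waveform through a quadratic conduction characteristic and a crude one-carrier-cycle moving average, recovering a baseband term proportional to the modulation:

```python
# Square-law detection sketch: a quadratic conduction characteristic
# i = a*v + b*v**2 applied to an AM signal produces a baseband component
# proportional to the modulation, which a following low-pass recovers.
import math

fc, fm = 1000.0, 10.0            # carrier and modulation frequencies (Hz)
m = 0.5                          # modulation index
a, b = 1.0, 0.5                  # linear and square-law coefficients
n, dt = 20000, 1.0 / 100000.0    # samples and sample period

# Current after the nonlinearity.
i = []
for k in range(n):
    t = k * dt
    v = (1 + m * math.cos(2 * math.pi * fm * t)) * math.cos(2 * math.pi * fc * t)
    i.append(a * v + b * v * v)

# Crude low-pass: moving average over exactly one carrier period,
# which strips the RF components and keeps the audio-rate terms.
win = round(1 / (fc * dt))
lp = [sum(i[k:k + win]) / win for k in range(n - win)]

# lp now oscillates at fm: it contains b*m*cos(2*pi*fm*t) plus DC and a
# small second harmonic, so its peak-to-peak swing is about 2*b*m = 0.5.
swing = max(lp) - min(lp)
print(swing)
```

Expanding b·v² shows why: squaring the AM signal yields a term b·m·cos(2πf_m t) at the modulation frequency, which survives the low-pass while the carrier-frequency terms are averaged out.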
https://en.wikipedia.org/wiki/Social-desirability%20bias
In social science research, social-desirability bias is a type of response bias that is the tendency of survey respondents to answer questions in a manner that will be viewed favorably by others. It can take the form of over-reporting "good behavior" or under-reporting "bad", or undesirable behavior. The tendency poses a serious problem with conducting research with self-reports. This bias interferes with the interpretation of average tendencies as well as individual differences. Topics subject to social-desirability bias Topics where socially desirable responding (SDR) is of special concern are self-reports of abilities, personality, sexual behavior, and drug use. When confronted with the question "How often do you masturbate?," for example, respondents may be pressured by the societal taboo against masturbation, and either under-report the frequency or avoid answering the question. Therefore, the mean rates of masturbation derived from self-report surveys are likely to be severely underestimated. When confronted with the question, "Do you use drugs/illicit substances?" the respondent may be influenced by the fact that controlled substances, including the more commonly used marijuana, are generally illegal. Respondents may feel pressured to deny any drug use or rationalize it, e.g. "I only smoke marijuana when my friends are around." The bias can also influence reports of number of sexual partners. In fact, the bias may operate in opposite directions for different subgroups: Whereas men tend to inflate the numbers, women tend to underestimate theirs. In either case, the mean reports from both groups are likely to be distorted by social desirability bias. Other topics that are sensitive to social-desirability bias include: Self-reported personality traits will correlate strongly with social desirability bias Personal income and earnings, often inflated when low and deflated when high Feelings of low self-worth and/or powerlessness, often denied Excretory functi
https://en.wikipedia.org/wiki/Whitespace%20character
In computer programming, whitespace is any character or series of characters that represent horizontal or vertical space in typography. When rendered, a whitespace character does not correspond to a visible mark, but typically does occupy an area on a page. For example, the common whitespace symbol, the space character U+0020 (also ASCII 32), represents a blank space punctuation character in text, used as a word divider in Western scripts. Overview With many keyboard layouts, a whitespace character may be entered by pressing the space bar. Horizontal whitespace may also be entered on many keyboards with the Tab key, although the length of the space may vary. Vertical whitespace may be input by typing Enter (or Return), which creates a 'newline' code sequence in most programs. In some systems the two keys have separate meanings, but in others the two are conflated. Many early computer games used whitespace characters to draw a screen (e.g. Kingdom of Kroz). The term "whitespace" is based on the appearance of the characters on ordinary paper. However, within an application, whitespace characters can be processed in the same way as any other character code and different programs may define their own semantics for the characters. Unicode The table below lists the twenty-five characters defined as whitespace ("WSpace=Y", "WS") characters in the Unicode Character Database. Seventeen use a definition of whitespace consistent with the algorithm for bidirectional writing ("Bidirectional Character Type=WS") and are known as "Bidi-WS" characters. The remaining characters may also be used, but are not of this "Bidi" type. Note: Depending on the browser and fonts used to view the following table, not all spaces may be displayed properly. Substitute images Unicode also provides some visible characters that can be used to represent various whitespace characters, in contexts where a visible symbol must be displayed: Exact space The Cambridge Z88 provided a special "exact space" (code point 160 aka 0xA0) (invokable by a key shortcut), displayed
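Whether a given code point counts as whitespace can be probed programmatically. For instance, Python's str.isspace() follows the Unicode White_Space property closely (it additionally accepts a few legacy control codes such as U+001C through U+001F), which makes it a convenient, if slightly over-inclusive, check:

```python
# Probe which code points count as whitespace.  Note that the zero width
# space U+200B renders as no visible mark yet is *not* White_Space=Y in
# the Unicode Character Database, and Python agrees.
samples = {
    "U+0020 SPACE": " ",
    "U+0009 CHARACTER TABULATION": "\t",
    "U+00A0 NO-BREAK SPACE": "\u00a0",
    "U+200B ZERO WIDTH SPACE": "\u200b",
}
for name, ch in samples.items():
    print(name, ch.isspace())
# The first three print True; ZERO WIDTH SPACE prints False.
```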
https://en.wikipedia.org/wiki/LibATA
libATA is a library used inside the Linux kernel to support ATA host controllers and devices. libATA provides an ATA driver API, class transports for ATA and ATAPI devices, and SCSI / ATA Translation for ATA devices according to the T10 SAT specification. Features include power management, Self-Monitoring, Analysis, and Reporting Technology, PATA/SATA, ATAPI, port multiplier, hot swapping and Native Command Queuing. References External links Linux ATA wiki libATA feature table AT Attachment Linux kernel
https://en.wikipedia.org/wiki/Cairo%20pentagonal%20tiling
In geometry, a Cairo pentagonal tiling is a tessellation of the Euclidean plane by congruent convex pentagons, formed by overlaying two tessellations of the plane by hexagons and named for its use as a paving design in Cairo. It is also called MacMahon's net after Percy Alexander MacMahon, who depicted it in his 1921 publication New Mathematical Pastimes. John Horton Conway called it a 4-fold pentille. Infinitely many different pentagons can form this pattern, belonging to two of the 15 families of convex pentagons that can tile the plane. Their tilings have varying symmetries; all are face-symmetric. One particular form of the tiling, dual to the snub square tiling, has tiles with the minimum possible perimeter among all pentagonal tilings. Another, overlaying two flattened tilings by regular hexagons, is the form used in Cairo and has the property that every edge is collinear with infinitely many other edges. In architecture, beyond Cairo, the Cairo tiling has been used in Mughal architecture in 18th-century India, in the early 20th-century Laeiszhalle in Germany, and in many modern buildings and installations. It has also been studied as a crystal structure and appears in the art of M. C. Escher. Structure and classification The union of all edges of a Cairo tiling is the same as the union of two tilings of the plane by hexagons. Each hexagon of one tiling surrounds two vertices of the other tiling, and is divided by the hexagons of the other tiling into four of the pentagons in the Cairo tiling. Infinitely many different pentagons can form Cairo tilings, all with the same pattern of adjacencies between tiles and with the same decomposition into hexagons, but with varying edge lengths, angles, and symmetries. The pentagons that form these tilings can be grouped into two different infinite families, drawn from the 15 families of convex pentagons that can tile the plane, and the five families of pentagon found by Karl Reinhardt in 1918 that can tile the plane i
https://en.wikipedia.org/wiki/233%20%28number%29
233 (two hundred [and] thirty-three) is the natural number following 232 and preceding 234. Additionally: 233 is a prime number; it is also a Sophie Germain prime, a Pillai prime, and a Ramanujan prime. It is a Fibonacci number, one of the Fibonacci primes. There are exactly 233 maximal planar graphs with ten vertices, and 233 connected topological spaces with four points. References Integers
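A few of the stated properties are easy to verify with a short script (illustrative helper functions, using plain trial division):

```python
# Verify that 233 is prime, a Sophie Germain prime (2*233 + 1 = 467 is
# also prime), and a Fibonacci number.

def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def fibonacci_up_to(limit):
    """All Fibonacci numbers not exceeding limit."""
    a, b = 0, 1
    out = []
    while a <= limit:
        out.append(a)
        a, b = b, a + b
    return out

print(is_prime(233), is_prime(2 * 233 + 1), 233 in fibonacci_up_to(300))
# True True True
```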