https://en.wikipedia.org/wiki/Paraboloidal%20coordinates
Paraboloidal coordinates are three-dimensional orthogonal coordinates that generalize two-dimensional parabolic coordinates. They possess elliptic paraboloids as one-coordinate surfaces. As such, they should be distinguished from parabolic cylindrical coordinates and parabolic rotational coordinates, both of which are also generalizations of two-dimensional parabolic coordinates. The coordinate surfaces of the former are parabolic cylinders, and the coordinate surfaces of the latter are circular paraboloids. Unlike cylindrical and rotational parabolic coordinates, but like the related ellipsoidal coordinates, the coordinate surfaces of the paraboloidal coordinate system are not produced by rotating or projecting any two-dimensional orthogonal coordinate system. Basic formulas The Cartesian coordinates can be produced from the paraboloidal coordinates by the equations with Consequently, surfaces of constant are downward opening elliptic paraboloids: Similarly, surfaces of constant are upward opening elliptic paraboloids, whereas surfaces of constant are hyperbolic paraboloids: Scale factors The scale factors for the paraboloidal coordinates are Hence, the infinitesimal volume element is Differential operators Common differential operators can be expressed in the coordinates by substituting the scale factors into the general formulas for these operators, which are applicable to any three-dimensional orthogonal coordinates. For instance, the gradient operator is and the Laplacian is Applications Paraboloidal coordinates can be useful for solving certain partial differential equations. For instance, the Laplace equation and Helmholtz equation are both separable in paraboloidal coordinates. Hence, the coordinates can be used to solve these equations in geometries with paraboloidal symmetry, i.e. with boundary conditions specified on sections of paraboloids. The Helmholtz equation is . Taking , the separated equations are where
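For reference, the general formulas referred to here, valid in any orthogonal coordinates (q1, q2, q3) with scale factors h1, h2, h3 (generic notation, not taken from this article), are:

```latex
\nabla \Phi = \sum_{i=1}^{3} \frac{1}{h_i}\,\frac{\partial \Phi}{\partial q_i}\,\hat{\mathbf{e}}_i,
\qquad
\nabla^{2} \Phi = \frac{1}{h_1 h_2 h_3}\sum_{i=1}^{3}
\frac{\partial}{\partial q_i}\!\left(\frac{h_1 h_2 h_3}{h_i^{2}}\,
\frac{\partial \Phi}{\partial q_i}\right).
```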
https://en.wikipedia.org/wiki/Oblate%20spheroidal%20coordinates
Oblate spheroidal coordinates are a three-dimensional orthogonal coordinate system that results from rotating the two-dimensional elliptic coordinate system about the non-focal axis of the ellipse, i.e., the symmetry axis that separates the foci. Thus, the two foci are transformed into a ring of radius in the x-y plane. (Rotation about the other axis produces prolate spheroidal coordinates.) Oblate spheroidal coordinates can also be considered as a limiting case of ellipsoidal coordinates in which the two largest semi-axes are equal in length. Oblate spheroidal coordinates are often useful in solving partial differential equations when the boundary conditions are defined on an oblate spheroid or a hyperboloid of revolution. For example, they played an important role in the calculation of the Perrin friction factors, which contributed to the awarding of the 1926 Nobel Prize in Physics to Jean Baptiste Perrin. These friction factors determine the rotational diffusion of molecules, which affects the feasibility of many techniques such as protein NMR and from which the hydrodynamic volume and shape of molecules can be inferred. Oblate spheroidal coordinates are also useful in problems of electromagnetism (e.g., dielectric constant of charged oblate molecules), acoustics (e.g., scattering of sound through a circular hole), fluid dynamics (e.g., the flow of water through a firehose nozzle) and the diffusion of materials and heat (e.g., cooling of a red-hot coin in a water bath) Definition (µ,ν,φ) The most common definition of oblate spheroidal coordinates is where is a nonnegative real number and the angle . The azimuthal angle can fall anywhere on a full circle, between . These coordinates are favored over the alternatives below because they are not degenerate; the set of coordinates describes a unique point in Cartesian coordinates . The reverse is also true, except on the -axis and the disk in the -plane inside the focal ring. Coordinate surfaces The
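A small sketch of the forward transformation under the most common (μ, ν, φ) convention, with focal-ring radius a; the formulas below are the standard ones and are assumed here rather than quoted from the text:

```python
import math

def oblate_spheroidal_to_cartesian(mu, nu, phi, a=1.0):
    """Convert oblate spheroidal (mu, nu, phi) to Cartesian (x, y, z).

    Standard convention assumed: focal ring of radius `a` in the x-y
    plane, mu >= 0, -pi/2 <= nu <= pi/2, -pi < phi <= pi.
    """
    x = a * math.cosh(mu) * math.cos(nu) * math.cos(phi)
    y = a * math.cosh(mu) * math.cos(nu) * math.sin(phi)
    z = a * math.sinh(mu) * math.sin(nu)
    return x, y, z

# mu = 0 collapses onto the focal disk; nu = +/-pi/2 gives the z-axis.
print(oblate_spheroidal_to_cartesian(1.0, 0.5, 0.0))
```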
https://en.wikipedia.org/wiki/Ellipsoidal%20coordinates
Ellipsoidal coordinates are a three-dimensional orthogonal coordinate system that generalizes the two-dimensional elliptic coordinate system. Unlike most three-dimensional orthogonal coordinate systems that feature quadratic coordinate surfaces, the ellipsoidal coordinate system is based on confocal quadrics. Basic formulae The Cartesian coordinates can be produced from the ellipsoidal coordinates by the equations where the following limits apply to the coordinates Consequently, surfaces of constant are ellipsoids whereas surfaces of constant are hyperboloids of one sheet because the last term in the lhs is negative, and surfaces of constant are hyperboloids of two sheets because the last two terms in the lhs are negative. The orthogonal system of quadrics used for the ellipsoidal coordinates are confocal quadrics. Scale factors and differential operators For brevity in the equations below, we introduce a function where can represent any of the three variables . Using this function, the scale factors can be written Hence, the infinitesimal volume element equals and the Laplacian is defined by Other differential operators such as and can be expressed in the coordinates by substituting the scale factors into the general formulae found in orthogonal coordinates. Angular parametrization An alternative parametrization exists that closely follows the angular parametrization of spherical coordinates: Here, parametrizes the concentric ellipsoids around the origin and and are the usual polar and azimuthal angles of spherical coordinates, respectively. The corresponding volume element is See also Ellipsoidal latitude Focaloid (shell given by two coordinate surfaces) Map projection of the triaxial ellipsoid References Bibliography Unusual convention Uses (ξ, η, ζ) coordinates that have the units of distance squared. External links MathWorld description of confocal ellipsoidal coordinates Three-dimensional coordinate systems Orthog
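Concretely, under the standard convention a² > b² > c² (assumed here), the three families of coordinate surfaces are the confocal quadrics:

```latex
% Coordinate ranges: -c^2 < \lambda, \quad -b^2 < \mu < -c^2, \quad -a^2 < \nu < -b^2.
\frac{x^2}{a^2+\lambda}+\frac{y^2}{b^2+\lambda}+\frac{z^2}{c^2+\lambda}=1 \quad \text{(ellipsoids)},
\qquad
\frac{x^2}{a^2+\mu}+\frac{y^2}{b^2+\mu}+\frac{z^2}{c^2+\mu}=1 \quad \text{(hyperboloids of one sheet)},
\qquad
\frac{x^2}{a^2+\nu}+\frac{y^2}{b^2+\nu}+\frac{z^2}{c^2+\nu}=1 \quad \text{(hyperboloids of two sheets)}.
```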
https://en.wikipedia.org/wiki/Conical%20coordinates
Conical coordinates, sometimes called sphero-conal or sphero-conical coordinates, are a three-dimensional orthogonal coordinate system consisting of concentric spheres (described by their radius ) and by two families of perpendicular elliptic cones, aligned along the - and -axes, respectively. The intersection between one of the cones and the sphere forms a spherical conic. Basic definitions The conical coordinates are defined by with the following limitations on the coordinates Surfaces of constant are spheres of that radius centered on the origin whereas surfaces of constant and are mutually perpendicular cones and In this coordinate system, both Laplace's equation and the Helmholtz equation are separable. Scale factors The scale factor for the radius is one (), as in spherical coordinates. The scale factors for the two conical coordinates are and References Bibliography External links MathWorld description of conical coordinates Three-dimensional coordinate systems Orthogonal coordinate systems
https://en.wikipedia.org/wiki/Micro-operation
In computer central processing units, micro-operations (also known as micro-ops or μops, historically also as micro-actions) are detailed low-level instructions used in some designs to implement complex machine instructions (sometimes termed macro-instructions in this context). Usually, micro-operations perform basic operations on data stored in one or more registers, including transferring data between registers or between registers and external buses of the central processing unit (CPU), and performing arithmetic or logical operations on registers. In a typical fetch-decode-execute cycle, each step of a macro-instruction is decomposed during its execution so the CPU determines and steps through a series of micro-operations. The execution of micro-operations is performed under control of the CPU's control unit, which decides on their execution while performing various optimizations such as reordering, fusion and caching. Optimizations Various forms of μops have long been the basis for traditional microcode routines used to simplify the implementation of a particular CPU design or perhaps just the sequencing of certain multi-step operations or addressing modes. More recently, μops have also been employed in a different way in order to let modern CISC processors more easily handle asynchronous parallel and speculative execution: As with traditional microcode, one or more table lookups (or equivalent) is done to locate the appropriate μop-sequence based on the encoding and semantics of the machine instruction (the decoding or translation step), however, instead of having rigid μop-sequences controlling the CPU directly from a microcode-ROM, μops are here dynamically buffered for rescheduling before being executed. This buffering means that the fetch and decode stages can be more detached from the execution units than is feasible in a more traditional microcoded (or hard-wired) design. As this allows a degree of freedom regarding execution order, it makes some ex
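As a toy illustration of this decomposition, the sketch below splits a memory-operand instruction into three register-level micro-ops; the instruction names and the exact split are illustrative assumptions, not any real CPU's decode tables:

```python
# Toy decoder: a read-modify-write CISC instruction becomes a
# load / ALU-op / store micro-op sequence. Illustrative only.
def decode(instruction):
    op, dst, src = instruction          # e.g. ("ADD", "[0x1000]", "eax")
    if dst.startswith("["):             # destination is a memory operand
        return [
            ("LOAD",  "tmp", dst),      # fetch the memory operand
            (op,      "tmp", src),      # perform the ALU operation
            ("STORE", dst,   "tmp"),    # write the result back
        ]
    return [(op, dst, src)]             # register form is already one uop

for uop in decode(("ADD", "[0x1000]", "eax")):
    print(uop)
```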
https://en.wikipedia.org/wiki/Metalink
Metalink is an extensible metadata file format that describes one or more computer files available for download. It specifies files appropriate for the user's language and operating system; facilitates file verification and recovery from data corruption; and lists alternate download sources (mirror URIs). The metadata is encoded in HTTP header fields and/or in an XML file with extension .metalink or .meta4. The duplicate download locations provide reliability in case one method fails. Some clients also achieve faster download speeds by allowing different chunks/segments of each file to be downloaded from multiple resources at the same time (segmented downloading). Metalink supports listing multiple partial and full file hashes along with PGP signatures. Most clients only support verifying MD5, SHA-1, and SHA-256, however. Besides FTP and HTTP mirror locations and rsync, it also supports listing the P2P methods BitTorrent, ed2k, magnet link or any other that uses a URI. Development history Metalink 3.0 was publicly released in 2005. It was designed to aid in downloading Linux ISO images and other large files on release day, when servers would be overloaded (each server would have to be tried manually), and to repair large downloads by replacing only the parts with errors instead of fully re-downloading them. It was initially adopted by download managers, and was used by open source projects such as OpenOffice.org and Linux distributions. A community developed around it, more download programs supported it (including proprietary ones) and it saw commercial adoption. In 2008, the community took their work to the Internet Engineering Task Force, which resulted in Metalink 4.0 in 2010, described in a Standards Track RFC. Metalink 3.0 (with the extension .metalink) and Metalink 4.0 (with the extension .meta4) are incompatible because they have a slightly different format. In 2011, another Standards Track RFC described Metalink in HTTP header fields. Client programs Client libraries libmetalink (
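A sketch of producing a minimal Metalink 4.0 document with Python's standard library; the XML namespace is the one registered by RFC 5854, while the file name, hash value, and mirror URLs are made-up placeholders:

```python
import xml.etree.ElementTree as ET

NS = "urn:ietf:params:xml:ns:metalink"   # Metalink 4.0 namespace (RFC 5854)
ET.register_namespace("", NS)

root = ET.Element(f"{{{NS}}}metalink")
file_el = ET.SubElement(root, f"{{{NS}}}file", name="example.iso")
hash_el = ET.SubElement(file_el, f"{{{NS}}}hash", type="sha-256")
hash_el.text = "0123...placeholder...cdef"   # full-file hash for verification
for mirror in ("https://mirror-a.example/example.iso",
               "https://mirror-b.example/example.iso"):
    url_el = ET.SubElement(file_el, f"{{{NS}}}url")
    url_el.text = mirror                     # alternate download sources

ET.ElementTree(root).write("example.meta4",
                           xml_declaration=True, encoding="UTF-8")
```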
https://en.wikipedia.org/wiki/Inauthentic%20text
An inauthentic text is a computer-generated expository document meant to appear as genuine, but which is actually meaningless. Frequently they are created in order to be intermixed with genuine documents and thus manipulate the results of search engines, as with spam blogs. They are also carried along in email in order to fool spam filters by giving the spam the superficial characteristics of legitimate text. Sometimes nonsensical documents are created with computer assistance for humorous effect, as with Dissociated press or Flarf poetry. They have also been used to challenge the veracity of a publication—MIT students submitted papers generated by a computer program called SCIgen to a conference, where they were initially accepted. This led the students to claim that the bar for submissions was too low. With the amount of computer-generated text outpacing the ability of humans to curate it, some means of distinguishing between the two is needed. Yet automated approaches to determining absolutely whether a text is authentic or not face intrinsic challenges of semantics. Noam Chomsky coined the phrase "Colorless green ideas sleep furiously" as an example of a grammatically correct but semantically incoherent sentence; some will point out that in certain contexts one could give this sentence (or any phrase) meaning. The first group to use the expression in this regard was from Indiana University. Their work explains in detail an attempt to detect inauthentic texts and identify pernicious problems of inauthentic texts in cyberspace. The site has a means of submitting text that assesses, based on supervised learning, whether a corpus is inauthentic or not. Many users have submitted incorrect types of data and have correspondingly commented on the scores. This application is meant for a specific kind of data; therefore, submitting, say, an email, will not return a meaningful score. See also Scraper site Spamdexing Stochastic
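A minimal sketch of the kind of supervised classifier described, using n-gram features; this is a generic illustration, not the Indiana University system, and the two training documents are made up:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["a genuine, coherent human-written paragraph about a topic",
         "colorless furious blog spam sleep green keyword furiously"]
labels = [0, 1]                      # 0 = authentic, 1 = inauthentic

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 3)),
                      LogisticRegression())
model.fit(texts, labels)             # a real system needs a large corpus
print(model.predict(["another document to score"]))
```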
https://en.wikipedia.org/wiki/Microsoft%20Fingerprint%20Reader
Microsoft Fingerprint Reader was a device sold by Microsoft, primarily for homes and small businesses. The underlying software providing the biometrics was developed by Digital Persona. Fingerprint readers are more secure, reliable and convenient than a traditional password, although they have been subject to spoofing. A fingerprint recognition system is more tightly linked to a specific user than, e.g., an access card, which can be stolen. History First released on September 4, 2004, this device was supported by Windows XP and Windows Vista x86 operating systems. It was discontinued shortly after Windows Vista was released. Functionality The Fingerprint Reader's software allows the registration of up to ten fingerprints per device. Login names and passwords associated with the registered fingerprints were stored in a database on the user's computer. On presentation of an authorized fingerprint, the software passes the associated login names and passwords to compatible applications and websites, allowing login without a keyboard. If the software finds that the particular fingerprint does not match one in its database, it declines the access. Application 64-bit Windows The Microsoft Fingerprint Reader may be modified to work with 64-bit Windows. Firefox browser The reader works with Firefox using the FingerFox Add-on. See also Fingerprint Fingerprint Verification Competition References External links WebArchive of MS Fingerprint home page FingerprintReader Biometrics Computer access control
https://en.wikipedia.org/wiki/ClamTk
ClamTk is a free software graphical interface for the ClamAV command line antivirus software program, for Linux desktop users. It provides both on-demand and scheduled scanning. The project was started by Dave Mauroni in February 2004 and remains under development. ClamTk was originally written using the Tk widget toolkit, for which it is named, but it was later re-written in Perl, using the GTK toolkit. The interface has evolved considerably over time and recent versions are quite different from early releases, adding features and changing the interface presentation. It is dual-licensed under the GNU General Public License version 1 or later, and the Artistic License. Features The ClamTk interface allows scanning of single files or directories. It can be configured for recursive scans (scanning all sub-directories), for whitelists, to scan for potentially unwanted applications (PUAs), and to exclude hidden files or large files over 20 MB. In 2017, GHacks reviewer Mike Turcotte-McCusker noted the high rate of false positives that the PUA-inclusive scans return. The history selection allows reviewing the results of previous scans and quarantined files. ClamTk allows manual or automatic updates to be configured for ClamAV's virus definitions. The application interfaces with thunar-sendto-clamtk, nemo-sendto-clamtk, clamtk-gnome and clamtk-kde, each of which provides context menu functionality for the associated file managers, Thunar, Nemo, GNOME Files and Dolphin, allowing users to directly send files to ClamTk for scanning. ClamTk can also be run from the command-line interface, although the main reason that command line access exists is for interfacing with the various file managers. Use ClamTk has been included in the repositories of many Linux distributions, including ALT Linux, Arch Linux, CentOS, Debian, Fedora, Gentoo, Linux Mint, Mandriva, openSUSE, PCLinuxOS, Red Hat Enterprise Linux, Ubuntu, as well as FreeBSD. Most users install ClamTk from the repositori
https://en.wikipedia.org/wiki/Enterprise%20Distributed%20Object%20Computing
The UML profile for Enterprise Distributed Object Computing (EDOC) is a standard of the Object Management Group in support of open distributed computing using model-driven architecture and service-oriented architecture. Its aim is to simplify the development of component-based (EDOC) systems by providing a UML-based modeling framework conforming to the MDA of the OMG. The basis of EDOC is the Enterprise Collaboration Architecture (ECA) meta-model, which defines how roles interact within communities in the performance of collaborative business processes. The seven EDOC specifications EDOC is composed of seven specifications: The Enterprise Collaboration Architecture, ECA The Metamodel and UML Profile for Java and EJB The Flow Composition Model, FCM The UML Profile for Patterns The UML Profile for ECA The UML Profile for Meta Object Facility The UML Profile for Relationships See also Model Driven Engineering (MDE) Model-driven architecture (MDA) Meta-model Meta-modeling Meta-Object Facility (MOF) Unified Modeling Language (UML) External links OMG EDOC Standard at the Internet Archive Unified Modeling Language
https://en.wikipedia.org/wiki/Reglet
A reglet is found on the exterior of a building along a masonry wall, chimney or parapet that meets the roof. It is a groove cut within a mortar joint that receives counter-flashing, which covers the surface flashing used to deflect water infiltration. Reglet can also refer to the counter-flashing itself when it is applied on the surface, known as "face reglet" or "reglet-flashing". Description Reglet Groove The reglet is typically created with a grinder or masonry cutting saw that cuts 3/4" to 1-1/2" deep into a mortar joint between two bricks. The counter-flashing is then inserted into the reglet and held in place with a thin metal wedge covered with a sealant. Face Reglet A face reglet (also known as reglet-flashing) is counter-flashing that is typically made out of either copper or lead-coated copper. It is applied on the surface of the wall or parapet and screwed into place, with additional sealant placed between the surface and the counter-flashing. It is easily removable for roof repair and flashing replacements. A face reglet can also be called a raggle and may be related to regle, a groove. Assembly See also Flashing References Moisture protection Building engineering
https://en.wikipedia.org/wiki/Korea%20Atomic%20Energy%20Research%20Institute
The Korea Atomic Energy Research Institute (KAERI) in Daejeon, South Korea was established in 1959 as the sole professional research-oriented institute for nuclear power in South Korea, and has rapidly built a reputation for research and development in various fields. History KAERI was established in 1959 as the Atomic Energy Research Institute (a national research institute). Spin-off companies and institutions KAERI has made significant contributions to the nation's nuclear technology development. After Korea achieved self-reliance in nuclear core technologies, KAERI transferred highly developed technologies to local industries for practical applications. The Korea Institute of Nuclear Safety (KINS), responsible for supporting the government in regulatory and licensing works, and the Nuclear Environment Technology Institute, responsible for low and medium level radioactive waste management, are also originally spin-offs from KAERI. KAERI established the present KEPCO E&C (full name: KEPCO Engineering & Construction Company, Inc., formerly KOPEC), responsible not only for the architect engineering works of nuclear power plants, but also for designing nuclear steam supply systems. KAERI also established the present Korea Nuclear Fuel Co., Ltd. (KNFC), responsible for designing and manufacturing PWR as well as PHWR fuels. In 2004 the Korea Institute of Nuclear Nonproliferation and Control was spun out of KAERI. KAERI-developed and commercialized technologies In 1995 KAERI designed and constructed the nation's first multipurpose research reactor, HANARO, based on the Canadian MAPLE design. A reactor based on this design was exported to Jordan as the Jordan Research and Training Reactor (JRTR). KAERI is dedicated to finding a wide range of uses for atomic energy. As examples, KAERI developed the world's first radiopharmaceutical, "Milican injection", for treating liver cancer, and the System-Integrated Modular Advanced Reactor (SMART). See also Korea Univers
https://en.wikipedia.org/wiki/Memory%20latency
Memory latency is the time (the latency) between initiating a request for a byte or word in memory until it is retrieved by a processor. If the data are not in the processor's cache, it takes longer to obtain them, as the processor will have to communicate with the external memory cells. Latency is therefore a fundamental measure of the speed of memory: the lower the latency, the faster the reading operation. Latency should not be confused with memory bandwidth, which measures the throughput of memory. Latency can be expressed in clock cycles or in time measured in nanoseconds. Over time, memory latencies expressed in clock cycles have been fairly stable, while latencies expressed in absolute time have improved. See also Burst mode (computing) CAS latency Multi-channel memory architecture Interleaved memory SDRAM burst ordering SDRAM latency References External links Overview of the different kinds of Memory Latency Article and Analogy of the Effects of Memory Latency Computer memory
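A one-line conversion between the two units, with made-up numbers:

```python
# Latency in cycles vs. nanoseconds; assumes a 3.2 GHz clock and a
# 48-cycle latency, both illustrative values.
clock_hz = 3.2e9
latency_cycles = 48

latency_ns = latency_cycles / clock_hz * 1e9
print(f"{latency_cycles} cycles at {clock_hz / 1e9:.1f} GHz = {latency_ns:.1f} ns")
```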
https://en.wikipedia.org/wiki/Dermatoglyphics
Dermatoglyphics (from Ancient Greek derma, "skin", and glyph, "carving") is the scientific study of fingerprints, lines, mounts and shapes of hands, as distinct from the superficially similar pseudoscience of palmistry. Dermatoglyphics also refers to the naturally occurring ridges on certain body parts, namely palms, fingers, soles, and toes. These are areas where hair usually does not grow, and these ridges allow for increased leverage when picking up objects or walking barefoot. In a 2009 report, the scientific basis underlying dermatoglyphics was questioned by the National Academy of Sciences, for the discipline's reliance on subjective comparisons instead of conclusions drawn from the scientific method. History 1823 marks the beginning of the scientific study of papillary ridges of the hands and feet, with the work of Jan Evangelista Purkyně. By 1858, Sir William Herschel, 2nd Baronet, while in India, became the first European to realize the value of fingerprints for identification. Sir Francis Galton conducted extensive research on the importance of skin-ridge patterns, demonstrating their permanence and advancing the science of fingerprint identification with his 1892 book Fingerprints. In 1893, Sir Edward Henry published the book The classification and uses of fingerprints, which marked the beginning of the modern era of fingerprint identification and is the basis for other classification systems. In 1929, Harold Cummins and Charles Midlo M.D., together with others, published the influential book Fingerprints, Palms and Soles, a bible in the field of dermatoglyphics. In 1945, Lionel Penrose, inspired by the works of Cummins and Midlo, conducted his own dermatoglyphic investigations as a part of his research into Down syndrome and other congenital medical disorders. In 1976, Schaumann and Alter published the book Dermatoglyphics in Medical Disorders, which summarizes the findings of dermatoglyphic patterns under disease conditions. In 1982,
https://en.wikipedia.org/wiki/Rushbrooke%20inequality
In statistical mechanics, the Rushbrooke inequality relates the critical exponents of a magnetic system which exhibits a first-order phase transition in the thermodynamic limit for non-zero temperature T. Since the Helmholtz free energy is extensive, the normalization to free energy per site is given as The magnetization M per site in the thermodynamic limit, depending on the external magnetic field H and temperature T is given by where is the spin at the i-th site, and the magnetic susceptibility and specific heat at constant temperature and field are given by, respectively and Definitions The critical exponents and are defined in terms of the behaviour of the order parameters and response functions near the critical point as follows where measures the temperature relative to the critical point. Derivation For the magnetic analogue of the Maxwell relations for the response functions, the relation follows, and with thermodynamic stability requiring that , one has which, under the conditions and the definition of the critical exponents gives which gives the Rushbrooke inequality Remarkably, in experiment and in exactly solved models, the inequality actually holds as an equality. Critical phenomena Statistical mechanics
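The definitions and result referred to above take the following standard form (conventional symbols assumed here, with t the reduced temperature):

```latex
% t = (T - T_c)/T_c measures the distance from the critical point.
M \sim (-t)^{\beta} \ (t \to 0^{-}), \qquad
\chi_T \sim |t|^{-\gamma}, \qquad
c_H \sim |t|^{-\alpha},
```

and thermodynamic stability then yields the Rushbrooke inequality

```latex
\alpha + 2\beta + \gamma \ge 2 .
```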
https://en.wikipedia.org/wiki/Bulk%20soil
Bulk soil is soil outside the rhizosphere that is not penetrated by plant roots. The bulk soil is like an ecosystem: it is made up of many things, such as nutrients, ions, soil particles, and root exudates. There are many different interactions that occur between all the members of the bulk soil. Natural organic compounds are much lower in bulk soil than in the rhizosphere. Furthermore, bulk soil inhabitants are generally smaller than identical species in the rhizosphere. The two main aspects of bulk soil are its chemistry and its microbial community composition. Chemistry of bulk soil Soil is made up of layers called soil horizons, which together make up a vertical soil profile. There are five master horizons: O, A, E, B, and C. The O horizon contains organic matter, A is considered the topsoil, E is present or absent depending on the type of soil and conditions, B is the subsoil, and C is unconsolidated rock. There are many chemical interactions and properties throughout the soil. Chemical properties of the bulk soil include organic matter, carbon and nutrient content, cation-exchange capacity (CEC), free ions (cations or anions), pH, base saturation, and organisms. These can impact many chemical processes such as nutrient cycling, soil formation, biological activity, and erosion. Microbial communities Soil is composed of a diverse community of microbes such as fungi, bacteria, archaea, viruses and microfauna. There are microbes in both the bulk soil and the rhizosphere; the variety of microbes is greater in the bulk soil, while the abundance of microbes is greater in the rhizosphere. Some microbes can form symbioses with plants that are beneficial or pathogenic. All these microbes have a special role in many soil processes, such as soil formation, organic matter decomposition, and nutrient cycling. For example, there are microbes in the rhizosphere (on the plant) that can break down nitrogen, and microbes out in the bulk soil can break down nitrogen as well. Both have different factors
https://en.wikipedia.org/wiki/Raise%20borer
A raise borer is a machine used in underground mining to excavate a circular hole between two levels of a mine without the use of explosives. The raise borer is set up on the upper level of the two levels to be connected, on an evenly laid platform (typically a concrete pad). A small-diameter hole (pilot hole) is drilled to the level required; the diameter of this hole is typically 230–445 mm (9–17.5 in), large enough to accommodate the drill string. Once the drill has broken into the opening on the target level, the bit is removed and a reamer head, of the required diameter of the excavation, is attached to the drill string and raised back towards the machine. The drill cuttings from the reamer head fall to the floor of the lower level. The finished raise has smooth walls and may not require rock bolting or other forms of ground support. One impressive use of raise boring is the 7.1 m diameter shafts for Sasol's Middelbult and Bosjesspruit Mines in South Africa. The boxhole borer (or machine roger) is a variant of a raise borer that is used when there is not enough space on the higher of the two levels to be connected. The boxhole borer is set up on the lower level, drills a pilot hole as a guide, then drives the reamer bit along the pilot hole from the lower level to the upper. Precautions have to be taken to redirect falling drill cuttings away from the machine, and to reinforce the drill string. See also 2010 Copiapó mining accident - Rescue References External links Dictionary of Mining, Mineral, and Related Terms Mining equipment Underground mining
https://en.wikipedia.org/wiki/Co-adaptation
In biology, co-adaptation is the process by which two or more species, genes or phenotypic traits undergo adaptation as a pair or group. This occurs when two or more interacting characteristics undergo natural selection together in response to the same selective pressure, or when selective pressures alter one characteristic and consecutively alter the interactive characteristic. These interacting characteristics are only beneficial when together, sometimes leading to increased interdependence. Co-adaptation and coevolution, although similar in process, are not the same; co-adaptation refers to the interactions between two units, whereas co-evolution refers to their evolutionary history. Co-adaptation and its examples are often seen as evidence for co-evolution. Genes and Protein Complexes At the genetic level, co-adaptation is the accumulation of interacting genes in the gene pool of a population by selection. Selection pressures on one of the genes will affect its interacting proteins, after which compensatory changes occur. Proteins often act in complex interactions with other proteins, and functionally related proteins often show a similar evolutionary path. A possible explanation is co-adaptation. An example of this is the interaction between proteins encoded by mitochondrial DNA (mtDNA) and nuclear DNA (nDNA). MtDNA has a higher rate of evolution/mutation than nDNA, especially in specific coding regions. However, in order to maintain physiological functionality, selection for functionally interacting proteins, and therefore for co-adapted nDNA, will be favourable. Co-adaptation between mtDNA and nDNA sequences has been studied in the copepod Tigriopus californicus. The mtDNA of COII coding sequences among conspecific populations of this species diverges extensively. When the mtDNA of one population was placed in the nuclear background of another population, cytochrome c oxidase activity was significantly decreased, suggesting co-adaptation. Results show an unlikely relation
https://en.wikipedia.org/wiki/HM%20Government%20Communications%20Centre
His Majesty's Government Communications Centre (HMGCC) is an organisation which provides electronics and software to support the communication needs of the British Government. Based at Hanslope Park, near Milton Keynes in Buckinghamshire, it is closely linked with the Foreign, Commonwealth and Development Office and the British intelligence community. History HMGCC used to have a communications centre at Signal Hill near Gawcott, in Buckinghamshire. Stephen Ball was Chief Executive until 2000, when Dr John Widdowson took over; Widdowson moved to GCHQ in 2005. Sarah-Jill Lennard was CEO from 2008 to 2011. Juliette Wilcox was CEO from 2019 to 2021. Structure The organisation employs more than 380 personnel and the solutions it provides are bespoke to fit the needs of the government, its organisations, and specifically its intelligence assets. HMGCC is responsible for research and design in the following disciplines: RF engineering Signal processing Software engineering Acoustics Audio engineering Operating systems GUI design Embedded systems System engineering Manufacture and application of microcircuits Study of power sources Operational research Mechanical engineering See also GCHQ MI5 MI6 Bletchley Park Hanslope Park References External links Official HMGCC Website Hi-res aerial photography (UK Secret Bases, Jan 2007) reveals £30 million expansion project Telegraph March 2008 British intelligence agencies Computer security in the United Kingdom Cryptography organizations Information technology organisations based in the United Kingdom Organisations based in Milton Keynes Research institutes in Buckinghamshire
https://en.wikipedia.org/wiki/Svenskt%20Diplomatarium
Svenskt Diplomatarium (also known under the Latin name Diplomatarium Suecanum) is a series of critical editions of medieval Swedish documents or documents pertaining to the history of Sweden (in Swedish, Latin and other languages). Begun in the 1820s by the antiquarian Johan Gustaf Liljegren and inactive for periods, the work has since 1976 been in the hands of a department within the Swedish National Archives. The editorial committee works through the material in chronological order, and the fascicle published in 2004 included documents dating to the 1370s. Since 1999, the editorial committee has published the index of known medieval documents (including those from the periods not yet covered by the printed fascicles) in a database available first on a compact disc, later on the World Wide Web. Many documents are also available online in full text and with colour images of the originals. Historiography of Sweden Online databases
https://en.wikipedia.org/wiki/TMS6100
The Texas Instruments TMS6100 is a 1- or 4-bit serial mask (factory)-programmed read-only memory IC. It is a companion chip to the TMS5100, CD2802, TMS5110, (rarely) TMS5200, and (rarely) TMS5220 speech synthesizer ICs, and was mask-programmed with the LPC data required for a specific product. It holds 128Kib (16KiB) of data, and is mask-programmed with a start address for said data on a 16KiB boundary. It is also mask-programmable whether the /CE line needs to be high or low to activate the chip, and what the two (or four) 'internal' CE bits need to be set to, effectively making the total addressable area 18 bits. Finally, it is mask-programmable whether the bits are read out 1-bit serially or 4 at a time. TMS6125 The TMS6125 is a smaller, 32Kib (4KiB) version of effectively the same chip, with some minor changes to the 'address load' command format to reflect its smaller size. Texas Instruments calls both of these serial ROMs (TMS6100 and TMS6125) "VSM"s (Voice Synthesis Memory) on their datasheets and literature, and they will be referred to as such for the rest of this article. Both VSMs use 'local addressing', meaning the chip keeps track of its own address pointer once loaded. Hence every bit in the chip can be sequentially read out, even though internally the chip stores data in 8-bit bytes. (For the following section, CE stands for "Chip Enable" and is used as a way to enable one specific VSM.) Commands The VSM supports 4 basic commands, based on two input pins called 'M0' and 'M1': no operation/idle: this command tells the chip to 'do nothing' or 'continue doing what was being done before'. load address: this command parallel-loads 4 bits from the data bus. To fully load an address, this command must be executed 5 times in sequence, for a load of a 20-bit block (LSB-first 14-bit address, 4 CE bits, and two unused bits, effectively 18 address bits) into the internal address pointer. On the TMS6125 the command must be executed 4 times instead, and o
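A sketch of the 'load address' sequence just described: five 4-bit parallel loads accumulate a 20-bit value, LSB first, which is then split into the 14-bit internal address and the 4 CE bits (the example nibbles are arbitrary):

```python
def load_address(nibbles):
    """Assemble the TMS6100's 20-bit load-address value from five
    4-bit transfers (LSB-first, per the layout described above)."""
    assert len(nibbles) == 5, "the TMS6100 expects five 4-bit loads"
    value = 0
    for i, nib in enumerate(nibbles):
        value |= (nib & 0xF) << (4 * i)   # LSB-first accumulation
    address = value & 0x3FFF              # 14-bit address within the chip
    ce_bits = (value >> 14) & 0xF         # 4 chip-enable select bits
    return address, ce_bits               # top 2 bits are unused

addr, ce = load_address([0x4, 0x2, 0x1, 0x8, 0x0])
print(f"address={addr:#06x}, CE bits={ce:#03x}")
```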
https://en.wikipedia.org/wiki/Conway%20polyhedron%20notation
In geometry, Conway polyhedron notation, invented by John Horton Conway and promoted by George W. Hart, is used to describe polyhedra based on a seed polyhedron modified by various prefix operations. Conway and Hart extended the idea of using operators, like truncation as defined by Kepler, to build related polyhedra of the same symmetry. For example, tC represents a truncated cube, and taC, parsed as t(aC), is (topologically) a truncated cuboctahedron. The simplest operator dual swaps vertex and face elements; e.g., a dual cube is an octahedron: dC = O. Applied in a series, these operators allow many higher order polyhedra to be generated. Conway defined the operators a (ambo), b (bevel), d (dual), e (expand), g (gyro), j (join), k (kis), m (meta), o (ortho), s (snub), and t (truncate), while Hart added r (reflect) and p (propellor). Later implementations named further operators, sometimes referred to as "extended" operators. Conway's basic operations are sufficient to generate the Archimedean and Catalan solids from the Platonic solids. Some basic operations can be made as composites of others: for instance, ambo applied twice is the expand operation (aa = e), while a truncation after ambo produces bevel (ta = b). Polyhedra can be studied topologically, in terms of how their vertices, edges, and faces connect together, or geometrically, in terms of the placement of those elements in space. Different implementations of these operators may create polyhedra that are geometrically different but topologically equivalent. These topologically equivalent polyhedra can be thought of as one of many embeddings of a polyhedral graph on the sphere. Unless otherwise specified, in this article (and in the literature on Conway operators in general) topology is the primary concern. Polyhedra with genus 0 (i.e. topologically equivalent to a sphere) are often put into canonical form to avoid ambiguity. Operators In Conway's notation, operations on polyhedra are applied like functions, from right to left. For example, a
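One way to make the operator algebra concrete is to track how each operator transforms the polyhedron's (vertex, edge, face) counts. The sketch below hard-codes the standard combinatorial effect of a few basic operators (a small subset, chosen for illustration):

```python
# Effect of some Conway operators on element counts (V, E, F).
OPS = {
    "d": lambda V, E, F: (F, E, V),              # dual: swap V and F
    "a": lambda V, E, F: (E, 2 * E, V + F),      # ambo
    "k": lambda V, E, F: (V + F, 3 * E, 2 * E),  # kis
    "t": lambda V, E, F: (2 * E, 3 * E, V + F),  # truncate
    "j": lambda V, E, F: (V + F, 2 * E, E),      # join
}

def counts(word, seed):
    V, E, F = seed
    for op in reversed(word):        # operators apply right to left
        V, E, F = OPS[op](V, E, F)
    return V, E, F

cube = (8, 12, 6)
print(counts("t", cube))    # truncated cube: (24, 36, 14)
print(counts("ta", cube))   # truncated cuboctahedron: (48, 72, 26)
```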
https://en.wikipedia.org/wiki/End%20of%20message
End of message or EOM (as in "(EOM)" or "<EOM>") signifies the end of a message, often an e-mail message. Usage The subject of an e-mail message may contain such an abbreviation to signify that all content is in the subject line so that the message itself does not need to be opened (e.g., "No classes Monday (EOM)" or "Midterm delayed <EOM>"). This practice can save the time of the receiver and has been recommended to increase productivity. EOM can also be used in conjunction with no reply necessary, or NRN, to signify that the sender does not require (or would prefer not to receive) a response (e.g., "Campaign has launched (EOM/NRN)"), or with reply requested, or RR, to signify that the sender wishes a response (e.g., "Got a minute? (EOM/RR)"). These are examples of Internet slang. EOM is often used this way, as a synonym for NRN, in blogs and forums online. It is often a snide way for commenters to imply that their message is so perfect that there can be no logical response to it. Or it can be used as a way of telling another specific poster to stop writing back. In the Emergency Alert System, EOM also refers to the final three bursts of an alert, which indicate that the alert is finished. Origin In earlier communications methods, an end of message ("EOM") sequence of characters indicated to a receiving device or operator that the current message has ended. In teleprinter systems, the sequence "NNNN", on a line by itself, is an end of message indicator. In several Morse code conventions, including amateur radio, the prosign AR (dit dah dit dah dit) means end of message. In the original ASCII code, "EOM" corresponded to code 03hex, which has since been renamed to "ETX" ("end of text"). See also EOM (disambiguation) End-of-file, also abbreviated EOF List of computing and IT abbreviations -30- References Communication Email
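A toy sketch of the teleprinter convention, treating a line consisting solely of "NNNN" as the end-of-message mark (the sample messages are made up):

```python
def split_messages(stream):
    """Split a teleprinter-style text stream on NNNN terminator lines."""
    messages, current = [], []
    for line in stream.splitlines():
        if line.strip() == "NNNN":           # end-of-message indicator
            messages.append("\n".join(current))
            current = []
        else:
            current.append(line)
    return messages

print(split_messages("WEATHER BULLETIN\nSTORM WARNING\nNNNN\nNEXT MESSAGE\nNNNN"))
```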
https://en.wikipedia.org/wiki/Non-explosive%20demolition%20agents
Non-explosive demolition agents are chemicals that are an alternative to explosives and gas pressure blasting products in demolition, mining, and quarrying. To use non-explosive demolition agents in demolition or quarrying, holes are drilled in the base rock as they would be for use with conventional explosives. A slurry mixture of the non-explosive demolition agent and water is poured into the drill holes. Over the next few hours the slurry expands, cracking the rock in a pattern somewhat like the cracking that would occur from conventional explosives. Non-explosive demolition agents offer many advantages, including that they are silent and do not produce vibration the way a conventional explosive would. In some applications conventional explosives are more economical than non-explosive demolition agents. In many countries these agents are available without restriction, unlike explosives, which are highly regulated. The active ingredient is typically calcium oxide ("burnt lime"), usually mixed with a small amount of Portland cement and possibly other modifiers. These agents are much safer than explosives, but they have to be used as directed to avoid steam explosions during the first few hours after being placed. Many patents describe non-explosive demolition agents containing CaO, SiO2 and/or cement. See also Plug and feather Explosive material Mining Quarry References Building engineering Chemical engineering
https://en.wikipedia.org/wiki/Efferent%20coupling
Efferent coupling is a coupling metric in software development. It measures the number of data types a class knows about, including through inheritance, interface implementation, parameter types, variable types, and exceptions. Robert C. Martin has also referred to this as the fan-out stability metric, which he describes in his book Clean Architecture as outgoing dependencies: the number of classes inside a component that depend on classes outside the component. This metric is often used to calculate the instability of a component in software architecture as I = Fan-out / (Fan-in + Fan-out). This metric has the range [0,1]: I = 0 is maximally stable, while I = 1 is maximally unstable. References Software metrics
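A direct transcription of that formula (the value returned for a component with no dependencies in either direction is a convention assumed here):

```python
def instability(fan_in: int, fan_out: int) -> float:
    """I = fan_out / (fan_in + fan_out), in [0, 1]."""
    if fan_in + fan_out == 0:
        return 0.0            # isolated component; convention, not sourced
    return fan_out / (fan_in + fan_out)

print(instability(fan_in=3, fan_out=1))  # 0.25 -> relatively stable
print(instability(fan_in=0, fan_out=5))  # 1.0  -> maximally unstable
```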
https://en.wikipedia.org/wiki/University%20of%20New%20Hampshire%20InterOperability%20Laboratory
The University of New Hampshire InterOperability Laboratory (UNH-IOL) is an independent test facility that provides interoperability and standards conformance testing for networking, telecommunications, data storage, and consumer technology products. Founded in 1988, it employs approximately 25 full-time staff members and over 100 part-time undergraduate and graduate students, and counts over 150 companies as members. History The UNH-IOL began as a project of the University's Research Computing Center (RCC). In 1988 the RCC was testing Fiber Distributed Data Interface (FDDI) equipment with the intention of deploying it in its network. The RCC found that equipment from two vendors did not work together and contacted the vendors to find a solution. The two vendors cooperated with the RCC to solve the problem which was caused by differences between the draft and final FDDI specification. During this same time period the RCC was testing 10BASE-T Ethernet interfaces for another project. The University recognized the need for interoperability testing of networking equipment and also the opportunity to provide students with hands-on experience in emerging technologies. With the idea of providing testing services to companies in a vendor-neutral environment the first UNH-IOL consortium (10BASE-T Ethernet) was founded in 1990. Over the next decade the UNH-IOL grew to twelve consortia with over 100 member companies. In 2002, having outgrown several smaller locations, the UNH-IOL moved to a 32,000 square foot facility on the outskirts of the UNH campus. One area in which the UNH-IOL has been influential is IPv6 standardization and deployment. Between 2003 and 2007 the UNH-IOL organized the Moonv6 project, which was a multi-site, IPv6 based network designed to test the interoperability of IPv6 implementations. At the time the Moonv6 project was the largest permanently deployed multi-vendor IPv6 network in the world. The UNH-IOL is also the only North American laboratory of
https://en.wikipedia.org/wiki/Rose%20%28heraldry%29
The rose is a common device in heraldry. It is often used both as a charge on a coat of arms and by itself as an heraldic badge. The heraldic rose has a stylized form consisting of five symmetrical lobes, five barbs, and a circular seed. The rose is one of the most common plant symbols in heraldry, together with the lily, which also has a stylistic representation in the fleur-de-lis. The rose was the symbol of the English Tudor dynasty, and the ten-petaled Tudor rose (termed a double rose) is associated with England. Roses also feature prominently in the arms of the princely House of Lippe and on the seal of Martin Luther. Appearance The normal appearance of the heraldic rose is a five-petaled rose, mimicking the look of a wild rose on a hedgerow. It is shown singly and full-faced. It most commonly has yellow seeds in the center and five green barbs as backing; such a rose is blazoned as barbed and seeded proper. If the seeds and barbs are of a different colour, then the rose is barbed and seeded of that/those tinctures. The rose of Lippe shown below, for example, is blazoned a Rose Gules, barbed and seeded Or. Some variations on the rose have been used. Roses may appear with a stem, in which case they are described as slipped or stalked. A rose with a stalk and leaves may also be referred to as a damask rose, stalked and leaved, as appearing on the Canting arms of the House of Rossetti. Rose branches, slips, and leaves have occasionally appeared in arms alone, without the flower. A combination of two roses, one within the other, is termed a double rose, famously used by the Tudors. A rose sometimes appears surrounded by rays, which makes it a rose-en-soleil (rose in the sun). A rose may be crowned. Roses may appear within a chaplet, a garland of leaves with four flowers. In badges, it is not uncommon for a rose to be conjoined with another device. Catherine of Aragon's famous badge was a pomegranate conjoined with the double rose of her husband, Henry VI
https://en.wikipedia.org/wiki/Apollonian%20circles
In geometry, Apollonian circles are two families (pencils) of circles such that every circle in the first family intersects every circle in the second family orthogonally, and vice versa. These circles form the basis for bipolar coordinates. They were discovered by Apollonius of Perga, a renowned Greek geometer. Definition The Apollonian circles are defined in two different ways by a line segment denoted . Each circle in the first family (the blue circles in the figure) is associated with a positive real number , and is defined as the locus of points such that the ratio of distances from to and to equals , For values of close to zero, the corresponding circle is close to , while for values of close to , the corresponding circle is close to ; for the intermediate value , the circle degenerates to a line, the perpendicular bisector of . The equation defining these circles as a locus can be generalized to define the Fermat–Apollonius circles of larger sets of weighted points. Each circle in the second family (the red circles in the figure) is associated with an angle , and is defined as the locus of points such that the inscribed angle equals , Scanning from 0 to π generates the set of all circles passing through the two points and . The two points where all the red circles cross are the limiting points of pairs of circles in the blue family. Bipolar coordinates A given blue circle and a given red circle intersect in two points. In order to obtain bipolar coordinates, a method is required to specify which point is the right one. An isoptic arc is the locus of points that see the points under a given oriented angle of vectors, i.e. Such an arc is contained in a red circle and is bounded by the points . The remaining part of the corresponding red circle is . When we really want the whole red circle, a description using oriented angles of straight lines has to be used: Pencils of circles Both of the families of Apollonian circles are pencils of circles.
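In symbols, writing the segment's endpoints as C and D (notation assumed here), the two families are:

```latex
% Blue family: one circle per ratio r > 0.
\mathcal{C}_r = \{\, X : |XC| \,/\, |XD| = r \,\},
\qquad
% Red family: one circle per inscribed angle \theta.
\mathcal{C}_\theta = \{\, X : \angle CXD = \theta \,\}.
```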
https://en.wikipedia.org/wiki/257-gon
In geometry, a 257-gon (also known as a dihectapentacontakaiheptagon) is a polygon with 257 sides. The sum of the interior angles of any non-self-intersecting 257-gon is 45,900°. Regular 257-gon The area of a regular 257-gon is (with ) A whole regular 257-gon is not visually discernible from a circle, and its perimeter differs from that of the circumscribed circle by about 24 parts per million. Construction The regular 257-gon (one with all sides equal and all angles equal) is of interest for being a constructible polygon: that is, it can be constructed using a compass and an unmarked straightedge. This is because 257 is a Fermat prime, being of the form 2^(2^n) + 1 (in this case n = 3). Thus, the values and are 128-degree algebraic numbers, and like all constructible numbers they can be written using square roots and no higher-order roots. Although it was known to Gauss by 1801 that the regular 257-gon was constructible, the first explicit constructions of a regular 257-gon were given by Magnus Georg Paucker (1822) and Friedrich Julius Richelot (1832). Another method involves the use of 150 circles, 24 being Carlyle circles: this method is pictured below. One of these Carlyle circles solves the quadratic equation x^2 + x − 64 = 0. Symmetry The regular 257-gon has Dih257 symmetry, order 514. Since 257 is a prime number there is one subgroup with dihedral symmetry: Dih1, and 2 cyclic group symmetries: Z257, and Z1. 257-gram A 257-gram is a 257-sided star polygon. As 257 is prime, there are 127 regular forms generated by Schläfli symbols {257/n} for all integers 2 ≤ n ≤ 128 as . Below is a view of {257/128}, with 257 nearly radial edges, with its star vertex internal angles 180°/257 (~0.7°). See also 17-gon List of polygons List of self-intersecting polygons References External links Robert Dixon Mathographics. New York: Dover, p. 53, 1991. Benjamin Bold, Famous Problems of Geometry and How to Solve Them. New York: Dover, p. 70, 1982. H. S. M. Coxe
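Written out, the standard regular-polygon area formula gives, for side length t (notation assumed):

```latex
A = \frac{257}{4}\, t^{2} \cot\frac{\pi}{257} \approx 5255.7\, t^{2}.
```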
https://en.wikipedia.org/wiki/65537-gon
In geometry, a 65537-gon is a polygon with 65,537 (2^16 + 1) sides. The sum of the interior angles of any non-self-intersecting 65537-gon is 11,796,300°. Regular 65537-gon The area of a regular 65537-gon is (with ) A whole regular 65537-gon is not visually discernible from a circle, and its perimeter differs from that of the circumscribed circle by about 15 parts per billion. Construction The regular 65537-gon (one with all sides equal and all angles equal) is of interest for being a constructible polygon: that is, it can be constructed using a compass and an unmarked straightedge. This is because 65,537 is a Fermat prime, being of the form 2^(2^n) + 1 (in this case n = 4). Thus, the values and are 32768-degree algebraic numbers, and like any constructible numbers, they can be written in terms of square roots and no higher-order roots. Although it was known to Gauss by 1801 that the regular 65537-gon was constructible, the first explicit construction of a regular 65537-gon was given by Johann Gustav Hermes (1894). The construction is very complex; Hermes spent 10 years completing the 200-page manuscript. Another method involves the use of at most 1332 Carlyle circles, and the first stages of this method are pictured below. This method faces practical problems, as one of these Carlyle circles solves the quadratic equation x^2 + x − 16384 = 0 (16,384 being 2^14). See also Circle Equilateral triangle Pentagon Heptadecagon (17 sides) 257-gon References Bibliography Robert Dixon Mathographics. New York: Dover, p. 53, 1991. Benjamin Bold, Famous Problems of Geometry and How to Solve Them New York: Dover, p. 70, 1982
https://en.wikipedia.org/wiki/Immune%20tolerance
Immune tolerance, or immunological tolerance, or immunotolerance, is a state of unresponsiveness of the immune system to substances or tissue that would otherwise have the capacity to elicit an immune response in a given organism. It is induced by prior exposure to that specific antigen and contrasts with conventional immune-mediated elimination of foreign antigens (see Immune response). Tolerance is classified into central tolerance or peripheral tolerance depending on where the state is originally induced—in the thymus and bone marrow (central) or in other tissues and lymph nodes (peripheral). The mechanisms by which these forms of tolerance are established are distinct, but the resulting effect is similar. Immune tolerance is important for normal physiology. Central tolerance is the main way the immune system learns to discriminate self from non-self. Peripheral tolerance is key to preventing over-reactivity of the immune system to various environmental entities (allergens, gut microbes, etc.). Deficits in central or peripheral tolerance also cause autoimmune disease, resulting in syndromes such as systemic lupus erythematosus, rheumatoid arthritis, type 1 diabetes, autoimmune polyendocrine syndrome type 1 (APS-1), and immunodysregulation polyendocrinopathy enteropathy X-linked syndrome (IPEX), and potentially contribute to asthma, allergy, and inflammatory bowel disease. And immune tolerance in pregnancy is what allows a mother animal to gestate a genetically distinct offspring with an alloimmune response muted enough to prevent miscarriage. Tolerance, however, also has its negative tradeoffs. It allows for some pathogenic microbes to successfully infect a host and avoid elimination. In addition, inducing peripheral tolerance in the local microenvironment is a common survival strategy for a number of tumors that prevents their elimination by the host immune system. Historical background The phenomenon of immune tolerance was first described by Ray D. Owen
https://en.wikipedia.org/wiki/Macdonald%20identities
In mathematics, the Macdonald identities are some infinite product identities associated to affine root systems, introduced by . They include as special cases the Jacobi triple product identity, Watson's quintuple product identity, several identities found by , and a 10-fold product identity found by . and pointed out that the Macdonald identities are the analogs of the Weyl denominator formula for affine Kac–Moody algebras and superalgebras. References Lie algebras Mathematical identities Infinite products
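For reference, the simplest of these special cases, the Jacobi triple product identity, can be written as (standard form, valid for |q| < 1 and z ≠ 0):

```latex
\prod_{m=1}^{\infty} \left(1 - q^{2m}\right)\left(1 + q^{2m-1} z\right)\left(1 + q^{2m-1} z^{-1}\right)
= \sum_{n=-\infty}^{\infty} q^{n^{2}} z^{n}.
```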
https://en.wikipedia.org/wiki/Sour%20sanding
Sour sanding, or sour sugar, is a food ingredient that is used to impart a sour flavor, made from citric or tartaric acid and sugar. It is used to coat sour candies such as lemon drops and Sour Patch Kids, or to make hard candies taste tart, such as SweeTarts. See also Acidulant References Food ingredients
https://en.wikipedia.org/wiki/Paris%27%20law
Paris' law (also known as the Paris–Erdogan equation) is a crack growth equation that gives the rate of growth of a fatigue crack. The stress intensity factor K characterises the load around a crack tip, and the rate of crack growth is experimentally shown to be a function of the range of stress intensity ΔK seen in a loading cycle. The Paris equation is da/dN = C (ΔK)^m, where a is the crack length and da/dN is the fatigue crack growth per load cycle N. The material coefficients C and m are obtained experimentally and also depend on environment, frequency, temperature and stress ratio. The stress intensity factor range, ΔK = K_max − K_min, has been found to correlate the rate of crack growth under a variety of different conditions and is the difference between the maximum and minimum stress intensity factors in a load cycle. Being a power-law relationship between the crack growth rate during cyclic loading and the range of the stress intensity factor, the Paris–Erdogan equation can be visualized as a straight line on a log-log plot, where the x-axis is the range of the stress intensity factor and the y-axis is the crack growth rate. The ability of ΔK to correlate crack growth rate data depends to a large extent on the fact that alternating stresses causing crack growth are small compared to the yield strength. Therefore, crack tip plastic zones are small compared to the crack length, even in very ductile materials like stainless steels. The equation gives the growth for a single cycle. Single cycles can be readily counted for constant-amplitude loading. Additional cycle identification techniques such as the rainflow-counting algorithm need to be used to extract the equivalent constant-amplitude cycles from a variable-amplitude loading sequence. History In a 1961 paper, P. C. Paris introduced the idea that the rate of crack growth may depend on the stress intensity factor. Then in their 1963 paper, Paris and Erdogan indirectly suggested the equation with the aside remark "
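A sketch of how the equation is used in practice: integrating it numerically to estimate fatigue life, with ΔK = Y·Δσ·√(πa) for a simple geometry factor Y. All numbers are illustrative placeholders, not material data:

```python
import math

# Paris coefficients and loading (illustrative values only).
C, m = 1e-11, 3.0          # units assumed: m/cycle and MPa*sqrt(m)
Y, d_sigma = 1.0, 100.0    # geometry factor, stress range (MPa)
a, a_final = 1e-3, 1e-2    # initial and final crack lengths (m)

cycles, da = 0.0, 1e-5     # integrate in small crack-length steps
while a < a_final:
    dK = Y * d_sigma * math.sqrt(math.pi * a)   # stress intensity range
    cycles += da / (C * dK ** m)                # cycles to grow by da
    a += da

print(f"estimated fatigue life: {cycles:.3e} cycles")
```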
https://en.wikipedia.org/wiki/Suslin%20tree
In mathematics, a Suslin tree is a tree of height ω1 such that every branch and every antichain is at most countable. They are named after Mikhail Yakovlevich Suslin. Every Suslin tree is an Aronszajn tree. The existence of a Suslin tree is independent of ZFC, and is equivalent to the existence of a Suslin line (shown by Kurepa) or a Suslin algebra. The diamond principle, a consequence of V=L, implies that there is a Suslin tree, and Martin's axiom MA(ℵ1) implies that there are no Suslin trees. More generally, for any infinite cardinal κ, a κ-Suslin tree is a tree of height κ such that every branch and antichain has cardinality less than κ. In particular a Suslin tree is the same as an ω1-Suslin tree. Jensen showed that if V=L then there is a κ-Suslin tree for every infinite successor cardinal κ. Whether the Generalized Continuum Hypothesis implies the existence of an ℵ2-Suslin tree is a longstanding open problem. See also Glossary of set theory Kurepa tree List of statements independent of ZFC List of unsolved problems in set theory Suslin's problem References Thomas Jech, Set Theory, 3rd millennium ed., 2003, Springer Monographs in Mathematics, Springer. Trees (set theory) Independence results
https://en.wikipedia.org/wiki/Sergei%20Adian
Sergei Ivanovich Adian, also Adyan (1 January 1931 – 5 May 2020), was a Soviet and Armenian mathematician. He was a professor at the Moscow State University and was known for his work in group theory, especially on the Burnside problem. Biography Adian was born near Elizavetpol. He grew up there in an Armenian family. He studied at Yerevan and Moscow pedagogical institutes. His advisor was Pyotr Novikov. He worked at Moscow State University (MSU) from 1965. Alexander Razborov was one of his students. Mathematical career In his first work as a student in 1950, Adian proved that the graph of a function of a real variable satisfying the functional equation f(x + y) = f(x) + f(y) and having discontinuities is dense in the plane. (Clearly, all continuous solutions of the equation are linear functions.) This result was not published at the time. About 25 years later the American mathematician Edwin Hewitt from the University of Washington gave preprints of some of his papers to Adian during a visit to MSU, one of which was devoted to exactly the same result, which was published by Hewitt much later. By the beginning of 1955, Adian had managed to prove the undecidability of practically all non-trivial invariant group properties, including the undecidability of being isomorphic to a fixed group G, for any group G. These results constituted his Ph.D. thesis and his first published work. This is one of the most remarkable, beautiful, and general results in algorithmic group theory and is now known as the Adian–Rabin theorem. What distinguishes the first published work by Adian is its completeness. In spite of numerous attempts, nobody has added anything fundamentally new to the results during the past 50 years. Adian's result was immediately used by Andrey Markov Jr. in his proof of the algorithmic unsolvability of the classical problem of deciding when topological manifolds are homeomorphic. Burnside problem About the Burnside problem: Very much like Fermat's Last Theorem in number theor
https://en.wikipedia.org/wiki/Waveform%20buffer
In computing, a waveform buffer is a technique for digital synthesis of repeating waveforms. It is common in PC sound cards. The waveform amplitude values are stored in a buffer memory, which is addressed from a phase generator, with the retrieved value then used as the basis of the synthesized signal. In the phase generator, a value proportional to the desired signal frequency is periodically added to an accumulator. The high order bits of the accumulator form the output address, while the typically larger number of bits in the accumulator and addition value results in an arbitrarily high frequency resolution. References Digital signal processing
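A minimal sketch of the phase-generator scheme described above, with illustrative widths (a 32-bit accumulator addressing a 256-entry sine buffer); real hardware differs in widths and often interpolates between table entries.

```python
import math

SAMPLE_RATE = 48_000
ACC_BITS, TABLE_BITS = 32, 8                 # illustrative widths
table = [math.sin(2 * math.pi * i / 2**TABLE_BITS) for i in range(2**TABLE_BITS)]

def oscillator(freq_hz, n_samples):
    # Phase increment: the value added to the accumulator every sample.
    inc = round(freq_hz * 2**ACC_BITS / SAMPLE_RATE)
    acc, out = 0, []
    for _ in range(n_samples):
        out.append(table[acc >> (ACC_BITS - TABLE_BITS)])  # high bits = address
        acc = (acc + inc) & (2**ACC_BITS - 1)              # wrap around
    return out

samples = oscillator(440.0, 480)  # one 10 ms block of a 440 Hz tone
```

With these widths the frequency step is SAMPLE_RATE / 2^32, about 11 µHz, which is the "arbitrarily high" resolution the accumulator width buys.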
https://en.wikipedia.org/wiki/Pavement%20engineering
Pavement engineering is a branch of civil engineering that uses engineering techniques to design and maintain flexible (asphalt) and rigid (concrete) pavements. This includes streets and highways and involves knowledge of soils, hydraulics, and material properties. Pavement engineering involves new construction as well as rehabilitation and maintenance of existing pavements. Maintenance often involves using engineering judgment to make maintenance repairs with the highest long-term benefit and lowest cost. The Pavement Condition Index (PCI) is an example of an engineering approach applied to existing pavements. Another example is the use of a falling weight deflectometer (FWD) to non-destructively test existing pavements. Calculation of pavement layer strengths can be performed from the resulting deflection data. Two methods, empirical and mechanistic, are used to determine pavement layer thicknesses. The evaluation of existing road pavements is based on three factors: Functional surface condition, where all the distresses such as cracks, potholes, rutting and others are analyzed. Structural condition, which analyzes the pavement's structural strength to carry loading from trucks. Roughness, using parameters such as the International Roughness Index to evaluate comfort for drivers. See also NCAT Pavement Test Track References External links Pavement engineering Transportation engineering
https://en.wikipedia.org/wiki/Reactive%20planning
In artificial intelligence, reactive planning denotes a group of techniques for action selection by autonomous agents. These techniques differ from classical planning in two aspects. First, they operate in a timely fashion and hence can cope with highly dynamic and unpredictable environments. Second, they compute just one next action in every instant, based on the current context. Reactive planners often (but not always) exploit reactive plans, which are stored structures describing the agent's priorities and behaviour. The term reactive planning goes back to at least 1988, and is synonymous with the more modern term dynamic planning. Reactive plan representation There are several ways to represent a reactive plan. All require a basic representational unit and a means to compose these units into plans. Condition-action rules (productions) A condition action rule, or if-then rule, is a rule in the form: if condition then action. These rules are called productions. The meaning of the rule is as follows: if the condition holds, perform the action. The action can be either external (e.g., pick something up and move it), or internal (e.g., write a fact into the internal memory, or evaluate a new set of rules). Conditions are normally boolean and the action either can be performed, or not. Production rules may be organized in relatively flat structures, but more often are organized into a hierarchy of some kind. For example, subsumption architecture consists of layers of interconnected behaviors, each actually a finite state machine which acts in response to an appropriate input. These layers are then organized into a simple stack, with higher layers subsuming the goals of the lower ones. Other systems may use trees, or may include special mechanisms for changing which goal / rule subset is currently most important. Flat structures are relatively easy to build, but allow only for description of simple behavior, or require immensely complicated conditions to
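A toy illustration of the flat condition-action scheme just described: one action is selected per tick from the current context, with no lookahead. The rule set and world model are invented for the example.

```python
# Each rule: (condition over the current context, action name).
# First matching rule wins each tick -- one action per instant.
rules = [
    (lambda ctx: ctx["obstacle_ahead"], "turn_left"),
    (lambda ctx: ctx["battery"] < 0.2,  "return_to_dock"),
    (lambda ctx: True,                  "move_forward"),   # default behaviour
]

def select_action(ctx):
    for condition, action in rules:
        if condition(ctx):
            return action

print(select_action({"obstacle_ahead": False, "battery": 0.9}))  # move_forward
print(select_action({"obstacle_ahead": True,  "battery": 0.9}))  # turn_left
```

The ordering of the list encodes the agent's priorities; hierarchical schemes such as subsumption replace this single list with layered rule sets.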
https://en.wikipedia.org/wiki/Criticism%20of%20Wikipedia
Most criticism of Wikipedia has been directed toward its content, community of established users, and processes. Critics have questioned its factual reliability, the readability and organization of the articles, the lack of methodical fact-checking, and its political bias. Concerns have also been raised about systemic bias along gender, racial, political, corporate, institutional, and national lines. In addition, conflicts of interest arising from corporate campaigns to influence content have also been highlighted. Further concerns include the vandalism and partisanship facilitated by anonymous editing, clique behavior (from contributors as well as administrators and other top figures), social stratification between a guardian class and newer users, excessive rule-making, edit warring, and uneven policy application. Criticism of content The reliability of Wikipedia is often questioned. In Wikipedia: The Dumbing Down of World Knowledge (2010), journalist Edwin Black characterized the content of articles as a mixture of "truth, half-truth, and some falsehoods". Oliver Kamm, in Wisdom?: More like Dumbness of the Crowds (2007), said that articles usually are dominated by the loudest and most persistent editorial voices or by an interest group with an ideological "axe to grind". In his article The 'Undue Weight' of Truth on Wikipedia (2012), Timothy Messer–Kruse criticized the undue-weight policy that deals with the relative importance of sources, observing that it showed Wikipedia's goal was not to present correct and definitive information about a subject but to present the majority opinion of the sources cited. In their article You Just Type in What You are Looking for: Undergraduates' Use of Library Resources vs. Wikipedia (2012) in an academic librarianship journal, the authors noted another author's point that omissions within an article might give the reader false ideas about a topic, based upon the incomplete content of Wikipedia. Wikipedia is sometimes charac
https://en.wikipedia.org/wiki/Chipkill
Chipkill is IBM's trademark for a form of advanced error checking and correcting (ECC) computer memory technology that protects computer memory systems from any single memory chip failure as well as multi-bit errors from any portion of a single memory chip. One simple scheme to perform this function scatters the bits of a Hamming code ECC word across multiple memory chips, such that the failure of any single memory chip will affect only one ECC bit per word. This allows memory contents to be reconstructed despite the complete failure of one chip. Typical implementations use more advanced codes, such as a BCH code, that can correct multiple bits with less overhead. Chipkill is frequently combined with dynamic bit-steering, so that if a chip fails (or has exceeded a threshold of bit errors), another, spare, memory chip is used to replace the failed chip. The concept is similar to that of RAID, which protects against disk failure, except that now the concept is applied to individual memory chips. The technology was developed by the IBM Corporation in the early and middle 1990s. An important RAS feature, Chipkill technology is deployed primarily on SSDs, mainframes and midrange servers. An equivalent system from Sun Microsystems is called Extended ECC, while equivalent systems from HP are called Advanced ECC and Chipspare. A similar system from Intel, called Lockstep memory, provides double-device data correction (DDDC) functionality. Similar systems from Micron, called redundant array of independent NAND (RAIN), and from SandForce, called RAISE level 2, protect data stored on SSDs from any single NAND flash chip going bad. A 2009 paper using data from Google's datacentres provided evidence demonstrating that in observed Google systems, DRAM errors were recurrent at the same location, and that 8% of DIMMs were affected each year. Specifically, "In more than 85% of the cases a correctable error is followed by at least one more correctable error in the same month"
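The bit-scattering idea can be demonstrated in miniature: store bit i of every ECC codeword on chip i, so a whole-chip failure costs each word only one bit, which the code corrects. The sketch below uses a textbook Hamming(7,4) code and seven toy "chips"; real chipkill uses wider, symbol-based codes, and the sizes here are purely illustrative.

```python
def hamming74_encode(nibble):
    d = [(nibble >> i) & 1 for i in range(4)]            # data bits d0..d3
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]          # positions 1..7

def hamming74_decode(bits):
    s = 0
    for pos in range(1, 8):                              # recompute syndrome
        if bits[pos - 1]:
            s ^= pos
    if s:                                                # s = error position
        bits[s - 1] ^= 1
    return bits[2] | bits[4] << 1 | bits[5] << 2 | bits[6] << 3

data = [0x3, 0xA, 0x5, 0xF]
codewords = [hamming74_encode(x) for x in data]
chips = [[cw[i] for cw in codewords] for i in range(7)]  # chip i holds bit i

chips[4] = [1 - b for b in chips[4]]                     # chip 4 fails entirely

decoded = [hamming74_decode([chips[i][w] for i in range(7)])
           for w in range(len(data))]
assert decoded == data          # every word recovered despite the dead chip
```

Because the failed chip contributes exactly one bit to each codeword, every word sees a single-bit error, which Hamming(7,4) corrects; the same layout argument scales to the real multi-bit codes.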
https://en.wikipedia.org/wiki/Linkwitz%E2%80%93Riley%20filter
A Linkwitz–Riley (L-R) filter is an infinite impulse response filter used in Linkwitz–Riley audio crossovers, named after its inventors Siegfried Linkwitz and Russ Riley. This filter type was originally described in Active Crossover Networks for Noncoincident Drivers in the Journal of the Audio Engineering Society. It is also known as a Butterworth squared filter. A Linkwitz–Riley "L-R" crossover consists of a parallel combination of a low-pass and a high-pass L-R filter. The filters are usually designed by cascading two Butterworth filters, each of which has −3 dB gain at the cut-off frequency. The resulting Linkwitz–Riley filter has −6 dB gain at the cut-off frequency. This means that, upon summing the low-pass and high-pass outputs, the gain at the crossover frequency will be 0 dB, so the crossover behaves like an all-pass filter, having a flat amplitude response with a smoothly changing phase response. This is the biggest advantage of L-R crossovers compared to even-order Butterworth crossovers, whose summed output has a +3 dB peak around the crossover frequency. Since cascading two nth-order Butterworth filters will give a (2n)th-order Linkwitz–Riley filter, theoretically any (2n)th-order Linkwitz–Riley crossover can be designed. However, crossovers of order higher than 4 may have less usability due to their complexity and the increasing size of the peak in group delay around the crossover frequency. Common types Second-order Linkwitz–Riley crossover (LR2, LR-2) Second-order Linkwitz–Riley crossovers (LR2) have a 12 dB/octave (40 dB/decade) slope. They can be realized by cascading two one-pole filters, or using a Sallen Key filter topology with a Q0 value of 0.5. There is a 180° phase difference between the low-pass and high-pass output of the filter, which can be corrected by inverting one signal. In loudspeakers this is usually done by reversing the polarity of one driver if the crossover is passive. For active crossovers inversion is usually done using a
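The cascaded-Butterworth construction can be checked numerically. The sketch below, using scipy with an arbitrary 48 kHz sample rate and 1 kHz crossover (both invented for the example), builds digital LR4 sections and verifies the −6 dB gain at the crossover and the flat magnitude of the summed outputs.

```python
import numpy as np
from scipy import signal

fs, fc = 48_000, 1_000            # sample rate and crossover, both illustrative

# LR4 = two cascaded 2nd-order Butterworth filters per band.
lp = signal.butter(2, fc, "lowpass",  fs=fs, output="sos")
hp = signal.butter(2, fc, "highpass", fs=fs, output="sos")
lr4_lp, lr4_hp = np.vstack([lp, lp]), np.vstack([hp, hp])

w, h_lp = signal.sosfreqz(lr4_lp, worN=4096, fs=fs)
_, h_hp = signal.sosfreqz(lr4_hp, worN=4096, fs=fs)

i = np.argmin(np.abs(w - fc))
print(f"low-pass gain at fc: {20*np.log10(abs(h_lp[i])):.2f} dB")    # ~ -6.02
ripple = np.ptp(20 * np.log10(np.abs(h_lp + h_hp)))                  # summed band
print(f"summed response ripple: {ripple:.4f} dB")                    # ~ 0
```

The summed complex responses have unit magnitude everywhere, confirming the all-pass behaviour claimed above; only the phase varies through the crossover region.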
https://en.wikipedia.org/wiki/Autocollimator
An autocollimator is an optical instrument for non-contact measurement of angles. They are typically used to align components and measure deflections in optical or mechanical systems. An autocollimator works by projecting an image onto a target mirror and measuring the deflection of the returned image against a scale, either visually or by means of an electronic detector. A visual autocollimator can measure angles as small as 1 arcsecond (4.85 microradians), while an electronic autocollimator can have up to 100 times more resolution. Visual autocollimators are often used for aligning laser rod ends and checking the face parallelism of optical windows and wedges. Electronic and digital autocollimators are used as angle measurement standards, for monitoring angular movement over long periods of time and for checking angular position repeatability in mechanical systems. Servo autocollimators are specialized compact forms of electronic autocollimators that are used in high-speed servo-feedback loops for stable-platform applications. An electronic autocollimator is typically calibrated to read the actual mirror angle. Electronic autocollimator The electronic autocollimator is a high precision angle measurement instrument capable of measuring angular deviations with accuracy down to fractions of an arcsecond, by electronic means only, with no optical eye-piece. Measuring with an electronic autocollimator is fast, easy, accurate, and will frequently be the most cost effective procedure. Used extensively in workshops, tool rooms, inspection departments and quality control laboratories worldwide, these highly sensitive instruments will measure extremely small angular displacements, squareness, twist and parallelism. Laser analyzing autocollimator A newer technology extends the autocollimator to direct measurement of incoming laser beams. This capability enables inter-alignment between optics, mirrors and lasers. This te
https://en.wikipedia.org/wiki/Eulerian%20number
In combinatorics, the Eulerian number A(n, k) is the number of permutations of the numbers 1 to n in which exactly k elements are greater than the previous element (permutations with k "ascents"). Leonhard Euler investigated them and associated polynomials in his 1755 book Institutiones calculi differentialis. Other notations for A(n, k) are E(n, k) and ⟨n k⟩. Definition The Eulerian polynomials A_n(t) are defined by the exponential generating function ∑_{n≥0} A_n(t) x^n/n! = (t − 1)/(t − e^{(t−1)x}). The Eulerian numbers A(n, k) may be defined as the coefficients of the Eulerian polynomials: A_n(t) = ∑_{k=0}^{n} A(n, k) t^k. An explicit formula for A(n, k) is A(n, k) = ∑_{i=0}^{k} (−1)^i C(n+1, i) (k + 1 − i)^n. Basic properties For fixed n there is a single permutation which has 0 ascents: the falling permutation (n, n−1, ..., 2, 1). Indeed, as π(i+1) < π(i) for all i, this permutation has no ascent. This formally includes the empty collection of numbers, n = 0. And so A(n, 0) = 1. For k = 1 the explicit formula implies A(n, 1) = 2^n − n − 1, a sequence in n that reads 0, 1, 4, 11, 26, 57, .... Fully reversing a permutation with k ascents creates another permutation in which there are n − 1 − k ascents. Therefore A(n, k) = A(n, n − 1 − k). So there is also a single permutation which has n − 1 ascents, namely the rising permutation (1, 2, ..., n). So A(n, n − 1) also equals 1. A lavish upper bound is A(n, k) ≤ (k + 1)^n, the leading term of the explicit formula. Between the bounds just discussed, the values exceed 1. For k ≥ n, the values A(n, k) are formally zero, meaning many sums over k can be written with an upper index only up to n − 1. It also means that the polynomials A_n(t) are really of degree n − 1 for n ≥ 1. A tabulation of the numbers in a triangular array is called the Euler triangle or Euler's triangle. It shares some common characteristics with Pascal's triangle. Values of A(n, k) for 0 ≤ n ≤ 9 are: {| class="wikitable" style="text-align:right;" |- ! ! width="50" | 0 ! width="50" | 1 ! width="50" | 2 ! width="50" | 3 ! width="50" | 4 ! width="50" | 5 ! width="50" | 6 ! width="50" | 7 ! width="50" | 8 |- ! 0 | 1 || || || || || || || || |- ! 1 | 1 || || || || || || || || |- ! 2 | 1 || 1 || || || || || || || |- ! 3 | 1 || 4 || 1 || || || || || || |- ! 4 | 1 || 11 || 11 || 1 || || || || || |- ! 5 | 1 || 26 || 66 || 26 || 1 || || || || |- ! 6 | 1 || 57 || 302 || 302 || 57 || 1 || || || |- ! 7 | 1 || 120 || 1191 || 2416 || 1191 || 120 || 1 || || |- !
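The triangle is easy to generate from the standard recurrence A(n, k) = (k + 1)·A(n − 1, k) + (n − k)·A(n − 1, k − 1) (well known, though not stated above); a short sketch that also checks that row n sums to n!:

```python
import math

def euler_triangle(rows):
    # A(n, k) = (k+1) A(n-1, k) + (n-k) A(n-1, k-1), with A(0, 0) = 1.
    tri = [[1]]
    for n in range(1, rows):
        prev = tri[-1] + [0]                       # pad so prev[k] is safe
        tri.append([(k + 1) * prev[k] + (n - k) * (prev[k - 1] if k else 0)
                    for k in range(n)])
    return tri

for row in euler_triangle(7):
    print(row)          # matches the table: [1], [1], [1, 1], [1, 4, 1], ...

assert all(sum(r) == math.factorial(n)
           for n, r in enumerate(euler_triangle(9)))
```

The row-sum check reflects the fact that every permutation of n elements has some number of ascents, so the entries of row n partition all n! permutations.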
https://en.wikipedia.org/wiki/Constructive%20set%20theory
Axiomatic constructive set theory is an approach to mathematical constructivism following the program of axiomatic set theory. The same first-order language with "=" and "∈" of classical set theory is usually used, so this is not to be confused with a constructive types approach. On the other hand, some constructive theories are indeed motivated by their interpretability in type theories. In addition to rejecting the principle of excluded middle (PEM), constructive set theories often require some logical quantifiers in their axioms to be set bounded, motivated by results tied to impredicativity. Introduction Constructive outlook Preliminary on the use of intuitionistic logic The logic of the set theories discussed here is constructive in that it rejects the principle of excluded middle PEM, i.e. that the disjunction φ ∨ ¬φ automatically holds for all propositions φ. As a rule, to prove the excluded middle for a proposition P, i.e. to prove the particular disjunction P ∨ ¬P, either P or ¬P needs to be explicitly proven. When either such proof is established, one says the proposition is decidable, and this then logically implies the disjunction holds. Similarly, a predicate Q(x) for x in a domain X is said to be decidable when the more intricate statement ∀x ∈ X (Q(x) ∨ ¬Q(x)) is provable. Non-constructive axioms may enable proofs that formally claim decidability of such P (and/or Q) in the sense that they prove excluded middle for P (resp. the statement using the quantifier above) without demonstrating the truth of either side of the disjunction(s). This is often the case in classical logic. In contrast, axiomatic theories deemed constructive tend to not permit many classical proofs of statements involving properties that are provenly computationally undecidable. The law of noncontradiction is a special case of the propositional form of modus ponens. Using the former with any negated statement ¬P, one valid De Morgan's law thus implies ¬¬(P ∨ ¬P) already in the more conservative minimal logic. In words, intuitionistic logic st
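The claim that ¬¬(P ∨ ¬P) is derivable without excluded middle can be checked mechanically; a minimal sketch in Lean 4 (the theorem name is invented, and only core constructive primitives are used, no Classical axioms):

```lean
-- The double negation of excluded middle is provable constructively.
theorem not_not_em (P : Prop) : ¬¬(P ∨ ¬P) :=
  fun h : ¬(P ∨ ¬P) =>
    -- From ¬(P ∨ ¬P) we can refute P itself...
    have hnp : ¬P := fun hp : P => h (Or.inl hp)
    -- ...but then P ∨ ¬P holds via the right disjunct: contradiction.
    h (Or.inr hnp)
```

The proof term uses nothing beyond implication and disjunction introduction, which is exactly why it goes through in minimal logic.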
https://en.wikipedia.org/wiki/Absolute%20value%20%28algebra%29
In algebra, an absolute value (also called a valuation, magnitude, or norm, although "norm" usually refers to a specific kind of absolute value on a field) is a function which measures the "size" of elements in a field or integral domain. More precisely, if D is an integral domain, then an absolute value is any mapping |x| from D to the real numbers R satisfying: |x| ≥ 0 (non-negativity), |x| = 0 if and only if x = 0 (positive definiteness), |xy| = |x| |y| (multiplicativity), and |x + y| ≤ |x| + |y| (the triangle inequality). It follows from these axioms that |1| = 1 and |-1| = 1. Furthermore, for every positive integer n, |n| = |1 + 1 + ... + 1 (n times)| = |−1 − 1 − ... − 1 (n times)| ≤ n. The classical "absolute value" is one in which, for example, |2|=2, but many other functions fulfill the requirements stated above, for instance the square root of the classical absolute value (but not the square thereof). An absolute value induces a metric (and thus a topology) by d(x, y) = |x − y|. Examples The standard absolute value on the integers. The standard absolute value on the complex numbers. The p-adic absolute value on the rational numbers. If R is the field of rational functions over a field F and p is a fixed irreducible polynomial, then the following defines an absolute value on R: for f in R, write f = p^n (g/h) with polynomials g and h not divisible by p, and define |f| to be c^n, where c is a fixed real constant with 0 < c < 1. Types of absolute value The trivial absolute value is the absolute value with |x|=0 when x=0 and |x|=1 otherwise. Every integral domain can carry at least the trivial absolute value. The trivial value is the only possible absolute value on a finite field because any non-zero element can be raised to some power to yield 1. If an absolute value satisfies the stronger property |x + y| ≤ max(|x|, |y|) for all x and y, then |x| is called an ultrametric or non-Archimedean absolute value, and otherwise an Archimedean absolute value. Places If |x|1 and |x|2 are two absolute values on the same integral domain D, then the two absolute values are equivalent if |x|1 < 1 if and only if |x|2 < 1 for all x. If two nontrivial absolute values are equivalent, then for some exponent e we have |x|1^e = |x|2 for all x. Raising an absolute
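The p-adic absolute value listed among the examples makes a concrete non-Archimedean case: |x|_p = p^(−v), where v is the exponent of p in x. A small sketch (the function name is invented):

```python
from fractions import Fraction

def padic_abs(x, p):
    # |0|_p = 0; otherwise |x|_p = p**(-v) with v the exponent of p in x.
    x = Fraction(x)
    if x == 0:
        return 0.0
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return float(p) ** (-v)

assert padic_abs(12, 2) == 0.25           # 12 = 2^2 * 3, so |12|_2 = 1/4
assert padic_abs(Fraction(1, 9), 3) == 9  # |1/9|_3 = 3^2
# Ultrametric check: |x + y|_p <= max(|x|_p, |y|_p)
assert padic_abs(12 + 20, 2) <= max(padic_abs(12, 2), padic_abs(20, 2))
```

The final assertion illustrates the stronger ultrametric inequality that separates non-Archimedean from Archimedean absolute values.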
https://en.wikipedia.org/wiki/NLTSS
The Network Livermore Timesharing System (NLTSS, also sometimes the New Livermore Time Sharing System) is an operating system that was actively developed at Lawrence Livermore Laboratory (now Lawrence Livermore National Laboratory) from 1979 until about 1988, though it continued to run production applications until 1995. An earlier system, the Livermore Time Sharing System had been developed over a decade earlier. NLTSS ran initially on a CDC 7600 computer, but only ran production from about 1985 until 1994 on Cray computers including the Cray-1, Cray X-MP, and Cray Y-MP models. Characteristics The NLTSS operating system was unusual in many respects and unique in some. Low-level architecture NLTSS was a microkernel message passing system. It was unique in that only one system call was supported by the kernel of the system. That system call, which might be called "communicate" (it didn't have a name because it didn't need to be distinguished from other system calls) accepted a list of "buffer tables" (e.g., see The NLTSS Message System Interface) that contained control information for message communication – either sends or receives. Such communication, both locally within the system and across a network was all the kernel of the system supported directly for user processes. The "message system" (supporting the one call and the network protocols) and drivers for the disks and processor composed the entire kernel of the system. Mid-level architecture NLTSS is a capability-based security client–server system. The two primary servers are the file server and the process server. The file server was a process privileged to be trusted by the drivers for local storage (disk storage,) and the process server was a process privileged to be trusted by the processor driver (software that switched time sharing control between processes in the "alternator", handled interrupts for processes besides the "communicate" call, provided access to memory and process state for the proce
https://en.wikipedia.org/wiki/Wikipedia
Wikipedia is a free-content online encyclopedia written and maintained by a community of volunteers, collectively known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history, and has consistently been one of the 10 most popular websites. Founded by Jimmy Wales and Larry Sanger on January 15, 2001, it is hosted by the Wikimedia Foundation, an American nonprofit organization. Initially only available in English, editions in other languages were quickly developed. Wikipedia's editions, when combined, comprise more than articles, attracting around 2 billion unique device visits per month and more than 15 million edits per month (about 5.8 edits per second on average). Wikipedia has been praised for its enablement of the democratization of knowledge, extent of coverage, unique structure, and culture. It has been criticized for exhibiting systemic bias, particularly gender bias against women and geographical bias against the Global South. While the reliability of Wikipedia was frequently criticized in the 2000s, it has improved over time, receiving greater praise in the late 2010s and early 2020s, having become an important fact-checking site. It has been censored by some national governments, ranging from specific pages to the entire site. Articles on breaking news are often accessed as sources of frequently updated information about those events. History Nupedia Various collaborative online encyclopedias were attempted before the start of Wikipedia, but with limited success. Wikipedia began as a complementary project for Nupedia, a free online English-language encyclopedia project whose articles were written by experts and reviewed under a formal process. It was founded on March 9, 2000, under the ownership of Bomis, a web portal company. Its main figures were Bomis CEO Jimmy Wales and Larry Sanger, editor-in-chief for Nupedia and later Wikipedia. Nupedia
https://en.wikipedia.org/wiki/Chain%20of%20trust
In computer security, a chain of trust is established by validating each component of hardware and software from the end entity up to the root certificate. It is intended to ensure that only trusted software and hardware can be used while still retaining flexibility. Introduction A chain of trust is designed to allow multiple users to create and use the software on the system, which would be more difficult if all the keys were stored directly in hardware. It starts with hardware that will only boot from software that is digitally signed. The signing authority will only sign boot programs that enforce security, such as only running programs that are themselves signed, or only allowing signed code to have access to certain features of the machine. This process may continue for several layers. This process results in a chain of trust. The final software can be trusted to have certain properties because if it had been illegally modified its signature would be invalid, and the previous software would not have executed it. The previous software can be trusted, because it, in turn, would not have been loaded if its signature had been invalid. The trustworthiness of each layer is guaranteed by the one before, back to the trust anchor. It would be possible to have the hardware check the suitability (signature) for every single piece of software. However, this would not produce the flexibility that a "chain" provides. In a chain, any given link can be replaced with a different version to provide different properties, without having to go all the way back to the trust anchor. This use of multiple layers is an application of a general technique to improve scalability and is analogous to the use of multiple certificates in a certificate chain. Computer security In computer security, digital certificates are verified using a chain of trust. The trust anchor for the digital certificate is the root certificate authority (CA). The certificate hierarchy is a structure of certi
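The layered signing scheme can be sketched with ordinary signatures. The toy below uses Ed25519 from the 'cryptography' package; stage names and payloads are invented, and real secure-boot chains add certificates, rollback counters and hardware enforcement.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
from cryptography.exceptions import InvalidSignature

root = Ed25519PrivateKey.generate()            # trust anchor, e.g. in ROM
stage1_key = Ed25519PrivateKey.generate()      # bootloader vendor key

def raw(pub):
    return pub.public_bytes(Encoding.Raw, PublicFormat.Raw)

# Each signed payload binds the stage image to the key trusted for the
# *next* stage, so trust is handed down link by link.
bootloader = b"bootloader-v1" + raw(stage1_key.public_key())
kernel = b"kernel-v1"
links = [
    (root.public_key(), bootloader, root.sign(bootloader)),
    (stage1_key.public_key(), kernel, stage1_key.sign(kernel)),
]

for trusted_pub, payload, sig in links:
    try:
        trusted_pub.verify(sig, payload)       # raises if tampered with
    except InvalidSignature:
        raise SystemExit("halt: chain of trust broken")
    print("verified:", payload[:16])
```

Replacing one link (say, a new bootloader signed by the same vendor key) needs no change to the anchor, which is the flexibility the "chain" provides over checking everything against the root.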
https://en.wikipedia.org/wiki/Belt%20transect
Belt transects are used in biology, more specifically in biostatistics, to estimate the distribution of organisms in relation to a certain area, such as the seashore or a meadow. The belt transect method is similar to the line transect method but gives information on abundance as well as presence, or absence of species. Method The method involves laying out a transect line and then placing quadrats over the line, starting the quadrat at the first marked point of the line. Any consistent measurement size for the quadrat and length of the line can be chosen, depending on the species. With the quadrats applied, all the individuals of a species can be counted, and the species abundance can be estimated. The method is also suitable for long-term observations with a permanent installation. References Ecological techniques Sampling techniques Environmental statistics
https://en.wikipedia.org/wiki/Stabilizer%20code
The theory of quantum error correction plays a prominent role in the practical realization and engineering of quantum computing and quantum communication devices. The first quantum error-correcting codes are strikingly similar to classical block codes in their operation and performance. Quantum error-correcting codes restore a noisy, decohered quantum state to a pure quantum state. A stabilizer quantum error-correcting code appends ancilla qubits to qubits that we want to protect. A unitary encoding circuit rotates the global state into a subspace of a larger Hilbert space. This highly entangled, encoded state corrects for local noisy errors. A quantum error-correcting code makes quantum computation and quantum communication practical by providing a way for a sender and receiver to simulate a noiseless qubit channel given a noisy qubit channel whose noise conforms to a particular error model. The stabilizer theory of quantum error correction allows one to import some classical binary or quaternary codes for use as a quantum code. However, when importing the classical code, it must satisfy the dual-containing (or self-orthogonality) constraint. Researchers have found many examples of classical codes satisfying this constraint, but most classical codes do not. Nevertheless, it is still useful to import classical codes in this way (though, see how the entanglement-assisted stabilizer formalism overcomes this difficulty). Mathematical background The stabilizer formalism exploits elements of the Pauli group Π in formulating quantum error-correcting codes. The set Π = {I, X, Y, Z} consists of the Pauli operators: I = [[1, 0], [0, 1]], X = [[0, 1], [1, 0]], Y = [[0, −i], [i, 0]], Z = [[1, 0], [0, −1]]. The above operators act on a single qubit – a state represented by a vector in a two-dimensional Hilbert space. Operators in Π have eigenvalues ±1 and either commute or anti-commute. The set Π^n consists of n-fold tensor products of Pauli operators: Π^n = {e^{iφ} A_1 ⊗ ⋯ ⊗ A_n : A_j ∈ Π, φ ∈ {0, π/2, π, 3π/2}}. Elements of Π^n act on a quantum register of n qubits. We occasionally omit tensor product symbols in what follows so that A_1 A_2 ⋯ A_n ≡ A_1 ⊗ A_2 ⊗ ⋯ ⊗ A_n. The n-fold P
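The commute-or-anticommute property is what syndrome measurement exploits. A small numerical sketch for the three-qubit bit-flip code (stabilizers Z⊗Z⊗I and I⊗Z⊗Z; a simple example chosen here, not one from the text above):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

stabilizers = [kron(Z, Z, I), kron(I, Z, Z)]   # three-qubit bit-flip code
errors = {"X1": kron(X, I, I), "X2": kron(I, X, I), "X3": kron(I, I, X)}

for name, E in errors.items():
    # Syndrome bit = 1 iff the error anticommutes with the stabilizer.
    syndrome = tuple(int(np.allclose(S @ E, -(E @ S))) for S in stabilizers)
    print(name, "->", syndrome)   # X1 -> (1, 0), X2 -> (1, 1), X3 -> (0, 1)
```

Each single bit-flip error produces a distinct syndrome, so measuring the stabilizers identifies which qubit to correct without disturbing the encoded state.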
https://en.wikipedia.org/wiki/Lightweight%20software%20test%20automation
Lightweight software test automation is the process of creating and using relatively short and simple computer programs, called lightweight test harnesses, designed to test a software system. Lightweight test automation harnesses are not tied to a particular programming language but are most often implemented with the Java, Perl, Visual Basic .NET, and C# programming languages. Lightweight test automation harnesses are generally four pages of source code or less, and are generally written in four hours or less. Lightweight test automation is often associated with Agile software development methodology. The three major alternatives to the use of lightweight software test automation are commercial test automation frameworks, Open Source test automation frameworks, and heavyweight test automation. The primary disadvantage of lightweight test automation is manageability. Because lightweight automation is relatively quick and easy to implement, a test effort can be overwhelmed with harness programs, test case data files, test result files, and so on. However, lightweight test automation has significant advantages. Compared with commercial frameworks, lightweight automation is less expensive in initial cost and is more flexible. Compared with Open Source frameworks, lightweight automation is more stable because there are fewer updates and external dependencies. Compared with heavyweight test automation, lightweight automation is quicker to implement and modify. Lightweight test automation is generally used to complement, not replace these alternative approaches. Lightweight test automation is most useful for regression testing, where the intention is to verify that new source code added to the system under test has not created any new software failures. Lightweight test automation may be used for other areas of software testing such as performance testing, stress testing, load testing, security testing, code coverage analysis, mutation testing, and so on. The most widel
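A representative lightweight harness is short enough to show whole; the sketch below is in Python with an invented stand-in system under test, and follows the usual pattern of inline case data plus a pass/fail log.

```python
# Minimal lightweight test harness: case data, expected values, and a log.
def parse_version(s):
    """Stand-in system under test."""
    return tuple(int(part) for part in s.split("."))

cases = [            # (input, expected) pairs; often read from a data file
    ("1.2.3", (1, 2, 3)),
    ("10.0", (10, 0)),
    ("7", (7,)),
]

failures = 0
for arg, expected in cases:
    actual = parse_version(arg)
    ok = actual == expected
    failures += not ok
    print(f"{'PASS' if ok else 'FAIL'}: parse_version({arg!r}) -> {actual!r}")

print(f"{len(cases) - failures}/{len(cases)} passed")
```

Rerunning such a harness after each change to the system under test is exactly the regression-testing use the article describes.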
https://en.wikipedia.org/wiki/Self-Protecting%20Digital%20Content
Self Protecting Digital Content (SPDC) is a copy protection (digital rights management) architecture which allows restriction of access to, and copying of, the next generation of optical discs and streaming/downloadable content. Overview Designed by Cryptography Research, Inc. of San Francisco, SPDC executes code from the encrypted content on the DVD player, enabling the content providers to change DRM systems in case an existing system is compromised. It adds functionality to make the system "dynamic", as opposed to "static" systems in which the system and keys for encryption and decryption do not change, thus enabling one compromised key to decode all content released using that encryption system. "Dynamic" systems attempt to make future content released immune to existing methods of circumvention. Playback method If a method of playback used in previously released content is revealed to have a weakness, either by review or because it has already been exploited, code embedded into content released in the future will change the method, and any attackers will have to start over and attack it again. Targeting compromised players If a certain model of player is compromised, code specific to the model can be activated to verify that the particular player has not been compromised. The player can be "fingerprinted" if found to be compromised and the information can be used later. Forensic marking Code inserted into content can add information to the output that specifically identifies the player, and in a large-scale distribution of the content, can be used to trace the player. This may include the fingerprint of a specific player. Weaknesses If an entire class of players is compromised, it is infeasible to revoke the ability to use the content on the entire class because many customers may have purchased players in the class. A fingerprint may be used to try to work around this limitation, but an attacker with access to multiple sources of video may "s
https://en.wikipedia.org/wiki/Stopping%20power%20%28particle%20radiation%29
In nuclear and materials physics, stopping power is the retarding force acting on charged particles, typically alpha and beta particles, due to interaction with matter, resulting in loss of particle kinetic energy. Stopping power is also interpreted as the rate at which a material absorbs the kinetic energy of a charged particle. Its application is important in a wide range of areas such as radiation protection, ion implantation and nuclear medicine. Definition and Bragg curve Both charged and uncharged particles lose energy while passing through matter. Positive ions are considered in most cases below. The stopping power depends on the type and energy of the radiation and on the properties of the material it passes. Since the production of an ion pair (usually a positive ion and a (negative) electron) requires a fixed amount of energy (for example, 33.97 eV in dry air), the number of ionizations per path length is proportional to the stopping power. The stopping power of the material is numerically equal to the loss of energy E per unit path length x: S(E) = −dE/dx. The minus sign makes S positive. The force usually increases toward the end of range and reaches a maximum, the Bragg peak, shortly before the energy drops to zero. The curve that describes the force as function of the material depth is called the Bragg curve. This is of great practical importance for radiation therapy. The equation above defines the linear stopping power, which in the international system is expressed in N but is usually indicated in other units like MeV/mm or similar. If a substance is compared in gaseous and solid form, then the linear stopping powers of the two states are very different just because of the different density. One therefore often divides the force by the density of the material to obtain the mass stopping power, which in the international system is expressed in m4/s2 but is usually found in units like MeV/(mg/cm2) or similar. The mass stopping power then depends
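Since stopping power is −dE/dx, a particle's path length (the continuous-slowing-down, or CSDA, range) follows by integrating dx = dE / S(E). The sketch below uses a crude invented power-law stand-in for S(E), not real tabulated stopping-power data.

```python
# Continuous-slowing-down range: R = integral of dE / S(E) from ~0 to E0.
# S(E) = a * E**-0.8 is only a toy stand-in for tabulated stopping powers.
a = 170.0                      # MeV^1.8 / cm, illustrative constant

def S(E):                      # stopping power in MeV/cm
    return a * E ** -0.8

E0, steps = 100.0, 10_000      # initial energy in MeV
dE = E0 / steps
R = sum(dE / S((i + 0.5) * dE) for i in range(steps))   # midpoint rule
print(f"toy CSDA range: {R:.2f} cm")
```

Because S(E) grows as the particle slows, most energy is deposited near the end of the path, which is the qualitative origin of the Bragg peak described above.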
https://en.wikipedia.org/wiki/Tallyman
A tallyman is an individual who keeps a numerical record with tally marks, historically often on tally sticks. Vote counter In Ireland, it is common for political parties to provide private observers when ballot boxes are opened. These tallymen keep a tally of the preferences of visible voting papers and allow an early initial estimate of which candidates are likely to win in the drawn-out single transferable vote counting process. Since the public voting process is by then complete, it is usual for tallymen from different parties to share information. Head counter Another possible definition is a person who called to literally do a head count, presumably on behalf of either the town council or the house owners. This is rumoured to have occurred in Liverpool, in the years after the First World War. Mechanical tally counters can make such head counts easier, by removing the need to make any marks. Debt collector In poorer parts of England (including the north and the East End of London), the tallyman was the hire purchase collector, who visited each week to collect the payments for goods purchased on the 'never never', or hire purchase. These people still had such employment up until the 1960s. The title tallyman extended to the keeper of a village pound as animals were often held against debts, and tally sticks were used to prove they could be released. In popular culture "'The tallyman,' Mum told me, 'slice off the top of the stems of the bunches as they take them in. Then him count the little stubs he just sliced off and pay the farmer.'" explains a Ms. Wade in Andrea Levy’s novel "Fruit of the Lemon". Harry Belafonte addresses the tallyman in "Day-O (The Banana Boat Song)." In 1967 Graham Gouldman wrote a song called "Tallyman," which was recorded by Jeff Beck and reached #30 on the British charts. Heavy metal singer Udo Dirkschneider produced a song called "Tallyman." The Tally Man is the name of two super villains in the DC Universe, usually ene
https://en.wikipedia.org/wiki/Maxwell%20bridge
A Maxwell bridge is a modification to a Wheatstone bridge used to measure an unknown inductance (usually of low Q value) in terms of calibrated resistance and inductance or resistance and capacitance. When the calibrated components are a parallel resistor and capacitor, the bridge is known as a Maxwell bridge. It is named for James C. Maxwell, who first described it in 1873. It uses the principle that the positive phase angle of an inductive impedance can be compensated by the negative phase angle of a capacitive impedance when put in the opposite arm and the circuit is at resonance; i.e., no potential difference across the detector (an AC voltmeter or ammeter) and hence no current flowing through it. The unknown inductance then becomes known in terms of this capacitance. With reference to the picture, in a typical application R1 and R4 are known fixed entities, and R2 and C2 are known variable entities. R2 and C2 are adjusted until the bridge is balanced. R3 and L3 can then be calculated based on the values of the other components: R3 = R1·R4 / R2 and L3 = R1·R4·C2. To avoid the difficulties associated with determining the precise value of a variable capacitance, sometimes a fixed-value capacitor will be installed and more than one resistor will be made variable. It cannot be used for the measurement of high Q values. It is also unsuited for the coils with low Q values, less than one, because of the balance convergence problem. Its use is limited to the measurement of low Q values from 1 to 10. The frequency of the AC current used to assess the unknown inductor should match the frequency of the circuit the inductor will be used in - the impedance and therefore the assigned inductance of the component varies with frequency. For ideal inductors, this relationship is linear, so that the inductance value at an arbitrary frequency can be calculated from the inductance value measured at some reference frequency. Unfortunately, for real components, this relationship is not linear, and using a derived or calculated v
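With the balance relations above (component labels as in the figure), the unknowns follow directly; a small sketch with invented component values, also reporting the inductor's Q at the test frequency:

```python
import math

def maxwell_bridge(r1, r4, r2, c2, f):
    """Unknown R3, L3 from a balanced Maxwell bridge."""
    r3 = r1 * r4 / r2               # balance condition, resistive part
    l3 = r1 * r4 * c2               # balance condition, reactive part
    q = 2 * math.pi * f * l3 / r3   # quality factor at test frequency f
    return r3, l3, q

# Illustrative values: 1 kOhm fixed arms, balance at R2 = 10 kOhm, C2 = 100 nF.
r3, l3, q = maxwell_bridge(1e3, 1e3, 10e3, 100e-9, 1e3)
print(f"R3 = {r3:.1f} Ohm, L3 = {l3*1e3:.1f} mH, Q = {q:.2f}")
```

The example yields Q ≈ 6.3, comfortably inside the 1-to-10 range for which the text says the bridge is suited.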
https://en.wikipedia.org/wiki/Mitotic%20recombination
Mitotic recombination is a type of genetic recombination that may occur in somatic cells during their preparation for mitosis in both sexual and asexual organisms. In asexual organisms, the study of mitotic recombination is one way to understand genetic linkage because it is the only source of recombination within an individual. Additionally, mitotic recombination can result in the expression of recessive alleles in an otherwise heterozygous individual. This expression has important implications for the study of tumorigenesis and lethal recessive alleles. Mitotic homologous recombination occurs mainly between sister chromatids subsequent to replication (but prior to cell division). Inter-sister homologous recombination is ordinarily genetically silent. During mitosis the incidence of recombination between non-sister homologous chromatids is only about 1% of that between sister chromatids. Discovery The discovery of mitotic recombination came from the observation of twin spotting in Drosophila melanogaster. This twin spotting, or mosaic spotting, was observed in D. melanogaster as early as 1925, but it was only in 1936 that Curt Stern explained it as a result of mitotic recombination. Prior to Stern's work, it was hypothesized that twin spotting happened because certain genes had the ability to eliminate the chromosome on which they were located. Later experiments uncovered when mitotic recombination occurs in the cell cycle and the mechanisms behind recombination. Occurrence Mitotic recombination can happen at any locus but is observable in individuals that are heterozygous at a given locus. If a crossover event between non-sister chromatids affects that locus, then both homologous chromosomes will have one chromatid containing each genotype. The resulting phenotype of the daughter cells depends on how the chromosomes line up on the metaphase plate. If the chromatids containing different alleles line up on the same side of the plate, then the resulting
https://en.wikipedia.org/wiki/SiRFstarIII
SiRFstarIII is a range of high sensitivity GPS microcontroller chips manufactured by SiRF Technology. GPS microcontroller chips interpret signals from GPS satellites and determine the position of the GPS receiver. It was announced in 2004. Features SiRFstarIII features: A 20-channel receiver, which can process the signals of all visible GPS and WAAS satellites simultaneously. Power consumption of 62 mW during continuous operation. Assisted GPS client capability can reduce TTFF to less than one second. Receiver sensitivity of -159 dBm while tracking. SBAS (WAAS, MSAS, EGNOS) support References External links SiRFstarIII product web page SiRFstarIII OpenStreetMap Microcontrollers Global Positioning System
https://en.wikipedia.org/wiki/Yojimbo%20%28software%29
Yojimbo is a personal information manager for MacOS by Bare Bones Software. It can store notes, images and media, URLs, web pages, and passwords. Yojimbo can also encrypt any of its contents and store the password in the Keychain. It is Bare Bones' second Cocoa application. History Yojimbo was first released on January 23, 2006. At the time, Bare Bones called it "a completely new information organizer". Yojimbo 1.1 Yojimbo 1.3 Yojimbo 1.5 Yojimbo 2.0 Yojimbo 2.1 Yojimbo 2.2 Yojimbo 3 and new iPad version Yojimbo 4 Yojimbo 4.5 In 2007, another developer, Adrian Ross, created Webjimbo, a web interface through which users can access their Yojimbo libraries. Like other developers, Bare Bones Software faced difficulties adding iCloud sync due to early limitations in Apple's service. Tech reporter Christophe Laporte criticized Yojimbo's transition to iCloud as bungled, and expressed frustration at the lack of updates to the app. References External links Yojimbo homepage Password managers MacOS-only proprietary software Personal information managers Products introduced in 2006
https://en.wikipedia.org/wiki/Marston%20Mat
Marston Mat, more properly called pierced (or perforated) steel planking (PSP), is standardized, perforated steel matting material developed by the United States at the Waterways Experiment Station shortly before World War II, primarily for the rapid construction of temporary runways and landing strips (also misspelled as Marsden matting). The nickname came from Marston, North Carolina, adjacent to Camp Mackall airfield where the material was first used. Description Pierced (pressed, steel planking, named after the manufacturing process) steel planking consisted of steel strips with punched lightening holes in them. The holes were arranged in rows, with U-shaped channels formed between them. Hooks were formed along one long edge and slots along the other long edge so that adjacent mats could be connected. The short edges were cut straight with no holes or hooks. To achieve lengthwise interlocking, the mats were laid in a staggered pattern. The hooks were usually held in the slots by a steel clip that filled the part of the slot that is empty when the adjacent sheets are properly engaged. The holes were bent up at their edges so that the beveled edge stiffened the area around the hole. In some mats a T-shaped stake could be driven at intervals through the holes to keep the assembly in place on the ground. Sometimes the sheets were welded together. A typical later PSP was the M8 landing mat. A single piece weighed about and was long by wide. The hole pattern for the sheet was produced to allow easier transportation by aircraft, since it weighed about two-thirds as much. A lighter weight aluminum plank version was developed, to ease logistics when constructing airfields in difficult to access areas. This was referred to as PAP, for pierced aluminum planking. PAP was and is not as common as PSP. Aluminum was a controlled strategic material during World War II, so much less was made; it was typically only able to handle half as many loading cycles as steel, and
https://en.wikipedia.org/wiki/Aircrack-ng
Aircrack-ng is a network software suite consisting of a detector, packet sniffer, WEP and WPA/WPA2-PSK cracker and analysis tool for 802.11 wireless LANs. It works with any wireless network interface controller whose driver supports raw monitoring mode and can sniff 802.11a, 802.11b and 802.11g traffic. Packages are released for Linux and Windows. Aircrack-ng is a fork of the original Aircrack project. It can be found as a preinstalled tool in many security-focused Linux distributions such as Kali Linux or Parrot Security OS, which share common attributes as they are developed under the same project (Debian). Development Aircrack was originally developed by French security researcher Christophe Devine, its main goal was to recover 802.11 wireless networks WEP keys using an implementation of the Fluhrer, Mantin and Shamir (FMS) attack alongside the ones shared by a hacker named KoreK. Aircrack was forked by Thomas D'Otreppe in February 2006 and released as Aircrack-ng (Aircrack Next Generation). Wi-Fi security history WEP Wired Equivalent Privacy was the first security algorithm to be released, with the intention of providing data confidentiality comparable to that of a traditional wired network. It was introduced in 1997 as part of the IEEE 802.11 technical standard and based on the RC4 cipher and the CRC-32 checksum algorithm for integrity. Due to U.S. restrictions on the export of cryptographic algorithms, WEP was effectively limited to 64-bit encryption. Of this, 40 bits were allocated to the key and 24 bits to the initialization vector (IV), to form the RC4 key. After the restrictions were lifted, versions of WEP with a stronger encryption were released with 128 bits: 104 bits for the key size and 24 bits for the initialization vector, known as WEP2. The initialization vector works as a seed, which is prepended to the key. Via the key-scheduling algorithm (KSA), the seed is used to initialize the RC4 cipher's state. The output of RC4's pseudo random ge
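The seed construction described above (the IV prepended to the key, then fed through RC4's key-scheduling and pseudo-random generation algorithms) is easy to sketch. This is the textbook cipher with invented values, not aircrack-ng code.

```python
def rc4_keystream(seed, n):
    # Key-scheduling algorithm (KSA): initialize state S from the seed.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + seed[i % len(seed)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): emit n keystream bytes.
    out, i, j = bytearray(), 0, 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

iv = bytes([0x01, 0x02, 0x03])   # 24-bit per-packet IV, sent in the clear
key = b"ABCDE"                   # 40-bit WEP key (illustrative)
ciphertext = bytes(p ^ k for p, k in zip(b"hello", rc4_keystream(iv + key, 5)))
```

Because the IV is transmitted in the clear and is only 24 bits, seeds repeat and their first bytes are related, which is the structural weakness the FMS attack exploits.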
https://en.wikipedia.org/wiki/Non-interference%20%28security%29
Noninterference is a strict multilevel security policy model, first described by Goguen and Meseguer in 1982, and amplified further in 1984. Introduction In simple terms, a computer is modeled as a machine with inputs and outputs. Inputs and outputs are classified as either low (low sensitivity, not highly classified) or high (sensitive, not to be viewed by uncleared individuals). A computer has the noninterference property if and only if any sequence of low inputs will produce the same low outputs, regardless of what the high level inputs are. That is, if a low (uncleared) user is working on the machine, it will respond in exactly the same manner (on the low outputs) whether or not a high (cleared) user is working with sensitive data. The low user will not be able to acquire any information about the activities (if any) of the high user. Formal expression Let M be a memory configuration, and let M_L and M_H be the projection of the memory M to the low and high parts, respectively. Let =_L be the function that compares the low parts of the memory configurations, i.e., M =_L M′ iff M_L = M′_L. Let P(M) be the execution of the program P starting with memory configuration M and terminating with the memory configuration P(M). The definition of noninterference for a deterministic program P is the following: M_1 =_L M_2 implies P(M_1) =_L P(M_2). Limitations Strictness This is a very strict policy, in that a computer system with covert channels may comply with, say, the Bell–LaPadula model, but will not comply with noninterference. The reverse could be true (under reasonable conditions, being that the system should have labelled files, etc.) except for the "No classified information at startup" exceptions noted below. However, noninterference has been shown to be stronger than nondeducibility. This strictness comes with a price. It is very difficult to make a computer system with this property. There may be only one or two commercially available products that have been verified to comply with this policy, and these would essentially be as
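For small state spaces the definition can be checked by brute force: run the program on every pair of memories that agree on the low part and compare low outputs. A toy sketch with one low and one high boolean (both programs are invented for the example):

```python
from itertools import product

def leaky(mem):      # low output depends on a high input: interferes
    return {"low": mem["low"] ^ mem["high"], "high": mem["high"]}

def safe(mem):       # low output depends only on low input
    return {"low": not mem["low"], "high": mem["high"] and mem["low"]}

def noninterferent(program):
    mems = [dict(low=l, high=h) for l, h in product([False, True], repeat=2)]
    return all(program(m1)["low"] == program(m2)["low"]
               for m1 in mems for m2 in mems
               if m1["low"] == m2["low"])          # m1 =L m2

print(noninterferent(safe))   # True
print(noninterferent(leaky))  # False
```

Note that `safe` may write low data into the high part; only flows from high to low are forbidden, which the quantification over low-equal pairs captures.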
https://en.wikipedia.org/wiki/Signal%20conditioning
In electronics and signal processing, signal conditioning is the manipulation of an analog signal in such a way that it meets the requirements of the next stage for further processing. In an analog-to-digital converter (ADC) application, signal conditioning includes voltage or current limiting and anti-aliasing filtering. In control engineering applications, it is common to have a sensing stage (which consists of a sensor), a signal conditioning stage (where usually amplification of the signal is done) and a processing stage (often carried out by an ADC and a micro-controller). Operational amplifiers (op-amps) are commonly employed to carry out the amplification of the signal in the signal conditioning stage. In some transducers, signal conditioning is integrated with the sensor, for example in Hall effect sensors. In power electronics, signal conditioning scales the signals sensed by devices such as voltage and current sensors to a level acceptable to the microprocessor before processing. Inputs Signal inputs accepted by signal conditioners include DC voltage and current, AC voltage and current, frequency and electric charge. Sensor inputs can be accelerometer, thermocouple, thermistor, resistance thermometer, strain gauge or bridge, and LVDT or RVDT. Specialized inputs include encoder, counter or tachometer, timer or clock, relay or switch, and other specialized inputs. Outputs for signal conditioning equipment can be voltage, current, frequency, timer or counter, relay, resistance or potentiometer, and other specialized outputs. Processes Signal conditioning can include amplification, filtering, converting, range matching, isolation and any other processes required to make sensor output suitable for processing after conditioning. Input Coupling Use AC coupling when the signal contains a large DC component. If you enable AC coupling, you remove the large DC offset for the input amplifier and amplify only the AC component. This configurati
https://en.wikipedia.org/wiki/Recursive%20grammar
In computer science, a grammar is informally called a recursive grammar if it contains production rules that are recursive, meaning that expanding a non-terminal according to these rules can eventually lead to a string that includes the same non-terminal again. Otherwise it is called a non-recursive grammar. For example, a grammar for a context-free language is left recursive if there exists a non-terminal symbol A that can be put through the production rules to produce a string with A (as the leftmost symbol). All types of grammars in the Chomsky hierarchy can be recursive and it is recursion that allows the production of infinite sets of words. Properties A non-recursive grammar can produce only a finite language; and each finite language can be produced by a non-recursive grammar. For example, a straight-line grammar produces just a single word. A recursive context-free grammar that contains no useless rules necessarily produces an infinite language. This property forms the basis for an algorithm that can test efficiently whether a context-free grammar produces a finite or infinite language. References Formal languages
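The link between recursion and infinitude is easy to see by expanding productions. A sketch contrasting the recursive grammar S → aS | b (invented for the example) with the non-recursive S → a | b:

```python
def expand(rules, limit):
    """Breadth-first expansion of a grammar from start symbol 'S'."""
    words, frontier = [], ["S"]
    while frontier and len(words) < limit:
        form = frontier.pop(0)
        if "S" not in form:
            words.append(form)          # fully expanded: a word of the language
            continue
        for rhs in rules["S"]:
            frontier.append(form.replace("S", rhs, 1))
    return words

recursive = {"S": ["aS", "b"]}   # S occurs on a right-hand side: recursion
flat      = {"S": ["a", "b"]}
print(expand(flat, 10))          # ['a', 'b'] -- the language is finite
print(expand(recursive, 5))      # ['b', 'ab', 'aab', ...] -- never exhausted
```

The non-recursive grammar's frontier empties after finitely many steps, while the recursive one can be expanded to words of any length, mirroring the finite-versus-infinite property stated above.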
https://en.wikipedia.org/wiki/FLAGS%20register
The FLAGS register is the status register that contains the current state of an x86 CPU. The size and meanings of the flag bits are architecture dependent. It usually reflects the result of arithmetic operations as well as information about restrictions placed on the CPU operation at the current time. Some of those restrictions may include preventing some interrupts from triggering and prohibiting execution of a class of "privileged" instructions. Additional status flags may bypass memory mapping and define what action the CPU should take on arithmetic overflow. The carry, parity, auxiliary carry (or half carry), zero and sign flags are included in many architectures. In the i286 architecture, the register is 16 bits wide. Its successors, the EFLAGS and RFLAGS registers, are 32 bits and 64 bits wide, respectively. The wider registers retain compatibility with their smaller predecessors. FLAGS Note: The mask column in the table is the AND bitmask (as hexadecimal value) to query the flag(s) within the FLAGS register value. Usage All FLAGS registers contain the condition codes, flag bits that let the results of one machine-language instruction affect another instruction. Arithmetic and logical instructions set some or all of the flags, and conditional jump instructions take variable action based on the value of certain flags. For example, jz (Jump if Zero), jc (Jump if Carry), and jo (Jump if Overflow) depend on specific flags. Other conditional jumps test combinations of several flags. FLAGS registers can be moved from or to the stack. This is part of the job of saving and restoring CPU context, against a routine such as an interrupt service routine whose changes to registers should not be seen by the calling code. Here are the relevant instructions: The PUSHF and POPF instructions transfer the 16-bit FLAGS register. PUSHFD/POPFD (introduced with the i386 architecture) transfer the 32-bit double register EFLAGS. PUSHFQ/POPFQ (introduced with the x64 architecture) tr
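How arithmetic instructions set the condition codes can be illustrated by modelling an 8-bit ADD and deriving the common flags. The bit positions and semantics follow the x86 convention, but this is a model for illustration, not processor code.

```python
def add8_flags(a, b):
    """Model an 8-bit ADD and return the flags it would set."""
    total = a + b
    result = total & 0xFF
    return {
        "CF": total > 0xFF,                          # unsigned overflow
        "ZF": result == 0,                           # result is zero
        "SF": bool(result & 0x80),                   # copy of the sign bit
        "OF": bool(~(a ^ b) & (a ^ result) & 0x80),  # signed overflow
        "PF": bin(result).count("1") % 2 == 0,       # even parity of low byte
        "AF": bool((a ^ b ^ result) & 0x10),         # carry out of bit 3
    }

print(add8_flags(0x7F, 0x01))  # 127 + 1: signed overflow, so OF and SF set
print(add8_flags(0xFF, 0x01))  # 255 + 1 wraps to zero: CF and ZF set
```

A subsequent `jz` would be taken in the second case and a `jo` in the first, which is the instruction-to-instruction coupling the text describes.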
https://en.wikipedia.org/wiki/Pixelplus
Pixel Plus is a proprietary digital filter image processing technology developed by Philips, which claims that it enhances the display of analogue broadcast signals on their TVs. Pixel Plus interpolates the broadcast signal to increase the picture size by one third, from 625 lines to 833 lines. It also doubles the horizontal resolution, although each horizontal line is analogue. Other features include motion interpolation, a processing technique that interpolates (or creates) video fields (or frames) by analyzing fields (or frames) before and after the insertion point. This process is primarily focused on film-based content which is filmed in either 24fps or 25fps. The motion interpolation function of Pixel Plus is an alternative to 3:2 pulldown processing which is the standard process of converting film to video. In 2005, Pixelplus 2 was launched. This version was the first to be able to perform motion reinterpolation on 480p and 576p material. In 2006, Pixelplus 3 was launched. This version was the first to be able to perform motion reinterpolation on 720p and 1080i material, except for US products. In 2007, Pixel Perfect HD Engine was launched. This version was the first to be able to perform motion reinterpolation on 1080p material, and introduced 720p and 1080i motion interpolation in US products. Not to be confused with Pixelplus Co., Ltd. (Nasdaq: PXPL): a fabless semiconductor company in Korea that designs, develops, and markets CMOS image sensors for various consumer electronics applications. References Television technology
https://en.wikipedia.org/wiki/List%20of%20desktop%20publishing%20software
The following is a list of major desktop publishing software. A wide range of related software tools exist in this field, including many plug-ins and tools related to the applications listed below. Several software directories provide more comprehensive listings of desktop publishing software, including VersionTracker and Tucows. Free software This section lists free software for desktop publishing; by definition, all of it is open source. Although free software is not required to be gratis, everything listed in this section is also available free of charge. (In principle, in rare cases, free software is sold without being distributed over the Internet.) Desktop publishing software for Windows, macOS, Linux and other operating systems Collabora Online Draw and Collabora Online Writer. The applications for Windows, macOS, Linux and ChromeOS are also known as Collabora Office. LibreOffice Draw and LibreOffice Writer for Windows, macOS, Linux, BSDs and others LyX for Windows, macOS, Linux, UNIX, OS/2 and Haiku, based on the LaTeX typesetting system, initial release in 1995 Scribus for Windows, macOS, Linux, BSD, Unix, Haiku, OS/2, based on the free Qt toolkit, initial release in 2003 Online desktop publishing software Collabora Online Draw and Collabora Online Writer Scenari, open source single-source publishing tool with support for chain publication Proprietary Desktop publishing software for Windows XEditpro Automated Publishing Tool - DiacriTech, 1997 Adobe InDesign Adobe FrameMaker Adobe PageMaker, discontinued in 2004 Affinity Publisher CatBase Calamus CorelDRAW Corel Ventura, previously Ventura Publisher, originally developed by Xerox, now owned by Corel FrameMaker, now owned by Adobe InPage - DTP which works with English + Urdu, Arabic, Persian, Pashto etc. MadCap Flare Microsoft Publisher PageStream, formerly known as Publishing Partner Prince XML, by YesLogic QuarkXPress RagTime Ready, Set, Go! Xara Designer Pro X Xara Page & Layout Designer Deskt
https://en.wikipedia.org/wiki/Initiation%20factor
Initiation factors are proteins that bind to the small subunit of the ribosome during the initiation of translation, a part of protein biosynthesis. Initiation factors can interact with repressors to slow down or prevent translation. They can also interact with activators to help start or increase the rate of translation. In bacteria, they are simply called IFs (i.e., IF1, IF2, and IF3) and in eukaryotes they are known as eIFs (i.e., eIF1, eIF2, eIF3). Translation initiation is sometimes described as a three-step process which initiation factors help to carry out. First, the tRNA carrying a methionine amino acid binds to the small ribosomal subunit; the complex then binds to the mRNA, and finally joins with the large ribosomal subunit. The initiation factors that help with this process each have different roles and structures. Types The initiation factors are divided into three major groups by taxonomic domain, and some homologies are shared among the groups. Structure and function Many structural domains have been conserved through evolution, as prokaryotic initiation factors share similar structures with eukaryotic factors. The prokaryotic initiation factor IF3 assists with start site specificity, as well as mRNA binding. This is comparable to the eukaryotic initiation factor eIF1, which also performs these functions. The eIF1 structure is similar to the C-terminal domain of IF3, as they each contain a five-stranded beta sheet against two alpha helices. The prokaryotic initiation factors IF1 and IF2 are also homologs of the eukaryotic initiation factors eIF1A and eIF5B. IF1 and eIF1A, both containing an OB-fold, bind to the A site and assist in the assembly of initiation complexes at the start codon. IF2 and eIF5B assist in the joining of the small and large ribosomal subunits. The eIF5B factor is also related to elongation factors. Domain IV of eIF5B is closely related to the C-terminal domain of IF2, as they both c
https://en.wikipedia.org/wiki/Mercury%28II%29%20iodide
Mercury(II) iodide is a chemical compound with the molecular formula HgI2. It is typically produced synthetically but can also be found in nature as the extremely rare mineral coccinite. Unlike the related mercury(II) chloride, it is only sparingly soluble in water (<100 ppm). Production Mercury(II) iodide is produced by adding an aqueous solution of potassium iodide to an aqueous solution of mercury(II) chloride with stirring; the precipitate is filtered off, washed and dried at 70 °C. HgCl2 + 2 KI → HgI2 + 2 KCl Properties Mercury(II) iodide displays thermochromism; when heated above 126 °C (400 K), it undergoes a phase transition from the red alpha crystalline form to a pale yellow beta form. As the sample cools, it gradually reacquires its original colour. It has often been used for thermochromism demonstrations. A third form, which is orange, is also known; this can be formed by recrystallisation and is also metastable, eventually converting back to the red alpha form. The various forms can exist in a diverse range of crystal structures, and as a result mercury(II) iodide possesses a surprisingly complex phase diagram. Uses Mercury(II) iodide is used for the preparation of Nessler's reagent, used for the detection of ammonia. Mercury(II) iodide is a semiconductor material, used in some x-ray and gamma ray detection and imaging devices operating at room temperature. In veterinary medicine, mercury(II) iodide is used in blister ointments for exostoses, bursal enlargement, etc. It can appear as a precipitate in many reactions. See also Mercury(I) iodide, Hg2I2 References Iodides Metal halides Mercury(II) compounds Semiconductor materials
https://en.wikipedia.org/wiki/Park%20Grass%20Experiment
The Park Grass Experiment is a biological study originally set up to test the effect of fertilizers and manures on hay yields. The scientific experiment is located at the Rothamsted Research in the English county of Hertfordshire, and is notable as one of the longest-running experiments of modern science, as it was initiated in 1856 and has been continually monitored ever since. The experiment was originally designed to answer agricultural questions but has since proved an invaluable resource for studying natural selection and biodiversity. The treatments under study were found to be affecting the botanical make-up of the plots and the ecology of the field and it has been studied ever since. In spring, the field is a colourful tapestry of flowers and grasses, some plots still having the wide range of plants that most meadows probably contained hundreds of years ago. Over its history, Park Grass has: demonstrated that conventional field trials probably underestimate threats to plant biodiversity from long term changes, such as soil acidification, shown how plant species richness, biomass and pH are related, demonstrated that competition between plants can make the effects of climatic variation on communities more extreme, provided one of the first demonstrations of local evolutionary change under different selection pressures and endowed us with an archive of soil and hay samples that have been used to track the history of atmospheric pollution, including nuclear fallout. Bibliography Rothamsted Research: Classical Experiments Biodiversity Ecological experiments Grasslands
https://en.wikipedia.org/wiki/Applied%20general%20equilibrium
In mathematical economics, applied general equilibrium (AGE) models were pioneered by Herbert Scarf at Yale University in 1967, in two papers, and a follow-up book with Terje Hansen in 1973, with the aim of estimating the Arrow–Debreu model of general equilibrium theory with empirical data, to provide "a general method for the explicit numerical solution of the neoclassical model" (Scarf with Hansen 1973: 1). Scarf's method iterated a sequence of simplicial subdivisions which would generate a decreasing sequence of simplices around any solution of the general equilibrium problem. With sufficiently many steps, the sequence would produce a price vector that clears the market. In Scarf's words: "Brouwer's fixed point theorem states that a continuous mapping of a simplex into itself has at least one fixed point. This paper describes a numerical algorithm for approximating, in a sense to be explained below, a fixed point of such a mapping" (Scarf 1967a: 1326). Scarf never built an AGE model, but hinted that "these novel numerical techniques might be useful in assessing consequences for the economy of a change in the economic environment" (Kehoe et al. 2005, citing Scarf 1967b). His students elaborated the Scarf algorithm into a tool box, where the price vector could be solved for any changes in policies (or exogenous shocks), giving the equilibrium 'adjustments' needed for the prices. This method was first used by Shoven and Whalley (1972 and 1973), and then was developed through the 1970s by Scarf's students and others. Most contemporary applied general equilibrium models are numerical analogs of traditional two-sector general equilibrium models popularized by James Meade, Harry Johnson, Arnold Harberger, and others in the 1950s and 1960s. Earlier analytic work with these models has examined the distortionary effects of taxes, tariffs, and other policies, along with functional incidence questions. More recent applied models, including those discussed here, provide numerical
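The following Python sketch is not Scarf's simplicial algorithm; it is an added illustration of the kind of fixed-point problem such algorithms automate, using a toy two-consumer, two-good exchange economy with Cobb-Douglas preferences. All parameters and names are hypothetical.

```python
# Find the relative price at which excess demand for good 1 vanishes.
def excess_demand(p):
    """Excess demand for good 1 at prices (p, 1 - p)."""
    consumers = [(0.3, (1.0, 0.0)),   # (expenditure share on good 1, endowment)
                 (0.7, (0.0, 1.0))]
    demand = sum(a * (p * w1 + (1 - p) * w2) / p for a, (w1, w2) in consumers)
    supply = sum(w1 for _, (w1, _) in consumers)
    return demand - supply

lo, hi = 1e-9, 1.0 - 1e-9
for _ in range(60):               # bisect on the one-dimensional price simplex
    mid = (lo + hi) / 2
    if excess_demand(mid) > 0:    # positive excess demand: good 1 is too cheap
        lo = mid
    else:
        hi = mid
print((lo + hi) / 2)              # market-clearing relative price, here 0.5
```

With many goods the price simplex is multi-dimensional and bisection no longer applies; that is the gap Scarf's simplicial subdivision method was designed to close.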
https://en.wikipedia.org/wiki/Government%20Paperwork%20Elimination%20Act
The Government Paperwork Elimination Act (GPEA, Title XVII) requires that, when practicable, federal agencies use electronic forms, electronic filing, and electronic signatures to conduct official business with the public by 2003. In doing this, agencies will create records with business, legal and, in some cases, historical value. This guidance focuses on records management issues involving records that have been created using electronic signature technology. The Act requires agencies, by October 21, 2003, to give individuals or entities that deal with the agencies the option to submit information or transact with the agency electronically, when practicable, and to maintain records electronically, when practicable. The Act specifically states that electronic records and their related electronic signatures are not to be denied legal effect, validity, or enforceability merely because they are in electronic form, and it encourages Federal government use of a range of electronic signature alternatives. The Act seeks to "preclude agencies or courts from systematically treating electronic documents and signatures less favorably than their paper counterparts", so that citizens can interact with the Federal government electronically. It also addresses the matter of private employers being able to use electronic means to store, and file with Federal agencies, information pertaining to their employees. The Act is technology-neutral, meaning that it does not mandate the use of any particular electronic signature technology.
https://en.wikipedia.org/wiki/Postage%20stamp%20problem
The postage stamp problem is a mathematical riddle that asks for the smallest postage value which cannot be placed on an envelope, if the envelope can hold only a limited number of stamps, and these may only have certain specified face values. For example, suppose the envelope can hold only three stamps, and the available stamp values are 1 cent, 2 cents, 5 cents, and 20 cents. Then the solution is 13 cents, since any smaller value can be obtained with at most three stamps (e.g. 4 = 2 + 2, 8 = 5 + 2 + 1, etc.), but to get 13 cents one must use at least four stamps. Mathematical definition Mathematically, the problem can be formulated as follows: Given an integer m and a set V of positive integers, find the smallest integer z that cannot be written as the sum v1 + v2 + ··· + vk of some number k ≤ m of (not necessarily distinct) elements of V. Complexity This problem can be solved by brute force search or backtracking with maximum time proportional to |V|^m, where |V| is the number of distinct stamp values allowed. Therefore, if the capacity of the envelope m is fixed, it is a polynomial time problem. If the capacity m is arbitrary, the problem is known to be NP-hard. See also Coin problem Knapsack problem Subset sum problem References External links Additive number theory Recreational mathematics Applied mathematics Mathematical problems
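A brute-force Python sketch of this definition (added here for illustration; the function name and use of itertools are assumptions, not part of the problem statement):

```python
from itertools import combinations_with_replacement

def postage_stamp(m, values):
    """Smallest value not expressible as a sum of at most m stamps from `values`."""
    reachable = {0}
    for k in range(1, m + 1):
        # enumerate every multiset of k stamp values (stamps may repeat)
        for combo in combinations_with_replacement(values, k):
            reachable.add(sum(combo))
    z = 1
    while z in reachable:
        z += 1
    return z

print(postage_stamp(3, [1, 2, 5, 20]))  # prints 13, matching the example above
```

The number of multisets enumerated grows roughly like |V|^m, in line with the complexity bound quoted above.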
https://en.wikipedia.org/wiki/Unihemispheric%20slow-wave%20sleep
Unihemispheric slow-wave sleep (USWS) is sleep where one half of the brain rests while the other half remains alert. This is in contrast to normal sleep, where both eyes are shut and both halves of the brain show unconsciousness. In USWS, also known as asymmetric slow-wave sleep, one half of the brain is in deep sleep, a form of non-rapid eye movement sleep, and the eye corresponding to this half is closed while the other eye remains open. When examined by low-voltage electroencephalography (EEG), the characteristic slow-wave sleep tracings are seen from one side while the other side shows a characteristic tracing of wakefulness. The phenomenon has been observed in a number of terrestrial, aquatic and avian species. Unique physiology, including the differential release of the neurotransmitter acetylcholine, has been linked to the phenomenon. USWS offers a number of benefits, including the ability to rest in areas of high predation or during long migratory flights. The behaviour remains an important research topic because USWS is possibly the first animal behaviour which uses different regions of the brain to simultaneously control sleep and wakefulness. The greatest theoretical importance of USWS is its potential role in elucidating the function of sleep by challenging various current notions. Researchers have looked to animals exhibiting USWS to determine if sleep must be essential; otherwise, species exhibiting USWS would have eliminated the behaviour altogether through evolution. The amount of time spent sleeping during the unihemispheric slow-wave stage is considerably less than in bilateral slow-wave sleep. Aquatic mammals such as dolphins and seals must regularly surface in order to breathe and regulate body temperature; USWS may have arisen from the need to perform these vital activities simultaneously with sleep. On land, birds can switch between sleeping with both hemispheres and sleeping with one hemisphere. Due to their poorly webbed feet and
https://en.wikipedia.org/wiki/Trivers%E2%80%93Willard%20hypothesis
In evolutionary biology and evolutionary psychology, the Trivers–Willard hypothesis, formally proposed by Robert Trivers and Dan Willard in 1973, suggests that female mammals adjust the sex ratio of offspring in response to maternal condition, so as to maximize their reproductive success (fitness). For example, it may predict greater parental investment in males by parents in "good conditions" and greater investment in females by parents in "poor conditions" (relative to parents in good conditions). The reasoning for this prediction is as follows: Assume that parents have information on the sex of their offspring and can influence their survival differentially. While selection pressures exist to maintain a 1:1 sex ratio, evolution will favor local deviations from this if one sex has a likely greater reproductive payoff than is usual. Trivers and Willard also identified a circumstance in which reproducing individuals might experience deviations from expected offspring reproductive value—namely, varying maternal condition. In polygynous species, males may mate with multiple females, and low-condition males will achieve fewer or no matings. Parents in relatively good condition would then be under selection for mutations causing production and investment in sons (rather than daughters), because of the increased chance of mating experienced by these good-condition sons. Mating with multiple females conveys a large reproductive benefit, whereas daughters could translate their condition into only smaller benefits. An opposite prediction holds for poor-condition parents—selection will favor production and investment in daughters, so long as daughters are likely to be mated, while sons in poor condition are likely to be out-competed by other males and end up with zero mates (i.e., those sons will be a reproductive dead end). The hypothesis was used to explain why, for example, red deer mothers would produce more sons when they are in good condition, and more daughters when they are in poor condition.
https://en.wikipedia.org/wiki/Gyration%20tensor
In physics, the gyration tensor is a tensor that describes the second moments of position of a collection of particles,

$$S_{mn} = \sum_{i=1}^{N} r_m^{(i)} r_n^{(i)},$$

where $r_m^{(i)}$ is the $m$-th Cartesian coordinate of the position vector $\mathbf{r}^{(i)}$ of the $i$-th particle. The origin of the coordinate system has been chosen such that $\sum_{i=1}^{N} \mathbf{r}^{(i)} = \mathbf{0}$, i.e. it lies in the system of the center of mass, $\mathbf{r}_{\mathrm{CM}} = \mathbf{0}$. Another definition, which is mathematically identical but gives an alternative calculation method, is

$$S_{mn} = \frac{1}{2N} \sum_{i=1}^{N} \sum_{j=1}^{N} \left(r_m^{(i)} - r_m^{(j)}\right)\left(r_n^{(i)} - r_n^{(j)}\right).$$

Therefore, the x-y component of the gyration tensor for particles in Cartesian coordinates would be $S_{xy} = \sum_{i=1}^{N} x_i y_i$. In the continuum limit, $S_{mn} = \int \rho(\mathbf{r})\, r_m r_n \, d^3\mathbf{r}$, where $\rho(\mathbf{r})$ represents the number density of particles at position $\mathbf{r}$. Although they have different units, the gyration tensor is related to the moment of inertia tensor. The key difference is that the particle positions are weighted by mass in the inertia tensor, whereas the gyration tensor depends only on the particle positions; mass plays no role in defining the gyration tensor. Diagonalization Since the gyration tensor is a symmetric 3x3 matrix, a Cartesian coordinate system can be found in which it is diagonal, $S = \operatorname{diag}(\lambda_x, \lambda_y, \lambda_z)$, where the axes are chosen such that the diagonal elements are ordered $\lambda_x \le \lambda_y \le \lambda_z$. These diagonal elements are called the principal moments of the gyration tensor. Shape descriptors The principal moments can be combined to give several parameters that describe the distribution of particles. The squared radius of gyration is the sum of the principal moments divided by the number of particles N: $R_g^2 = (\lambda_x + \lambda_y + \lambda_z)/N$. The asphericity is defined by $b = \lambda_z - \tfrac{1}{2}(\lambda_x + \lambda_y)$, which is always non-negative and zero only when the three principal moments are equal, $\lambda_x = \lambda_y = \lambda_z$. This zero condition is met when the distribution of particles is spherically symmetric (hence the name asphericity) but also whenever the particle distribution is symmetric with respect to the three coordinate axes, e.g., when the particles are distributed uniformly on a cube, tetrahedron or other Platonic solid. Similarly, the acylindricity is defined by $c = \lambda_y - \lambda_x$, which is always non-negative and zero only when the two smaller principal moments are equal, $\lambda_x = \lambda_y$, i.e. when the distribution of particles is cylindrically symmetric with respect to the z axis.
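A NumPy sketch of these formulas (an added illustration; the function name is an assumption, and it follows the summed-moments convention used above, where the 1/N factor sits in $R_g^2$ rather than in the tensor):

```python
import numpy as np

def gyration_analysis(r):
    """r: (N, 3) array of particle positions; returns S, principal moments,
    squared radius of gyration, asphericity and acylindricity."""
    r = r - r.mean(axis=0)                 # move the origin to the centre of mass
    S = r.T @ r                            # S_mn = sum_i r_m^(i) r_n^(i)
    lam = np.sort(np.linalg.eigvalsh(S))   # principal moments, ascending order
    rg2 = lam.sum() / len(r)               # squared radius of gyration
    b = lam[2] - 0.5 * (lam[0] + lam[1])   # asphericity
    c = lam[1] - lam[0]                    # acylindricity
    return S, lam, rg2, b, c

# Example: points on a thin rod along z give b > 0 and c = 0, as expected.
z = np.linspace(-1, 1, 100)
rod = np.column_stack([np.zeros_like(z), np.zeros_like(z), z])
_, lam, rg2, b, c = gyration_analysis(rod)
print(lam, rg2, b, c)
```

Note that some references fold the 1/N factor into the tensor itself, in which case $R_g^2$ is simply the trace of the principal moments.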
https://en.wikipedia.org/wiki/Active-pixel%20sensor
An active-pixel sensor (APS) is an image sensor, which was invented by Peter J.W. Noble in 1968, where each pixel sensor unit cell has a photodetector (typically a pinned photodiode) and one or more active transistors. In a metal–oxide–semiconductor (MOS) active-pixel sensor, MOS field-effect transistors (MOSFETs) are used as amplifiers. There are different types of APS, including the early NMOS APS and the now much more common complementary MOS (CMOS) APS, also known as the CMOS sensor. CMOS sensors are used in digital camera technologies such as cell phone cameras, web cameras, most modern digital pocket cameras, most digital single-lens reflex cameras (DSLRs), mirrorless interchangeable-lens cameras (MILCs), and lensless imaging for cells. CMOS sensors emerged as an alternative to charge-coupled device (CCD) image sensors and eventually outsold them by the mid-2000s. The term active pixel sensor is also used to refer to the individual pixel sensor itself, as opposed to the image sensor. In this case, the image sensor is sometimes called an active pixel sensor imager, or active-pixel image sensor. History Background While researching metal–oxide–semiconductor (MOS) technology, Willard Boyle and George E. Smith realized that an electric charge could be stored on a tiny MOS capacitor, which became the basic building block of the charge-coupled device (CCD), which they invented in 1969. An issue with CCD technology was its need for nearly perfect charge transfer in read out, which, "makes their radiation [tolerance?] 'soft', difficult to use under low light conditions, difficult to manufacture in large array sizes, difficult to integrate with on-chip electronics, difficult to use at low temperatures, difficult to use at high frame rates, and difficult to manufacture in non-silicon materials that extend wavelength response." At RCA Laboratories, a research team including Paul K. Weimer, W.S. Pike and G. Sadasiv in 1969 proposed a solid-state image sensor with sca
https://en.wikipedia.org/wiki/LCS35
LCS35 is a cryptographic challenge and a puzzle set by Ron Rivest in 1999. The challenge is to calculate the value w = 2^(2^t) (mod n), where t is a 14-digit (or 47-bit) integer, namely 79685186856218, and n is a 616-digit (or 2048-bit) integer which is the product of two large primes (which are not given). The value of w can then be used to decrypt the ciphertext z, another 616-digit integer. The plaintext provides the concealed information about the factorisation of n, allowing the solution to be easily verified. The idea behind the challenge is that the only known way to find the value of w without knowing the factorisation of n is by t successive squarings. The value of t was chosen to make this brute force calculation take about 35 years, using 1999 chip speeds as a starting point and taking into account Moore's law. Rivest notes that "just as a failure of Moore's Law could make the puzzle harder than intended, a breakthrough in the art of factoring would make the puzzle easier than intended." The challenge was set at (and takes its name from) the 35th anniversary celebrations of the MIT Laboratory for Computer Science, now part of the MIT Computer Science and Artificial Intelligence Laboratory. The LCS35 challenge was solved on April 15, 2019, twenty years later, by programmer Bernard Fabrot. The actual text was a "!!! Happy Birthday LCS !!!" message. On May 14, 2019, Ronald L. Rivest published a new version of LCS35 (named CSAIL2019) to extend the puzzle out to the year 2034. References External links Description of the LCS35 Time Capsule Crypto-Puzzle, Ronald L. Rivest Time capsule opening ceremony Cryptography contests
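A toy-scale Python sketch of the intended computation (an added illustration; the stand-in parameters are assumptions, since the real t and n make the loop infeasible by design):

```python
t = 1000
n = 10403                  # 101 * 103, a tiny stand-in for the 2048-bit modulus

w = 2
for _ in range(t):         # computes w = 2**(2**t) mod n, one squaring at a time
    w = w * w % n
print(w)

# Knowing the factorisation allows a shortcut: reduce the tower exponent 2**t
# modulo phi(n) first (valid here because gcd(2, n) = 1).
p, q = 101, 103
phi = (p - 1) * (q - 1)
print(pow(2, pow(2, t, phi), n) == w)   # True
```

The asymmetry between the two computations is the point of the puzzle: without the primes, only the t squarings, which are inherently sequential and hence hard to parallelize, are known to work.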
https://en.wikipedia.org/wiki/God%27s%20algorithm
God's algorithm is a notion originating in discussions of ways to solve the Rubik's Cube puzzle, but which can also be applied to other combinatorial puzzles and mathematical games. It refers to any algorithm which produces a solution having the fewest possible moves. The allusion to the deity is based on the notion that an omniscient being would know an optimal step from any given configuration. Scope Definition The notion applies to puzzles that can assume a finite number of "configurations", with a relatively small, well-defined arsenal of "moves" that may be applicable to configurations and then lead to a new configuration. Solving the puzzle means to reach a designated "final configuration", a singular configuration, or one of a collection of configurations. To solve the puzzle a sequence of moves is applied, starting from some arbitrary initial configuration. Solution An algorithm can be considered to solve such a puzzle if it takes as input an arbitrary initial configuration and produces as output a sequence of moves leading to a final configuration (if the puzzle is solvable from that initial configuration, otherwise it signals the impossibility of a solution). A solution is optimal if the sequence of moves is as short as possible. The highest value of this, among all initial configurations, is known as God's number, or, more formally, the minimax value. God's algorithm, then, for a given puzzle, is an algorithm that solves the puzzle and produces only optimal solutions. Some writers, such as David Joyner, consider that for an algorithm to be properly referred to as "God's algorithm", it should also be practical, meaning that the algorithm does not require extraordinary amounts of memory or time. For example, using a giant lookup table indexed by initial configurations would allow solutions to be found very quickly, but would require an extraordinary amount of memory. Instead of asking for a full solution, one can equivalently ask for a single move from an initial but not final configuration, namely the first move of some optimal solution.
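For puzzles whose configuration space is small enough to enumerate, breadth-first search is a straightforward (if memory-hungry) way to realize such an algorithm, since BFS reaches goal configurations at minimal depth. A Python sketch, added here as an illustration; the function names and the toy swap puzzle are assumptions:

```python
from collections import deque

def gods_algorithm(start, is_goal, moves):
    """Return a fewest-moves solution from `start`, or None if unsolvable.
    `moves(c)` yields (move_label, next_configuration) pairs."""
    if is_goal(start):
        return []
    parent = {start: None}              # configuration -> (predecessor, move)
    queue = deque([start])
    while queue:
        config = queue.popleft()
        for label, nxt in moves(config):
            if nxt in parent:
                continue
            parent[nxt] = (config, label)
            if is_goal(nxt):            # BFS reaches goals at optimal depth
                path, cur = [], nxt
                while parent[cur] is not None:
                    cur, step = parent[cur]
                    path.append(step)
                return list(reversed(path))
            queue.append(nxt)
    return None

# Toy puzzle: sort a tuple by adjacent swaps; BFS finds a shortest swap sequence.
def swaps(t):
    for i in range(len(t) - 1):
        s = list(t)
        s[i], s[i + 1] = s[i + 1], s[i]
        yield (f"swap {i}", tuple(s))

print(gods_algorithm((3, 1, 2), lambda t: t == (1, 2, 3), swaps))  # ['swap 0', 'swap 1']
```

This is exactly the impractical extreme Joyner's criterion rules out for large puzzles: the `parent` table is a lookup table over the whole explored configuration space.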
https://en.wikipedia.org/wiki/HTTP%20cookie
HTTP cookies (also called web cookies, Internet cookies, browser cookies, or simply cookies) are small blocks of data created by a web server while a user is browsing a website and placed on the user's computer or other device by the user's web browser. Cookies are placed on the device used to access a website, and more than one cookie may be placed on a user's device during a session. Cookies serve useful and sometimes essential functions on the web. They enable web servers to store stateful information (such as items added in the shopping cart in an online store) on the user's device or to track the user's browsing activity (including clicking particular buttons, logging in, or recording which pages were visited in the past). They can also be used to save for subsequent use information that the user previously entered into form fields, such as names, addresses, passwords, and payment card numbers. Authentication cookies are commonly used by web servers to authenticate that a user is logged in, and with which account they are logged in. Without the cookie, users would need to authenticate themselves by logging in on each page containing sensitive information that they wish to access. The security of an authentication cookie generally depends on the security of the issuing website and the user's web browser, and on whether the cookie data is encrypted. Security vulnerabilities may allow a cookie's data to be read by an attacker, used to gain access to user data, or used to gain access (with the user's credentials) to the website to which the cookie belongs (see cross-site scripting and cross-site request forgery for examples). Tracking cookies, and especially third-party tracking cookies, are commonly used as ways to compile long-term records of individuals' browsing histories a potential privacy concern that prompted European and U.S. lawmakers to take action in 2011. European law requires that all websites targeting European Union member states gain "informed
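A short Python illustration using the standard library's http.cookies module (added here; the attribute choices are assumptions showing typical hardening of an authentication cookie, not a prescribed configuration):

```python
from http.cookies import SimpleCookie

# Server side: build a session cookie to emit in a Set-Cookie header.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["httponly"] = True   # hide from JavaScript (XSS mitigation)
cookie["session_id"]["secure"] = True     # only send over HTTPS
cookie["session_id"]["max-age"] = 3600    # expire after one hour
print(cookie.output())
# e.g. Set-Cookie: session_id=abc123; HttpOnly; Max-Age=3600; Secure

# Client side: parse the Cookie header a browser would send back.
incoming = SimpleCookie("session_id=abc123; theme=dark")
print(incoming["session_id"].value)       # abc123
```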
https://en.wikipedia.org/wiki/Widom%20scaling
Widom scaling (after Benjamin Widom) is a hypothesis in statistical mechanics regarding the free energy of a magnetic system near its critical point which leads to the critical exponents becoming no longer independent so that they can be parameterized in terms of two values. The hypothesis can be seen to arise as a natural consequence of the block-spin renormalization procedure, when the block size is chosen to be of the same size as the correlation length. Widom scaling is an example of universality.

Definitions The critical exponents $\alpha$, $\alpha'$, $\beta$, $\gamma$, $\gamma'$ and $\delta$ are defined in terms of the behaviour of the order parameters and response functions near the critical point as follows:

$$M(t, 0) \simeq (-t)^{\beta} \quad \text{for } t \uparrow 0, \qquad M(0, H) \simeq |H|^{1/\delta}\, \operatorname{sign}(H) \quad \text{for } H \to 0,$$
$$\chi_T(t, 0) \simeq t^{-\gamma} \quad \text{for } t \downarrow 0 \ \big( (-t)^{-\gamma'} \text{ for } t \uparrow 0 \big), \qquad c_H(t, 0) \simeq t^{-\alpha} \quad \text{for } t \downarrow 0 \ \big( (-t)^{-\alpha'} \text{ for } t \uparrow 0 \big),$$

where $t \equiv (T - T_c)/T_c$ measures the temperature relative to the critical point. Near the critical point, Widom's scaling relation reads

$$M(t, H) \simeq (-t)^{\beta} \varphi\!\left(\frac{H}{(-t)^{\beta\delta}}\right),$$

where $\varphi$ has an expansion $\varphi(x) \approx 1 + c\,x^{\omega} + \dots$, with $\omega$ being Wegner's exponent governing the approach to scaling.

Derivation The scaling hypothesis is that near the critical point, the free energy $f(t, H)$, in $d$ dimensions, can be written as the sum of a slowly varying regular part $f_r$ and a singular part $f_s$, with the singular part being a scaling function, i.e., a homogeneous function, so that
$$f_s(\lambda^{p} t, \lambda^{q} H) = \lambda f_s(t, H).$$
Then taking the partial derivative with respect to H and the form of M(t,H) gives
$$\lambda^{q} M(\lambda^{p} t, \lambda^{q} H) = \lambda M(t, H).$$
Setting $H = 0$ and $\lambda = (-t)^{-1/p}$ in the preceding equation yields
$$M(t, 0) = (-t)^{\frac{1-q}{p}} M(-1, 0) \quad \text{for } t \uparrow 0.$$
Comparing this with the definition of $\beta$ yields its value, $\beta = \frac{1-q}{p}$. Similarly, putting $t = 0$ and $\lambda = H^{-1/q}$ into the scaling relation for M yields
$$M(0, H) = H^{\frac{1-q}{q}} M(0, 1).$$
Hence $\delta = \frac{q}{1-q}$. Applying the expression for the isothermal susceptibility $\chi_T = \left(\partial M/\partial H\right)_t$ in terms of M to the scaling relation yields
$$\lambda^{2q} \chi_T(\lambda^{p} t, \lambda^{q} H) = \lambda \chi_T(t, H).$$
Setting H=0 and $\lambda = t^{-1/p}$ for $t \downarrow 0$ (resp. $\lambda = (-t)^{-1/p}$ for $t \uparrow 0$) yields $\gamma = \gamma' = \frac{2q-1}{p}$. Similarly, applying the expression for the specific heat $c_H \propto -\partial^2 f/\partial t^2$ to the scaling relation yields
$$\lambda^{2p} c_H(\lambda^{p} t, \lambda^{q} H) = \lambda c_H(t, H).$$
Taking H=0 and $\lambda = t^{-1/p}$ for $t \downarrow 0$ (or $\lambda = (-t)^{-1/p}$ for $t \uparrow 0$) yields $\alpha = \alpha' = 2 - \frac{1}{p}$.

As a consequence of Widom scaling, not all critical exponents are independent but they can be parameterized by two numbers $p, q$ with the relations expressed as
$$\alpha = \alpha' = 2 - \frac{1}{p}, \qquad \beta = \frac{1-q}{p}, \qquad \gamma = \gamma' = \frac{2q-1}{p}, \qquad \delta = \frac{q}{1-q}.$$
The relations are experimentally well verified for magnetic systems and fluids. References H. E. Stanley, Introduction to Phase Transitions and Critical Phenomena
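As an illustrative check (added here, not part of the original text), the exactly known exponents of the two-dimensional Ising model are reproduced by the parameterization with $p = 1/2$ and $q = 15/16$:

```latex
% 2D Ising exponents: \alpha = 0, \beta = 1/8, \gamma = 7/4, \delta = 15
\alpha = 2 - \tfrac{1}{p} = 2 - 2 = 0, \qquad
\beta  = \tfrac{1-q}{p} = \tfrac{1/16}{1/2} = \tfrac{1}{8}, \qquad
\gamma = \tfrac{2q-1}{p} = \tfrac{7/8}{1/2} = \tfrac{7}{4}, \qquad
\delta = \tfrac{q}{1-q} = \tfrac{15/16}{1/16} = 15.
```

These values also satisfy the derived identities $\alpha + 2\beta + \gamma = 0 + \tfrac14 + \tfrac74 = 2$ (Rushbrooke) and $\gamma = \beta(\delta - 1) = \tfrac18 \cdot 14 = \tfrac74$ (Widom).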
https://en.wikipedia.org/wiki/Singapore%20Mathematical%20Olympiad
The Singapore Mathematical Olympiad (SMO) is a mathematics competition organised by the Singapore Mathematical Society. It comprises three sections, Junior, Senior and Open, each of which is open to all pre-university students studying in Singapore who meet the age requirements for the particular section. The competition is held annually, and the first round of each section is usually held in late May or early June. The second round is usually held in late June or early July. History The Singapore Mathematical Society (SMS) has been organising mathematical competitions since the 1950s, launching the first inter-school Mathematical Competition in 1956. The Mathematical Competition was renamed to Singapore Mathematical Olympiad in 1995. In 2016, the SMS attempted to make the SMO more inviting to students by aligning questions more closely with the school curriculum, although solutions still require considerable insight and creativity in addition to sound mathematical knowledge. In 2020 and 2021, the written round (Round 1) in all sections was postponed to September due to the COVID-19 pandemic, while the invitational round (Round 2) in all sections was cancelled. The normal competition timeline was resumed in 2022. Junior Section There are two rounds in the Junior Section: a written round (Round 1) and an invitational round (Round 2). The paper in Round 1 comprises 5 multiple-choice questions, each with five options, and 20 short answer questions. The Junior section is geared towards Lower Secondary students, and topics tested include number theory, combinatorics, geometry, algebra, and probability. Beginning in 2006, a second round was added, based on the Senior Invitational Round, in the form of a 5-question, 3-hour long paper requiring full-length solutions. Only the top 10% of students from Round 1 are eligible to take Round 2. Senior Section There are two rounds in the Senior Section: a written round (Round 1) and an invitational round (Round 2). The
https://en.wikipedia.org/wiki/Venezuelan%20equine%20encephalitis%20virus
Venezuelan equine encephalitis virus is a mosquito-borne viral pathogen that causes Venezuelan equine encephalitis or encephalomyelitis (VEE). VEE can affect all equine species, such as horses, donkeys, and zebras. After infection, equines may suddenly die or show progressive central nervous system disorders. Humans also can contract this disease. Healthy adults who become infected by the virus may experience flu-like symptoms, such as high fevers and headaches. People with weakened immune systems and the young and the elderly can become severely ill or die from this disease. The virus that causes VEE is transmitted primarily by mosquitoes that bite an infected animal and then bite and feed on another animal or human. The speed with which the disease spreads depends on the subtype of the VEE virus and the density of mosquito populations. Enzootic subtypes of VEE are diseases endemic to certain areas. Generally these serotypes do not spread to other localities. Enzootic subtypes are associated with the rodent-mosquito transmission cycle. These forms of the virus can cause human illness but generally do not affect equine health. Epizootic subtypes, on the other hand, can spread rapidly through large populations. These forms of the virus are highly pathogenic to equines and can also affect human health. Equines, rather than rodents, are the primary animal species that carry and spread the disease. Infected equines develop an enormous quantity of virus in their circulatory system. When a blood-feeding insect feeds on such animals, it picks up this virus and transmits it to other animals or humans. Although other animals, such as cattle, swine, and dogs, can become infected, they generally do not show signs of the disease or contribute to its spread. The virion is spherical and approximately 70 nm in diameter. It has a lipid membrane with glycoprotein surface proteins spread around the outside. Surrounding the nuclear material is a nucleocapsid that has an icosahedral symmetry.
https://en.wikipedia.org/wiki/Jonathan%20Partington
Jonathan Richard Partington (born 4 February 1955) is an English mathematician who is Emeritus Professor of pure mathematics at the University of Leeds. Education Professor Partington was educated at Gresham's School, Holt, and Trinity College, Cambridge, where he completed his PhD thesis entitled "Numerical ranges and the Geometry of Banach Spaces" under the supervision of Béla Bollobás. Career Partington works in the area of functional analysis, sometimes applied to control theory, and is the author of several books in this area. He was formerly editor-in-chief of the Journal of the London Mathematical Society, a position he held jointly with his Leeds colleague John Truss. Partington's extra-mathematical activities include the invention of the March March march, an annual walk starting at March, Cambridgeshire. He is also known as a writer or co-writer of some of the earliest British text-based computer games, including Acheton, Hamil, Murdac, Avon, Fyleet, Crobe, Sangraal, and SpySnatcher, which started life on the Phoenix computer system at the University of Cambridge Computer Laboratory. These are still available on the IF Archive. Books External links Professor Jonathan R. Partington at the University of Leeds 1955 births Living people People from Holt, Norfolk 20th-century English mathematicians 21st-century English mathematicians Mathematical analysts People educated at Gresham's School Alumni of Trinity College, Cambridge Fellows of Pembroke College, Cambridge Fellows of Fitzwilliam College, Cambridge Academics of the University of Leeds
https://en.wikipedia.org/wiki/Fibre%20Channel%20network%20protocols
Communication between devices in a Fibre Channel network uses different elements of the Fibre Channel standards. Transmission words and ordered sets All Fibre Channel communication is done in units of four 10-bit codes. This group of four codes is called a transmission word. An ordered set is a transmission word that includes some combination of control (K) codes and data (D) codes. AL_PAs Each device has an Arbitrated Loop Physical Address (AL_PA). These addresses are defined by an 8-bit field but must have neutral disparity as defined in the 8b/10b coding scheme. That reduces the number of possible values from 256 to 134. The 134 possible values have been divided between the fabric, FC_AL ports, and other special purposes as follows: Metadata In addition to the transfer of data, it is necessary for Fibre Channel communication to include some metadata. This allows for the setting up of links, sequence management, and other control functions. The metadata falls into two types: primitives, which consist of a four-character transmission word, and non-data frames, which are more complex structures. Primitives All primitives are four characters in length. They begin with the control character K28.5, followed by three data characters. In some primitives the three data characters are fixed; in others they can be varied to change the meaning or to act as parameters for the primitive. In some cases the last two parameter characters are identical. Parameters are shown in the table below in the form of their hexadecimal 8-bit values. This is clearer than their full 10-bit (Dxx.x) form as shown in the Fibre Channel standards: Note 1: The first parameter byte of the EOF primitive can have one of four different values (8A, 95, AA, or B5). This is done so that the EOF primitive can rebalance the disparity of the whole frame. The remaining two parameter bytes define whether the frame is ending normally, terminating the transfer, or is to be aborted due to an error. Note 2: The
https://en.wikipedia.org/wiki/Gerchberg%E2%80%93Saxton%20algorithm
The Gerchberg–Saxton (GS) algorithm is an iterative phase retrieval algorithm for retrieving the phase of a complex-valued wavefront from two intensity measurements acquired in two different planes. Typically, the two planes are the image plane and the far field (diffraction) plane, and the wavefront propagation between these two planes is given by the Fourier transform. The original paper by Gerchberg and Saxton considered image and diffraction pattern of a sample acquired in an electron microscope. It is often necessary to know only the phase distribution from one of the planes, since the phase distribution on the other plane can be obtained by performing a Fourier transform on the plane whose phase is known. Although often used for two-dimensional signals, the GS algorithm is also valid for one-dimensional signals. The pseudocode below performs the GS algorithm to obtain a phase distribution for the plane "Source", such that its Fourier transform would have the amplitude distribution of the plane "Target". Pseudocode algorithm Let: FT – forward Fourier transform IFT – inverse Fourier transform i – the imaginary unit, √−1 (square root of −1) exp – exponential function (exp(x) = ex) Target and Source be the Target and Source Amplitude planes respectively A, B, C & D be complex planes with the same dimension as Target and Source Amplitude – Amplitude-extracting function: e.g. for complex z = x + iy, amplitude(z) = sqrt(x·x + y·y) for real x, amplitude(x) = |x| Phase – Phase extracting function: e.g. Phase(z) = arctan(y / x) end Let algorithm Gerchberg–Saxton(Source, Target, Retrieved_Phase) is A := IFT(Target) while error criterion is not satisfied B := Amplitude(Source) × exp(i × Phase(A)) C := FT(B) D := Amplitude(Target) × exp(i × Phase(C)) A := IFT(D) end while Retrieved_Phase = Phase(A) This is just one of the many ways to implement the GS algorithm. Aside from op
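A NumPy rendering of the same loop (an added sketch, not from the original paper; it replaces the "error criterion" with a fixed iteration count, and the toy usage at the end is an assumption):

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, iterations=200):
    """NumPy sketch of the pseudocode above for 2D amplitude planes."""
    A = np.fft.ifft2(target_amp.astype(complex))       # A := IFT(Target)
    for _ in range(iterations):
        B = source_amp * np.exp(1j * np.angle(A))      # impose Source amplitude
        C = np.fft.fft2(B)                             # propagate to the far field
        D = target_amp * np.exp(1j * np.angle(C))      # impose Target amplitude
        A = np.fft.ifft2(D)                            # propagate back
    return np.angle(A)                                 # Retrieved_Phase

# Usage: a uniform source beam whose far-field amplitude should match `target`.
rng = np.random.default_rng(0)
source = np.ones((64, 64))
target = np.abs(np.fft.fft2(np.exp(1j * rng.uniform(0, 2 * np.pi, (64, 64)))))
phase = gerchberg_saxton(source, target)
```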
https://en.wikipedia.org/wiki/Interface%20control%20document
An interface control document (ICD) in systems engineering and software engineering, provides a record of all interface information (such as drawings, diagrams, tables, and textual information) generated for a project. The underlying interface documents provide the details and describe the interface or interfaces between subsystems or to a system or subsystem. Overview An ICD is the umbrella document over the system interfaces; examples of what these interface specifications should describe include: The inputs and outputs of a single system, documented in individual SIRS (Software Interface Requirements Specifications) and HIRS (Hardware Interface Requirements Specifications) documents, would fall under "The Wikipedia Interface Control Document". The interface between two systems or subsystems, e.g. "The Doghouse to Outhouse Interface" would also have a parent ICD. The complete interface protocol from the lowest physical elements (e.g., the mating plugs, the electrical signal voltage levels) to the highest logical levels (e.g., the level 7 application layer of the OSI model) would each be documented in the appropriate interface requirements spec and fall under a single ICD for the "system". The purpose of the ICD is to control and maintain a record of system interface information for a given project. This includes all possible inputs to and all potential outputs from a system for some potential or actual user of the system. The internal interfaces of a system or subsystem are documented in their respective interface requirements specifications, while human-machine interfaces might be in a system design document (such as a software design document). Interface control documents are a key element of systems engineering as they control the documented interface(s) of a system, as well as specify a set of interface versions that work together, and thereby bound the requirements. Characteristics An application programming interface is a form of interface for a
https://en.wikipedia.org/wiki/Rack%20lift
A rack lift is a type of elevator consisting of a cage attached to vertical rails affixed to the walls of a tower or shaft. The cage is propelled up and down by an electric motor that drives a pinion gear, which engages a rack gear likewise attached to the wall between the rails. References Elevators
https://en.wikipedia.org/wiki/Pseudonormal%20space
In mathematics, in the field of topology, a topological space is said to be pseudonormal if given two disjoint closed sets in it, one of which is countable, there are disjoint open sets containing them. Note the following: Every normal space is pseudonormal. Every pseudonormal space is regular. An example of a pseudonormal Moore space that is not metrizable was given by , in connection with the conjecture that all normal Moore spaces are metrizable. References Topology Properties of topological spaces
https://en.wikipedia.org/wiki/Perfect%20set
In general topology, a subset of a topological space is perfect if it is closed and has no isolated points. Equivalently: the set $S$ is perfect if $S = S'$, where $S'$ denotes the set of all limit points of $S$, also known as the derived set of $S$. In a perfect set, every point can be approximated arbitrarily well by other points from the set: given any point of $S$ and any neighborhood of the point, there is another point of $S$ that lies within the neighborhood. Furthermore, any point of the space that can be so approximated by points of $S$ belongs to $S$. Note that the term perfect space is also used, incompatibly, to refer to other properties of a topological space, such as being a Gδ space. As another possible source of confusion, also note that having the perfect set property is not the same as being a perfect set. Examples Examples of perfect subsets of the real line are the empty set, all closed intervals, the real line itself, and the Cantor set. The latter is noteworthy in that it is totally disconnected. Whether a set is perfect or not (and whether it is closed or not) depends on the surrounding space. For instance, the set of rational numbers in $[0,1]$ is perfect as a subset of the space $\mathbb{Q}$ but not perfect as a subset of the space $\mathbb{R}$, in which it is not closed. Connection with other topological properties Every topological space can be written in a unique way as the disjoint union of a perfect set and a scattered set. Cantor proved that every closed subset of the real line can be uniquely written as the disjoint union of a perfect set and a countable set. This is also true more generally for all closed subsets of Polish spaces, in which case the theorem is known as the Cantor–Bendixson theorem. Cantor also showed that every non-empty perfect subset of the real line has cardinality $2^{\aleph_0}$, the cardinality of the continuum. These results are extended in descriptive set theory as follows: If X is a complete metric space with no isolated points, then the Cantor space 2ω can be continuously embedded into X. Thus X has cardinality at least $2^{\aleph_0}$.
https://en.wikipedia.org/wiki/Chip%20PC%20Technologies
Chip PC Technologies is a developer and manufacturer of thin client solutions and management software for server-based computing, a network architecture in which applications are deployed, managed and can be fully executed on the server. History Chip PC was founded in 2000 by Aviv Soffer and Ora Meir Soffer and raised its first round of financing from R.H. Technologies Ltd., an electronics contract manufacturing group. In 2005 Elbit Systems acquired 20% of the company. Later, the company established partnerships with Dell, which distributes its products, and Microsoft. In June 2007, it raised NIS 26 million in stocks, bonds, and warrants in an IPO on the Tel Aviv Stock Exchange. In November 2007, the company won Europe's largest thin client tender thus far, to supply 20,000 thin client PCs and management software to RZF, the tax authority of the State of North Rhine-Westphalia in Germany. Overview Chip PC supplies thin clients to multinational and public sector organizations, having won first place in an independent thin-client evaluation among 26 thin clients from 9 vendors worldwide. Among Chip PC customers are top organizations from various verticals, such as healthcare, finance, defense (Israeli Navy), government (US police), and education. Although the company's main target markets are enterprises and large organizations, it modifies and customizes models to fit other markets, such as the networked home, SOHO (Small-Office-Home-Office), point of sale and others. See also Thin client Mini PC Jack PC References External links Computer hardware companies Electronics companies of Israel Computer terminals Thin clients Rehovot
https://en.wikipedia.org/wiki/Ugly%20duckling%20theorem
The ugly duckling theorem is an argument showing that classification is not really possible without some sort of bias. More particularly, it assumes finitely many properties combinable by logical connectives, and finitely many objects; it asserts that any two different objects share the same number of (extensional) properties. The theorem is named after Hans Christian Andersen's 1843 story "The Ugly Duckling", because it shows that a duckling is just as similar to a swan as two swans are to each other. It was derived by Satosi Watanabe in 1969. Mathematical formula Suppose there are n things in the universe, and one wants to put them into classes or categories. One has no preconceived ideas or biases about what sorts of categories are "natural" or "normal" and what are not. So one has to consider all the possible classes that could be, all the possible ways of making a set out of the n objects. There are $2^n$ such ways, the size of the power set of n objects. One can use that to measure the similarity between two objects, and one would see how many sets they have in common. However, one cannot. Any two objects have exactly the same number of classes in common if we can form any possible class, namely $2^{n-1}$ (half the total number of classes there are). To see this is so, one may imagine each class is represented by an n-bit string (or binary encoded integer), with a zero for each element not in the class and a one for each element in the class. As one finds, there are $2^n$ such strings. As all possible choices of zeros and ones are there, any two bit-positions will agree exactly half the time. One may pick two elements and reorder the bits so they are the first two, and imagine the numbers sorted lexicographically. The first $2^{n-1}$ numbers will have bit #1 set to zero, and the second $2^{n-1}$ will have it set to one. Within each of those blocks, the top $2^{n-2}$ will have bit #2 set to zero and the other $2^{n-2}$ will have it as one, so they agree on two blocks of $2^{n-2}$, or on half of all the cases, no matter which two elements are picked.
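A short Python check of this counting argument (an added illustration; the function name and bit-mask encoding of classes are assumptions):

```python
from itertools import combinations

def shared_classes(n, i, j):
    """Number of classes (subsets of an n-element universe, encoded as bit
    masks) on which objects i and j agree: both in the class, or both out."""
    return sum(1 for mask in range(2 ** n)
               if ((mask >> i) & 1) == ((mask >> j) & 1))

n = 5
counts = {shared_classes(n, i, j) for i, j in combinations(range(n), 2)}
print(counts)   # {16}: every pair shares exactly 2**(n-1) of the 2**n classes
```

For n = 5 every pair of objects, duckling or swan, shares exactly 16 of the 32 possible classes, as the theorem asserts.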
https://en.wikipedia.org/wiki/Black%20hole%20%28networking%29
In networking, a black hole refers to a place in the network where incoming or outgoing traffic is silently discarded (or "dropped"), without informing the source that the data did not reach its intended recipient. When examining the topology of the network, the black holes themselves are invisible, and can only be detected by monitoring the lost traffic; hence the name, as astronomical black holes cannot be directly observed. Dead addresses The most common form of black hole is simply an IP address that specifies a host machine that is not running or an address to which no host has been assigned. Even though TCP/IP provides a means of communicating the delivery failure back to the sender via ICMP, traffic destined for such addresses is often just dropped. Note that a dead address will be undetectable only to protocols that are both connectionless and unreliable (e.g., UDP). Connection-oriented or reliable protocols (TCP, RUDP) will either fail to connect to a dead address or will fail to receive expected acknowledgements. For IPv6, the black hole prefix is 100::/64, described by RFC 6666. For IPv4, no black hole address is explicitly defined; however, the reserved IP addresses can help achieve a similar effect. For example, 192.0.2.0/24 is reserved for use in documentation and examples by RFC 5737; while the RFC advises that the addresses in this range are not routed, this is not a requirement. Firewalls and "stealth" ports Most firewalls (and routers for household use) can be configured to silently discard packets addressed to forbidden hosts or ports, resulting in small or large "black holes" in the network. Personal firewalls that do not respond to ICMP echo requests ("ping") have been designated by some vendors as being in "stealth mode". Despite this, in most networks the IP addresses of hosts with firewalls configured in this way are easily distinguished from invalid or otherwise unreachable IP addresses: on encountering the latter, a router will generally respond with an ICMP "destination unreachable" error, whereas the silently dropping firewall returns nothing at all.
https://en.wikipedia.org/wiki/Regular%20tree%20grammar
In theoretical computer science and formal language theory, a regular tree grammar is a formal grammar that describes a set of directed trees, or terms. A regular word grammar can be seen as a special kind of regular tree grammar, describing a set of single-path trees. Definition A regular tree grammar G is defined by the tuple G = (N, Σ, Z, P), where N is a finite set of nonterminals, Σ is a ranked alphabet (i.e., an alphabet whose symbols have an associated arity) disjoint from N, Z is the starting nonterminal, with Z ∈ N, and P is a finite set of productions of the form A → t, with A ∈ N and t ∈ TΣ(N), where TΣ(N) is the associated term algebra, i.e. the set of all trees composed from symbols in Σ ∪ N according to their arities, where nonterminals are considered nullary. Derivation of trees The grammar G implicitly defines a set of trees: any tree that can be derived from Z using the rule set P is said to be described by G. This set of trees is known as the language of G. More formally, the relation ⇒G on the set TΣ(N) is defined as follows: A tree t1 ∈ TΣ(N) can be derived in a single step into a tree t2 ∈ TΣ(N) (in short: t1 ⇒G t2), if there is a context S and a production (A → t) ∈ P such that: t1 = S[A], and t2 = S[t]. Here, a context means a tree with exactly one hole in it; if S is such a context, S[t] denotes the result of filling the tree t into the hole of S. The tree language generated by G is the language L(G) = { t ∈ TΣ : Z ⇒G* t }. Here, TΣ denotes the set of all trees composed from symbols of Σ, while ⇒G* denotes successive applications of ⇒G. A language generated by some regular tree grammar is called a regular tree language. Examples Let G1 = (N1,Σ1,Z1,P1), where N1 = {Bool, BList } is our set of nonterminals, Σ1 = { true, false, nil, cons(.,.) } is our ranked alphabet, arities indicated by dummy arguments (i.e. the symbol cons has arity 2), Z1 = BList is our starting nonterminal, and the set P1 consists of the following productions: Bool → false Bool → true BList → nil BList → cons(Bool,BList) A
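A small Python sketch that derives random trees of the language of the example grammar G1 (an added illustration; the tuple encoding of trees and all names are assumptions):

```python
import random

# Encoding of G1: nonterminals map to right-hand sides; a tree is either a
# nullary symbol like "nil" or a tuple ("cons", subtree1, subtree2).
PRODUCTIONS = {
    "Bool": ["false", "true"],
    "BList": ["nil", ("cons", "Bool", "BList")],
}

def derive(symbol, rng):
    """Expand a (non)terminal into a tree by repeatedly applying productions."""
    if symbol not in PRODUCTIONS:          # terminal symbol: leave as-is
        return symbol
    choice = rng.choice(PRODUCTIONS[symbol])
    if isinstance(choice, tuple):          # expand each child nonterminal
        head, *children = choice
        return (head,) + tuple(derive(c, rng) for c in children)
    return derive(choice, rng)

rng = random.Random(1)
print(derive("BList", rng))  # e.g. ('cons', 'true', ('cons', 'false', 'nil'))
```

Every tree this produces is a list of Booleans in the cons/nil encoding, i.e. a member of the regular tree language L(G1).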
https://en.wikipedia.org/wiki/WiiConnect24
WiiConnect24 was a feature of the Nintendo Wi-Fi Connection for the Wii console. It was first announced at Electronic Entertainment Expo (E3) in mid-2006 by Nintendo. It enabled the user to remain connected to the Internet while the console was on standby. For example, in Animal Crossing: City Folk, a friend could send messages to another player without the recipient being present in the game at the same time as the sender. On June 27, 2013, WiiConnect24 service features were globally terminated. Consequently, the Wii channels that required it, online data exchange via Wii Message Board, and passive online features for certain games (the latter two of which made use of 16-digit Wii Friend Codes) have all been rendered unusable. The Wii U does not officially support WiiConnect24, so most preloaded and downloadable Wii channels were unavailable on the Wii U's Wii Mode menu and Wii Shop Channel respectively, even prior to WiiConnect24's termination. On the discontinuation date, the defunct downloadable Wii channels were removed from the Wii Shop Channel. WiiConnect24 has been succeeded by SpotPass, a different trademark name for similar content-pushing functions that the Nintendo Network service can perform for the newer Nintendo 3DS and Wii U consoles. In 2015, a fan-made service called RiiConnect24 was established as a replacement for WiiConnect24, aiming to bring back WiiConnect24 to those who have a homebrewed Wii console. As of today, the service offers online access to all of the Wii's channels released in America and Europe (other than video on demand services) as well as sending messages to other users in the Wii Message Board. Another notable example of a homebrew application meant to bring back WiiConnect24 functionality is WiiLink. Service WiiConnect24 was used to receive content such as Wii Message Board messages sent from other Wii consoles, Miis, emails, updated channel and game content, and notifications of software updates. If the Standby Connect
https://en.wikipedia.org/wiki/Bennett%20House%20%28Franklin%2C%20Tennessee%29
The Bennett House is a recording studio located on 4th Avenue North in Franklin, Tennessee. Built in 1875, the two-story building has served as a residence, a clothing store, and, starting in 1980, a recording studio used by many popular music artists when recording in Tennessee. Producers who have frequently recorded at the studio include 1970s rock and roll producer Norbert Putnam (Kris Kristofferson, Dan Fogelberg, Jimmy Buffett, Dusty Springfield), country music producer Bob Montgomery (Joe Diffie, Waylon Jennings), and producer Keith Thomas (Amy Grant, Vanessa Williams, Selena, 98 Degrees). Thomas would even have one of the two studios in the building named after him when "Studio A" became known as "The Thomas Room." Other artists to use the studio include Phil Keaggy, Randy Stonehill, and Chagall Guevara. In the early 1990s, Montgomery produced acts such as Joe Diffie, Doug Stone, Jo-El Sonnier, George Jones, Tammy Wynette, Willie Nelson, Vince Gill and many others. Gene Eichelberger, an audio engineer probably best known for recording Dobie Gray's "Drift Away", engineered many of these sessions on a Studer A800 24-track analog machine with Bob Montgomery. The Bennett House Recording Studios ceased all tracking and mixing operations on July 1, 2008. History The house is named after Walter James Bennett, a soldier serving in the Confederate Army during the American Civil War. For a time, Bennett served on the staff of Major General William Whiting until he was captured in Virginia in 1864. Bennett spent the remainder of the Civil War in prison at Fort Donelson, Tennessee, until his release in 1865. In 1872, Bennett purchased the lot on Indigo Street, which was later renamed 4th Avenue North, where the Bennett House is located. The home remained in the Bennett family until 1967, when its ownership passed to someone outside the family for the first time in ninety-two years. It still stands at 134 4th Avenue North in Franklin, TN as a stately Victorian mansion.
https://en.wikipedia.org/wiki/Extraordinary%20optical%20transmission
Extraordinary optical transmission (EOT) is the phenomenon of greatly enhanced transmission of light through a subwavelength aperture in an otherwise opaque metallic film which has been patterned with a regularly repeating periodic structure. Generally, when light of a certain wavelength falls on a subwavelength aperture, it is diffracted isotropically in all directions, with minimal far-field transmission. This is the understanding from classical aperture theory as described by Bethe. In EOT, however, the regularly repeating structure enables much higher transmission efficiency to occur, up to several orders of magnitude greater than that predicted by classical aperture theory. It was first described in 1998. This phenomenon, which has been fully analyzed with a microscopic scattering model, is partly attributed to the presence of surface plasmon resonances and constructive interference. A surface plasmon (SP) is a collective excitation of the electrons at the junction between a conductor and an insulator, and is one of a series of interactions between light and a metal surface called plasmonics. Currently, there is experimental evidence of EOT outside the optical range. Analytical approaches also predict EOT on perforated plates with a perfect conductor model. Holes can somewhat emulate plasmons at other regions of the electromagnetic spectrum where they do not exist. Then, the plasmonic contribution is a very particular peculiarity of the EOT resonance and should not be taken as the main contribution to the phenomenon. More recent work has shown a strong contribution from overlapping evanescent wave coupling, which explains why surface plasmon resonance enhances the EOT effect on both sides of a metallic film at optical frequencies, but also accounts for the terahertz-range transmission. Simple analytical explanations of this phenomenon have been elaborated, emphasizing the similarity between arrays of particles and arrays of holes, and establishing that the phenomen
https://en.wikipedia.org/wiki/Autapomorphy
In phylogenetics, an autapomorphy is a distinctive derived trait that is unique to a given taxon: it is found in only one taxon and in no others, not even the outgroup or the taxa most closely related to the focal taxon (which may be a species, a family or, in general, any clade). It can therefore be considered an apomorphy in relation to a single taxon. The word autapomorphy, introduced in 1950 by the German entomologist Willi Hennig, is derived from the Greek words αὐτός, autos "self"; ἀπό, apo "away from"; and μορφή, morphḗ "shape".

Discussion

Because autapomorphies are present in only a single taxon, they convey no information about relationships and are therefore not useful for inferring phylogenetic relationships. However, autapomorphy, like synapomorphy and plesiomorphy, is a relative concept that depends on the taxon in question: an autapomorphy at a given level may well be a synapomorphy at a less inclusive level. An example of an autapomorphy can be seen in modern snakes. Snakes have lost the two pairs of legs that characterize all of Tetrapoda, while the taxa closest to Ophidia – as well as their common ancestors – all retain two pairs of legs. The absence of legs is therefore an autapomorphy of Ophidia.

The autapomorphic species concept is one of many methods that scientists might use to define and distinguish species from one another. This definition assigns species on the basis of the amount of divergence associated with reproductive incompatibility, measured essentially by the number of autapomorphies. The grouping method is often referred to as the "monophyletic species concept" or the "phylospecies" concept and was popularized by D. E. Rosen in 1979. Within this definition, a species is seen as "the least inclusive monophyletic group definable by at least one autapomorphy". While this model of speciation is useful in that it avoids non-monophyletic groupings, it has its critics.
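Since an autapomorphy is simply a character state restricted to a single terminal taxon, scanning a character matrix for candidates is mechanical. The sketch below is an illustrative toy, not code from any phylogenetics package; the taxa, characters and states are hypothetical, and a real analysis would first polarize characters against an outgroup to confirm that each unique state is derived rather than ancestral.

    # Illustrative sketch: flag candidate autapomorphies in a character matrix.
    # A state held by exactly one taxon is a candidate autapomorphy; whether it
    # is truly derived must be checked against an outgroup (not done here).

    def find_autapomorphies(matrix):
        """Return {taxon: [character indices]} whose state is unique to that taxon."""
        taxa = list(matrix)
        n_chars = len(next(iter(matrix.values())))
        result = {t: [] for t in taxa}
        for i in range(n_chars):
            states = [matrix[t][i] for t in taxa]
            for t in taxa:
                if states.count(matrix[t][i]) == 1:
                    result[t].append(i)
        return result

    # Character 0: leglessness (state 1) is unique to "snake", mirroring
    # the Ophidia example above. All names and codings are hypothetical.
    matrix = {
        "lizard":    [0, 0],
        "crocodile": [0, 1],
        "snake":     [1, 0],
    }
    print(find_autapomorphies(matrix))
    # {'lizard': [], 'crocodile': [1], 'snake': [0]}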
https://en.wikipedia.org/wiki/Prelink
In computing, prebinding, also called prelinking, is a method of optimizing application load times by resolving library symbols prior to launch.

Background

Most computer programs consist of code that requires external shared libraries to execute. These libraries are normally integrated with the program at run time by a loader, in a process called dynamic linking. While dynamic linking has advantages in code size and maintainability, it has drawbacks as well: every time a program is run, the loader must resolve (find) the relevant libraries. Because libraries can end up at different memory addresses from run to run, this resolution carries a performance penalty, and the penalty grows with each additional library to resolve. Prelinking reduces this penalty by resolving libraries in advance; afterwards, resolution occurs only if the libraries have changed since they were prelinked, for example after an upgrade.

Mac OS

Mac OS stores executables in the Mach-O file format.

Mac OS X

Mac OS X performs prebinding in the "Optimizing" stage of installing system software or certain applications. Prebinding has changed several times within the Mac OS X series. Before 10.2, prebinding happened only during the installation procedure (the aforementioned "Optimizing" stage). From 10.2 through 10.3, the OS checked for prebinding at launch time, and the first time an application ran it would be prebound, making subsequent launches faster. Prebinding could also be run manually, and some OS-level installers did so. In 10.4, only OS libraries were prebound. In 10.5 and later, Apple replaced prebinding with a dyld shared cache mechanism, which provided better OS performance.

Linux

On Linux, prelinking is accomplished via the prelink program, a free program written by Jakub Jelínek of Red Hat for ELF binaries. Performance results have been mixed, but prelinking seems to aid systems with a large number of libraries, such as KDE.

prelink randomization

When run with the "-R" option, prelink randomly selects the base addresses at which libraries are loaded, which hampers attacks that rely on libraries residing at predictable addresses.
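The cost that prelinking amortizes is easiest to see at the point where it is usually hidden: each load of a shared library asks the dynamic loader to locate the file and bind its symbols. The Python sketch below makes that step explicit via ctypes; it illustrates run-time resolution in general, not the prelink tool itself, and the library lookup is platform-dependent (find_library may return None on systems without a separately loadable libm).

    # Run-time dynamic linking made explicit: CDLL() asks the loader to map
    # the library and resolve it now -- the work prelink does ahead of time
    # for ordinary executables. Illustrative only; not part of prelink.
    import ctypes
    import ctypes.util

    libm_path = ctypes.util.find_library("m")  # e.g. "libm.so.6" on Linux
    libm = ctypes.CDLL(libm_path)              # loader maps and resolves here

    libm.cos.argtypes = [ctypes.c_double]      # declare the C signature
    libm.cos.restype = ctypes.c_double
    print(libm.cos(0.0))                       # 1.0 -- symbol bound at run time

Prelinking does not remove this binding step; it precomputes the results and stores them in the binary so that, on a typical launch, the loader can skip most of the search-and-relocate work.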
https://en.wikipedia.org/wiki/Juvenile%20%28organism%29
A juvenile is an individual organism (especially an animal) that has not yet reached its adult form, sexual maturity or size. Juveniles can look very different from the adult form, particularly in colour, and may not fill the same ecological niche as the adult. In many organisms the juvenile has a different name from the adult (see List of animal names).

Some organisms reach sexual maturity through a short metamorphosis, such as ecdysis in many insects and some other arthropods. For others, the transition from juvenile to fully mature is a more prolonged process – puberty in humans and other species (such as higher primates and whales), for example. In such cases, juveniles undergoing this transition are sometimes called subadults.

Many invertebrates cease development upon reaching adulthood; the pre-adult stages of such invertebrates are larvae or nymphs. In vertebrates and some invertebrates (e.g. spiders), larval forms (e.g. tadpoles) are usually considered a developmental stage of their own, and "juvenile" refers to a post-larval stage that is not fully grown and not sexually mature. In amniotes, the embryo represents the larval stage; here, a "juvenile" is an individual in the time between hatching/birth/germination and reaching maturity.

Examples

For animal larval juveniles, see larva
Juvenile birds or bats can be called fledglings
For cat juveniles, see kitten
For dog juveniles, see puppy
For human juvenile life stages, see childhood and adolescence, an intermediary period between the onset of puberty and full physical, psychological, and social adulthood