https://en.wikipedia.org/wiki/Voice%20user%20interface
A voice-user interface (VUI) enables spoken human interaction with computers, using speech recognition to understand spoken commands and answer questions, and typically text to speech to play a reply. A voice command device is a device controlled with a voice user interface. Voice user interfaces have been added to automobiles, home automation systems, computer operating systems, home appliances like washing machines and microwave ovens, and television remote controls. They are the primary way of interacting with virtual assistants on smartphones and smart speakers. Older automated attendants (which route phone calls to the correct extension) and interactive voice response systems (which conduct more complicated transactions over the phone) can respond to the pressing of keypad buttons via DTMF tones, but those with a full voice user interface allow callers to speak requests and responses without having to press any buttons. Newer voice command devices are speaker-independent, so they can respond to multiple voices, regardless of accent or dialectal influences. They are also capable of responding to several commands at once, separating vocal messages, and providing appropriate feedback, accurately imitating a natural conversation.

Overview

A VUI is the interface to any speech application. Not long ago, controlling a machine by simply talking to it was possible only in science fiction. Until recently, this area was considered to be artificial intelligence. However, advances in technologies like text-to-speech, speech-to-text, natural language processing, and cloud services have contributed to the mass adoption of these types of interfaces. VUIs have become more commonplace, and people are taking advantage of the value that these hands-free, eyes-free interfaces provide in many situations. VUIs need to respond to input reliably, or they will be rejected and often ridiculed by their users. Designing a good VUI requires interdisciplinary talents of computer science, linguistics, and human factors psychology.
https://en.wikipedia.org/wiki/Multicategory
In mathematics (especially category theory), a multicategory is a generalization of the concept of category that allows morphisms of multiple arity. If morphisms in a category are viewed as analogous to functions, then morphisms in a multicategory are analogous to functions of several variables. Multicategories are also sometimes called operads, or colored operads.

Definition

A (non-symmetric) multicategory consists of: a collection (often a proper class) of objects; for every finite sequence of objects $X_1, \dots, X_n$ and every object $Y$, a set of morphisms from $X_1, \dots, X_n$ to $Y$; and for every object $X$, a special identity morphism (with $n = 1$) from $X$ to $X$. Additionally, there are composition operations: given a sequence of sequences of objects $(X_{j,1}, \dots, X_{j,n_j})$ for $j = 0, \dots, m-1$, a sequence of objects $Y_0, \dots, Y_{m-1}$, and an object $Z$, if each $f_j$ is a morphism from $X_{j,1}, \dots, X_{j,n_j}$ to $Y_j$, and $g$ is a morphism from $Y_0, \dots, Y_{m-1}$ to $Z$, then there is a composite morphism $g(f_0, \dots, f_{m-1})$ from the concatenated sequence $X_{0,1}, \dots, X_{m-1,n_{m-1}}$ to $Z$. This must satisfy certain axioms: if $m = 1$, $Z = Y_0$, and $g$ is the identity morphism for $Y_0$, then $g(f_0) = f_0$; if each $n_j = 1$, $X_{j,1} = Y_j$, and $f_j$ is the identity morphism for $Y_j$, then $g(f_0, \dots, f_{m-1}) = g$; and an associativity condition: composing a family of morphisms $h_{j,i}$ with the $f_j$ first and then composing the results with $g$, or composing the $h_{j,i}$ directly with the already-formed composite $g(f_0, \dots, f_{m-1})$, yields identical morphisms to $Z$.

Comcategories

A comcategory (co-multi-category) is a totally ordered set $O$ of objects together with a set $A$ of multiarrows and two functions assigning to each multiarrow its head in $O$ and its ground in $O^{\%}$, where $O^{\%}$ is the set of all finite ordered sequences of elements of $O$. The dual image of a multiarrow $f$ may be summarized by writing $f$ as an arrow from its ground to its head. A comcategory $C$ also has a multiproduct with the usual character of a composition operation, and $C$ is said to be associative if a multiproduct axiom holds in relation to this operator. Any multicategory, symmetric or non-symmetric, together with a total ordering of the object set, can be made into an equivalent comcategory.

A multiorder is a comcategory satisfying the following conditions: there is at most one multiarrow with given head and ground, and each object $x$ has a unit multiarrow.
https://en.wikipedia.org/wiki/INMOS%20G364%20framebuffer
The G364 framebuffer was a line of graphics adapters using the SGS Thomson INMOS G364 colour video controller, produced by INMOS (known for their transputer, and eventually acquired by SGS Thomson and incorporated into STMicroelectronics) in the early 1990s. The G364 included a RAMDAC and a 64-bit interface to VRAM graphical memory to implement a framebuffer, but did not include any hardware-based graphical acceleration other than a hardware cursor function. The G364 was largely similar in design and functionality to the G300 framebuffer, but had a 64-bit VRAM interface instead of the slower 32-bit interface of the lower-priced G300. The INMOS G364 is quite similar to the G332 found on the Personal DECstation and Dell PowerLine 450DE/2 DGX Graphics Workstation. Although the G364 was capable of providing comparatively high resolution output (up to 1600×1200 pixels at 8 bits per pixel, in many cases), of a kind typically achieved only in Unix workstations such as those of Sun Microsystems or SGI, it was not a popular chipset among the personal computer manufacturers of the early 1990s and was not adopted by any major workstation manufacturers. The G364 framebuffer was used in an after-market Amiga graphics card, and as the primary graphics system sold with the MIPS Magnum 4000 series of MIPS-based Windows NT workstations.

Amiga cards based on the G364: EGS SPECTRUM 110/24, Rainbow III, Visiona Paint (G300).

The G332 found use in the State Machine G8 and Computer Concepts Colour Card for the Acorn Archimedes range of personal computers, these cards providing a secondary framebuffer to which the main display memory was copied periodically, and also offering a broader 24-bit palette for all graphics modes, including individually programmable colours for 256-colour modes. The capabilities of the G332 were reported as being "almost identical" to those of the ARM VIDC20, which was announced at the time these adapter cards became available.

See also: Graphics processing unit
https://en.wikipedia.org/wiki/Coherent%20file%20distribution%20protocol
Coherent File Distribution Protocol (CFDP) is an IETF-documented experimental protocol intended for high-speed one-to-many file transfers. Class 1 is assured delivery, class 2 is blind unassured delivery.
https://en.wikipedia.org/wiki/GameTap
GameTap was an online video game service established by Turner Broadcasting System (TBS) in 2006. It provided users with classic arcade video games and game-related video content. The service was acquired by French online video game service Metaboli in 2008 as a wholly owned subsidiary, with Metaboli aiming to create a global games service. The service remained active until October 2015, when it was shut down by Metaboli.

Features

GameTap was conceived primarily as an online subscription rental service, competing against mail-based services like GameFly. GameTap offered two subscription levels: a Premium subscription with access to the entire content library, and a Classic subscription with access to older console and arcade games running in emulation. GameTap also sold games via online distribution. GameTap initially offered a limited selection of games for free play without a subscription, but this option was discontinued. Originally, GameTap was designed to offer not only video games but a complete media hub (GameTap TV), taking advantage of the TBS catalog as well as offering original video content, including the animated series Revisioned: Tomb Raider and new episodes of Space Ghost Coast to Coast. GameTap TV has since been discontinued. Most multiplayer games can be played by two users on the same computer, while many others not originally intended to be played outside of a LAN may be played over the internet by using a VPN client such as Hamachi. A limited number of games have been enhanced with an online leaderboard and challenge lobby, adding internet multiplayer to games that previously could only be played face to face. Every Monday GameTap held a leaderboard tournament with a different game each week.

GameTap Originals

GameTap funded the development of a number of titles, with the games subsequently premiering as GameTap exclusives. Such games include Sam & Max Season One and Myst Online: Uru Live.
https://en.wikipedia.org/wiki/CRISPR
CRISPR (an acronym for clustered regularly interspaced short palindromic repeats) is a family of DNA sequences found in the genomes of prokaryotic organisms such as bacteria and archaea. These sequences are derived from DNA fragments of bacteriophages that had previously infected the prokaryote. They are used to detect and destroy DNA from similar bacteriophages during subsequent infections. Hence these sequences play a key role in the antiviral (i.e. anti-phage) defense system of prokaryotes and provide a form of acquired immunity. CRISPR is found in approximately 50% of sequenced bacterial genomes and nearly 90% of sequenced archaea.

Cas9 (or "CRISPR-associated protein 9") is an enzyme that uses CRISPR sequences as a guide to recognize and open up specific strands of DNA that are complementary to the CRISPR sequence. Cas9 enzymes together with CRISPR sequences form the basis of a technology known as CRISPR-Cas9 that can be used to edit genes within organisms. This editing process has a wide variety of applications, including basic biological research, development of biotechnological products, and treatment of diseases. The development of the CRISPR-Cas9 genome editing technique was recognized by the Nobel Prize in Chemistry in 2020, which was awarded to Emmanuelle Charpentier and Jennifer Doudna.

History

Repeated sequences

The discovery of clustered DNA repeats took place independently in three parts of the world. The first description of what would later be called CRISPR is from Osaka University researcher Yoshizumi Ishino and his colleagues in 1987. They accidentally cloned part of a CRISPR sequence together with the "iap" gene (isozyme conversion of alkaline phosphatase) from the genome of Escherichia coli, which was their target. The organization of the repeats was unusual: repeated sequences are typically arranged consecutively, without interspersing different sequences. They did not know the function of the interrupted clustered repeats.
https://en.wikipedia.org/wiki/Thermochromic%20ink
Thermochromic ink (also called thermochromatic ink) is a type of dye that changes color when temperatures increase or decrease. It is often used in the manufacture of toys and in product packaging, as well as in thermometers. Thermochromic ink can also turn transparent when heat is applied; an example of this type of thermochromic ink is found on the corners of some examination mark sheets, where it proves that the sheet has not been edited or photocopied, and on certain pizza boxes to show the temperature of the product. On packaging it can be used to detect temperature history during shipping and to indicate proper heating in an oven.

Examples

On June 20, 2017, the United States Postal Service released the first application of thermochromic ink to postage stamps in its Total Eclipse of the Sun Forever stamp to commemorate the solar eclipse of August 21, 2017. When pressed with a finger, body heat turns the black circle in the center of the stamp into an image of the full moon. The stamp image is a photo of a total solar eclipse seen in Jalu, Libya, on March 29, 2006. The photo was taken by retired NASA astrophysicist Fred Espenak, aka "Mr. Eclipse".

See also: Thermochromism, Security printing, Active packaging
https://en.wikipedia.org/wiki/Steenrod%20algebra
In algebraic topology, the Steenrod algebra was defined to be the algebra of stable cohomology operations for mod $p$ cohomology. For a given prime number $p$, the Steenrod algebra is the graded Hopf algebra over the field $\mathbb{F}_p$ of order $p$, consisting of all stable cohomology operations for mod $p$ cohomology. It is generated by the Steenrod squares $Sq^i$, introduced by Norman Steenrod for $p = 2$, and by the Steenrod reduced $p$-th powers $P^i$ and the Bockstein homomorphism $\beta$ for $p > 2$. The term "Steenrod algebra" is also sometimes used for the algebra of cohomology operations of a generalized cohomology theory.

Cohomology operations

A cohomology operation is a natural transformation between cohomology functors. For example, if we take cohomology with coefficients in a ring $R$, the cup product squaring operation yields a family of cohomology operations $H^n(X; R) \to H^{2n}(X; R)$, $x \mapsto x \smile x$. Cohomology operations need not be homomorphisms of graded rings; see the Cartan formula below.

These operations do not commute with suspension; that is, they are unstable. (This is because if $Y$ is a suspension of a space $X$, the cup product on the cohomology of $Y$ is trivial.) Steenrod constructed stable operations $Sq^i \colon H^n(X; \mathbb{Z}/2) \to H^{n+i}(X; \mathbb{Z}/2)$ for all $i$ greater than zero. The notation $Sq$ and their name, the Steenrod squares, come from the fact that $Sq^n$ restricted to classes of degree $n$ is the cup square. There are analogous operations for odd primary coefficients, usually denoted $P^i$ and called the reduced $p$-th power operations. The $Sq^i$ generate a connected graded algebra over $\mathbb{Z}/2$, where the multiplication is given by composition of operations. This is the mod 2 Steenrod algebra. In the case $p > 2$, the mod $p$ Steenrod algebra is generated by the $P^i$ and the Bockstein operation $\beta$ associated to the short exact sequence $0 \to \mathbb{Z}/p \to \mathbb{Z}/p^2 \to \mathbb{Z}/p \to 0$. In the case $p = 2$, the Bockstein element is $Sq^1$ and the reduced $p$-th power $P^i$ is $Sq^{2i}$.

As a cohomology ring

We can summarize the properties of the Steenrod operations as generators in the cohomology ring of Eilenberg–MacLane spectra, since there is an isomorphism giving a direct sum decomposition of all possible cohomology operations.
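For reference, the Cartan formula mentioned above has the following standard form for $p = 2$ (a well-known identity, reproduced here because the article's displayed formulas were lost in extraction):
$$Sq^n(x \smile y) = \sum_{i+j=n} Sq^i(x) \smile Sq^j(y).$$
It is this failure of the individual $Sq^i$ to respect products, while the sum formula above holds, that makes them operations on the cohomology ring rather than ring homomorphisms.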
https://en.wikipedia.org/wiki/Hydrobiology
Hydrobiology is the science of life and life processes in water. Much of modern hydrobiology can be viewed as a sub-discipline of ecology, but the sphere of hydrobiology also includes taxonomy, economic and industrial biology, morphology, and physiology. The one distinguishing aspect is that all of these fields relate to aquatic organisms. Most work is related to limnology and can be divided into lotic system ecology (flowing waters) and lentic system ecology (still waters).

One of the significant areas of current research is eutrophication. Special attention is paid to biotic interactions in plankton assemblages, including the microbial loop, the mechanisms influencing algal blooms, phosphorus load, and lake turnover. Another subject of research is the acidification of mountain lakes. Long-term studies are carried out on changes in the ionic composition of the water of rivers, lakes, and reservoirs in connection with acid rain and fertilization. One goal of current research is elucidation of the basic environmental functions of the ecosystem in reservoirs, which are important for water quality management and water supply.

Much of the early work of hydrobiologists concentrated on the biological processes utilized in sewage treatment and water purification, especially slow sand filters. Other historically important work sought to provide biotic indices for classifying waters according to the biotic communities that they supported. This work continues to this day in Europe in the development of classification tools for assessing water bodies for the EU Water Framework Directive.

A hydrobiologist technician conducts field analysis for hydrobiology. They identify plants and living species, locate their habitats, and count them. They also identify pollutants and nuisances that can affect the aquatic fauna and flora. They take samples and write reports of their observations for publication. A hydrobiologist engineer is involved more deeply in the study process.
https://en.wikipedia.org/wiki/Apple%20Interactive%20Television%20Box
The Apple Interactive Television Box (AITB) is a television set-top box developed by Apple Computer (now Apple Inc.) in partnership with a number of global telecommunications firms, including British Telecom and Belgacom. Prototypes of the unit were deployed in large test markets in parts of the United States and Europe in 1994 and 1995, but the product was canceled shortly thereafter and was never mass-produced or marketed.

Overview

The AITB was designed as an interface between a consumer and an interactive television service. The unit's remote control would allow a user to choose what content would be shown on a connected television, and to seek with fast forward and rewind. In this regard it is similar to a modern satellite receiver or TiVo unit. The box would only pass along the user's choices to a central content server for streaming instead of issuing content itself. There were also plans for game shows, educational material for children, and other forms of content made possible by the interactive qualities of the device.

Early conceptual prototypes have an unfinished feel. Near-completion units have a high production quality, the internal components often lack prototype indicators, and some units have FCC approval stickers. These facts, along with a full online manual, suggest the product was very near completion before being canceled.

Infrastructure

Because the machine was designed to be part of a subscription data service, the AITB units are mostly inoperable. The ROM contains only what is required to continue booting from an external hard drive or from its network connection. Many of the prototypes do not appear to even attempt to boot; this is likely dependent on changes in the ROM. The ROM itself contains parts of a downsized System 7.1, enabling it to establish a network connection to the media servers provided by Oracle. The Oracle Media Server (OMS) initially ran on hardware produced by Larry Ellison's nCube Systems company.
https://en.wikipedia.org/wiki/Change%20of%20variables
In mathematics, a change of variables is a basic technique used to simplify problems in which the original variables are replaced with functions of other variables. The intent is that when expressed in new variables, the problem may become simpler, or equivalent to a better understood problem.

Change of variables is an operation that is related to substitution. However these are different operations, as can be seen when considering differentiation (chain rule) or integration (integration by substitution).

A very simple example of a useful variable change can be seen in the problem of finding the roots of the sixth-degree polynomial equation
$$x^6 - 9x^3 + 8 = 0.$$
Sixth-degree polynomial equations are generally impossible to solve in terms of radicals (see Abel–Ruffini theorem). This particular equation, however, may be written
$$(x^3)^2 - 9(x^3) + 8 = 0$$
(this is a simple case of a polynomial decomposition). Thus the equation may be simplified by defining a new variable $u = x^3$. Substituting $x^3$ by $u$ in the polynomial gives
$$u^2 - 9u + 8 = 0,$$
which is just a quadratic equation with the two solutions $u = 1$ and $u = 8$. The solutions in terms of the original variable are obtained by substituting $x^3$ back in for $u$, which gives $x^3 = 1$ and $x^3 = 8$. Then, assuming that one is interested only in real solutions, the solutions of the original equation are $x = 1$ and $x = 2$.

Simple example

Consider the system of equations
$$xy + x + y = 71,$$
$$x^2 y + x y^2 = 880,$$
where $x$ and $y$ are positive integers with $x > y$. (Source: 1991 AIME.) Solving this normally is not very difficult, but it may get a little tedious. However, we can rewrite the second equation as $xy(x + y) = 880$. Making the substitutions $s = x + y$ and $t = xy$ reduces the system to $s + t = 71$, $st = 880$. Solving this gives $(s, t) = (16, 55)$ and $(s, t) = (55, 16)$. Back-substituting the first ordered pair gives us $x + y = 16$, $xy = 55$, which gives the solution $(x, y) = (11, 5)$. Back-substituting the second ordered pair gives us $x + y = 55$, $xy = 16$, which gives no solutions in positive integers. Hence the solution that solves the system is $(x, y) = (11, 5)$.

Formal introduction

Let $M$, $N$ be smooth manifolds and let $\Phi \colon M \to N$ be a $C^k$-diffeomorphism between them, that is: $\Phi$ is a $k$ times continuously differentiable, bijective map from $M$ to $N$ with a $k$ times continuously differentiable inverse from $N$ to $M$.
https://en.wikipedia.org/wiki/Many-to-many
Many-to-many communication occurs when information is shared between groups. Members of a group receive information from multiple senders. Wikis are a type of many-to-many communication, where multiple editors collaborate to create content that is disseminated among a wide audience. Video conferencing, online gaming, chat rooms, and internet forums are also types of many-to-many communication.

See also: Point-to-point (telecommunications), Point-to-multipoint communication
https://en.wikipedia.org/wiki/Anime%20Web%20Turnpike
Anime Web Turnpike (also known as Anipike) was a web directory founded in August 1995 by Jay Fubler Harvey. It served as a large database of links to various anime and manga websites. With well over 40,000 links, it had one of the largest organized collections of anime- and manga-related links. Users could add their own website to the database by setting up a username on the site and adding it to the applicable category. The website also offered services such as a community forum, a chat room, and a magazine. The Anime Broadcasting Network, Inc. acquired the Anime Web Turnpike in 2000 with plans to enhance and expand the site, but multiple technical issues delayed these plans. As of November 2014, the site had gone offline. The site was back online as of July 2016, with no new posts since 2014. As of March 2021, the website has not been updated.

Reception

In 1995, the site was mentioned among 101 Internet sites to visit. The site and its creator were featured in the 2003 documentary film Otaku Unite! In 2003, Anime Web Turnpike was ranked the number three "must visit" anime website by the online magazine Animefringe.
https://en.wikipedia.org/wiki/Portastudio
The TASCAM Portastudio was the first four-track recorder based on a standard compact audio cassette tape. The term Portastudio is exclusive to TASCAM, though it is generally used to describe all self-contained cassette-based multitrack recorders dedicated to music production. The Portastudio, and particularly its first iteration, the Teac 144, is credited with launching the home-recording wave, which allowed musicians to cheaply record and produce music at home, and is cited as one of the most significant innovations in music production technology.

The Teac 144 Portastudio made its debut in 1979, at the annual meeting of the Audio Engineering Society. It was followed by several other models by TASCAM, and eventually by models from several other manufacturers. For the first time, it enabled musicians to affordably record several instrumental and vocal parts individually on different tracks of the built-in four-track cassette recorder and later blend all the parts together while transferring them to another standard, two-channel stereo tape deck (remix and mixdown) to form a stereo recording. The Tascam Portastudio 244, introduced in 1982, improved upon the previous design with overall better sound quality and more features, including dbx noise reduction, dual/concentric sweepable EQs, and the ability to record on up to 4 tracks simultaneously. These machines were typically used by amateur and professional musicians to record demos, although they are still used today in lo-fi recording. The analog Portastudios by TASCAM (a division of TEAC) and similar units by Fostex, Akai, Yamaha, Sansui, Marantz, Vestax, Vesta Fire, TOA, Audio-Technica, Peavey, and others generally recorded on high-bias cassette tapes. Most of the machines were four-track, but there were also six-track and eight-track units. Some newer digital models record to a hard disk, allowing for digital effects and up to 32 tracks of audio.

Function

The Portastudio supported the bouncing of content between tracks.
https://en.wikipedia.org/wiki/Viridos%20%28company%29
In September 2021, Synthetic Genomics Inc. (SGI), a private company located in La Jolla, California, changed its name to Viridos. The company is focused on the field of synthetic biology, especially harnessing photosynthesis with microalgae to create alternatives to fossil fuels. Viridos designs and builds biological systems to address global sustainability problems.

Synthetic biology is an interdisciplinary branch of biology and engineering, combining fields such as biotechnology, evolutionary biology, molecular biology, systems biology, biophysics, computer engineering, and genetic engineering. Synthetic Genomics uses techniques such as software engineering, bioprocessing, bioinformatics, biodiscovery, analytical chemistry, fermentation, cell optimization, and DNA synthesis to design and build biological systems. The company produces or performs research in the fields of sustainable biofuels, insect-resistant crops, transplantable organs, targeted medicines, and DNA synthesis instruments, as well as a number of biological reagents.

Core markets

SGI mainly operates in three end markets: research, bioproduction, and applied products. The research segment focuses on genomics solutions for academic and commercial research organizations. The commercial products and services include instrumentation, reagents, DNA synthesis services, and bioinformatics services and software. In 2015, the company launched the BioXP 3200 system, a fully automated benchtop instrument that produces DNA fragments from many different sources for genomic data. The company's efforts in bio-based production are intended both to improve existing production hosts and to develop entirely new synthetic production hosts, with the goal of more efficient routes to bioproducts. SGI has a number of commercial as well as research and development stage programs across a variety of industries, including several research partnerships.

History

Synthetic Genomics was founded in the spring of 2005 by J. Craig Venter.
https://en.wikipedia.org/wiki/Point%20process
In statistics and probability theory, a point process or point field is a collection of mathematical points randomly located on a mathematical space such as the real line or Euclidean space. Point processes can be used for spatial data analysis, which is of interest in such diverse disciplines as forestry, plant ecology, epidemiology, geography, seismology, materials science, astronomy, telecommunications, computational neuroscience, economics, and others.

There are different mathematical interpretations of a point process, such as a random counting measure or a random set. Some authors regard a point process and stochastic process as two different objects, such that a point process is a random object that arises from or is associated with a stochastic process, though it has been remarked that the difference between point processes and stochastic processes is not clear. Others consider a point process as a stochastic process, where the process is indexed by sets of the underlying space on which it is defined, such as the real line or $n$-dimensional Euclidean space. Other stochastic processes such as renewal and counting processes are studied in the theory of point processes. Sometimes the term "point process" is not preferred, as historically the word "process" denoted an evolution of some system in time, so a point process is also called a random point field.

Point processes on the real line form an important special case that is particularly amenable to study, because the points are ordered in a natural way, and the whole point process can be described completely by the (random) intervals between the points. These point processes are frequently used as models for random events in time, such as the arrival of customers in a queue (queueing theory), of impulses in a neuron (computational neuroscience), particles in a Geiger counter, locations of radio stations in a telecommunication network, or searches on the world-wide web.
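To make the queueing-style examples concrete, the sketch below draws one realization of a homogeneous Poisson point process on an interval, the simplest point process on the real line, by accumulating independent exponential inter-arrival gaps. The rate and horizon are arbitrary illustrative values, not anything from the article:

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* One realization of a homogeneous Poisson point process of rate
   lambda on [0, T]: gaps between successive points are independent
   exponential(lambda) random variables. */
int main(void) {
    const double lambda = 2.0;  /* mean number of points per unit time */
    const double T = 10.0;      /* observation window */
    srand(42);
    double t = 0.0;
    for (;;) {
        double u = (rand() + 1.0) / ((double)RAND_MAX + 2.0); /* u in (0,1) */
        t += -log(u) / lambda;  /* exponential inter-arrival gap */
        if (t > T) break;
        printf("%.4f\n", t);    /* one point of the process */
    }
    return 0;
}
```

The same loop with a different gap distribution yields a renewal process, one of the other process classes mentioned above.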
https://en.wikipedia.org/wiki/Invariants%20of%20tensors
In mathematics, in the fields of multilinear algebra and representation theory, the principal invariants of a second rank tensor $\mathbf{A}$ are the coefficients of the characteristic polynomial $p(\lambda) = \det(\mathbf{A} - \lambda \mathbf{E})$, where $\mathbf{E}$ is the identity operator and $\lambda$ ranges over the polynomial's eigenvalues.

More broadly, any scalar-valued function $f(\mathbf{A})$ is an invariant of $\mathbf{A}$ if and only if $f(\mathbf{A}) = f(\mathbf{Q}^T \mathbf{A} \mathbf{Q})$ for all orthogonal $\mathbf{Q}$. This means that a formula expressing an invariant in terms of components, $A_{ij}$, will give the same result for all Cartesian bases. For example, even though individual diagonal components of $\mathbf{A}$ will change with a change in basis, the sum of diagonal components will not change.

Properties

The principal invariants do not change with rotations of the coordinate system (they are objective, or in more modern terminology, satisfy the principle of material frame-indifference), and any function of the principal invariants is also objective.

Calculation of the invariants of rank two tensors

In a majority of engineering applications, the principal invariants of (rank two) tensors of dimension three are sought, such as those for the right Cauchy–Green deformation tensor.

Principal invariants

For such tensors, the principal invariants are given by
$$I_1 = \mathrm{tr}(\mathbf{A}), \qquad I_2 = \tfrac{1}{2}\left[ (\mathrm{tr}\,\mathbf{A})^2 - \mathrm{tr}(\mathbf{A}^2) \right], \qquad I_3 = \det(\mathbf{A}).$$
For symmetric tensors, these definitions are reduced. The correspondence between the principal invariants and the characteristic polynomial of a tensor, in tandem with the Cayley–Hamilton theorem, reveals that
$$\mathbf{A}^3 - I_1 \mathbf{A}^2 + I_2 \mathbf{A} - I_3 \mathbf{E} = \mathbf{0},$$
where $\mathbf{E}$ is the second-order identity tensor.

Main invariants

In addition to the principal invariants listed above, it is also possible to introduce the notion of main invariants, which are functions of the principal invariants above. These are the coefficients of the characteristic polynomial of the deviator $\mathrm{dev}(\mathbf{A}) = \mathbf{A} - \tfrac{1}{3}\mathrm{tr}(\mathbf{A})\,\mathbf{E}$, defined so that it is traceless. The separation of a tensor into a component that is a multiple of the identity and a traceless component is standard in hydrodynamics, where the former is called isotropic, providing the modified pressure, and the latter is called deviatoric.
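For concreteness, the main invariants of the deviator can be written in terms of the principal invariants; these are standard identities, supplied here because the article's displayed formulas were lost in extraction (sign conventions vary between authors):
$$J_1 = \mathrm{tr}\,(\mathrm{dev}\,\mathbf{A}) = 0, \qquad J_2 = \tfrac{1}{3} I_1^2 - I_2, \qquad J_3 = \tfrac{2}{27} I_1^3 - \tfrac{1}{3} I_1 I_2 + I_3.$$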
https://en.wikipedia.org/wiki/Spatial%20ecology
Spatial ecology studies the ultimate distributional or spatial unit occupied by a species. In a particular habitat shared by several species, each of the species is usually confined to its own microhabitat or spatial niche, because two species in the same general territory cannot usually occupy the same ecological niche for any significant length of time.

Overview

In nature, organisms are distributed neither uniformly nor at random; instead they form some sort of spatial pattern. This is due to various energy inputs, disturbances, and species interactions that result in spatially patchy structures or gradients. This spatial variance in the environment creates diversity in communities of organisms, as well as in the variety of the observed biological and ecological events. The type of spatial arrangement present may suggest certain interactions within and between species, such as competition, predation, and reproduction. On the other hand, certain spatial patterns may also rule out specific ecological theories previously thought to be true.

Although spatial ecology deals with spatial patterns, it is usually based on observational data rather than on an existing model, because nature rarely follows a set, expected order. To properly research a spatial pattern or population, the spatial extent to which it occurs must be detected. Ideally, this would be accomplished beforehand via a benchmark spatial survey, which would determine whether the pattern or process is on a local, regional, or global scale. This is rare in actual field research, however, due to the lack of time and funding, as well as the ever-changing nature of such widely studied organisms as insects and wildlife. With detailed information about a species' life stages, dynamics, demography, movement, behavior, etc., models of spatial pattern may be developed to estimate and predict events in unsampled locations.

History

Most mathematical studies in ecology in the nineteenth century assumed a uniform distribution of organisms.
https://en.wikipedia.org/wiki/Mathematical%20universe%20hypothesis
In physics and cosmology, the mathematical universe hypothesis (MUH), also known as the ultimate ensemble theory, is a speculative "theory of everything" (TOE) proposed by cosmologist Max Tegmark.

Description

Tegmark's MUH is the hypothesis that our external physical reality is a mathematical structure. That is, the physical universe is not merely described by mathematics, but is mathematics: specifically, a mathematical structure. Mathematical existence equals physical existence, and all structures that exist mathematically exist physically as well. Observers, including humans, are "self-aware substructures (SASs)". In any mathematical structure complex enough to contain such substructures, they "will subjectively perceive themselves as existing in a physically 'real' world". The theory can be considered a form of Pythagoreanism or Platonism in that it proposes the existence of mathematical entities; a form of mathematicism in that it denies that anything exists except mathematical objects; and a formal expression of ontic structural realism.

Tegmark claims that the hypothesis has no free parameters and is not observationally ruled out. Thus, he reasons, it is preferred over other theories of everything by Occam's razor. Tegmark also considers augmenting the MUH with a second assumption, the computable universe hypothesis (CUH), which says that the mathematical structure that is our external physical reality is defined by computable functions.

The MUH is related to Tegmark's categorization of four levels of the multiverse. This categorization posits a nested hierarchy of increasing diversity, with worlds corresponding to different sets of initial conditions (level 1), physical constants (level 2), quantum branches (level 3), and altogether different equations or mathematical structures (level 4).

Criticisms and responses

Andreas Albrecht of Imperial College in London called it a "provocative" solution to one of the central problems facing physics.
https://en.wikipedia.org/wiki/Uniformization%20%28set%20theory%29
In set theory, a branch of mathematics, the axiom of uniformization is a weak form of the axiom of choice. It states that if $R$ is a subset of $X \times Y$, where $X$ and $Y$ are Polish spaces, then there is a subset $f$ of $R$ that is a partial function from $X$ to $Y$, and whose domain (the set of all $x$ such that $f(x)$ exists) equals
$$\{\, x \in X \mid \exists y \, (x, y) \in R \,\}.$$
Such a function is called a uniformizing function for $R$, or a uniformization of $R$.

To see the relationship with the axiom of choice, observe that $R$ can be thought of as associating, to each element of $X$, a subset of $Y$. A uniformization of $R$ then picks exactly one element from each such subset, whenever the subset is non-empty. Thus, allowing arbitrary sets $X$ and $Y$ (rather than just Polish spaces) would make the axiom of uniformization equivalent to the axiom of choice.

A pointclass $\boldsymbol{\Gamma}$ is said to have the uniformization property if every relation $R$ in $\boldsymbol{\Gamma}$ can be uniformized by a partial function in $\boldsymbol{\Gamma}$. The uniformization property is implied by the scale property, at least for adequate pointclasses of a certain form.

It follows from ZFC alone that $\boldsymbol{\Pi}^1_1$ and $\boldsymbol{\Sigma}^1_2$ have the uniformization property. It follows from the existence of sufficiently large cardinals that $\boldsymbol{\Pi}^1_{2n+1}$ and $\boldsymbol{\Sigma}^1_{2n+2}$ have the uniformization property for every natural number $n$. Therefore, the collection of projective sets has the uniformization property.

Every relation in L(R) can be uniformized, but not necessarily by a function in L(R). In fact, L(R) does not have the uniformization property (equivalently, L(R) does not satisfy the axiom of uniformization). (Note: it is trivial that every relation in L(R) can be uniformized in V, assuming V satisfies the axiom of choice. The point is that every such relation can be uniformized in some transitive inner model of V in which the axiom of determinacy holds.)
https://en.wikipedia.org/wiki/Drum%20replacement
Drum replacement is the practice, in modern music production, of an engineer or producer recording a live drummer and replacing (or adding to) the sound of a particular drum with a pre-recorded sample. For example, a drummer might play a beat, whereupon the engineer might replace all of the snare hits with the sound of a hand-clap. It is considered by some to be one of the most arcane practices of the modern music production industry and is an example of the considerable influence of computers in modern music, even in genres not strictly classified as "electronic music."

Origins

The practice is an extension of the recording techniques of the 1970s and 1980s, in which the constant search for better or "more perfect" sound led to a variety of techniques being tested, including the extensive use of drum machines. Among these techniques was drum replacement, which was pioneered by producer Roger Nichols while in the studio with Steely Dan in the late 1970s, and it has grown in both popularity and complexity since. One of the most common uses of the technique is replacing every snare hit in a performance (which may or may not sound subjectively "good") with an "ideal" snare drum hit. Should the decision be made to use drum replacement, the actual implementation usually falls to an audio engineer during the mixing stage.

Association

Drum replacement is often mentioned by certain critics, along with Auto-Tune, harmonizers, and advanced compressors, as symptomatic of the "artificial nature" of modern Western music. Some critics suggest that the practice defeats the purpose of having a live drummer as opposed to a drum machine, since the end result is effectively the same as what a drum machine would produce if it had a custom sample recorded for it by the engineer. Others laud it as one of the subtleties of studio technique, used by engineers to give their craft more complexity.
https://en.wikipedia.org/wiki/Ride%20height
Ride height or ground clearance is the amount of space between the base of an automobile tire and the lowest point of the automobile (typically the axle); or, more properly, the shortest distance between a flat, level surface and the lowest part of a vehicle other than those parts designed to contact the ground (such as tires, tracks, skis, etc.). Ground clearance is measured with standard vehicle equipment, and for cars it is usually given with no cargo or passengers.

Function

Ground clearance is a critical factor in several important characteristics of a vehicle. For all vehicles, especially cars, variations in clearance represent a trade-off between handling, ride quality, and practicality. A higher ride height and ground clearance means that the wheels have more vertical room to travel and absorb road shocks. The car is also more capable of being driven on roads that are not level, without scraping against surface obstacles and possibly damaging the chassis and underbody. On the other hand, a higher ride height raises the car's center of mass, which makes for less precise and more dangerous handling characteristics (most notably, the chance of rollover is higher). Higher ride heights will also typically adversely affect aerodynamic properties. This is why sports cars typically have very low clearances, while off-road vehicles and SUVs have higher ones.

Example ride heights

A road car usually has a considerably lower ride height than an SUV. Two well-known extremes are the very low-slung Ferrari F40 and the very tall Hummer H1.

Specialized uses

Underslung frame

Some cars have used underslung frames to achieve a lower ride height and the consequent improvement in center of gravity. The 1905-14 cars of the American Motor Car Company are one example.

Self-leveling

Self-leveling suspension systems are designed to maintain a constant ride height regardless of load.
https://en.wikipedia.org/wiki/The%20Hobbit%20%281982%20video%20game%29
The Hobbit is an illustrated text adventure computer game released in 1982 for the ZX Spectrum home computer and based on the 1937 book The Hobbit by J. R. R. Tolkien. It was developed at Beam Software by Philip Mitchell and Veronika Megler and published by Melbourne House. It was later converted to most home computers available at the time, including the Commodore 64, BBC Micro, and Oric computers. By arrangement with the book publishers, a copy of the book was included with each game sold.

The parser was very advanced for the time and used a subset of English called Inglish. When the game was released, most adventure games used simple verb-noun parsers (allowing for simple phrases like "get lamp"), but Inglish allowed the player to type advanced sentences such as "ask Gandalf about the curious map then take sword and kill troll with it". The parser was complex and intuitive, introducing pronouns, adverbs ("viciously attack the goblin"), punctuation, and prepositions, and allowing the player to interact with the game world in ways not previously possible.

Gameplay

Many locations are illustrated by an image, based on originals designed by Kent Rees. On the tape version, to save space, each image was stored in a compressed format: only the outline information was stored, and the enclosed areas were then flood-filled on the screen. The slow CPU speed meant that it could take up to several seconds for each scene to draw. The disk-based versions of the game used pre-rendered, higher-quality images.

The game has an innovative text-based physics system, developed by Veronika Megler. Objects, including the characters in the game, have a calculated size, weight, and solidity. Objects can be placed inside other objects, attached together with rope, and damaged or broken. If the main character is sitting in a barrel and this barrel is then picked up and thrown through a trapdoor, the player goes through with it. Unlike other works of interactive fiction, the game is also played in real time: events in the game world continue even while the player is idle.
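The outline-then-fill scheme described above can be illustrated with a generic 4-connected flood fill; the game's actual routine is not public, and the tiny image here is invented for the example:

```c
#include <stdio.h>

#define W 8
#define H 6

/* '#' marks stored outline pixels; '.' is empty canvas. */
static char img[H][W + 1] = {
    "........",
    ".####...",
    ".#..#...",
    ".#..####",
    ".######.",
    "........",
};

/* Recursively paint every '.' reachable from (x, y) without
   crossing the outline. */
static void fill(int x, int y, char from, char to) {
    if (x < 0 || x >= W || y < 0 || y >= H || img[y][x] != from) return;
    img[y][x] = to;
    fill(x + 1, y, from, to); fill(x - 1, y, from, to);
    fill(x, y + 1, from, to); fill(x, y - 1, from, to);
}

int main(void) {
    fill(2, 2, '.', '*');   /* seed a point inside the outline */
    for (int y = 0; y < H; ++y)
        puts(img[y]);
    return 0;
}
```

Storing only outlines plus seed points and filling at load time trades a few seconds of drawing (as noted above) for a large saving in tape space.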
https://en.wikipedia.org/wiki/Time%20delay%20and%20integration
A time delay and integration or time delay integration (TDI) charge-coupled device (CCD) is an image sensor for capturing images of moving objects at low light levels. While using similar underlying CCD technology, in operation it contrasts with staring arrays and line-scanned arrays. It works by synchronized mechanical and electronic scanning, so that the effects of dim imaging targets on the sensor can be integrated over longer periods of time.

TDI is more an operating mode for CCDs than a separate type of CCD device altogether, even if technical optimizations for the mode are also available. The principle behind TDI (the constructive combination of separate observations) is often applicable to other sensor technologies, so that it is comparable to any long-term integrating mode of imaging, such as speckle imaging, adaptive optics, and especially long-exposure astronomical observation.

Detailed operation

It is perhaps easiest to understand TDI devices by contrast with more well-known types of CCD sensors, of which the best known is the staring array. In it, there are hundreds or thousands of adjacent rows of specially engineered semiconductor which react to light by accumulating charge, and, slightly separated in depth from them by insulation, a tightly spaced array of gate electrodes, whose electric field can be used to drive the accumulated charge around in a predictable and almost lossless fashion. In a staring array configuration, the image is exposed on the two-dimensional semiconductor surface, and then the resulting charge distribution over each line of the image is moved to the side, to be rapidly and sequentially read out by an electronic read amplifier. When done fast enough, this produces a snapshot of the applied photonic flux over the sensor; the readout can proceed in parallel over the several lines, and yields a two-dimensional image of the light applied.
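A toy calculation shows why the synchronized integration pays off. The numbers below are invented, and the model (signal accumulating linearly over the stages while uncorrelated noise grows only as the square root) is the usual idealization rather than anything specific to a particular device:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    const double signal = 5.0;  /* electrons collected per stage (made up) */
    const double noise = 10.0;  /* rms noise of a single stage (made up)   */
    for (int n = 1; n <= 256; n *= 4) {
        /* N stages: signal scales as N, uncorrelated noise as sqrt(N),
           so SNR improves as sqrt(N). */
        double snr = (signal * n) / (noise * sqrt((double)n));
        printf("stages = %3d   SNR = %6.2f\n", n, snr);
    }
    return 0;
}
```

This square-root gain is what a long exposure buys a staring array; TDI achieves it for a scene that is moving across the sensor.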
https://en.wikipedia.org/wiki/MUSASINO-1
The MUSASINO-1 was one of the earliest electronic digital computers built in Japan. Construction started at the Electrical Communication Laboratories of NTT at Musashino, Tokyo in 1952, and it was completed in July 1957. The computer was used until July 1962. Saburo Muroga, a University of Illinois visiting scholar and member of the ILLIAC I team, returned to Japan and oversaw the construction of the MUSASINO-1.

Using 519 vacuum tubes and 5,400 parametrons, the MUSASINO-1 possessed a magnetic core memory, initially of 32 (later expanded to 256) words. A word was composed of 40 bits, and two instructions could be stored in a single word. Addition time was clocked at 1,350 microseconds, multiplication at 6,800 microseconds, and division at 26.1 milliseconds. The MUSASINO-1's instruction set was a superset of the ILLIAC I's instructions, so it could generally use the latter's software. However, many of the programs for the ILLIAC used some of the unused bits in the instructions to store data, and these bits would be interpreted as different instructions by the MUSASINO-1 control circuitry.

See also: FUJIC, ILLIAC I, List of vacuum-tube computers
https://en.wikipedia.org/wiki/Teknisk%20Ukeblad
Teknisk Ukeblad (TU) is a Norwegian engineering magazine. The magazine has its headquarters in Oslo, Norway.

History and profile

TU has appeared weekly since 13 April 1883. It was published by Ingeniørforlaget, now Teknisk Ukeblad Media, jointly owned by three national professional associations of engineers and architects: the Norwegian Society of Engineers and Technologists (NITO, founded 1936), Tekna (founded in 1874), and the Norwegian Polytechnic Society (PF, founded 1852). As of 24 June 2010, TU had a total circulation of 302,000 weekly copies. Corresponding publications are Ny Teknik in Sweden, Ingeniøren in Denmark, and Technisch Weekblad in the Netherlands.
https://en.wikipedia.org/wiki/Quantifier%20elimination
Quantifier elimination is a concept of simplification used in mathematical logic, model theory, and theoretical computer science. Informally, a quantified statement "$\exists x$ such that $\varphi(x)$" can be viewed as a question "When is there an $x$ such that $\varphi(x)$?", and the statement without quantifiers can be viewed as the answer to that question. One way of classifying formulas is by the amount of quantification. Formulas with less depth of quantifier alternation are thought of as being simpler, with the quantifier-free formulas as the simplest. A theory has quantifier elimination if for every formula $\alpha$, there exists another formula $\alpha_{QF}$ without quantifiers that is equivalent to it (modulo this theory).

Examples

An example from high school mathematics says that a single-variable quadratic polynomial has a real root if and only if its discriminant is non-negative:
$$\exists x\,(a \neq 0 \wedge ax^2 + bx + c = 0) \iff a \neq 0 \wedge b^2 - 4ac \geq 0.$$
Here the sentence on the left-hand side involves a quantifier $\exists x$, while the equivalent sentence on the right does not. Examples of theories that have been shown decidable using quantifier elimination are Presburger arithmetic, algebraically closed fields, real closed fields, atomless Boolean algebras, term algebras, dense linear orders, abelian groups, random graphs, as well as many of their combinations, such as Boolean algebra with Presburger arithmetic, and term algebras with queues.

The quantifier eliminator for the theory of the real numbers as an ordered additive group is Fourier–Motzkin elimination; for the theory of the field of real numbers it is the Tarski–Seidenberg theorem. Quantifier elimination can also be used to show that "combining" decidable theories leads to new decidable theories (see the Feferman–Vaught theorem).

Algorithms and decidability

If a theory has quantifier elimination, then a specific question can be addressed: is there a method of determining $\alpha_{QF}$ for each formula $\alpha$? If there is such a method, we call it a quantifier elimination algorithm. If there is such an algorithm, then decidability for the theory reduces to deciding the truth of quantifier-free sentences.
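As a worked illustration of the Fourier–Motzkin case (an example added here, not from the article): eliminating $\exists x$ from a conjunction of bounds on $x$ amounts to pairing every lower bound with every upper bound,
$$\exists x \left( \bigwedge_i a_i \leq x \ \wedge\ \bigwedge_j x \leq b_j \right) \iff \bigwedge_{i,j} a_i \leq b_j,$$
so in the simplest case $\exists x\,(a \leq x \wedge x \leq b) \iff a \leq b$: a quantifier-free answer to the question "when is there such an $x$?".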
https://en.wikipedia.org/wiki/Counting%20quantification
A counting quantifier is a mathematical term for a quantifier of the form "there exist at least $k$ elements that satisfy property $X$". In first-order logic with equality, counting quantifiers can be defined in terms of ordinary quantifiers, so in this context they are a notational shorthand. However, they are interesting in the context of logics such as two-variable logic with counting that restrict the number of variables in formulas. Also, generalized counting quantifiers that say "there exist infinitely many" are not expressible using a finite number of formulas in first-order logic.

Definition in terms of ordinary quantifiers

Counting quantifiers can be defined recursively in terms of ordinary quantifiers. Let $\exists^{=k} x\, \phi$ denote "there exist exactly $k$ $x$ satisfying $\phi$". Then
$$\exists^{=0} x\, \phi \iff \neg \exists x\, \phi,$$
$$\exists^{=k+1} x\, \phi \iff \exists x\, \bigl( \phi \wedge \exists^{=k} y\, (\phi[y/x] \wedge y \neq x) \bigr).$$
Let $\exists^{\geq k} x\, \phi$ denote "there exist at least $k$ $x$ satisfying $\phi$". Then
$$\exists^{\geq 0} x\, \phi \iff \top,$$
$$\exists^{\geq k+1} x\, \phi \iff \exists x\, \bigl( \phi \wedge \exists^{\geq k} y\, (\phi[y/x] \wedge y \neq x) \bigr),$$
where $\phi[y/x]$ denotes $\phi$ with the free occurrences of $x$ renamed to a fresh variable $y$.

See also: Uniqueness quantification, Lindström quantifier
https://en.wikipedia.org/wiki/PGPfone
PGPfone was a secure voice telephony system developed by Philip Zimmermann in 1995. The PGPfone protocol had little in common with Zimmermann's popular PGP email encryption package, except for the use of the name. It used the ephemeral Diffie–Hellman protocol to establish a session key, which was then used to encrypt the stream of voice packets. The two parties compared a short authentication string to detect a man-in-the-middle attack, the most common method of wiretapping secure phones of this type. PGPfone could be used point-to-point (with two modems) over the public switched telephone network, or over the Internet as an early Voice over IP system. In 1996, there were no protocol standards for Voice over IP. Ten years later, Zimmermann released the successor to PGPfone, Zfone and ZRTP, a newer secure VoIP protocol based on modern VoIP standards. Zfone builds on the ideas of PGPfone. According to the MIT PGPfone web page, "MIT is no longer distributing PGPfone. Given that the software has not been maintained since 1997, we doubt it would run on most modern systems."

See also: Comparison of VoIP software, Nautilus (secure telephone), PGP word list, Secure telephone
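To illustrate the ephemeral Diffie–Hellman step, here is a toy key agreement with demo-sized numbers; this is a sketch of the pattern only (real parameters are far larger, and this is not PGPfone's actual code):

```c
#include <stdio.h>
#include <stdint.h>

/* Modular exponentiation by repeated squaring; all values stay
   below 2^31, so 64-bit intermediates cannot overflow. */
static uint64_t powmod(uint64_t b, uint64_t e, uint64_t m) {
    uint64_t r = 1;
    for (b %= m; e; e >>= 1) {
        if (e & 1) r = r * b % m;
        b = b * b % m;
    }
    return r;
}

int main(void) {
    const uint64_t p = 2147483647, g = 5;  /* demo-sized group only */
    uint64_t a = 123456, b = 987654;       /* fresh (ephemeral) secrets */
    uint64_t A = powmod(g, a, p);          /* sent by one party  */
    uint64_t B = powmod(g, b, p);          /* sent by the other  */
    uint64_t ka = powmod(B, a, p), kb = powmod(A, b, p);
    printf("shared secret: %llu == %llu\n",
           (unsigned long long)ka, (unsigned long long)kb);
    /* Each end would then derive the session key from this value and
       display a short authentication string for the users to compare
       aloud, exposing a man-in-the-middle who substituted A or B. */
    return 0;
}
```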
https://en.wikipedia.org/wiki/Oxidative%20stress
Oxidative stress reflects an imbalance between the systemic manifestation of reactive oxygen species and a biological system's ability to readily detoxify the reactive intermediates or to repair the resulting damage. Disturbances in the normal redox state of cells can cause toxic effects through the production of peroxides and free radicals that damage all components of the cell, including proteins, lipids, and DNA. Oxidative stress from oxidative metabolism causes base damage, as well as strand breaks in DNA. Base damage is mostly indirect and caused by the reactive oxygen species generated, e.g. O2− (superoxide radical), OH (hydroxyl radical) and H2O2 (hydrogen peroxide). Further, some reactive oxidative species act as cellular messengers in redox signaling. Thus, oxidative stress can cause disruptions in normal mechanisms of cellular signaling.

In humans, oxidative stress is thought to be involved in the development of attention deficit hyperactivity disorder, cancer, Parkinson's disease, Lafora disease, Alzheimer's disease, atherosclerosis, heart failure, myocardial infarction, fragile X syndrome, sickle-cell disease, lichen planus, vitiligo, autism, infection, chronic fatigue syndrome, and depression. However, reactive oxygen species can be beneficial, as they are used by the immune system as a way to attack and kill pathogens. Short-term oxidative stress may also be important in the prevention of aging by induction of a process named mitohormesis, and it is required to initiate stress response processes in plants.

Chemical and biological effects

Chemically, oxidative stress is associated with increased production of oxidizing species or a significant decrease in the effectiveness of antioxidant defenses, such as glutathione. The effects of oxidative stress depend upon the size of these changes, with a cell being able to overcome small perturbations and regain its original state. However, more severe oxidative stress can cause cell death, and even moderate oxidation can trigger apoptosis.
https://en.wikipedia.org/wiki/SipXecs
SipXecs is a free software enterprise communications system. It was initially developed in 2003 as a voice over IP telephony server by Pingtel Corporation, a company located in Boston, MA. The server was later extended with additional collaboration capabilities as part of the SIPfoundry project. Since its extension, sipXecs acts as a software implementation of the Session Initiation Protocol (SIP), making it a full IP-based communications system. SipXecs competitors include other open-source telephony and SoftSwitch solutions such as Asterisk, FreeSWITCH, and the SIP Express Router.

History

Development of sipXecs began at Pingtel Corporation in 2003. In 2004, Pingtel adopted an open-source business model and contributed the codebase to the not-for-profit organization SIPfoundry. It has been an open source project since then. Pingtel's assets were acquired by Bluesocket in July 2007. In August 2008 the Pingtel assets were acquired from Bluesocket by Nortel. Subsequent to the acquisition, Nortel released the SCS500 product based on sipXecs. SCS500 was positioned as an open, software-only telephony server for the SMB market of up to 500 users and received some recognition. It was later renamed SCS and positioned as an enterprise communications system. Subsequent to the Nortel bankruptcy and the acquisition of the Nortel assets by Avaya, sipXecs continued to be used as the basis for the Avaya Live cloud-based communications service. In April 2010 the founders of SIPfoundry founded a company offering a commercial version of the software.

Information

SipXecs is designed as a software-only, distributed cloud application. It runs on the Linux operating system (CentOS or RHEL) on either virtualized or physical servers. A minimum configuration allows running all of the sipXecs components on a single server, including the database, all available services, and the sipXecs management. Global clusters can be built using built-in auto-configuration capabilities from the centralized management system.
https://en.wikipedia.org/wiki/Pascal%27s%20simplex
In mathematics, Pascal's simplex is a generalisation of Pascal's triangle into an arbitrary number of dimensions, based on the multinomial theorem.

Generic Pascal's m-simplex

Let m (m > 0) be the number of terms of a polynomial and n (n ≥ 0) be the power the polynomial is raised to. Let $\wedge^m$ denote a Pascal's m-simplex. Each Pascal's m-simplex is a semi-infinite object, which consists of an infinite series of its components. Let $\wedge^m_n$ denote its nth component, itself a finite (m − 1)-simplex with edge length n. The nth component consists of the coefficients of the multinomial expansion of a polynomial with m terms raised to the power of n:
$$\left( x_1 + x_2 + \cdots + x_m \right)^n = \sum_{|k| = n} \binom{n}{k_1, k_2, \dots, k_m} \, x_1^{k_1} x_2^{k_2} \cdots x_m^{k_m},$$
where $|k| = k_1 + k_2 + \cdots + k_m = n$.

(Figure: an example for Pascal's 4-simplex, sliced along $k_4$; all points of the same color belong to the same nth component, from red for n = 0 to blue for n = 3.)

Specific Pascal's simplices

Pascal's 1-simplex $\wedge^1$ is not known by any special name. Its nth component (a point) is the coefficient of the expansion of a polynomial with 1 term raised to the power of n: $(x_1)^n = \binom{n}{n} x_1^n$, and this coefficient equals 1 for all n.

Pascal's 2-simplex $\wedge^2$ is known as Pascal's triangle. Its nth component (a line) consists of the coefficients of the binomial expansion of a polynomial with 2 terms raised to the power of n:
$$(x_1 + x_2)^n = \sum_{k_1 + k_2 = n} \binom{n}{k_1, k_2} \, x_1^{k_1} x_2^{k_2}.$$

Pascal's 3-simplex $\wedge^3$ is known as Pascal's tetrahedron. Its nth component (a triangle) consists of the coefficients of the trinomial expansion of a polynomial with 3 terms raised to the power of n:
$$(x_1 + x_2 + x_3)^n = \sum_{k_1 + k_2 + k_3 = n} \binom{n}{k_1, k_2, k_3} \, x_1^{k_1} x_2^{k_2} x_3^{k_3}.$$

Properties

Inheritance of components: the nth component of the (m − 1)-simplex, $\wedge^{m-1}_n$, is numerically equal to each of the (m − 1)-faces (there are m + 1 of them) of $\wedge^m_n$. From this it follows that the whole $\wedge^{m-1}$ is (m + 1)-times included in $\wedge^m$.

Equality of sub-faces: conversely, $\wedge^m_n$ is (m + 1)-times bounded by $\wedge^{m-1}_n$. From this it follows that, for given n, all i-faces are numerically equal in the nth components of all Pascal's (m > i)-simplices.

Example: the 3rd component (a 2-simplex) of Pascal's 3-simplex.
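To make the final example concrete: the 3rd component of Pascal's 3-simplex consists of the coefficients of $(x + y + z)^3$, a standard computation worked out here because the article's display was lost in extraction:
$$(x + y + z)^3 = x^3 + y^3 + z^3 + 3x^2 y + 3x^2 z + 3xy^2 + 3y^2 z + 3xz^2 + 3yz^2 + 6xyz.$$
Arranged with one row per power of $z$, the coefficients form the triangular layer

1 3 3 1
3 6 3
3 3
1

where $\binom{3}{3,0,0} = 1$, $\binom{3}{2,1,0} = 3$, and $\binom{3}{1,1,1} = 6$.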
https://en.wikipedia.org/wiki/Integer%20overflow
In computer programming, an integer overflow occurs when an arithmetic operation attempts to create a numeric value that is outside of the range that can be represented with a given number of digits – either higher than the maximum or lower than the minimum representable value. The most common result of an overflow is that the least significant representable digits of the result are stored; the result is said to wrap around the maximum (i.e. modulo a power of the radix, usually two in modern computers, but sometimes ten or another radix). An overflow condition may give results leading to unintended behavior. In particular, if the possibility has not been anticipated, overflow can compromise a program's reliability and security. For some applications, such as timers and clocks, wrapping on overflow can be desirable. The C11 standard states that for unsigned integers, modulo wrapping is the defined behavior and the term overflow never applies: "a computation involving unsigned operands can never overflow." On some processors like graphics processing units (GPUs) and digital signal processors (DSPs) which support saturation arithmetic, overflowed results would be "clamped", i.e. set to the minimum or the maximum value in the representable range, rather than wrapped around. Origin The register width of a processor determines the range of values that can be represented in its registers. Though the vast majority of computers can perform multiple-precision arithmetic on operands in memory, allowing numbers to be arbitrarily long and overflow to be avoided, the register width limits the sizes of numbers that can be operated on (e.g., added or subtracted) using a single instruction per operation. Typical binary register widths for unsigned integers include: 4-bit: maximum representable value 2^4 − 1 = 15 8-bit: maximum representable value 2^8 − 1 = 255 16-bit: maximum representable value 2^16 − 1 = 65,535 32-bit: maximum representable value 2^32 − 1 = 4,294,967,295 (th
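A minimal C sketch of the two behaviors described above — well-defined unsigned wraparound, and a pre-check that avoids signed overflow, which is undefined behavior in C:

    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        /* Unsigned arithmetic wraps modulo 2^N: UINT_MAX + 1 yields 0. */
        unsigned int u = UINT_MAX;
        printf("%u + 1 = %u\n", u, u + 1u);  /* on 32-bit int: 4294967295 + 1 = 0 */

        /* Signed overflow is undefined behavior, so test *before* adding. */
        int a = INT_MAX, b = 1;
        if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b)) {
            a = (b > 0) ? INT_MAX : INT_MIN;  /* clamp, as saturating DSPs/GPUs do */
            puts("a + b would overflow; saturating instead");
        } else {
            a = a + b;
        }
        printf("result: %d\n", a);
        return 0;
    }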
https://en.wikipedia.org/wiki/FreeRTOS
FreeRTOS is a real-time operating system kernel for embedded devices that has been ported to 35 microcontroller platforms. It is distributed under the MIT License. History The FreeRTOS kernel was originally developed by Richard Barry around 2003, and was later developed and maintained by Barry's company, Real Time Engineers Ltd. In 2017, the firm passed stewardship of the FreeRTOS project to Amazon Web Services (AWS). Barry continues to work on FreeRTOS as part of an AWS team. Implementation FreeRTOS is designed to be small and simple. It is mostly written in the C programming language to make it easy to port and maintain. It also comprises a few assembly language functions where needed, mostly in architecture-specific scheduler routines. Process management FreeRTOS provides methods for multiple threads or tasks, mutexes, semaphores and software timers. A tickless mode is provided for low power applications. Thread priorities are supported. FreeRTOS applications can be statically allocated, but objects can also be dynamically allocated with five schemes of memory management (allocation): allocate only; allocate and free with a very simple, fast algorithm; a more complex but fast allocate and free algorithm with memory coalescence; an alternative to the more complex scheme that includes memory coalescence and allows a heap to be broken across multiple memory areas; and C library allocate and free with some mutual exclusion protection. RTOSes typically do not have the more advanced features that are found in operating systems like Linux and Microsoft Windows, such as device drivers, advanced memory management, and user accounts. The emphasis is on compactness and speed of execution. FreeRTOS can be thought of as a thread library rather than an operating system, although command line interface and POSIX-like input/output (I/O) abstraction are available. FreeRTOS implements multiple threads by having the host program call a thread tick method at regular sho
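A minimal sketch of FreeRTOS task creation in C — xTaskCreate, vTaskDelay, and vTaskStartScheduler are the real kernel calls, while the task body, stack depth, and priority here are illustrative assumptions:

    #include "FreeRTOS.h"
    #include "task.h"

    /* A trivial periodic task: do some work every 500 ms. */
    static void vBlinkTask(void *pvParameters) {
        (void)pvParameters;
        for (;;) {
            /* toggle an LED, poll a sensor, etc. */
            vTaskDelay(pdMS_TO_TICKS(500));  /* block; lets other tasks run */
        }
    }

    int main(void) {
        /* 128 words of stack and priority 1 are illustrative values. */
        xTaskCreate(vBlinkTask, "blink", 128, NULL, 1, NULL);
        vTaskStartScheduler();  /* does not return while the scheduler runs */
        for (;;) {}             /* reached only if there was insufficient heap */
    }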
https://en.wikipedia.org/wiki/Zero-knowledge%20password%20proof
In cryptography, a zero-knowledge password proof (ZKPP) is a type of zero-knowledge proof that allows one party (the prover) to prove to another party (the verifier) that it knows the value of a password, without revealing anything other than the fact that it knows the password to the verifier. The term is defined in IEEE P1363.2, in reference to one of the benefits of using a password-authenticated key exchange (PAKE) protocol that is secure against off-line dictionary attacks. A ZKPP prevents any party from verifying guesses for the password without interacting with a party that knows it and, in the optimal case, provides exactly one guess in each interaction. A common use of a zero-knowledge password proof is in authentication systems where one party wants to prove its identity to a second party using a password but doesn't want the second party or anybody else to learn anything about the password. For example, apps can validate a password without processing it and a payment app can check the balance of an account without touching or learning anything about the amount. History The first methods to demonstrate a ZKPP were the encrypted key exchange methods (EKE) described by Steven M. Bellovin and Michael Merritt in 1992. A considerable number of refinements, alternatives, and variations in the growing class of password-authenticated key agreement methods were developed in subsequent years. Standards for these methods include IETF, IEEE P1363.2, and ISO-IEC 11770-4. See also Cryptographic protocol Outline of cryptography Key-agreement protocol Secure Remote Password protocol References External links David Jablon's links for password-based cryptography Password authentication
https://en.wikipedia.org/wiki/Antiisomorphism
In category theory, a branch of mathematics, an antiisomorphism (or anti-isomorphism) between structured sets A and B is an isomorphism from A to the opposite of B (or equivalently from the opposite of A to B). If there exists an antiisomorphism between two structures, they are said to be antiisomorphic. Intuitively, to say that two mathematical structures are antiisomorphic is to say that they are basically opposites of one another. The concept is particularly useful in an algebraic setting, as, for instance, when applied to rings. Simple example Let A be the binary relation (or directed graph) consisting of elements {1,2,3} and a binary relation defined as follows: Let B be the binary relation set consisting of elements {a,b,c} and a binary relation defined as follows: Note that the opposite of B (denoted Bop) is the same set of elements with the opposite binary relation (that is, reverse all the arcs of the directed graph). If we replace a, b, and c with 1, 2, and 3 respectively, we see that each rule in Bop is the same as some rule in A. That is, we can define an isomorphism φ from A to Bop by φ(1) = a, φ(2) = b, φ(3) = c; φ is then an antiisomorphism between A and B. Ring anti-isomorphisms Specializing the general language of category theory to the algebraic topic of rings, we have: Let R and S be rings and f: R → S be a bijection. Then f is a ring anti-isomorphism if f(x + y) = f(x) + f(y) and f(xy) = f(y)f(x) for all x, y in R. If R = S then f is a ring anti-automorphism. An example of a ring anti-automorphism is given by the conjugate mapping of quaternions, x = a + bi + cj + dk ↦ x* = a − bi − cj − dk, which reverses the order of products: (xy)* = y*x*. Notes References Morphisms Ring theory Algebra
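A quick check of the product-reversing law on quaternion basis elements — a worked instance supplied for illustration, using ij = k and ji = −k:

    \[ \overline{ij} = \overline{k} = -k, \qquad
       \overline{j}\,\overline{i} = (-j)(-i) = ji = -k, \]

so conjugation indeed satisfies \(\overline{xy} = \overline{y}\,\overline{x}\) on this pair, as the anti-automorphism condition requires.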
https://en.wikipedia.org/wiki/Disinfectant%20%28software%29
Disinfectant was a popular antivirus software program for the classic Mac OS. It was originally released as freeware by John Norstad in the spring of 1989. Disinfectant featured a system extension that would detect virus infections and an application with which users could scan for and remove viruses. New versions of Disinfectant were subsequently released to detect additional viruses. Bob LeVitus praised and recommended Disinfectant in 1992. In May 1998, Norstad retired Disinfectant, citing the new danger posed by macro viruses, which Disinfectant did not detect, and the inability of a single individual to maintain a program that caught all of them. References Antivirus software Macintosh-only software 1989 software
https://en.wikipedia.org/wiki/Password-authenticated%20key%20agreement
In cryptography, a password-authenticated key agreement method is an interactive method for two or more parties to establish cryptographic keys based on one or more parties' knowledge of a password. An important property is that an eavesdropper or man-in-the-middle cannot obtain enough information to be able to brute-force guess a password without further interaction with the parties for each guess (or small group of guesses). This means that strong security can be obtained using weak passwords. Types Password-authenticated key agreement generally encompasses methods such as: Balanced password-authenticated key exchange Augmented password-authenticated key exchange Password-authenticated key retrieval Multi-server methods Multi-party methods In the most stringent password-only security models, there is no requirement for the user of the method to remember any secret or public data other than the password. Password-authenticated key exchange (PAKE) is a method in which two or more parties, based only on their knowledge of a shared password, establish a cryptographic key using an exchange of messages, such that an unauthorized party (one who controls the communication channel but does not possess the password) cannot participate in the method and is constrained as much as possible from brute-force guessing the password. (The optimal case yields exactly one guess per run of the exchange.) Two forms of PAKE are balanced and augmented methods. Balanced PAKE Balanced PAKE assumes the two parties in either a client-client or client-server situation use the same secret password to negotiate and authenticate a shared key. Examples of these are: Encrypted Key Exchange (EKE) PAK and PPK SPEKE (Simple password exponential key exchange) Elliptic Curve based Secure Remote Password protocol (EC-SRP or SRP5) There is a free Java card implementation. Dragonfly – IEEE Std 802.11-2012, RFC 5931, RFC 6617 CPace SPAKE1 and SPAKE2 SESPAKE – RFC 8133 J-PAKE (Password Authenticated Key Exchang
https://en.wikipedia.org/wiki/Masahiko%20Fujiwara
Masahiko Fujiwara (Japanese: 藤原 正彦 Fujiwara Masahiko; born July 9, 1943, in Shinkyo, Manchukuo) is a Japanese mathematician and writer who is known for his book The Dignity of the Nation. He is a professor emeritus at Ochanomizu University. Biography Masahiko Fujiwara is the son of Jirō Nitta and Tei Fujiwara, who were both popular authors. He graduated from the University of Tokyo in 1966. He began writing after a two-year position as associate professor at the University of Colorado, with a book Wakaki sugakusha no Amerika designed to explain American campus life to Japanese people. He also wrote about the University of Cambridge, after a year's visit (Harukanaru Kenburijji: Ichi sugakusha no Igirisu). In a popular book on mathematics, he categorized theorems as beautiful theorems or ugly theorems. He is also known in Japan for speaking out against government reforms in secondary education. He wrote The Dignity of the Nation, which according to Time Asia was the second best selling book in the first six months of 2006 in Japan. In 2006, Fujiwara published Yo ni mo utsukushii sugaku nyumon ("An Introduction to the World's Most Elegant Mathematics") with the writer Yōko Ogawa: it is a dialogue between novelist and mathematician on the extraordinary beauty of numbers. References External links Article in the Financial Times from 2007. Online essay Essay on Literature and Mathematics Japanese essayists Mathematics popularizers Number theorists 20th-century Japanese mathematicians 21st-century Japanese mathematicians 1943 births Living people Recreational mathematicians Japanese people from Manchukuo University of Tokyo alumni University of Colorado Boulder faculty 20th-century essayists 21st-century essayists Academic staff of Ochanomizu University
https://en.wikipedia.org/wiki/Reflector%20%28antenna%29
An antenna reflector is a device that reflects electromagnetic waves. Antenna reflectors can exist as a standalone device for redirecting radio frequency (RF) energy, or can be integrated as part of an antenna assembly. Standalone reflectors The function of a standalone reflector is to redirect electromagnetic (EM) energy, generally in the radio wavelength range of the electromagnetic spectrum. Common standalone reflector types are corner reflector, which reflects the incoming signal back to the direction from which it came, commonly used in radar. flat reflector, which reflects the signal as a mirror does and is often used as a passive repeater. Integrated reflectors When integrated into an antenna assembly, the reflector serves to modify the radiation pattern of the antenna, increasing gain in a given direction. Common integrated reflector types are parabolic reflector, which focuses a beam signal into one point or directs a radiating signal into a beam. a passive element slightly longer than and located behind a radiating dipole element that absorbs and re-radiates the signal in a directional way, as in a Yagi antenna array. a flat reflector such as used in a Short backfire antenna or Sector antenna. a corner reflector used in UHF television antennas. a cylindrical reflector as used in a Cantenna. Design criteria Parameters that can directly influence the performance of an antenna with an integrated reflector: Dimensions of the reflector (Big ugly dish versus small dish) Spillover (part of the feed antenna radiation misses the reflector) Aperture blockage (also known as feed blockage: part of the feed energy is reflected back into the feed antenna and does not contribute to the main beam) Illumination taper (feed illumination reduced at the edges of the reflector) Reflector surface deviation Defocusing Cross polarization Feed losses Antenna feed mismatch Non-uniform amplitude/phase distributions The antenna efficiency is measured in terms of it
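As background on the efficiency measure the truncated sentence introduces (a standard antenna-theory formula, not taken from this article): the gain of a parabolic reflector of diameter D at wavelength λ, with aperture efficiency e_A, is

    \[ G = e_A \left( \frac{\pi D}{\lambda} \right)^{2} \]

The design criteria listed above — spillover, aperture blockage, illumination taper, surface deviation, and the other loss mechanisms — all enter as multiplicative factors in e_A.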
https://en.wikipedia.org/wiki/Nanobiotechnology
Nanobiotechnology, bionanotechnology, and nanobiology are terms that refer to the intersection of nanotechnology and biology. Because the subject has emerged only recently, bionanotechnology and nanobiotechnology serve as blanket terms for various related technologies. This discipline helps to indicate the merger of biological research with various fields of nanotechnology. Concepts that are enhanced through nanobiology include: nanodevices (such as biological machines), nanoparticles, and nanoscale phenomena that occur within the discipline of nanotechnology. This technical approach to biology allows scientists to imagine and create systems that can be used for biological research. Biologically inspired nanotechnology uses biological systems as the inspirations for technologies not yet created. However, as with nanotechnology and biotechnology, bionanotechnology does have many potential ethical issues associated with it. The most important objectives that are frequently found in nanobiology involve applying nanotools to relevant medical/biological problems and refining these applications. Developing new tools, such as peptoid nanosheets, for medical and biological purposes is another primary objective in nanotechnology. New nanotools are often made by refining the applications of the nanotools that are already being used. The imaging of native biomolecules, biological membranes, and tissues is also a major topic for nanobiology researchers. Other topics concerning nanobiology include the use of cantilever array sensors and the application of nanophotonics for manipulating molecular processes in living cells. Recently, the use of microorganisms to synthesize functional nanoparticles has been of great interest. Microorganisms can change the oxidation state of metals. These microbial processes have opened up new opportunities to explore novel applications, for example, the biosynthesis of metal nanomaterials. In contrast to chemical an
https://en.wikipedia.org/wiki/Antarctic%20Adventure
is a video game developed by Konami in 1983 for the MSX, and later for video game consoles, such as the NES and ColecoVision. The player takes the role of an Antarctic penguin, racing to various research stations owned by different countries in Antarctica (excluding the USSR). The gameplay is similar to Sega's Turbo, but plays at a much slower pace, and features platform game elements. The penguin, later named Penta, must reach the next station before time runs out while avoiding sea lions and breaks in the ice. Throughout the levels, fish jump out of ice holes and can be caught for bonus points. The game, like many early video games, has no ending – when the player reaches the last station, the game starts from the first level again, but with increased difficulty. Legacy Antarctic Adventure was followed by a sequel for the MSX computer in 1986, entitled Penguin Adventure. In addition, the penguin character Penta and his son Pentarou became mascots for Konami through the 1980s. They have made appearances in over 10 games. Of particular note are his appearances in the Parodius series of shoot 'em up games. Penta, or his son Pentarou, has appeared in medal games like Tsurikko Penta, Balloon Penta and Imo Hori Penta. A mobile version followed in 2002, with three further mobile games afterwards: one released on May 6, 2003, another as part of Konami Taisen Colosseum, and the fishing game Penta no Tsuri Boken, released for i-Revo. A screenshot from this game can briefly be seen in the introduction of Gradius ReBirth, released in 2008 for the Wii Virtual Console and in 2014 for the Wii U Virtual Console. An MSX version was re-released for the Windows Store as part of the EGG Project on November 25, 2014 in Japan. In 1990, Konami released, only in Japan, a handheld electronic game of Antarctic Adventure, although it is usually listed as South Pole (a more literal translation of the Japanese title). In 2014, Antarctic Adventure was released on a special version of the
https://en.wikipedia.org/wiki/Detention%20basin
A detention basin or retarding basin is an excavated area installed on, or adjacent to, tributaries of rivers, streams, lakes or bays to protect against flooding and, in some cases, downstream erosion by storing water for a limited period of time. These basins are also called dry ponds, holding ponds or dry detention basins if no permanent pool of water exists. Detention ponds that are designed to permanently retain some volume of water at all times are called retention basins. In its basic form, a detention basin is used to manage water quantity while having a limited effectiveness in protecting water quality, unless it includes a permanent pool feature. Functions and design Detention basins are storm water best management practices that provide general flood protection and can also control extreme floods such as a 1 in 100-year storm event. The basins are typically built during the construction of new land development projects including residential subdivisions or shopping centers. The ponds help manage the excess urban runoff generated by newly constructed impervious surfaces such as roads, parking lots and rooftops. A basin functions by allowing large flows of water to enter but limits the outflow by having a small opening at the lowest point of the structure. The size of this opening is determined by the capacity of underground and downstream culverts and washes to handle the release of the contained water. Frequently the inflow area is constructed to protect the structure from some types of damage. Offset concrete blocks in the entrance spillways are used to reduce the speed of entering flood water. These structures may also have debris drop vaults to collect large rocks. These vaults are deep holes under the entrance to the structure. The holes are wide enough to allow large rocks and other debris to fall into the holes before they can damage the rest of the structure. These vaults must be emptied after each storm event. Research has shown that dete
https://en.wikipedia.org/wiki/Dovecot%20%28software%29
Dovecot is an open-source IMAP and POP3 server for Unix-like operating systems, written primarily with security in mind. Timo Sirainen originated Dovecot and first released it in July 2002. Dovecot developers primarily aim to produce a lightweight, fast and easy-to-set-up open-source email server. The primary purpose of Dovecot is to act as a mail storage server. The mail is delivered to the server using some mail delivery agent (MDA) and is stored for later access with an email client (mail user agent, or MUA). Dovecot can also act as a mail proxy server, forwarding connections to another mail server, or act as a lightweight MUA in order to retrieve and manipulate mail on a remote server, e.g. for mail migration. According to the Open Email Survey, as of 2020, Dovecot has an installed base of at least 2.9 million IMAP servers, and has a global market share of 76.9% of all IMAP servers. The results of the same survey in 2019 gave figures of 2.6 million and 76.2%, respectively. Features Dovecot can work with standard mbox, Maildir, and its own native high-performance dbox formats. It is fully compatible with UW IMAP and Courier IMAP servers’ implementation of them, as well as mail clients accessing the mailboxes directly. Dovecot also includes a mail delivery agent (called Local delivery agent in Dovecot's documentation) and an LMTP server, with optional Sieve filtering support. Dovecot supports a variety of authentication schemes for IMAP, POP and message submission agent (MSA) access, including CRAM-MD5 and the more secure DIGEST-MD5. With version 2.2, some new features were added to Dovecot, e.g. additional IMAP command extensions, dsync was rewritten and optimized, and shared mailboxes now support per-user flags. Version 2.3 adds a message submission agent, Lua scripting for authentication, and some other improvements. Apple Inc. has included Dovecot for email services since Mac OS X Server 10.6 Snow Leopard. In 2017, Mozilla, via the Mozilla Open Sourc
https://en.wikipedia.org/wiki/List%20of%20hash%20functions
This is a list of hash functions, including cyclic redundancy checks, checksum functions, and cryptographic hash functions. Cyclic redundancy checks Adler-32 is often mistaken for a CRC, but it is not: it is a checksum. Checksums Universal hash function families Non-cryptographic hash functions Keyed cryptographic hash functions Unkeyed cryptographic hash functions See also Hash function security summary Secure Hash Algorithms NIST hash function competition Key derivation functions (category) References List Checksum algorithms Cryptography lists and comparisons
https://en.wikipedia.org/wiki/Mobile%20phone%20feature
A mobile phone feature is a capability, service, or application that a mobile phone offers to its users. Mobile phones offering only basic telephony are often referred to as feature phones. Handsets with more advanced computing ability through the use of native code try to differentiate their own products by implementing additional functions to make them more attractive to consumers. This has led to great innovation in mobile phone development over the past 20 years. The common components found on all phones are: A number of metal–oxide–semiconductor (MOS) integrated circuit (IC) chips. A battery (typically a lithium-ion battery), providing the power source for the phone functions. An input mechanism to allow the user to interact with the phone. The most common input mechanism is a keypad, but touch screens are also found in smartphones. Basic mobile phone services to allow users to make calls and send text messages. All GSM phones use a SIM card to allow an account to be swapped among devices. Some CDMA devices also have a similar card called a R-UIM. Individual GSM, WCDMA, IDEN and some satellite phone devices are uniquely identified by an International Mobile Equipment Identity (IMEI) number. All mobile phones are designed to work on cellular networks and contain a standard set of services that allow phones of different types and in different countries to communicate with each other. However, they can also support other features added by various manufacturers over the years: roaming, which permits the same phone to be used in multiple countries, provided that the operators of both countries have a roaming agreement. send and receive data and faxes (if a computer is attached), access WAP services, and provide full Internet access using technologies such as GPRS. applications like a clock, alarm, calendar, contacts, and calculator and a few games. Sending and receiving pictures and videos through MMS (which works without internet access), and for short distances with
https://en.wikipedia.org/wiki/Dolby%20Digital%20Plus
Dolby Digital Plus, also known as Enhanced AC-3 (and commonly abbreviated as DDP, DD+, E-AC-3 or EC-3), is a digital audio compression scheme developed by Dolby Labs for the transport and storage of multi-channel digital audio. It is a successor to Dolby Digital (AC-3), and has a number of improvements over that codec, including support for a wider range of data rates (32 kbit/s to 6144 kbit/s), an increased channel count, and multi-program support (via substreams), as well as additional tools (algorithms) for representing compressed data and counteracting artifacts. Whereas Dolby Digital (AC-3) supports up to five full-bandwidth audio channels at a maximum bitrate of 640 kbit/s, E-AC-3 supports up to 15 full-bandwidth audio channels at a maximum bitrate of 6.144 Mbit/s. The full set of technical specifications for E-AC-3 (and AC-3) are standardized and published in Annex E of ATSC A/52:2012, as well as Annex E of ETSI TS 102 366. Technical details Specifications Dolby Digital Plus is capable of the following: Coded bitrate: 0.032 to 6.144 Mbit/s Audio channels: 1.0 to 15.1 (i.e. from mono to 15 full range channels and a low frequency effects channel) Number of audio programs per bitstream: 8 Sample rate: 32, 44.1 or 48 kHz Structure A Dolby Digital Plus service consists of one or more substreams. There are three types of substreams: Independent substreams, which can contain a single program of up to 5.1 channels. Up to eight dependent substreams may be present in a Dolby Digital Plus stream. The channels present in an independent substream are limited to the traditional 5.1 channels: Left (L), Right (R), Center (C), Left Surround (Ls), and Right Surround (Rs) channels, as well as a Low Frequency Effects (Lfe) channel. Legacy substreams, which contain a single 5.1 program, and which correspond directly to Dolby Digital content. At most a single legacy substream may be present in a DD+ stream. Dependent substreams, which contain additional channels beyon
https://en.wikipedia.org/wiki/Delay-insensitive%20minterm%20synthesis
Within digital electronics, the DIMS (delay-insensitive minterm synthesis) system is an asynchronous design methodology making the least possible timing assumptions. Assuming only the quasi-delay-insensitive delay model, the generated designs need little if any timing-hazard testing. The basis for DIMS is the use of two wires to represent each bit of data. This is known as a dual-rail data encoding. Parts of the system communicate using the early four-phase asynchronous protocol. The construction of DIMS logic gates comprises generating every possible minterm using a row of C-elements and then gathering the outputs of these using OR gates, which generate the true and false output signals. With two dual-rail inputs the gate would be composed of four two-input C-elements. A three-input gate uses eight three-input C-elements. Latches are constructed using two C-elements to store the data and an OR gate, whose inputs are the data output wires, to acknowledge the input once the data has been latched. The acknowledge from the forward stage is inverted and passed to the C-elements to allow them to reset once the computation has completed. This latch design is known as the 'half latch'. Other asynchronous latches provide a higher data capacity and levels of decoupling. DIMS designs are large and slow, but they have the advantage of being very robust. Further reading Jens Sparsø, Steve Furber: "Principles of Asynchronous Circuit Design"; Kluwer, Dordrecht (2001); chapter 5.5.1. Digital electronics
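A behavioral C sketch of the construction just described: a Muller C-element, and a dual-rail AND gate built from four two-input C-elements plus OR gates. The simulation style and names are illustrative; actual DIMS designs are hardware circuits, not software.

    #include <stdio.h>

    /* Muller C-element: output follows the inputs when they agree,
     * otherwise holds its previous value. */
    static int c_element(int a, int b, int prev) {
        return (a == b) ? a : prev;
    }

    /* Dual-rail AND: inputs (a1,a0) and (b1,b0), where rail 1 signals
     * "true" and rail 0 signals "false". One C-element per minterm;
     * OR gates gather the minterms into the two output rails. */
    static void dims_and(int a1, int a0, int b1, int b0,
                         int state[4], int *y1, int *y0) {
        state[0] = c_element(a1, b1, state[0]);  /* minterm a=1,b=1 -> true  */
        state[1] = c_element(a1, b0, state[1]);  /* minterm a=1,b=0 -> false */
        state[2] = c_element(a0, b1, state[2]);  /* minterm a=0,b=1 -> false */
        state[3] = c_element(a0, b0, state[3]);  /* minterm a=0,b=0 -> false */
        *y1 = state[0];
        *y0 = state[1] | state[2] | state[3];
    }

    int main(void) {
        int state[4] = {0, 0, 0, 0}, y1, y0;
        dims_and(1, 0, 0, 1, state, &y1, &y0);  /* a = true, b = false */
        printf("true rail=%d false rail=%d\n", y1, y0);  /* 0 1: output false */
        return 0;
    }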
https://en.wikipedia.org/wiki/Common%20Algebraic%20Specification%20Language
The Common Algebraic Specification Language (CASL) is a general-purpose specification language based on first-order logic with induction. Partial functions and subsorting are also supported. Overview CASL was designed by the Common Framework Initiative (CoFI) with the aim of subsuming many existing specification languages. CASL comprises four levels: basic specifications, for the specification of single software modules, structured specifications, for the modular specification of modules, architectural specifications, for the prescription of the structure of implementations, specification libraries, for storing specifications distributed over the Internet. The four levels are orthogonal to each other. In particular, it is possible to use CASL structured and architectural specifications and libraries with logics other than CASL. For this purpose, the logic has to be formalized as an institution. This feature is also used by the CASL extensions. Extensions Several extensions of CASL have been designed: HasCASL, a higher-order extension CoCASL, a coalgebraic extension CspCASL, a concurrent extension based on CSP ModalCASL, a modal logic extension CASL-LTL, a temporal logic extension HetCASL, an extension for heterogeneous specification External links Official CoFI website CASL The heterogeneous tool set Hets, the main analysis tool for CASL Formal specification languages
https://en.wikipedia.org/wiki/Institution%20%28computer%20science%29
The notion of institution was created by Joseph Goguen and Rod Burstall in the late 1970s, in order to deal with the "population explosion among the logical systems used in computer science". The notion attempts to "formalize the informal" concept of logical system. The use of institutions makes it possible to develop concepts of specification languages (like structuring of specifications, parameterization, implementation, refinement, and development), proof calculi, and even tools in a way completely independent of the underlying logical system. There are also morphisms that allow one to relate and translate logical systems. Important applications of this are re-use of logical structure (also called borrowing), and heterogeneous specification and combination of logics. The spread of institutional model theory has generalized various notions and results of model theory, and institutions themselves have impacted the progress of universal logic. Definition The theory of institutions does not assume anything about the nature of the logical system. That is, models and sentences may be arbitrary objects; the only assumption is that there is a satisfaction relation between models and sentences, telling whether a sentence holds in a model or not. Satisfaction is inspired by Tarski's truth definition, but can in fact be any binary relation. A crucial feature of institutions is that models, sentences, and their satisfaction, are always considered to live in some vocabulary or context (called signature) that defines the (non-logic) symbols that may be used in sentences and that need to be interpreted in models. Moreover, signature morphisms make it possible to extend signatures, change notation, and so on. Nothing is assumed about signatures and signature morphisms except that signature morphisms can be composed; this amounts to having a category of signatures and morphisms. Finally, it is assumed that signature morphisms lead to translations of sentences and models in a way that sati
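The truncated sentence above refers to the satisfaction condition. In standard notation (supplied here; the article's own formula did not survive extraction): for every signature morphism σ: Σ → Σ′, every Σ′-model M′ and every Σ-sentence φ,

    \[ M' \models_{\Sigma'} \sigma(\varphi) \;\Longleftrightarrow\; M'|_{\sigma} \models_{\Sigma} \varphi \]

That is, truth is invariant under change of notation: the translated sentence holds in M′ exactly when the original sentence holds in the reduct of M′ along σ.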
https://en.wikipedia.org/wiki/MARID
MARID was an IETF working group in the applications area tasked to propose standards for email authentication in 2004. The name is an acronym of MTA Authorization Records In DNS. Background Lightweight MTA Authentication Protocol (LMAP) was a generic name for a set of 'designated sender' proposals that were discussed in the ASRG in the Fall of 2003, including: Designated Mailers Protocol (DMP) Designated Relays Inquiry Protocol (DRIP) Flexible Sender Validation (FSV) MTAMARK Reverse MX (RMX) Sender Policy Framework (SPF) These schemes attempt to list the valid IP addresses that can send mail for a domain. The "lightweight" in LMAP essentially stands for "no crypto", as opposed to DomainKeys and its successor, DKIM. In March 2004, the Internet Engineering Task Force (IETF) held a BoF on these proposals. As the result of that meeting, the task force chartered the MARID working group. Controversy Microsoft's Caller-ID proposal was a late and highly controversial addition to this mix. It came with the following features: Use of XML policies with DNS - this was reduced to what is now known as Sender ID Piggybacking and extension of the existing SPF Use of RFC 2822 mail header fields as with DomainKeys (All other LMAP drafts used the SMTP envelope.) Specific questions about patents and licensing Proceedings The working group decided to postpone the question of RFC 2821 SMTP identities - i.e. MAIL FROM covered by SPF, or HELO covered by CSV and SPF - in favour of RFC 2822 identities covered by Caller-ID's and later Sender-ID's Purported Responsible Address (PRA). The WG arrived at a point where sender policies could be split into different scopes, like the 2821 MAIL FROM or the 2822 PRA. The MARID syntax also allowed joining different scopes into one policy record, if the sets of permitted IPs are identical, as is often the case. Less than a week after the publication of a first MAIL FROM draft, the WG was terminated unilaterally by its leadership. MARID
https://en.wikipedia.org/wiki/Waterwolf
Waterwolf, or water-wolf, is a Dutch term for the tendency of lakes in low-lying peaty land, sometimes previously worn down by men digging peat for fuel, to enlarge or expand by flooding, eroding the lake shores and potentially damaging infrastructure or causing death. The term waterwolf is an example of zoomorphism, in which a non-living thing is given traits or characteristics of an animal (whereas a non-living thing given human traits or characteristics is personification). The traits of a wolf most commonly given to lakes include "something to be feared", "quick and relentless", "an enemy of man". The Netherlands, meaning 'low countries', is a nation where 18% of the land is below sea level and half of the land lies less than one meter above it, and is prone to flooding. Before modern flood control, severe storms could cause flooding that could wipe out whole villages in the area of the waterwolf. Much of the land in the Netherlands consists of peat bogs. Peat is an organic material consisting of roughly 10% carbon and 90% water, and is usually found in colder climates where plant growth and plant decay are slow. Peat acts as a form of carbon sequestration, and when dried it can be burned as a fuel. Historically peat was the primary source of fuel in the Netherlands, and farmers would mine the peat to burn or sell, thus contributing to the erosion of the landscape. The first great step to reclaiming land taken by the waterwolf was made with the creation of windmills that could pump water out of the surrounding area, allowing for the creation of polders, or areas inhabited below sea level with an artificially managed water table. Modern flood control in the Netherlands consists of maintaining polders and levees, and is highlighted by the world's largest dam project: the Delta Works. While modern flood control has conquered the waterwolf, new events such as rising sea levels from climate change could once a
https://en.wikipedia.org/wiki/T-money
T-money is a rechargeable series of smart cards and other "smart" devices used for paying transportation fares in and around Seoul and other areas of South Korea. T-money can also be used in lieu of cash or credit cards in some convenience stores and other businesses. The T-money system has been implemented and is operated by T-money Co., Ltd., which is 34.4% owned by the Seoul Special City government, 31.85% by LG CNS, and 15.73% by the Credit Card Union. History 22 April 2004 : City government announced the name of new transit card called T-money. "T" stands for travel, touch, traffic and technology. June 2004 : T-money terminals installed at stations. Several bugs had to be ironed out before full operation. 1 July 2004 : System officially inaugurated, with a day of free transit for all. 15 October 2005 : Incheon public transit system started to accept T-money. 6 December 2005 : T-money Internet refilling service started. 13 November 2006 : Gyeonggi-do transit system started to partially accept T-money. 4 August 2008 : Busan urban buses started to accept T-money. Usage Similar to its predecessor, the "Seoul Bus Card", T-money can be used to pay for bus, subway and some taxi fares. As of March 2017, T-money is accepted by: All Seoul, Gyeonggi-do, Incheon, Busan, Daegu, Daejeon, and Gwangju buses Seoul, Incheon, Busan, Daegu, Daejeon, and Gwangju Metropolitan Subway networks AREX, U Line, EverLine, Shinbundang Line, Donghae Line (Metro) and Busan–Gimhae Light Rail Transit All Sejong Special Autonomous City, Chungcheongnam-do and Chungcheongbuk-do buses All Gangwon-do buses All Gyeongsangbuk-do and Gyeongsangnam-do buses with dongle All Jeollabuk-do, Jeollanam-do, and Jeju Special Autonomous Province buses with dongle Korail ticket office Toll booth operated by Korea Expressway Corporation Express bus with E-Pass dongle Some stores and attractions including Seoul's four palaces (except Gyeonghuigung), Lotte World amusement park, Kyobo
https://en.wikipedia.org/wiki/Credential
A credential is a document that details a qualification, competence, or authority issued to an individual by a third party with a relevant or de facto authority or assumed competence to do so. Examples of credentials include academic diplomas, academic degrees, certifications, security clearances, identification documents, badges, passwords, user names, keys, powers of attorney, and so on. Sometimes publications, such as scientific papers or books, may be viewed as similar to credentials, especially if the publication was peer reviewed or appeared in a well-known journal or from a reputable publisher. Types and documentation of credentials A person holding a credential is usually given documentation or secret knowledge (e.g., a password or key) as proof of the credential. Sometimes this proof (or a copy of it) is held by a third, trusted party. While in some cases a credential may be as simple as a paper membership card, in other cases, such as diplomas, it involves the presentation of letters directly from the issuer of the credential attesting to its faith in the person representing them in a negotiation or meeting. Counterfeiting of credentials is a constant and serious problem, irrespective of the type of credential. A great deal of effort goes into finding methods to reduce or prevent counterfeiting. In general, the greater the perceived value of the credential, the greater the problem with counterfeiting and the greater the lengths to which the issuer of the credential must go to prevent fraud. Diplomacy In diplomacy, credentials, also known as a letter of credence, are documents that ambassadors, diplomatic ministers, ministers plenipotentiary, and chargés d'affaires provide to the government to which they are accredited, for the purpose, chiefly, of communicating to the latter the envoy's diplomatic rank. It also contains a request that full credence be accorded to his official statements. Until his credentials have been presented and found in proper ord
https://en.wikipedia.org/wiki/Smart%20bookmark
Smart bookmarks are an extended kind of Internet bookmark used in web browsers. By accepting an argument, they directly give access to functions of web sites, as opposed to filling web forms at the respective web site for accessing these functions. Smart bookmarks can be used for web searches, or access to data on web sites with uniformly structured web addresses (e.g., user profiles in a web forum). History Smart bookmarks were first introduced in OmniWeb on the NEXTSTEP platform in 1997/1998, where they were called shortcuts. The feature was subsequently taken up by Opera, Galeon and Internet Explorer for Mac, so they can now be used in many web browsers, most of which are Mozilla based, like Kazehakase and Mozilla Firefox. In Web, smart bookmarks appear in a dropdown menu when entering text in the address bar. Selecting a smart bookmark accesses the respective web site using the entered text as the argument. Smart bookmarks can also be added to the toolbar, together with their own textbox. The same applies to Galeon, which also allows the user to collapse and expand the textboxes within the toolbar. Smart bookmarks can also be shared, and there is a collection of them at the web site of the Galeon project. Usage There are two ways to employ smart bookmarks: either through the assignment of keywords or without. Mozilla derivatives and Konqueror, for example, require assigning keywords that can then be typed directly into the address bar followed by the term. Epiphany does not allow assigning keywords. Instead, the term is typed directly into the address bar, then all smart bookmarks appear on the address bar, can be dropped down the list, and selected. See also Bookmarklets, which make it possible to use JavaScript with smart bookmarks iMacros for Firefox, which embeds web browser macros in bookmarks or links References External links Smart Bookmarks at the Galeon site Smart Bookmarks And Bookmarklets Web browsers Smart devices
https://en.wikipedia.org/wiki/Single-event%20upset
A single-event upset (SEU), also known as a single-event error (SEE), is a change of state caused by one single ionizing particle (ions, electrons, photons...) striking a sensitive node in a live micro-electronic device, such as in a microprocessor, semiconductor memory, or power transistors. The state change is a result of the free charge created by ionization in or close to an important node of a logic element (e.g. memory "bit"). The error in device output or operation caused as a result of the strike is called an SEU or a soft error. The SEU itself is not considered permanently damaging to the transistor's or circuits' functionality unlike the case of single-event latch-up (SEL), single-event gate rupture (SEGR), or single-event burnout (SEB). These are all examples of a general class of radiation effects in electronic devices called single-event effects (SEEs). History Single-event upsets were first described during above-ground nuclear testing, from 1954 to 1957, when many anomalies were observed in electronic monitoring equipment. Further problems were observed in space electronics during the 1960s, although it was difficult to separate soft failures from other forms of interference. In 1972, a Hughes satellite experienced an upset where the communication with the satellite was lost for 96 seconds and then recaptured. Scientists Dr. Edward C. Smith, Al Holman, and Dr. Dan Binder explained the anomaly as a single-event upset (SEU) and published the first SEU paper in the IEEE Transactions on Nuclear Science journal in 1975. In 1978, the first evidence of soft errors from alpha particles in packaging materials was described by Timothy C. May and M.H. Woods. In 1979, James Ziegler of IBM, along with W. Lanford of Yale, first described the mechanism whereby a sea-level cosmic ray could cause a single-event upset in electronics. 1979 also saw the world’s first heavy ion "single-event effects" test at a particle accelerator facility, conducted at Lawrence Berke
https://en.wikipedia.org/wiki/Saturation%20%28magnetic%29
Seen in some magnetic materials, saturation is the state reached when an increase in applied external magnetic field H cannot increase the magnetization of the material further, so the total magnetic flux density B more or less levels off. (Magnetization continues to increase very slowly with the field due to paramagnetism.) Saturation is a characteristic of ferromagnetic and ferrimagnetic materials, such as iron, nickel, cobalt and their alloys. Different ferromagnetic materials have different saturation levels. Description Saturation is most clearly seen in the magnetization curve (also called the BH curve or hysteresis curve) of a substance, as a bending to the right of the curve. As the H field increases, the B field approaches a maximum value asymptotically, the saturation level for the substance. Technically, above saturation, the B field continues increasing, but at the paramagnetic rate, which is several orders of magnitude smaller than the ferromagnetic rate seen below saturation. The relation between the magnetizing field H and the magnetic field B can also be expressed as the magnetic permeability μ = B/H, or the relative permeability μr = μ/μ0, where μ0 is the vacuum permeability. The permeability of ferromagnetic materials is not constant, but depends on H. In saturable materials the relative permeability increases with H to a maximum, then as it approaches saturation inverts and decreases toward one. Different materials have different saturation levels. For example, high-permeability iron alloys used in transformers reach magnetic saturation at 1.6–2.2 teslas (T), whereas ferrites saturate at 0.2–0.5 T. Some amorphous alloys saturate at 1.2–1.3 T. Mu-metal saturates at around 0.8 T. Explanation Ferromagnetic materials (like iron) are composed of microscopic regions called magnetic domains, that act like tiny permanent magnets that can change their direction of magnetization. Before an external magnetic field is applied to the material,
https://en.wikipedia.org/wiki/Rewilding
Rewilding may refer to: Rewilding (conservation biology), the return of habitats to a natural state Rewilding Europe, a programme to do so in Europe Pleistocene rewilding, a form of species reintroduction Rewilding Institute, an organization concerned with the integration of traditional wildlife and wildlands conservation Rewilding (anarchism), the reversal of human "domestication" Rewilding (horse), a thoroughbred racehorse See also Species reintroduction, the deliberate release of a species into the wild
https://en.wikipedia.org/wiki/FUJIC
FUJIC was the first electronic digital computer in operation in Japan. It was finished in March 1956, the project having been effectively started in 1949, and was built almost entirely by Dr. Okazaki Bunji. Originally designed to perform calculations for lens design by Fuji, the ultimate goal of FUJIC's construction was to achieve a speed 1,000 times that of human calculation for the same purpose – the actual performance achieved was double that number. Employing approximately 1,700 vacuum tubes, the computer's word length was 33 bits. It had an ultrasonic mercury delay-line memory of 255 words, with an average access time of 500 microseconds. An addition or subtraction was clocked at 100 microseconds, multiplication at 1,600 microseconds, and division at 2,100 microseconds. Used extensively for two years at the Fuji factory in Odawara, it was given later to Waseda University before taking up residence in the National Science Museum of Japan in Tokyo. See also MUSASINO-1 List of vacuum-tube computers References References and external links FUJIC at the IPSJ Computer Museum Dr. Okazaki Bunji at the IPSJ Computer Museum Raúl Rojas and Ulf Hashagen, ed. The First Computers: History and Architectures. 2000, MIT Press, . One-of-a-kind computers Vacuum tube computers Fujifilm
https://en.wikipedia.org/wiki/Data%20recovery
In computing, data recovery is a process of retrieving deleted, inaccessible, lost, corrupted, damaged, or formatted data from secondary storage, removable media or files, when the data stored in them cannot be accessed in a usual way. The data is most often salvaged from storage media such as internal or external hard disk drives (HDDs), solid-state drives (SSDs), USB flash drives, magnetic tapes, CDs, DVDs, RAID subsystems, and other electronic devices. Recovery may be required due to physical damage to the storage devices or logical damage to the file system that prevents it from being mounted by the host operating system (OS). Logical failures occur when the hard drive devices are functional but the user or automated OS cannot retrieve or access data stored on them. Logical failures can occur due to corruption of the engineering chip, lost partitions, firmware failure, or failures during formatting/re-installation. Data recovery can range from a simple task to a significant technical challenge, which is why there are software companies that specialize in this field. About The most common data recovery scenarios involve an operating system failure, malfunction of a storage device, logical failure of storage devices, accidental damage or deletion, etc. (typically, on a single-drive, single-partition, single-OS system), in which case the ultimate goal is simply to copy all important files from the damaged media to another new drive. This can be accomplished using a Live CD or DVD, booting directly from a ROM or a USB drive instead of the corrupted drive in question. Many Live CDs or DVDs provide a means to mount the system drive and backup drives or removable media, and to move the files from the system drive to the backup media with a file manager or optical disc authoring software. Such cases can often be mitigated by disk partitioning and consistently storing valuable data files (or copies of them) on a different partition from the replaceable OS system files. Another sc
https://en.wikipedia.org/wiki/Native%20resolution
The native resolution of a liquid crystal display (LCD), liquid crystal on silicon (LCoS) or other flat panel display refers to its single fixed resolution. As an LCD consists of a fixed raster, it cannot change resolution to match the signal being displayed as a cathode-ray tube (CRT) monitor can, meaning that optimal display quality can be reached only when the signal input matches the native resolution. An image where the number of pixels is the same as in the image source and where the pixels are perfectly aligned to the pixels in the source is said to be pixel perfect. While CRT monitors can usually display images at various resolutions, an LCD monitor has to rely on interpolation (scaling of the image), which causes a loss of image quality. An LCD has to scale up a smaller image to fit into the area of the native resolution. This is the same principle as taking a smaller image in an image editing program and enlarging it; the smaller image loses its sharpness when it is expanded. This is especially problematic as most resolutions are in a 4:3 aspect ratio (640×480, 800×600, 1024×768, 1280×960, 1600×1200) but there are odd resolutions that are not, notably 1280×1024. If a user were to map 1024×768 to a 1280×1024 screen there would be distortion as well as some image errors, as there is not a one-to-one mapping with regard to pixels. This results in noticeable quality loss and the image is much less sharp. In theory, some resolutions could work well, if they are exact multiples of smaller image sizes. For example, a 1600×1200 LCD could display an 800×600 image well, as each of the pixels in the image could be represented by a block of four on the larger display, without interpolation. Since 1600×1200 is an exact integer multiple of 800×600, scaling should not adversely affect the image. But in practice, most monitors apply a smoothing algorithm to all smaller resolutions, so the quality still suffers for these "half" modes. Most LCD monitors are able to inform the P
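A small C sketch of the exact-integer-scaling test described above (function and variable names are illustrative):

    #include <stdio.h>
    #include <stdbool.h>

    /* True when a source image maps onto the native panel as exact
     * pixel blocks (e.g. 800x600 -> 1600x1200 uses 2x2 blocks). */
    static bool scales_exactly(int src_w, int src_h, int native_w, int native_h) {
        return native_w % src_w == 0 &&
               native_h % src_h == 0 &&
               native_w / src_w == native_h / src_h;  /* same factor on both axes */
    }

    int main(void) {
        printf("%d\n", scales_exactly(800, 600, 1600, 1200));   /* 1: 2x2 blocks   */
        printf("%d\n", scales_exactly(1024, 768, 1280, 1024));  /* 0: interpolated */
        return 0;
    }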
https://en.wikipedia.org/wiki/BartPE
BartPE (Bart's Preinstalled Environment) is a discontinued tool that customizes Windows XP or Windows Server 2003 into a lightweight environment, similar to Windows Preinstallation Environment, which could be run from a Live CD or Live USB drive. A BartPE system image is created using PE Builder, a freeware program created by Bart Lagerweij. It requires a legal copy of Windows XP or Windows Server 2003. Additional applications can be included in the image using plugins. As it often resides on a Live CD or USB drive, BartPE allows a user to boot Windows, even if a hardware or software fault has disabled the installed operating system(s) on the internal hard drive – for instance, to recover files. It can also be used to scan for and remove rootkits, computer viruses and spyware (that have infected boot files), or to reset a lost administrator password. Overview While a bootable floppy disk can be used to help recover a failing hard disk, one major limitation on using a floppy disk that booted a standalone copy of MS-DOS was that "DOS can't handle NTFS hard-drive partitions." BartPE is a reasonable first choice for Windows users: it's free, and it's #2 in a list of alternatives; #1 is Linux-oriented. While it can be used for running demos, it can also be used "to bring a dead PC back to life." PE Builder PE Builder (also known as Bart PE Builder) is the software used to create the BartPE system images. Description As with Windows Preinstallation Environment, BartPE operates by loading system registry files into RAM, and not writing any registry changes back to boot media. Thus, neither operating system requires an operational hard drive or network access. This also allows them to be run from non-writable media such as a CD-ROM. Since each instance of BartPE is a new installation, the BartPE "boot" disk needs original Windows Setup files in order to operate. The Bart PE Builder application interprets and condenses files from a Windows setup CD to create the Bart
https://en.wikipedia.org/wiki/Dynamical%20billiards
A dynamical billiard is a dynamical system in which a particle alternates between free motion (typically as a straight line) and specular reflections from a boundary. When the particle hits the boundary it reflects from it without loss of speed (i.e. elastic collisions). Billiards are Hamiltonian idealizations of the game of billiards, but where the region contained by the boundary can have shapes other than rectangular and even be multidimensional. Dynamical billiards may also be studied on non-Euclidean geometries; indeed, the first studies of billiards established their ergodic motion on surfaces of constant negative curvature. The study of billiards which are kept out of a region, rather than being kept in a region, is known as outer billiard theory. The motion of the particle in the billiard is a straight line, with constant energy, between reflections with the boundary (a geodesic if the Riemannian metric of the billiard table is not flat). All reflections are specular: the angle of incidence just before the collision is equal to the angle of reflection just after the collision. The sequence of reflections is described by the billiard map that completely characterizes the motion of the particle. Billiards capture all the complexity of Hamiltonian systems, from integrability to chaotic motion, without the difficulties of integrating the equations of motion to determine its Poincaré map. Birkhoff showed that a billiard system with an elliptic table is integrable. Equations of motion The Hamiltonian for a particle of mass m moving freely without friction on a surface is H(p, q) = p^2/(2m) + V(q), where V(q) is a potential designed to be zero inside the region Ω in which the particle can move, and infinity otherwise: V(q) = 0 for q ∈ Ω and V(q) = ∞ for q ∉ Ω. This form of the potential guarantees a specular reflection on the boundary. The kinetic term guarantees that the particle moves in a straight line, without any change in energy. If the particle is to move on a non-Euclidean manifold, then the Hamiltonian is replaced by
https://en.wikipedia.org/wiki/Cross%20Cave
Cross Cave, also named Cold Cave under Cross Mountain, is a cave in Slovenia's Lož Valley, in the area between the Lož Karst Field, Cerknica Karst Field, and Bloke Plateau. The cave is named after nearby Holy Cross Church in Podlož. The cave is particularly noted among Karst caves for its chain of over 45 subterranean lakes of emerald-green water. Extremely slow-growing calcareous formations (up to 0.1 mm per year) and their fragility are the main obstacle to large-scale tourism in the cave and limit daily tourist visits to the flooded part of the cave to four people. As a result, Cross Cave is among the best-preserved caves open to the public in Slovenia. The cave was prepared for visits in the 1950s by the Lož Valley Tourist Association. It was later managed by the Ljubljana Cave Research Society. Since the 1990s, it has been cared for by the Friends of Cross Cave Association. With 45 species of organisms, some not discovered until 2000, Cross Cave is the fourth-largest cave ecosystem in the world in terms of biodiversity. The cave was first documented in 1832, but the part of the cave that includes lakes and stream passages was first explored by Slovene cavers in 1926. Course At Calvary, the best-known symbol of Cross Cave, the cave splits into two branches: the Muds to the north and the Variegated Passage to the northeast. The passage through the Muds is more difficult, and so most visitors choose to continue through the Variegated Passage, which requires the use of small boats. Cross Cave continues into New Cross Cave. In the direction from the entrance to the cave, the Variegated Passage is the left gallery of Cross Cave at the confluence with the Muds at Calvary. The access requires the use of boats. Part of the way along the Variegated Passage is a side gallery named the Matjaž Passage, which contains several large columns. Continuing along the Variegated Passage, visitors enter the Crystal Mountain, the largest room in the c
https://en.wikipedia.org/wiki/Eigenvalues%20and%20eigenvectors
In linear algebra, an eigenvector or characteristic vector of a linear transformation is a nonzero vector that changes at most by a constant factor when that linear transformation is applied to it. The corresponding eigenvalue, often represented by λ, is the multiplying factor. Geometrically, a transformation matrix rotates, stretches, or shears the vectors it acts upon. The eigenvectors for a linear transformation matrix are the set of vectors that are only stretched, with no rotation or shear. The eigenvalue is the factor by which an eigenvector is stretched. If the eigenvalue is negative, the direction is reversed. Definition If T is a linear transformation from a vector space V over a field F into itself and v is a nonzero vector in V, then v is an eigenvector of T if T(v) is a scalar multiple of v. This can be written as T(v) = λv, where λ is a scalar in F, known as the eigenvalue, characteristic value, or characteristic root associated with v. There is a direct correspondence between n-by-n square matrices and linear transformations from an n-dimensional vector space into itself, given any basis of the vector space. Hence, in a finite-dimensional vector space, it is equivalent to define eigenvalues and eigenvectors using either the language of matrices, or the language of linear transformations. If V is finite-dimensional, the above equation is equivalent to Au = λu, where A is the matrix representation of T and u is the coordinate vector of v. Overview Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German word eigen (cognate with the English word own) for 'proper', 'characteristic', 'own'. Originally used to study principal axes of the rotational motion of rigid bodies, eigenvalues and eigenvectors have a wide range of applications, for example in stability analysis, vibration analysis, atomic orbitals, facial recognition, and matrix diagonalization. In essence, an eigenvector v of a linear transformation T is a nonzero vector that does not change direction when T is applied to it.
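As a concrete check of the defining equation, here is a short sketch using NumPy (an illustration, not part of the article; the symmetric matrix is an arbitrary example):

```python
import numpy as np

# Compute eigenvalues/eigenvectors of a 2x2 matrix and verify A v = lambda v.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

for i in range(len(eigenvalues)):
    lam = eigenvalues[i]
    v = eigenvectors[:, i]               # eigenvectors are returned as columns
    assert np.allclose(A @ v, lam * v)   # the defining property T(v) = lambda v
    print(f"lambda = {lam:.1f}, v = {v}")
# For this matrix the eigenvalues are 3 and 1.
```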
https://en.wikipedia.org/wiki/Yan%20tan%20tethera
Yan Tan Tethera or yan-tan-tethera is a sheep-counting system traditionally used by shepherds in Northern England and some other parts of Britain. The words are numbers taken from Brythonic Celtic languages such as Cumbric which had died out in most of Northern England by the sixth century, but they were commonly used for sheep counting and counting stitches in knitting until the Industrial Revolution, especially in the fells of the Lake District. Though most of these number systems fell out of use by the turn of the 20th century, some are still in use. Origin and development Sheep-counting systems ultimately derive from Brythonic Celtic languages, such as Cumbric; Tim Gay writes: "[Sheep-counting systems from all over the British Isles] all compared very closely to 18th-century Cornish and modern Welsh". It is impossible, given the corrupted form in which they have survived, to be sure of their exact origin. The counting systems have changed considerably over time. A particularly common tendency is for certain pairs of adjacent numbers to come to resemble each other by rhyme (notably the words for 1 and 2, 3 and 4, 6 and 7, or 8 and 9). Still, multiples of five tend to be fairly conservative; compare bumfit with Welsh pymtheg, in contrast with standard English fifteen. Use in sheep counting Like most Celtic numbering systems, they tend to be vigesimal (based on the number twenty), but they usually lack words to describe quantities larger than twenty; this is not a limitation of either modernised decimal Celtic counting systems or the older ones. To count a large number of sheep, a shepherd would repeatedly count to twenty, placing a mark on the ground, or move a hand to another mark on a shepherd's crook, or drop a pebble into a pocket to represent each score (e.g. 5 score sheep = 100 sheep); a short sketch of this tallying scheme follows below. Importance of keeping count In order to keep accurate records (e.g. of birth and death) and to be alert to instances of straying, shepherds must perform frequent head counts.
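A minimal sketch of the score-keeping arithmetic described above (illustrative only; the flock size is invented):

```python
# Count a flock by scores: every twenty sheep earns one pebble in the pocket,
# and the final partial count is carried in the base-20 number words.
def count_flock(total_sheep):
    scores, remainder = divmod(total_sheep, 20)
    return scores, remainder

pebbles, left_over = count_flock(117)
print(f"{pebbles} pebbles (scores) plus a final count of {left_over}")
# 117 sheep -> 5 pebbles (100 sheep) plus 17 counted on from yan.
```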
https://en.wikipedia.org/wiki/Rose%20oil
Rose oil (rose otto, attar of rose, attar of roses, or rose essence) is the essential oil extracted from the petals of various types of rose. Rose ottos are extracted through steam distillation, while rose absolutes are obtained through solvent extraction, the absolute being used more commonly in perfumery. The production technique originated in Greater Iran. Even with its high price and the advent of organic synthesis, rose oil is still perhaps the most widely used essential oil in perfumery. Damascena and Centifolia Two major species of rose are cultivated for the production of rose oil: Rosa damascena, whose production today is dominated by three producers that together account for over 70% of the rose oil market share: Bulgaria (Bulgarian rose), Turkey (Turkish rose), and Saudi Arabia (Taif rose). It is also grown on a smaller scale in Afghanistan, Armenia, Azerbaijan, Bosnia, Croatia, Cyprus, Ethiopia, Georgia, Greece, Jordan, Lebanon, India, Iran, Iraq, Israel, Moldova, North Macedonia, Oman, Serbia, Syria, Tajikistan, Turkmenistan, Pakistan, Romania, Russia, Ukraine, United Arab Emirates and Yemen. Rosa centifolia, the cabbage rose, which is more commonly grown in Morocco, Egypt and France. Rosa damascena composition The composition of rose oil and its headspace varies, but the international standard survey of 2003–2020 lists three major components, each with a specific range specified in ISO 9842:2003. Major rose components Citronellol 20%–34% Geraniol 15%–22% Nonadecane 8%–15% Minor rose components heneicosane, eicosane, docosane, tricosane, tetracosane, pentacosane, hexacosane, heptacosane, nonacosane, dodecane, tetradecane, pentadecane, hexadecane, heptadecane, octadecane, nerol, linalool, phenyl ethyl alcohol, farnesol, α-pinene, β-pinene, α-terpinene, limonene, p-cymene, camphene, β-caryophyllene, neral, geranyl acetate, neryl acetate, eugenol, methyl eugenol, benzaldehyde, benzyl alcohol, octane and tetradecanol. Key rose components β-damascenone (0.01% - 1.85%) β-
https://en.wikipedia.org/wiki/Padovan%20sequence
In number theory, the Padovan sequence is the sequence of integers P(n) defined by the initial values P(0) = P(1) = P(2) = 1 and the recurrence relation P(n) = P(n − 2) + P(n − 3). The first few values of P(n) are 1, 1, 1, 2, 2, 3, 4, 5, 7, 9, 12, 16, 21, 28, 37, 49, 65, 86, 114, 151, 200, 265, ... A Padovan prime is a Padovan number that is prime. The first Padovan primes are: 2, 3, 5, 7, 37, 151, 3329, 23833, 13091204281, 3093215881333057, 1363005552434666078217421284621279933627102780881053358473, 1558877695141608507751098941899265975115403618621811951868598809164180630185566719, ... . The Padovan sequence is named after Richard Padovan who attributed its discovery to Dutch architect Hans van der Laan in his 1994 essay Dom. Hans van der Laan : Modern Primitive. The sequence was described by Ian Stewart in his Scientific American column Mathematical Recreations in June 1996. He also writes about it in one of his books, "Math Hysteria: Fun Games With Mathematics". The above definition is the one given by Ian Stewart and by MathWorld. Other sources may start the sequence at a different place, in which case some of the identities in this article must be adjusted with appropriate offsets. Recurrence relations In the spiral, each triangle shares a side with two others giving a visual proof that the Padovan sequence also satisfies the recurrence relation P(n) = P(n − 1) + P(n − 5). Starting from this, the defining recurrence and other recurrences as they are discovered, one can create an infinite number of further recurrences by repeatedly replacing P(m) by P(m − 2) + P(m − 3). The Perrin sequence satisfies the same recurrence relations as the Padovan sequence, although it has different initial values. The Perrin sequence can be obtained from the Padovan sequence by the following formula: Perrin(n) = P(n + 1) + P(n − 10). Extension to negative parameters As with any sequence defined by a recurrence relation, Padovan numbers P(m) for m<0 can be defined by rewriting the recurrence relation as P(m) = P(m + 3) − P(m + 1). Starting with m = −1 and working backwards, we extend P(m) to negative indices.
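A short sketch (not from the article) that computes the sequence from the defining recurrence and spot-checks the alternative recurrence noted above:

```python
# Padovan numbers from P(n) = P(n-2) + P(n-3) with P(0) = P(1) = P(2) = 1.
def padovan(count):
    p = [1, 1, 1]
    while len(p) < count:
        p.append(p[-2] + p[-3])
    return p

P = padovan(20)
print(P)  # [1, 1, 1, 2, 2, 3, 4, 5, 7, 9, 12, 16, 21, 28, ...]

for n in range(5, 20):
    assert P[n] == P[n - 1] + P[n - 5]   # the recurrence from the spiral
```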
https://en.wikipedia.org/wiki/Protein%E2%80%93protein%20interaction
Protein–protein interactions (PPIs) are physical contacts of high specificity established between two or more protein molecules as a result of biochemical events steered by interactions that include electrostatic forces, hydrogen bonding and the hydrophobic effect. Many are physical contacts with molecular associations between chains that occur in a cell or in a living organism in a specific biomolecular context. Proteins rarely act alone as their functions tend to be regulated. Many molecular processes within a cell are carried out by molecular machines that are built from numerous protein components organized by their PPIs. These physiological interactions make up the so-called interactome of the organism, while aberrant PPIs are the basis of multiple aggregation-related diseases, such as Creutzfeldt–Jakob and Alzheimer's diseases. PPIs have been studied with many methods and from different perspectives: biochemistry, quantum chemistry, molecular dynamics, signal transduction, among others. All this information enables the creation of large protein interaction networks – similar to metabolic or genetic/epigenetic networks – that deepen current knowledge of biochemical cascades and the molecular etiology of disease, as well as the discovery of putative protein targets of therapeutic interest. Examples Electron transfer proteins In many metabolic reactions, a protein that acts as an electron carrier binds to an enzyme that acts as its reductase. After it receives an electron, it dissociates and then binds to the next enzyme that acts as its oxidase (i.e. an acceptor of the electron). These interactions between proteins are dependent on highly specific binding between proteins to ensure efficient electron transfer. Examples: mitochondrial oxidative phosphorylation chain system components cytochrome c-reductase / cytochrome c / cytochrome c oxidase; microsomal and mitochondrial P450 systems. In the case of the mitochondrial P450 systems, the specific residues
https://en.wikipedia.org/wiki/Genetic%20predisposition
A genetic predisposition is a genetic characteristic which influences the possible phenotypic development of an individual organism within a species or population under the influence of environmental conditions. In medicine, genetic susceptibility to a disease refers to a genetic predisposition to a health problem, which may eventually be triggered by particular environmental or lifestyle factors, such as tobacco smoking or diet. Genetic testing is able to identify individuals who are genetically predisposed to certain diseases. Behavior Predisposition is the capacity humans are born with to learn things such as language and concept of self. Negative environmental influences may block the predisposition (ability) one has to do some things. Behaviors displayed by animals can be influenced by genetic predispositions. Genetic predisposition towards certain human behaviors is scientifically investigated by attempts to identify patterns of human behavior that seem to be invariant over long periods of time and in very different cultures. For example, philosopher Daniel Dennett has proposed that humans are genetically predisposed to have a theory of mind because there has been evolutionary selection for the human ability to adopt the intentional stance. The intentional stance is a useful behavioral strategy by which humans assume that others have minds like their own. This assumption allows one to predict the behavior of others based on personal knowledge. In 1951, Hans Eysenck and Donald Prell published an experiment in which identical (monozygotic) and fraternal (dizygotic) twins, ages 11 and 12, were tested for neuroticism. It is described in detail in an article published in the Journal of Mental Science, in which Eysenck and Prell concluded that "the factor of neuroticism is not a statistical artifact, but constitutes a biological unit which is inherited as a whole....neurotic predisposition is to a large extent hereditarily determined." E. O. Wilson'
https://en.wikipedia.org/wiki/Rapid%20thermal%20processing
Rapid thermal processing (RTP) is a semiconductor manufacturing process which heats silicon wafers to temperatures exceeding 1,000 °C for not more than a few seconds. During cooling, wafer temperatures must be brought down slowly to prevent dislocations and wafer breakage due to thermal shock. Such rapid heating rates are often attained by high intensity lamps or lasers. These processes are used for a wide variety of applications in semiconductor manufacturing including dopant activation, thermal oxidation, metal reflow and chemical vapor deposition. Temperature control One of the key challenges in rapid thermal processing is accurate measurement and control of the wafer temperature. Monitoring the ambient with a thermocouple is insufficient, in that the high temperature ramp rates prevent the wafer from coming to thermal equilibrium with the process chamber. One temperature control strategy involves in situ pyrometry to effect real time control. Rapid thermal anneal Rapid thermal anneal (RTA) in rapid thermal processing is a process used in semiconductor device fabrication which involves heating a single wafer at a time in order to affect its electrical properties. Unique heat treatments are designed for different effects. Wafers can be heated in order to activate dopants, change film-to-film or film-to-wafer substrate interfaces, densify deposited films, change states of grown films, repair damage from ion implantation, move dopants or drive dopants from one film into another or from a film into the wafer substrate. Rapid thermal anneals are performed by equipment that heats a single wafer at a time using either lamp based heating, a hot chuck, or a hot plate that a wafer is brought near. Unlike furnace anneals they are of short duration, processing each wafer in several minutes. To achieve short annealing times and quick throughput, sacrifices are made in temperature and process uniformity, temp
https://en.wikipedia.org/wiki/Hitachi%20Flora%20Prius
The Hitachi Flora Prius was a range of personal computers marketed in Japan by Hitachi, Ltd. during the late 1990s. The Flora Prius was preinstalled with both Microsoft Windows 98 and BeOS. It did not, however, have a dual-boot option, as Microsoft reminded Hitachi of the terms of the Windows OEM license. In effect, two-thirds of the hard drive was hidden from the end user, and a series of complicated manipulations was necessary to activate the BeOS partition. Models The FLORA Prius 330J came in three models: 330N40JB: Base version with no LCD screen 3304ST40JB: Included a 14.1-inch super TFT color LCD display 3304ST40JBT: Included a 14.1-inch super TFT color LCD display and a WinTV video capture board Base specifications CPU: Pentium II processor (400 MHz) RAM: 64 MB SDRAM Hard drive: 6.4 GB (2 GB for Windows 98 and 4.6 GB for BeOS) CD-ROM drive: 24X speed max. 100BASE-TX/10BASE-T
https://en.wikipedia.org/wiki/Textile%20manufacturing
Textile manufacturing (or textile engineering) is a major industry. It is largely based on the conversion of fibre into yarn, then yarn into fabric. These are then dyed or printed and fabricated into cloth, which is then converted into useful goods such as clothing, household items, upholstery and various industrial products. Different types of fibres are used to produce yarn. Cotton remains the most widely used and common natural fibre, making up 90% of all natural fibres used in the textile industry. Cotton clothing and accessories are widely used because of their comfort across different climates. There are many variable processes available at the spinning and fabric-forming stages, coupled with the complexities of the finishing and colouration processes, for the production of a wide range of products. History Textile manufacturing in the modern era is an evolved form of the art and craft industries. Until the 18th and 19th centuries, textile production was a household industry. It became mechanised in the 18th and 19th centuries, and has continued to develop through science and technology in the twentieth and twenty-first centuries. Processing of cotton Cotton is the world's most important natural fibre. In the year 2007, the global yield was 25 million tons from 35 million hectares cultivated in more than 50 countries. There are six stages in the manufacturing of cotton textiles: Cultivating and Harvesting Preparatory Processes Spinning Weaving or Knitting Finishing Marketing Cultivating and harvesting Cotton is grown in locations with long, hot, dry summers with plenty of sunshine and low humidity. Indian cotton, Gossypium arboreum, is finer but the staple is only suitable for hand processing. American cotton, Gossypium hirsutum, produces the longer staple needed for mechanised textile production. The planting season is from September to mid-November, and the crop is harvested between March and June. The cotton bolls are harvested by stripper harvesters
https://en.wikipedia.org/wiki/The%20Jim%20Rome%20Show
The Jim Rome Show is a sports radio talk show hosted by Jim Rome. It airs live for three hours each weekday from 9 a.m. to noon Pacific Time. The show is produced in Los Angeles, syndicated by CBS Sports Radio, and can be heard on affiliate radio stations in the U.S. and Canada. In January 2018, the show began simulcasting on television on CBS Sports Network. History The Jim Rome Show began on XTRA Sports 690 in San Diego. In 1996, Premiere Radio Networks picked up the program for national syndication. Sometime after, the show was shortened by one hour and the broadcast location was shifted from XTRA Sports 690 to the Premiere Radio Networks studio complex in Sherman Oaks, California. As part of the broadcast deal bringing Rome's TV show to CBS Sports Network, The Jim Rome Show became a charter program of CBS Sports Radio upon its full launch on January 2, 2013. Show personnel ("The XR4Ti") Tom Di Benedetto, executive producer, call screener since 2021 Alvin Delloro, engineer since 2005 James "Flight Deck" Kelley, digital program director since approximately 2010 Jack Savage, producer since February 2023 Former personnel Brian "Whitey" Albers, engineer from 1996 to 2005 Keith Arnold, associate producer from 2016 to 2019 Kyle Brandt, producer and writer from 2007 until July 22, 2016 Robert Dozmati, associate producer from 2018 to 2019 Adam Hawk, executive producer and call screener from 2016 to July 23, 2021 Austin Huff, content screener from 2015 to 2018 Gerrit Ritt, producer from 2019 to January 30, 2023 Travis Rodgers, producer from 1996 to 2009 Jason Stewart, talent coordinator and call screener from 1999 to March 8, 2013 Dave Whelan, producer from 2007 to September 2, 2022 Show format and content The three-hour program is a mixture of interviews, calls, emails, Tweets and Rome's own thoughts and analysis. On the radio, the opening and closing theme is "Lust for Life" by Iggy Pop, and the show also uses "Welcome to the Jungle" by Guns N' Roses, while CBS Spo
https://en.wikipedia.org/wiki/List%20of%20plant%20communities%20in%20the%20British%20National%20Vegetation%20Classification
The following is the list of the 286 plant communities which comprise the British National Vegetation Classification (NVC). These are grouped by major habitat category, as used in the five volumes of British Plant Communities, the standard work describing the NVC. Woodland and scrub communities The following 25 communities are described in Volume 1 of British Plant Communities. For an article summarising these communities see Woodland and scrub communities in the British National Vegetation Classification system. W1 Salix cinerea - Galium palustre woodland W2 Salix cinerea - Betula pubescens - Phragmites australis woodland W3 Salix pentandra - Carex rostrata woodland W4 Betula pubescens - Molinia caerulea woodland W5 Alnus glutinosa - Carex paniculata woodland W6 Alnus glutinosa - Urtica dioica woodland W7 Alnus glutinosa - Fraxinus excelsior - Lysimachia nemorum woodland W8 Fraxinus excelsior - Acer campestre - Mercurialis perennis woodland W9 Fraxinus excelsior - Sorbus aucuparia - Mercurialis perennis woodland W10 Quercus robur - Pteridium aquilinum - Rubus fruticosus woodland W11 Quercus petraea - Betula pubescens - Oxalis acetosella woodland W12 Fagus sylvatica - Mercurialis perennis woodland W13 Taxus baccata woodland W14 Fagus sylvatica - Rubus fruticosus woodland W15 Fagus sylvatica - Deschampsia flexuosa woodland W16 Quercus spp. - Betula spp. - Deschampsia flexuosa woodland W17 Quercus petraea - Betula pubescens - Dicranum majus woodland W18 Pinus sylvestris - Hylocomium splendens woodland W19 Juniperus communis ssp. communis - Oxalis acetosella woodland W20 Salix lapponum - Luzula sylvatica scrub W21 Crataegus monogyna - Hedera helix scrub W22 Prunus spinosa - Rubus fruticosus scrub W23 Ulex europaeus - Rubus fruticosus scrub W24 Rubus fruticosus - Holcus lanatus underscrub W25 Pteridium aquilinum - Rubus fruticosus underscrub Mires The following 38 communities are described in Volume 2 of British Plant Communities. For an
https://en.wikipedia.org/wiki/Sieve%20%28category%20theory%29
In category theory, a branch of mathematics, a sieve is a way of choosing arrows with a common codomain. It is a categorical analogue of a collection of open subsets of a fixed open set in topology. In a Grothendieck topology, certain sieves become categorical analogues of open covers in topology. Sieves were introduced in order to reformulate the notion of a Grothendieck topology. Definition Let C be a category, and let c be an object of C. A sieve on c is a subfunctor of Hom(−, c), i.e., for all objects c′ of C, S(c′) ⊆ Hom(c′, c), and for all arrows f:c″→c′, S(f) is the restriction of Hom(f, c), the pullback by f (in the sense of precomposition, not of fiber products), to S(c′); see the next section, below. Put another way, a sieve is a collection S of arrows with a common codomain that satisfies the condition, "If g:c′→c is an arrow in S, and if f:c″→c′ is any other arrow in C, then gf is in S." Consequently, sieves are similar to right ideals in ring theory or filters in order theory. Pullback of sieves The most common operation on a sieve is pullback. Pulling back a sieve S on c by an arrow f:c′→c gives a new sieve f*S on c′. This new sieve consists of all the arrows in S that factor through c′. There are several equivalent ways of defining f*S. The simplest is: For any object d of C, f*S(d) = { g:d→c′ | fg ∈ S(d)} A more abstract formulation is: f*S is the image of the fibered product S ×_Hom(−, c) Hom(−, c′) under the natural projection S ×_Hom(−, c) Hom(−, c′) → Hom(−, c′). Here the map Hom(−, c′)→Hom(−, c) is Hom(−, f), postcomposition by f. The latter formulation suggests that we can also take the image of S ×_Hom(−, c) Hom(−, c′) under the natural map to Hom(−, c). This will be the image of f*S under composition with f. For each object d of C, this sieve will consist of all arrows fg, where g:d→c′ is an arrow of f*S(d). In other words, it consists of all arrows in S that can be factored through f. If we denote by ∅c the empty sieve on c, that
https://en.wikipedia.org/wiki/Stable%20storage
Stable storage is a classification of computer data storage technology that guarantees atomicity for any given write operation and allows software to be written that is robust against some hardware and power failures. To be considered atomic, upon reading back a just written-to portion of the disk, the storage subsystem must return either the write data or the data that was on that portion of the disk before the write operations. Most computer disk drives are not considered stable storage because they do not guarantee atomic write; an error could be returned upon subsequent read of the disk where it was just written to in lieu of either the new or prior data. Implementation Multiple techniques have been developed to achieve the atomic property from weakly atomic devices such as disks. Writing data to a disk in two places in a specific way is one technique and can be done by application software. Most often though, stable storage functionality is achieved by mirroring data on separate disks via RAID technology (level 1 or greater). The RAID controller implements the disk writing algorithms that enable separate disks to act as stable storage. The RAID technique is robust against some single disk failure in an array of disks whereas the software technique of writing to separate areas of the same disk only protects against some kinds of internal disk media failures such as bad sectors in single disk arrangements.
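A minimal sketch of the "write in two places" technique mentioned above, assuming hypothetical file names and a JSON-plus-checksum record format (illustrative only, not a production implementation):

```python
import hashlib, json, os

COPIES = ["block_a.json", "block_b.json"]  # hypothetical duplicate locations

def _pack(data: str) -> str:
    return json.dumps({"data": data,
                       "sum": hashlib.sha256(data.encode()).hexdigest()})

def _valid(raw: str):
    try:
        rec = json.loads(raw)
        if hashlib.sha256(rec["data"].encode()).hexdigest() == rec["sum"]:
            return rec["data"]
    except (ValueError, KeyError):
        pass
    return None

def stable_write(data: str):
    # Write the copies strictly one after the other; a crash can then corrupt
    # at most the copy being written at that moment.
    for path in COPIES:
        with open(path, "w") as f:
            f.write(_pack(data))
            f.flush()
            os.fsync(f.fileno())

def stable_read():
    for path in COPIES:
        if os.path.exists(path):
            value = _valid(open(path).read())
            if value is not None:
                return value   # first intact copy wins
    return None

stable_write("account balance = 42")
print(stable_read())
```

On recovery, a torn or corrupted copy is simply ignored in favor of the surviving good one, which is what gives the read-back either the new or the prior data, as the definition above requires.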
https://en.wikipedia.org/wiki/Peaucellier%E2%80%93Lipkin%20linkage
The Peaucellier–Lipkin linkage (or Peaucellier–Lipkin cell, or Peaucellier–Lipkin inversor), invented in 1864, was the first true planar straight line mechanism – the first planar linkage capable of transforming rotary motion into perfect straight-line motion, and vice versa. It is named after Charles-Nicolas Peaucellier (1832–1913), a French army officer, and Yom Tov Lipman Lipkin (1846–1876), a Lithuanian Jew and son of the famed Rabbi Israel Salanter. Until this invention, no planar method existed of converting exact straight-line motion to circular motion, without reference guideways. In 1864, all power came from steam engines, which had a piston moving in a straight line up and down a cylinder. This piston needed to keep a good seal with the cylinder in order to retain the driving medium, and not lose energy efficiency due to leaks. The piston does this by remaining perpendicular to the axis of the cylinder, retaining its straight-line motion. Converting the straight-line motion of the piston into circular motion was of critical importance. Most, if not all, applications of these steam engines were rotary. The mathematics of the Peaucellier–Lipkin linkage is directly related to the inversion of a circle. Earlier Sarrus linkage There is an earlier straight-line mechanism, whose history is not well known, called the Sarrus linkage. This linkage predates the Peaucellier–Lipkin linkage by 11 years and consists of a series of hinged rectangular plates, two of which remain parallel but can be moved normally to each other. Sarrus' linkage is of a three-dimensional class sometimes known as a space crank, unlike the Peaucellier–Lipkin linkage which is a planar mechanism. Geometry In the geometric diagram of the apparatus, six bars of fixed length can be seen: OA, OB, AC, CB, BD, DA. The length of OA is equal to the length of OB, and the lengths of AC, CB, BD, and DA are all equal, forming a rhombus ACBD. Also, point O is fixed. Then, if point C is constrained to move along a circle (for exampl
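The inversion underlying the linkage can be checked numerically. A short sketch (not from the article; the radius and sample points are arbitrary): points of a circle through the fixed pivot invert to points of constant x-coordinate, i.e. a straight line.

```python
import numpy as np

def invert(p, k=1.0):
    """Circle inversion about the origin: P -> k^2 * P / |P|^2."""
    return (k ** 2) * p / np.dot(p, p)

# A circle of radius r centred at (r, 0) passes through the origin (pivot).
r = 0.5
theta = np.linspace(-2.5, 2.5, 7)   # stay away from theta = +/-pi (the origin itself)
circle_pts = np.stack([r + r * np.cos(theta), r * np.sin(theta)], axis=1)

for p in circle_pts:
    q = invert(p)
    print(f"x' = {q[0]:.6f}")   # constant x' = 1/(2r): the image is a vertical line
```

This is exactly why the cell traces an exact straight line when C is driven around a circle through the pivot: D is the inverse of C, and inversion sends such a circle to a line.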
https://en.wikipedia.org/wiki/Cellomics
Cellomics is the discipline of quantitative cell analysis using bioimaging methods and informatics with a workflow involving three major components: image acquisition, image analysis, and data visualization and management. These processes are generally automated. All three of these components depend on sophisticated software to acquire images, extract quantitative data, and manage both images and data, respectively. Cellomics is also a trademarked term, which is often used interchangeably with high-content analysis (HCA) or high-content screening (HCS), but cellomics extends beyond HCA/HCS by incorporating sophisticated informatics tools. History HCS and the discipline of cellomics were pioneered by a once privately held company named Cellomics Inc., which commercialized instruments, software, and reagents to facilitate the study of cells in culture, and more specifically, their responses to potentially therapeutic drug-like molecules. In 2005, Cellomics was acquired by Fisher Scientific International, Inc., now Thermo Fisher Scientific, which continues developing cellomics-centered products under its Thermo Scientific™ high content analysis product line. Applications Like many of the -omics, e.g., genomics and proteomics, applications have grown in depth and breadth over time. Currently there are over 40 different application areas that cellomics is used in, including the analysis of 3-D cell models, angiogenesis, and cell-signalling. Originally a tool used by the pharmaceutical industry for screening, cellomics has now expanded into academia to better understand cell function in the context of the cell. Cellomics is used in both academic and industrial life-science research in areas such as cancer research, neuroscience research, drug discovery, consumer products safety, and toxicology; however, there are many more areas for which cellomics could provide a much deeper understanding of cellular function. Image analysis With HCA at its core, cello
https://en.wikipedia.org/wiki/Type%20B%20videotape
1-inch type B VTR (designated Type B by SMPTE) is a reel-to-reel analog recording video tape format developed by the Bosch Fernseh division of Bosch in Germany in 1976. The magnetic tape format became the broadcasting standard in continental Europe, but adoption was limited in the United States and United Kingdom, where the Type C videotape VTR met with greater success. Details The tape speed allowed 96 minutes on a large reel (later 120 minutes), and used 2 record/playback (R/P) heads on the drum rotating at 9,000 RPM with a 190-degree wrap around a very small head drum, recording 52 video lines per head segment. A single video field was recorded across 5 or 6 tracks on the tape, depending on the television standard. The format only allowed for play, rewind and fast forward. Video is recorded as an FM signal with a bandwidth of 5.5 MHz. Three longitudinal audio tracks are recorded on the tape as well: two audio and one linear timecode (LTC) track. BCN 50 VTRs were used at the 1980 Summer Olympics in Moscow. The format required an optional, and costly, digital framestore in addition to the normal analog timebase corrector to do any "trick-play" operations, such as slow motion/variable-speed playback, frame step play, and visible shuttle functions. This was because, unlike 1-inch type C which recorded one field per helical scan track on the tape, Type B segmented each field into 5 or 6 tracks according to whether it was a 525-line (NTSC) or 625-line (PAL) machine. The picture quality was excellent, and standard R/P machines, digital frame store machines, reel-to-reel portables, random access cart machines (for playback of short-form video material such as television commercials), and portable cart versions were marketed. Echo Science Corporation, a United States company, made units like the BCN 1 for the U.S. military for a short time in the 1970s. Echo Science models were the Pilot 1, Echo 460, and Pilot 260. Models introduced The BCR (BCR-40, BCR-50 and BCR-60) was a pre-BCN VTR, made jointly with Phil
https://en.wikipedia.org/wiki/The%20Bottle%20Imp
"The Bottle Imp" is an 1891 short story by the Scottish author Robert Louis Stevenson usually found in the short story collection Island Nights' Entertainments. It was first published in the New York Herald (February–March 1891) and Black and White London (March–April 1891). In it, the protagonist buys a bottle with an imp inside that grants wishes. However, the bottle is cursed; if the holder dies bearing it, his or her soul is forfeit to hell. Plot Keawe, a poor Native Hawaiian, buys a strange unbreakable bottle from a sad, elderly gentleman who credits the bottle with his fortune. He promises that an imp residing in the bottle will also grant Keawe his every desire. Of course, there is a catch. The bottle must be sold, for cash, at a loss, i.e. for less than its owner originally paid, and cannot be thrown or given away, or else it will magically return to him. All of these rules must be explained by each seller to each purchaser. If an owner of the bottle dies without having sold it in the prescribed manner, that person's soul will burn for eternity in Hell. The bottle was said to have been brought to Earth by the Devil and first purchased by Prester John for millions; it was owned by Napoleon and Captain James Cook and accounted for their great successes. By the beginning of the story the price has diminished to fifty dollars. Keawe buys the bottle and instantly tests it by wishing his money to be refunded, and by trying to sell it for more than he paid and abandoning it, to test if the story is true. When these all work as described, he realizes the bottle does indeed have unholy power. He wishes for his heart's desire: a big, fancy mansion on a landed estate, and finds his wish granted, but at a price: his beloved uncle and cousins have been killed in a boating accident, leaving Keawe sole heir to his uncle's fortune. Keawe is horrified, but uses the money to build his house. Having all he wants, and being happy, he explains the risks to a friend who b
https://en.wikipedia.org/wiki/Hilbert%27s%20twelfth%20problem
Hilbert's twelfth problem is the extension of the Kronecker–Weber theorem on abelian extensions of the rational numbers, to any base number field. It is one of the 23 mathematical Hilbert problems and asks for analogues of the roots of unity that generate a whole family of further number fields, analogously to the cyclotomic fields and their subfields. Leopold Kronecker described the complex multiplication issue as his liebster Jugendtraum, or "dearest dream of his youth", so the problem is also known as Kronecker's Jugendtraum. The classical theory of complex multiplication does this for the case of any imaginary quadratic field, by using modular functions and elliptic functions chosen with a particular period lattice related to the field in question. Goro Shimura extended this to CM fields. In the special case of totally real fields, Samit Dasgupta and Mahesh Kakde provided a construction of the maximal abelian extension of totally real fields using the Brumer–Stark conjecture. The general case of Hilbert's twelfth problem is still open. Description of the problem The fundamental problem of algebraic number theory is to describe the fields of algebraic numbers. The work of Galois made it clear that field extensions are controlled by certain groups, the Galois groups. The simplest situation, which is already at the boundary of what is well understood, is when the group in question is abelian. All quadratic extensions, obtained by adjoining the roots of a quadratic polynomial, are abelian, and their study was commenced by Gauss. Another type of abelian extension of the field Q of rational numbers is given by adjoining the nth roots of unity, resulting in the cyclotomic fields. Already Gauss had shown that, in fact, every quadratic field is contained in a larger cyclotomic field; explicit examples are given below. The Kronecker–Weber theorem shows that any finite abelian extension of Q is contained in a cyclotomic field. Kronecker's (and Hilbert's) question addresses the situation of a mor
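As noted above, every quadratic field embeds in a cyclotomic field; a standard pair of examples (well-known identities, not taken from this article):

```latex
\mathbb{Q}(\sqrt{-3}) = \mathbb{Q}(\zeta_3), \qquad
\mathbb{Q}(\sqrt{5}) \subset \mathbb{Q}(\zeta_5),
\quad\text{since } \zeta_3 = \tfrac{-1+\sqrt{-3}}{2}
\text{ and } \sqrt{5} = 1 + 2\bigl(\zeta_5 + \zeta_5^{4}\bigr).
```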
https://en.wikipedia.org/wiki/Mathemagician
A mathemagician is a mathematician who is also a magician. The term "mathemagic" is believed to have been introduced by Royal Vale Heath with his 1933 book "Mathemagic". The name "mathemagician" was probably first applied to Martin Gardner, but has since been used to describe many mathematician/magicians, including Arthur T. Benjamin, Persi Diaconis, and Colm Mulcahy. Diaconis has suggested that the reason so many mathematicians are magicians is that "inventing a magic trick and inventing a theorem are very similar activities." Mathemagician is a neologism, specifically a portmanteau, that combines mathematician and magician. A great number of self-working mentalism tricks rely on mathematical principles. Max Maven often utilizes this type of magic in his performance. The Mathemagician is the name of a character in the 1961 children's book The Phantom Tollbooth. He is the ruler of Digitopolis, the kingdom of mathematics. Notable mathemagicians Arthur T. Benjamin Jin Akiyama Persi Diaconis Richard Feynman Karl Fulves Martin Gardner Ronald Graham Royal Vale Heath Colm Mulcahy Raymond Smullyan W. W. Rouse Ball Alex Elmsley Further reading Diaconis, Persi & Graham, Ron. Magical Mathematics: The Mathematical Ideas That Animate Great Magic Tricks, Princeton University Press, 2012. Fulves, Karl. Self-working Number Magic, New York; London: Dover; Constable, 1983. Gardner, Martin. Mathematics, Magic and Mystery, Dover, 1956. Graham, Ron. Juggling Mathematics and Magic, University of California, San Diego. Teixeira, Ricardo & Park, Jang Woo. Mathemagics: A Magical Journey Through Advanced Mathematics, Connecting More Than 60 Magic Tricks to High-Level Math, World Scientific, 2020. ISBN 978-9811215308.
https://en.wikipedia.org/wiki/Homological%20mirror%20symmetry
Homological mirror symmetry is a mathematical conjecture made by Maxim Kontsevich. It seeks a systematic mathematical explanation for a phenomenon called mirror symmetry first observed by physicists studying string theory. History In an address to the 1994 International Congress of Mathematicians in Zürich, Kontsevich speculated that mirror symmetry for a pair of Calabi–Yau manifolds X and Y could be explained as an equivalence of a triangulated category constructed from the algebraic geometry of X (the derived category of coherent sheaves on X) and another triangulated category constructed from the symplectic geometry of Y (the derived Fukaya category). Edward Witten originally described the topological twisting of the N=(2,2) supersymmetric field theory into what he called the A and B model topological string theories. These models concern maps from Riemann surfaces into a fixed target—usually a Calabi–Yau manifold. Most of the mathematical predictions of mirror symmetry are embedded in the physical equivalence of the A-model on Y with the B-model on its mirror X. When the Riemann surfaces have empty boundary, they represent the worldsheets of closed strings. To cover the case of open strings, one must introduce boundary conditions to preserve the supersymmetry. In the A-model, these boundary conditions come in the form of Lagrangian submanifolds of Y with some additional structure (often called a brane structure). In the B-model, the boundary conditions come in the form of holomorphic (or algebraic) submanifolds of X with holomorphic (or algebraic) vector bundles on them. These are the objects one uses to build the relevant categories. They are often called A and B branes respectively. Morphisms in the categories are given by the massless spectrum of open strings stretching between two branes. The closed string A and B models only capture the so-called topological sector—a small portion of the full string theory. Similarly, the branes in these models are only topologic
https://en.wikipedia.org/wiki/Dip-pen%20nanolithography
Dip pen nanolithography (DPN) is a scanning probe lithography technique where an atomic force microscope (AFM) tip is used to create patterns directly on a range of substances with a variety of inks. A common example of this technique is the use of alkanethiolates to imprint onto a gold surface. This technique allows surface patterning on scales of under 100 nanometers. DPN is the nanotechnology analog of the dip pen (also called the quill pen), where the tip of an atomic force microscope cantilever acts as a "pen," which is coated with a chemical compound or mixture acting as an "ink," and put in contact with a substrate, the "paper." DPN enables direct deposition of nanoscale materials onto a substrate in a flexible manner. Recent advances have demonstrated massively parallel patterning using two-dimensional arrays of 55,000 tips. Applications of this technology currently range through chemistry, materials science, and the life sciences, and include such work as ultra high density biological nanoarrays, and additive photomask repair. Development The uncontrollable transfer of a molecular 'ink' from a coated AFM tip to a substrate was first reported by Jaschke and Butt in 1995, but they erroneously concluded that alkanethiols could not be transferred to gold substrates to form stable nanostructures. A research group at Northwestern University, US led by Chad Mirkin independently studied the process and determined that under the appropriate conditions, molecules could be transferred to a wide variety of surfaces to create stable chemically-adsorbed monolayers in a high resolution lithographic process they termed "DPN". Mirkin and his coworkers hold the patents on this process, and the patterning technique has expanded to include liquid "inks". Notably, "liquid inks" are governed by a very different deposition mechanism than "molecular inks". Deposition materials Molecular inks Molecular inks are typically co
https://en.wikipedia.org/wiki/Truth-value%20semantics
In formal semantics, truth-value semantics is an alternative to Tarskian semantics. It has been primarily championed by Ruth Barcan Marcus, H. Leblanc, and J. Michael Dunn and Nuel Belnap. It is also called the substitution interpretation (of the quantifiers) or substitutional quantification. The idea of these semantics is that a universal (respectively, existential) quantifier may be read as a conjunction (respectively, disjunction) of formulas in which constants replace the variables in the scope of the quantifier. For example, ∀xφ may be read (φ(a₁) ∧ φ(a₂) ∧ …) where a₁, a₂, … are individual constants replacing all occurrences of x in φ. The main difference between truth-value semantics and the standard semantics for predicate logic is that there are no domains for truth-value semantics. Only the truth clauses for atomic and for quantificational formulas differ from those of the standard semantics. Whereas in standard semantics atomic formulas like F(a) or R(a, b) are true if and only if (the referent of) a is a member of the extension of the predicate F, respectively, if and only if the pair (a, b) is a member of the extension of R, in truth-value semantics the truth-values of atomic formulas are basic. A universal (existential) formula is true if and only if all (some) substitution instances of it are true. Compare this with the standard semantics, which says that a universal (existential) formula is true if and only if for all (some) members of the domain, the formula holds for all (some) of them; for example, ∀xφ is true (under an interpretation) if and only if for all a in the domain D, φ(a/x) is true (where φ(a/x) is the result of substituting a for all occurrences of x in φ). (Here we are assuming that constants are names for themselves—i.e. they are also members of the domain.) Truth-value semantics is not without its problems. First, the strong completeness theorem and compactness fail. To see this consider the set {φ(a₁), φ(a₂), …} of all substitution instances. Clearly the formula ∀xφ is a logical consequence of the set, but it is not a consequence of any finite subset of it.
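A minimal sketch of substitutional evaluation (illustrative only; the constants and the truth table for atomic sentences are invented):

```python
# Truth-values of atomic sentences are basic -- here just a lookup table --
# and quantified formulas are evaluated through their substitution instances,
# with no domain of objects anywhere in sight.
constants = ["a1", "a2", "a3"]
atomic_truth = {("F", "a1"): True, ("F", "a2"): True, ("F", "a3"): False}

def forall(pred):
    return all(atomic_truth[(pred, c)] for c in constants)

def exists(pred):
    return any(atomic_truth[(pred, c)] for c in constants)

print(forall("F"))  # False: the instance F(a3) is false
print(exists("F"))  # True: the instance F(a1) is true
```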
https://en.wikipedia.org/wiki/Gel%20extraction
In molecular biology, gel extraction or gel isolation is a technique used to isolate a desired fragment of intact DNA from an agarose gel following agarose gel electrophoresis. After extraction, fragments of interest can be mixed, precipitated, and enzymatically ligated together in several simple steps. This process, usually performed on plasmids, is the basis for rudimentary genetic engineering. After DNA samples are run on an agarose gel, extraction involves four basic steps: identifying the fragments of interest, isolating the corresponding bands, isolating the DNA from those bands, and removing the accompanying salts and stain. To begin, UV light is shone on the gel in order to illuminate all the ethidium bromide-stained DNA. Care must be taken to avoid exposing the DNA to mutagenic radiation for longer than absolutely necessary. The desired band is identified and physically removed with a cover slip or razor blade. The removed slice of gel should contain the desired DNA inside. An alternative method, utilizing SYBR Safe DNA gel stain and blue-light illumination, avoids the DNA damage associated with ethidium bromide and UV light. Several strategies for isolating and cleaning the DNA fragment of interest exist. Spin Column Extraction Gel extraction kits are available from several major biotech manufacturers for a final cost of approximately 1–2 US$ per sample. Protocols included in these kits generally call for the dissolution of the gel-slice in 3 volumes of chaotropic agent at 50 °C, followed by application of the solution to a spin-column (the DNA remains in the column), a 70% ethanol wash (the DNA remains in the column, salt and impurities are washed out), and elution of the DNA in a small volume (30 µL) of water or buffer. Dialysis The gel fragment is placed in a dialysis tube that is permeable to fluids but impermeable to molecules at the size of DNA, thus preventing the DNA from passing through the membrane when soaked in TE buffer. An electric f
https://en.wikipedia.org/wiki/Snag%20%28ecology%29
In forest ecology, a snag refers to a standing, dead or dying tree, often missing a top or most of the smaller branches. In freshwater ecology it refers to trees, branches, and other pieces of naturally occurring wood found sunken in rivers and streams; it is also known as coarse woody debris. Snags provide habitat for a wide variety of wildlife but pose hazards to river navigation. When used in manufacturing, especially in Scandinavia, they are often called dead wood and in Finland, kelo wood. Forest snags Snags are an important structural component in forest communities, making up 10–20% of all trees present in old-growth tropical, temperate, and boreal forests. Snags and downed coarse woody debris represent a large portion of the woody biomass in a healthy forest. In temperate forests, snags provide critical habitat for more than 100 species of bird and mammal, and snags are often called 'wildlife trees' by foresters. Dead, decaying wood supports a rich community of decomposers like bacteria and fungi, insects, and other invertebrates. These organisms and their consumers, along with the structural complexity of cavities, hollows, and broken tops make snags important habitat for birds, bats, and small mammals, which in turn feed larger mammalian predators. Snags are optimal habitat for primary cavity nesters such as woodpeckers which create the majority of cavities used by secondary cavity users in forest ecosystems. Woodpeckers excavate cavities for more than 80 other species and the health of their populations relies on snags. Most snag-dependent birds and mammals are insectivorous and represent a major portion of the insectivorous forest fauna, and are important factors in controlling forest insect populations. There are many instances in which birds reduced outbreak populations of forest insects, such as woodpeckers affecting outbreaks of southern hardwood borers and Engelmann spruce beetles. Snag creation occurs naturally as trees die due to old age, di
https://en.wikipedia.org/wiki/Thomas%20Kailath
Thomas Kailath (born June 7, 1935) is an Indian-born American electrical engineer, information theorist, control engineer, entrepreneur and the Hitachi America Professor of Engineering, emeritus, at Stanford University. Professor Kailath has authored several books, including the well-known book Linear Systems, which ranks as one of the most referenced books in the field of linear systems. Kailath was elected as a member into the US National Academy of Engineering in 1984 for outstanding contributions in prediction, filtering, and signal processing, and for leadership in engineering. In 2012, Kailath was awarded the National Medal of Science, presented by President Barack Obama in 2014 for "transformative contributions to the fields of information and system science, for distinctive and sustained mentoring of young scholars, and for translation of scientific ideas into entrepreneurial ventures that have had a significant impact on industry." Kailath is listed as an ISI highly cited researcher and is generally recognized as one of the preeminent figures of twentieth-century electrical engineering. Biography Kailath was born in 1935 in Pune, Maharashtra, India, to a Malayalam-speaking Syrian Christian family named Chittoor. His parents hailed from Kerala. He studied at St. Vincent's High School, Pune, and received his bachelor's degree in engineering from the Government College of Engineering, University of Pune, in 1956. He received his master's degree in 1959 and his doctorate in 1961, both from the Massachusetts Institute of Technology (MIT). He was the first Indian-born student to receive a doctorate in electrical engineering from MIT. Kailath is Hitachi America Professor of Engineering, emeritus, at Stanford University, where he has supervised about 80 Ph.D. theses. Kailath's research work has encompassed linear systems, estimation and control theory, signal processing, information theory and semiconductor device fabrication. Kailath has been an Institute o
https://en.wikipedia.org/wiki/Empty%20domain
In first-order logic the empty domain is the empty set, having no members. In traditional and classical logic domains are required to be non-empty in order that certain theorems be valid. Interpretations with an empty domain are shown to be a trivial case by a convention originating at least in 1927 with Bernays and Schönfinkel (though possibly earlier) but oft-attributed to Quine 1951. The convention is to assign any formula beginning with a universal quantifier the value truth, while any formula beginning with an existential quantifier is assigned the value falsehood. This follows from the idea that existentially quantified statements have existential import (i.e. they imply the existence of something) while universally quantified statements do not. This interpretation reportedly stems from George Boole in the late 19th century but this is debatable. In modern model theory, it follows immediately from the truth conditions for quantified sentences: an existential quantification of the open formula φ is true in a model iff there is some element in the domain (of the model) that satisfies the formula; i.e. iff that element has the property denoted by the open formula. A universal quantification of an open formula φ is true in a model iff every element in the domain satisfies that formula. (Note that in the metalanguage, "everything that is such that X is such that Y" is interpreted as a universal generalization of the material conditional "if anything is such that X then it is such that Y". Also, the quantifiers are given their usual objectual readings, so that a positive existential statement has existential import, while a universal one does not.) An analogous case concerns the empty conjunction and the empty disjunction. The semantic clauses for conjunctions and disjunctions state, respectively, that a conjunction is true iff every conjunct is true, and a disjunction is true iff some disjunct is true. It is easy to see that the empty conjunction is then trivially (vacuously) true, and the empty disjunction trivially false. Logics whose theorems are valid in every
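The convention mirrors how empty iterables behave in many programming languages; for instance, in Python (an aside, not from the article):

```python
# A universal claim over an empty domain is vacuously true,
# an existential claim over it is false.
empty_domain = []

print(all(x > 0 for x in empty_domain))  # True: universal quantification
print(any(x > 0 for x in empty_domain))  # False: existential quantification
```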
https://en.wikipedia.org/wiki/Class-D%20amplifier
A class-D amplifier or switching amplifier is an electronic amplifier in which the amplifying devices (transistors, usually MOSFETs) operate as electronic switches, and not as linear gain devices as in other amplifiers. They operate by rapidly switching back and forth between the supply rails, using pulse-width modulation, pulse-density modulation, or related techniques to produce a pulse train output. This passes through a simple low-pass filter which blocks the high-frequency pulses and provides analog output current and voltage. Because they are always either in fully on or fully off modes, little energy is dissipated in the transistors and efficiency can exceed 90%. History The first class-D amplifier was invented by British scientist Alec Reeves in the 1950s and was first called by that name in 1955. The first commercial product was a kit module called the X-10 released by Sinclair Radionics in 1964. However, it had an output power of only 2.5 watts. The Sinclair X-20 in 1966 produced 20 watts but suffered from the inconsistencies and limitations of the germanium-based bipolar junction transistors available at the time. As a result, these early class-D amplifiers were impractical and unsuccessful. Practical class-D amplifiers were enabled by the development of silicon-based MOSFET (metal–oxide–semiconductor field-effect transistor) technology. In 1978, Sony introduced the TA-N88, the first class-D unit to employ power MOSFETs and a switched-mode power supply. There were subsequently rapid developments in MOSFET technology between 1979 and 1985. The availability of low-cost, fast-switching MOSFETs led to class-D amplifiers becoming successful in the mid-1980s. The first class-D amplifier-based integrated circuit was released by Tripath in 1996, and it saw widespread use. Basic operation Class-D amplifiers work by generating a train of rectangular pulses of fixed amplitude but varying width and separation. This modulation represents the amplitude variations
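A minimal numerical sketch of the pulse-width modulation described above (illustrative only; the sample rate, tone, and switching frequency are invented values):

```python
import numpy as np

fs = 200_000            # simulation sample rate, Hz
f_audio = 1_000         # audio tone, Hz
f_carrier = 20_000      # switching frequency, Hz

t = np.arange(0, 0.002, 1 / fs)
audio = 0.8 * np.sin(2 * np.pi * f_audio * t)
# Triangle carrier in [-1, 1]:
carrier = 2 * np.abs(2 * ((t * f_carrier) % 1) - 1) - 1

# Natural-sampling PWM: the output snaps to the positive or negative rail
# depending on whether the audio is above or below the carrier.
pwm = np.where(audio > carrier, 1.0, -1.0)

duty = np.mean(pwm > 0)
print(f"fraction of time at +rail: {duty:.2f}")  # ~0.5 for a symmetric tone
```

Low-pass filtering the `pwm` train would recover a scaled copy of `audio`, which is exactly the role of the output filter in the description above.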
https://en.wikipedia.org/wiki/Shell%20Eco-marathon
Shell Eco-marathon is a world-wide energy efficiency competition sponsored by Shell. Participants build automotive vehicles to achieve the highest possible fuel efficiency. There are two vehicle classes within Shell Eco-marathon: Prototype and UrbanConcept. There are three energy categories within Shell Eco-marathon: battery-electric, hydrogen fuel cell, and internal combustion engine (gasoline, ethanol, or diesel). Prizes are awarded separately for each vehicle class and energy category. The pinnacle of the competition is the Shell Eco-marathon Drivers' World Championship, where the most energy-efficient UrbanConcept vehicles compete in a race with a limited amount of energy. Shell Eco-marathon competitions are held around the world, with nine events as of 2018. The 2018 competition season includes events held in Singapore, California, Paris, London, Istanbul, Johannesburg, Rio de Janeiro, India, and China. Participants are students from various academic backgrounds, including university teams such as past finalists University of British Columbia, Duke University, University of Toronto, and University of California, Los Angeles. In 2018, over 5,000 students from over 700 universities in 52 countries participated in Shell Eco-marathon. The digital reach of Shell Eco-marathon is several million people. History In 1939, a group of Shell scientists based in a research laboratory in Wood River, Illinois, USA, had a friendly bet to see who could drive their own car furthest on one gallon of fuel. The winner managed a fuel economy of . A repeat of the challenge yielded dramatically improved results over the years: with a 1947 Studebaker in 1949, with a 1959 Fiat 600 in 1968, and with a 1959 Opel in 1973. The current record was set in 2005 by the PAC-Car II. The world record in Diesel efficiency was achieved by a team from the Universitat Politècnica de València (Polytechnic University of Valencia, Spain) in 2010, with 1,396.8 kilometres per litre. In contrast, t
https://en.wikipedia.org/wiki/Meibomian%20gland
Meibomian glands (also called tarsal glands, palpebral glands, and tarsoconjunctival glands) are sebaceous glands along the rims of the eyelid inside the tarsal plate. They produce meibum, an oily substance that prevents evaporation of the eye's tear film. Meibum prevents tears from spilling onto the cheek, traps them between the oiled edge and the eyeball, and makes the closed lids airtight. There are about 25 such glands on the upper eyelid, and 20 on the lower eyelid. Meibomian gland dysfunction is believed to be the most common cause of dry eye. It is also the cause of posterior blepharitis. History The glands were mentioned by Galen in 200 AD and were described in more detail by Heinrich Meibom (1638–1700), a German physician, in his work De Vasis Palpebrarum Novis Epistola in 1666. This work included a drawing with the basic characteristics of the glands. Anatomy Although the upper lid has a greater number and volume of meibomian glands than the lower lid, there is no consensus on whether this contributes more to tear-film stability. The glands do not have direct contact with eyelash follicles. The process of blinking releases meibum into the lid margin. Function Meibum Lipids Lipids are the major components of meibum (also known as "meibomian gland secretions"). The term "meibum" was originally introduced by Nicolaides et al. in 1981. The biochemical composition of meibum is extremely complex and very different from that of sebum. Lipids are universally recognized as major components of human and animal meibum. An update was published in 2009 on the composition of human meibum and on the structures of various positively identified meibomian lipids. Currently, the most sensitive and informative approach to lipidomic analysis of meibum is mass spectrometry, either with direct infusion or in combination with liquid chromatography. The lipids are the main component of the lipid layer of the tear film, preventing rapid evaporation and it is believed
https://en.wikipedia.org/wiki/Dynamic%20multipathing
In the field of computer data storage, dynamic multipathing (DMP) is a multipath I/O enhancement technique that balances input/output (I/O) across the many available paths from the computer to the storage device in order to improve performance and availability. The name was introduced with the Veritas Volume Manager software. The DMP switchover itself is effectively instantaneous, although the total failover time depends on how long the underlying disk driver retries the command before giving up. External links Veritas - DMP white paper Computer storage technologies
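As an illustration of the load-balancing idea (not Veritas code; the path names are hypothetical), the sketch below rotates I/O over healthy paths and skips a failed one, so "switching over" is simply picking the next live path from the rotation:

```python
from itertools import cycle

class MultipathDevice:
    """Illustrative round-robin multipath I/O scheduler (not Veritas DMP code)."""

    def __init__(self, paths):
        self.healthy = {p: True for p in paths}  # path name -> is healthy
        self._rotation = cycle(paths)

    def mark_failed(self, path):
        self.healthy[path] = False

    def next_path(self):
        # Within len(paths) steps of the rotation every path appears once,
        # so any surviving path is found; failover cost is just the skip.
        for _ in range(len(self.healthy)):
            p = next(self._rotation)
            if self.healthy[p]:
                return p
        raise IOError("all paths to the storage device have failed")

dev = MultipathDevice(["c1t0d0", "c2t0d0", "c3t0d0"])  # hypothetical path names
dev.mark_failed("c2t0d0")
print([dev.next_path() for _ in range(4)])  # alternates over the two live paths
```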
https://en.wikipedia.org/wiki/Vital%20rates
Vital rates refer to how fast vital statistics change in a population (usually measured per 1000 individuals). There are two categories of vital rates: crude rates and refined rates. Crude rates measure vital statistics across the general population (the overall change in births and deaths per 1000). Refined rates measure the change in vital statistics within a specific demographic (such as age, sex, or race). Marriage rates The national marriage rate in the US has fallen by almost 50% since 1972, to six people per 1000. According to Iran Index and the National Organization for Civil Registration of Iran, the Iranian divorce rate is at its highest recorded level since 1979, and divorce quotas were introduced to curb the rise. References Ecology
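The arithmetic behind both kinds of rate is the same events-per-population ratio scaled to 1000; only the scope of the counts changes. A minimal sketch with invented figures (the numbers below are not real statistics):

```python
def vital_rate(events, population, per=1000):
    """Events per `per` individuals (per 1000 by convention)."""
    return events / population * per

# Crude birth rate: all births over the whole population.
print(vital_rate(events=14_000, population=1_000_000))  # 14.0 per 1000

# Refined rate: births to mothers aged 20-29 over women aged 20-29 only.
print(vital_rate(events=9_000, population=90_000))      # 100.0 per 1000
```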
https://en.wikipedia.org/wiki/Flash%20pasteurization
Flash pasteurization, also called "high-temperature short-time" (HTST) processing, is a method of heat pasteurization for perishable beverages such as fruit and vegetable juices, beer, and wine, and for some dairy products such as milk. Compared with other pasteurization processes, it maintains color and flavor better, although some cheeses have been found to respond to the process in varying ways. Flash pasteurization is performed to kill spoilage microorganisms prior to filling containers, in order to make the products safer and to extend their shelf life compared with the unpasteurized foodstuff. For example, one manufacturer of flash-pasteurizing machinery gives the shelf life as "in excess of 12 months". It must be used in conjunction with sterile fill technology (similar to aseptic processing) to prevent post-pasteurization contamination. The liquid moves in a controlled, continuous flow while subjected to temperatures of 71.5 °C (160 °F) to 74 °C (165 °F) for about 15 to 30 seconds, followed by rapid cooling to between 4 °C (39.2 °F) and 5.5 °C (42 °F). The standard US protocol for flash pasteurization of milk, 71.7 °C (161 °F) for 15 seconds in order to kill Coxiella burnetii (the most heat-resistant pathogen found in raw milk), was introduced in 1933, and results in a 5-log (99.999%) or greater reduction in harmful bacteria. An early adopter of pasteurization was Tropicana Products, which has used the method since the 1950s. The juice company Odwalla switched from non-pasteurized to flash-pasteurized juices in 1996 after tainted apple juice containing E. coli O157:H7 sickened many children and killed one. References See also Ultra-high-temperature processing External links The Pasteurization of Beer – The New York Times Food preservation
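The "5-log reduction" figure is just a power of ten: each additional log divides the surviving population by ten. A quick check of that arithmetic, with an invented starting count:

```python
def surviving_fraction(log_reduction):
    # An n-log reduction leaves 10**-n of the original organisms alive.
    return 10 ** -log_reduction

initial_count = 1_000_000  # invented CFU count, for illustration only
for n in (1, 3, 5):
    killed_pct = (1 - surviving_fraction(n)) * 100
    survivors = initial_count * surviving_fraction(n)
    print(f"{n}-log: {killed_pct:.3f}% killed, {survivors:.0f} survive")
# 5-log: 99.999% killed, 10 survive out of 1,000,000
```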
https://en.wikipedia.org/wiki/F-algebra
In mathematics, specifically in category theory, F-algebras generalize the notion of algebraic structure. Rewriting the algebraic laws in terms of morphisms eliminates all references to quantified elements from the axioms, and these algebraic laws may then be glued together in terms of a single functor F, the signature. F-algebras can also be used to represent data structures used in programming, such as lists and trees. The main related concepts are initial F-algebras, which may serve to encapsulate the induction principle, and the dual construction, F-coalgebras. Definition If C is a category, and F : C → C is an endofunctor of C, then an F-algebra is a tuple (A, α), where A is an object of C and α is a C-morphism F(A) → A. The object A is called the carrier of the algebra. When it is permissible from context, algebras are often referred to by their carrier only instead of the tuple. A homomorphism from an F-algebra (A, α) to an F-algebra (B, β) is a C-morphism f : A → B such that f ∘ α = β ∘ F(f), according to the following commutative diagram: Equipped with these morphisms, F-algebras constitute a category. The dual construction is that of F-coalgebras, which are objects A together with a morphism α : A → F(A). Examples Groups Classically, a group is a set G with a group law m : G × G → G, with m(x, y) = x · y, satisfying three axioms: the existence of an identity element, the existence of an inverse for each element of the group, and associativity. To put this in a categorical framework, first define the identity and inverse as functions (morphisms of the set G) by e : 1 → G, with e(*) the identity element, and i : G → G, with i(x) = x⁻¹. Here 1 denotes the set with one element *, which allows one to identify elements x of G with morphisms x : 1 → G. It is then possible to write the axioms of a group in terms of functions (note how the existential quantifier is absent): m ∘ (m × id) = m ∘ (id × m), m ∘ (e × id) = pr₂ and m ∘ (id × e) = pr₁, m ∘ ⟨i, id⟩ = e ∘ ! and m ∘ ⟨id, i⟩ = e ∘ !, where pr₁ and pr₂ denote the projections, ⟨·, ·⟩ the pairing of morphisms, and ! : G → 1 the unique map to the one-element set. Then this can be expressed with commutative diagrams: Now use the coproduct (the disjoint union of sets) to glue the three morphisms into one: α = e + i + m, according to α : 1 + G + G × G → G. Thus a group is an F-algebra where F is the functor F(G) = 1 + G + G × G. However the reverse is not necessarily true. Some F-algeb
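As a minimal sketch (not from the article), the glued structure map α = e + i + m can be written in Python by modelling the coproduct 1 + G + G × G as a tagged union, with the integers under addition standing in for the group G:

```python
# alpha : 1 + G + G x G -> G, gluing unit, inverse, and multiplication.
# G is illustrated by the integers under addition (identity 0, inverse -x).

def alpha(tagged):
    tag, payload = tagged
    if tag == "unit":   # e : 1 -> G, the identity element
        return 0
    if tag == "inv":    # i : G -> G, the inverse
        return -payload
    if tag == "mul":    # m : G x G -> G, the group law
        x, y = payload
        return x + y
    raise ValueError(f"unknown summand tag: {tag}")

print(alpha(("unit", None)))   # 0
print(alpha(("inv", 5)))       # -5
print(alpha(("mul", (2, 3))))  # 5
```

The tags play the role of the coproduct injections: each injection of 1 + G + G × G is routed to the corresponding one of the three glued morphisms.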
https://en.wikipedia.org/wiki/Password-based%20cryptography
Password-based cryptography generally refers to two distinct classes of methods: Single-party methods Multi-party methods Single-party methods Some systems attempt to derive a cryptographic key directly from a password. However, such practice is generally ill-advised when there is a threat of brute-force attack. Techniques to mitigate such attacks include passphrases and iterated (deliberately slow) password-based key derivation functions such as PBKDF2 (RFC 2898). Multi-party methods Password-authenticated key agreement systems allow two or more parties that agree on a password (or password-related data) to derive shared keys without exposing the password or keys to network attack. Earlier generations of challenge–response authentication systems have also been used with passwords, but these have generally been subject to eavesdropping and/or brute-force attacks on the password. See also Password Passphrase Password-authenticated key agreement Cryptography
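For the single-party case, Python's standard library exposes PBKDF2 directly via hashlib.pbkdf2_hmac, so a short sketch of deliberately slow key derivation needs no third-party code; the salt size and iteration count below are illustrative choices, not a recommendation from the text:

```python
import hashlib, hmac, os

password = b"correct horse battery staple"
salt = os.urandom(16)     # fresh random salt per password
iterations = 600_000      # illustrative work factor; tune to your hardware

# Derive a 32-byte key; the iteration count makes brute force expensive.
key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
print(key.hex())

# Verification recomputes the derivation and compares in constant time.
candidate = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
assert hmac.compare_digest(key, candidate)
```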
https://en.wikipedia.org/wiki/Asset%20allocation
Asset allocation is the implementation of an investment strategy that attempts to balance risk versus reward by adjusting the percentage of each asset in an investment portfolio according to the investor's risk tolerance, goals and investment time frame. The focus is on the characteristics of the overall portfolio. Such a strategy contrasts with an approach that focuses on individual assets. Description Many financial experts argue that asset allocation is an important factor in determining returns for an investment portfolio. Asset allocation is based on the principle that different assets perform differently in different market and economic conditions. A fundamental justification for asset allocation is the notion that different asset classes offer returns that are not perfectly correlated, hence diversification reduces the overall risk in terms of the variability of returns for a given level of expected return. Asset diversification has been described as "the only free lunch you will find in the investment game". Academic research has painstakingly explained the importance and benefits of asset allocation and the problems of active management (see academic studies section below). Although the risk is reduced as long as correlations are not perfect, it is typically forecast (wholly or in part) based on statistical relationships (like correlation and variance) that existed over some past period. Expectations for return are often derived in the same way. Studies of these forecasting methods constitute an important direction of academic research. When such backward-looking approaches are used to forecast future returns or risks using the traditional mean-variance optimization approach to the asset allocation of modern portfolio theory (MPT), the strategy is, in fact, predicting future risks and returns based on history. As there is no guarantee that past relationships will continue in the future, this is one of the "weak links" in traditional asset allocation s
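The mean-variance bookkeeping described above reduces to two formulas: the portfolio's expected return is the weighted average of the asset returns, w·r, and its variance is wᵀΣw over the covariance matrix Σ. A minimal sketch with invented forecasts (illustration only, not investment advice):

```python
import numpy as np

weights = np.array([0.6, 0.3, 0.1])         # stocks, bonds, cash (illustrative)
exp_returns = np.array([0.07, 0.03, 0.01])  # invented backward-looking forecasts
cov = np.array([[0.0400, 0.0060, 0.0000],   # invented covariance matrix
                [0.0060, 0.0100, 0.0000],
                [0.0000, 0.0000, 0.0001]])

portfolio_return = weights @ exp_returns          # w . r
portfolio_vol = np.sqrt(weights @ cov @ weights)  # sqrt(w^T Sigma w)
print(f"expected return {portfolio_return:.2%}, volatility {portfolio_vol:.2%}")
# expected return 5.20%, volatility 13.21% -- below the 20% stock volatility,
# the diversification benefit of imperfect correlation.
```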
https://en.wikipedia.org/wiki/Homochirality
Homochirality is a uniformity of chirality, or handedness. Objects are chiral when they cannot be superposed on their mirror images. For example, the left and right hands of a human are approximately mirror images of each other but are not their own mirror images, so they are chiral. In biology, 19 of the 20 natural amino acids are homochiral, being L-chiral (left-handed), while sugars are D-chiral (right-handed). Homochirality can also refer to enantiopure substances in which all the constituents are the same enantiomer (a right-handed or left-handed version of an atom or molecule), but some sources discourage this use of the term. It is unclear whether homochirality has a purpose; however, it appears to be a form of information storage. One suggestion is that it reduces entropy barriers in the formation of large organized molecules. It has been experimentally verified that amino acids form large aggregates in larger abundance from enantiopure samples of the amino acid than from racemic (enantiomerically mixed) ones. It is not clear whether homochirality emerged before or after life, and many mechanisms for its origin have been proposed. Some of these models propose three distinct steps: mirror-symmetry breaking creates a minute enantiomeric imbalance, chiral amplification builds on this imbalance, and chiral transmission is the transfer of chirality from one set of molecules to another. In biology Amino acids are the building blocks of peptides and enzymes, while sugar-phosphate chains are the backbone of RNA and DNA. In biological organisms, amino acids appear almost exclusively in the left-handed form (L-amino acids) and sugars in the right-handed form (D-sugars). Since enzymes catalyze reactions, they enforce homochirality on a great variety of other chemicals, including hormones, toxins, fragrances and food flavors. Glycine is achiral, as are some other non-proteinogenic amino acids that are either achiral (such as dimethylglycine) or of the D enantiom
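The degree of homochirality in a mixture is conventionally quantified as the enantiomeric excess, ee = |R − S| / (R + S), which is 1 for an enantiopure sample and 0 for a racemic one. A small sketch with invented counts:

```python
def enantiomeric_excess(r, s):
    """Fractional excess of the majority enantiomer over a racemic mixture."""
    return abs(r - s) / (r + s)

print(enantiomeric_excess(100, 0))   # 1.0 -> enantiopure (homochiral)
print(enantiomeric_excess(50, 50))   # 0.0 -> racemic
print(enantiomeric_excess(75, 25))   # 0.5 -> partial chiral amplification
```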