https://en.wikipedia.org/wiki/List%20of%20mitosporic%20Ascomycota
The mitosporic Ascomycota are a heterogeneous group of ascomycete fungi whose common characteristic is the absence of a known sexual state (teleomorph); they are known only from their asexual (anamorphic) states, and many of the fungi pathogenic to humans belong to this group. Acremonium Acrodontium Alatospora Anguillospora Antennariella Anungitopsis Aphanocladium Bispora Brachyconidiella Calcarisporium Capnobotryella Cephaliophora Ceratocladium Chaetasbolisia Chaetomella Clathrosporium Colispora Coniosporium Corynespora Curvicladium Cytoplea Dactylaria Duddingtonia Eladia Endoconidioma Engyodontium Flagellospora Fonsecaea Geniculifera Glarea Gliocephalis Goniopila Gonytrichum Gyoerffyella Helminthosporium Hormococcus Humicola Hyphozyma Kabatina Kendrickiella Kloeckera Kumanasamuha Lecophagus Lemonniera Leptodontidium Limaciniaseta Lunulospora Macrophoma Macrophomina Madurella Microsphaeropsis Moniliella Myxocephala Nakataea Neoplaconema Noosia Ochroconis Oosporidium Phaeoisariopsis Phaeomoniella Phaeoramularia Phaeosclera Phaeoseptoria Phaeotheca Phaeotrichoconis Phialemonium Phoma Polycytella Pseudofusarium Pseudotaeniolina Raffaelea Readeriella Rhizopycnis Rhizosphaera Rhynchosporium Robillarda Saitoella Sarcopodium Sarocladium Scleroconidioma Sclerotium Scolecobasidium Scytalidium Sirococcus Spegazzinia Sphaerographium Spicellum Spirosphaera Stachybotrys Stanjemonium Symbiotaphrina Synchaetomella Termitaria Tetrachaetum Tetracladium Thermomyces Tilachlidium Tricellulortus Trichosporonoides Trichothecium Tricladium Tritirachium Tumularia Verticimonosporium Xenochalara Zalerion References https://www.uniprot.org/taxonomy/108599. Retrieved 29 November 2011. Ascomycota Biology-related lists
https://en.wikipedia.org/wiki/Drive%20bay
A drive bay is a standard-sized area for adding hardware to a computer. Most drive bays are fixed to the inside of a case, but some can be removed. Over the years since the introduction of the IBM PC, it and its compatibles have had many form factors of drive bays. Four form factors are in common use today, the 5.25-inch, 3.5-inch, 2.5-inch or 1.8-inch drive bays. These names do not refer to the width of the bay itself, but rather to the width of the disks used by the drives mounted in these bays. Form factors 8.0-inch 8.0-inch drive bays were found in early IBM computers, CP/M computers, and the TRS-80 Model II. They were high, wide, and approximately deep, and were used for hard disk drives and floppy disk drives. This form factor is obsolete. 5.25-inch 5.25-inch drive bays are divided into two height specifications, full-height and half-height. Full-height bays were found in old PCs in the early to mid-1980s. They were high, wide, and up to deep, used mainly for hard disk drives and floppy disk drives. This is the size of the internal (screwed) part of the bay, as the front side is actually . The difference between those widths and the name of the bay size is because it is named after the size of floppy that would fit in those drives, a 5.25-inch-wide square. Half-height drive bays are high by wide, and are the standard housing for CD and DVD drives in modern computers. They were sometimes used for other things in the past, including hard disk drives (roughly between 10 and 100 MB) and floppy disk drives. As the name indicates, two half-height devices can fit in one full-height bay. Often represented as 5.25-inch, these floppy disk drives are obsolete. The dimensions of a 5.25-inch floppy drive are specified in the SFF standard specifications which were incorporated into the EIA-741 "Specification for Small Form Factor 133.35 mm (5.25 in) Disk Drives" by the Electronic Industries Association (EIA). Dimensions of 5.25 optical drives are specifi
https://en.wikipedia.org/wiki/Sentence%20diagram
A sentence diagram is a pictorial representation of the grammatical structure of a sentence. The term "sentence diagram" is used more when teaching written language, where sentences are diagrammed. The model shows the relations between words and the nature of sentence structure and can be used as a tool to help recognize which potential sentences are actual sentences. History Most methods of diagramming in pedagogy are based on the work of Alonzo Reed and Brainerd Kellogg. Some teachers continue to use the Reed–Kellogg system in teaching grammar, but others have discouraged it in favor of more modern tree diagrams. Reed-Kellogg system Simple sentences in the Reed–Kellogg system are diagrammed according to these forms: The diagram of a simple sentence begins with a horizontal line called the base. The subject is written on the left, the predicate on the right, separated by a vertical bar that extends through the base. The predicate must contain a verb, and the verb either requires other sentence elements to complete the predicate, permits them to do so, or precludes them from doing so. The verb and its object, when present, are separated by a line that ends at the baseline. If the object is a direct object, the line is vertical. If the object is a predicate noun or adjective, the line looks like a backslash, \, sloping toward the subject. Modifiers of the subject, predicate, or object are placed below the baseline: Modifiers, such as adjectives (including articles) and adverbs, are placed on slanted lines below the word they modify. Prepositional phrases are also placed beneath the word they modify; the preposition goes on a slanted line and the slanted line leads to a horizontal line on which the object of the preposition is placed. These basic diagramming conventions are augmented for other types of sentence structures, e.g. for coordination and subordinate clauses. Constituency and dependency The connections to modern principles for constructing parse
https://en.wikipedia.org/wiki/Delay%20insensitive%20circuit
A delay-insensitive circuit is a type of asynchronous circuit which performs a digital logic operation often within a computing processor chip. Instead of using clock signals or other global control signals, the sequencing of computation in delay-insensitive circuit is determined by the data flow. Data flows from one circuit element to another using "handshakes", or sequences of voltage transitions to indicate readiness to receive data, or readiness to offer data. Typically, inputs of a circuit module will indicate their readiness to receive, which will be "acknowledged" by the connected output by sending data (encoded in such a way that the receiver can detect the validity directly), and once that data has been safely received, the receiver will explicitly acknowledge it, allowing the sender to remove the data, thus completing the handshake, and allowing another datum to be transmitted. In a delay-insensitive circuit, there is therefore no need to provide a clock signal to determine a starting time for a computation. Instead, the arrival of data to the input of a sub-circuit triggers the computation to start. Consequently, the next computation can be initiated immediately when the result of the first computation is completed. The main advantage of such circuits is their ability to optimize processing of activities that can take arbitrary periods of time depending on the data or requested function. An example of a process with a variable time for completion would be mathematical division or recovery of data where such data might be in a cache. The Delay-Insensitive (DI) class is the most robust of all asynchronous circuit delay models. It makes no assumptions on the delay of wires or gates. In this model all transitions on gates or wires must be acknowledged before transitioning again. This condition stops unseen transitions from occurring. In DI circuits any transition on an input to a gate must be seen on the output of the gate before a subsequent transitio
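As a rough illustration of the handshake idea described above, here is a toy Python sketch (added for this edition, not taken from the article) that walks one bit at a time through a four-phase, dual-rail handshake. The dual-rail encoding, the wire names, and the sequential simulation are all assumptions made for the example; real delay-insensitive circuits implement this with asynchronous logic, not sequential software.

```python
# Toy sketch (illustrative assumption, not from the article): a four-phase,
# dual-rail handshake between a sender and a receiver. Dual-rail coding lets
# the receiver detect data validity directly, and the explicit acknowledge
# completes the handshake before the next datum is offered.

EMPTY = (0, 0)                     # spacer: no data on the wires
ENCODE = {0: (1, 0), 1: (0, 1)}    # one wire per logic value

class Channel:
    def __init__(self):
        self.data = EMPTY          # dual-rail data wires, sender -> receiver
        self.ack = 0               # acknowledge wire, receiver -> sender

def send_bit(ch, bit, log):
    ch.data = ENCODE[bit]                       # 1. sender drives valid data
    log.append(f"send {bit} -> wires {ch.data}")
    assert ch.data != EMPTY                     # receiver sees validity directly
    received = 0 if ch.data == (1, 0) else 1
    ch.ack = 1                                  # 2. receiver acknowledges
    log.append(f"recv {received}, ack high")
    ch.data = EMPTY                             # 3. sender removes the data (spacer)
    log.append("spacer on wires")
    ch.ack = 0                                  # 4. receiver lowers ack; ready for next datum
    log.append("ack low, handshake complete")
    return received

if __name__ == "__main__":
    ch, log = Channel(), []
    out = [send_bit(ch, b, log) for b in (1, 0, 1, 1)]
    print(out)                 # [1, 0, 1, 1]
    print("\n".join(log))
```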
https://en.wikipedia.org/wiki/The%20Man%20Who%20Counted
The Man Who Counted (original Portuguese title: O Homem que Calculava) is a book on recreational mathematics and curious word problems by Brazilian writer Júlio César de Mello e Souza, published under the pen name Malba Tahan. Since its first publication in 1938, the book has been immensely popular in Brazil and abroad, not only among mathematics teachers but among the general public as well. The book has been published in many other languages, including Catalan, English (in the UK and in the US), German, Italian, and Spanish, and is recommended as a paradidactic source in many countries. It earned its author a prize from the Brazilian Literary Academy. Plot summary First published in Brazil in 1949, O Homem que Calculava is a series of tales in the style of the Arabian Nights, but revolving around mathematical puzzles and curiosities. The book is ostensibly a translation by Brazilian scholar Breno de Alencar Bianco of an original manuscript by Malba Tahan, a thirteenth-century Persian scholar of the Islamic Empire – both equally fictitious. The first two chapters tell how Hanak Tade Maia was traveling from Samarra to Baghdad when he met Beremiz Samir, a young lad from Khoy with amazing mathematical abilities. The traveler then invited Beremiz to come with him to Baghdad, where a man with his abilities will certainly find profitable employment. The rest of the book tells of various incidents that befell the two men along the road and in Baghdad. In all those events, Beremiz Samir uses his abilities with calculation like a magic wand to amaze and entertain people, settle disputes, and find wise and just solutions to seemingly unsolvable problems. In the first incident along their trip (chapter III), Beremiz settles a heated inheritance dispute between three brothers. Their father had left them 35 camels, of which 1/2 (17.5 camels) should go to his eldest son, 1/3 (11.666... camels) to the middle one, and 1/9 (3.888... camels) to the youngest. To solve the bro
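The excerpt above breaks off before the resolution; as an editorial note (a summary of the well-known puzzle, not a quotation from the book), the arithmetic runs as follows. Beremiz adds the narrator's camel to the herd, making 36, which all three fractions divide evenly: 36/2 + 36/3 + 36/9 = 18 + 12 + 4 = 34. Each brother therefore receives more than his nominal share (18 > 17.5, 12 > 11.67, 4 > 3.89), and two camels remain: the borrowed one is returned and the other is kept as payment. The trick works because 1/2 + 1/3 + 1/9 = 17/18 < 1, so the father's division never allocated the whole herd in the first place.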
https://en.wikipedia.org/wiki/MVCML
Multiple-valued current mode logic (MVCML) or current mode multiple-valued logic (CM-MVL) is a method of representing electronic logic levels in analog CMOS circuits. In MVCML, logic levels are represented by multiples of a base current Ibase, which is set to a certain value x. Thus, level 0 is represented by zero current, level 1 by a current of Ibase = x, level 2 by a current of 2·Ibase = 2x, and so on. References See also Many-valued logic Digital electronics
https://en.wikipedia.org/wiki/NORDUnet
NORDUnet is an international collaboration between the national research and education networks in the Nordic countries. Members The members of NORDUnet are: SUNET of Sweden UNINETT of Norway FUNET of Finland Forskningsnettet of Denmark RHnet of Iceland Network NORDUnet interconnects the Nordic national research and education networks and connects them to the worldwide network for research and education and to the general purpose Internet. NORDUnet provides its services by a combination of leased lines and Internet services provided by other international operators. NORDUnet has peering in multiple important internet exchange sites outside the Nordics, such as Amsterdam, Chicago, Frankfurt, London, Miami and New York. In addition to the basic Internet service NORDUnet operates information services and provides USENET NetNews and Multicast connectivity to the Nordic national networks. NORDUnet also coordinates the national networks' Computer Emergency Response Team (CERT) activities and the Nordic national networks' IPv6 activities - an area where NORDUnet has been active for years. NORDUnet is one of the partners, alongside Internet2, ESnet, SURFnet, CANARIE and GÉANT, piloting a 100G intercontinental connection between Europe and North America. History NORDUnet is the result of the NORDUNET programme (1986 to 1992) financed by the Nordic Council of Ministers, officially beginning operations in 1989. It was the first European NREN to embrace the TCP/IP technology and to connect to the National Science Foundation Network in the United States, providing open access for university students in member countries. Along with other early adopters of TCP/IP, particularly CERN, it encouraged the adoption of TCP/IP in Europe (see Protocol Wars). NORDUnet has only a few permanent employees. Most of the work is contracted to appropriate organisations in the Nordic area. Distinction The web site for NORDUnet, nordu.net, is the oldest active domain name. It was registered
https://en.wikipedia.org/wiki/Preferred%20number
In industrial design, preferred numbers (also called preferred values or preferred series) are standard guidelines for choosing exact product dimensions within a given set of constraints. Product developers must choose numerous lengths, distances, diameters, volumes, and other characteristic quantities. While all of these choices are constrained by considerations of functionality, usability, compatibility, safety or cost, there usually remains considerable leeway in the exact choice for many dimensions. Preferred numbers serve two purposes: Using them increases the probability of compatibility between objects designed at different times by different people. In other words, it is one tactic among many in standardization, whether within a company or within an industry, and it is usually desirable in industrial contexts (unless the goal is vendor lock-in or planned obsolescence) They are chosen such that when a product is manufactured in many different sizes, these will end up roughly equally spaced on a logarithmic scale. They therefore help to minimize the number of different sizes that need to be manufactured or kept in stock. Preferred numbers represent preferences of simple numbers (such as 1, 2, and 5) multiplied by the powers of a convenient basis, usually 10. Renard numbers In 1870 Charles Renard proposed a set of preferred numbers. His system was adopted in 1952 as international standard ISO 3. Renard's system divides the interval from 1 to 10 into 5, 10, 20, or 40 steps, leading to the R5, R10, R20 and R40 scales, respectively. The factor between two consecutive numbers in a Renard series is approximately constant (before rounding), namely the 5th, 10th, 20th, or 40th root of 10 (approximately 1.58, 1.26, 1.12, and 1.06, respectively), which leads to a geometric sequence. This way, the maximum relative error is minimized if an arbitrary number is replaced by the nearest Renard number multiplied by the appropriate power of 10. Example: 1.0, 1.6, 2.5, 4
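A small Python sketch (an illustration added here, not from the article) of how the Renard series are generated: each series is a geometric progression whose ratio is the 5th, 10th, 20th or 40th root of 10. The printed values are the raw geometric steps rounded for display; the officially tabulated ISO 3 values apply their own rounding conventions, so treat this as an approximation of the idea.

```python
# Illustrative sketch: generate Renard preferred-number series R5/R10/R20/R40.
# Values are raw geometric steps rounded to 2 decimals for display; the ISO 3
# tables round slightly differently (e.g. R5 is listed as 1.0, 1.6, 2.5, 4.0, 6.3).

def renard(steps, decades=1):
    """`steps` values per decade, geometrically spaced between 1 and 10^decades."""
    ratio = 10 ** (1 / steps)                      # 5th/10th/20th/40th root of 10
    return [round(ratio ** k, 2) for k in range(steps * decades + 1)]

print(renard(5))    # [1.0, 1.58, 2.51, 3.98, 6.31, 10.0]  -> tabulated as 1.0, 1.6, 2.5, 4.0, 6.3, 10
print(renard(10))   # steps of about 1.26 per value: 1.0, 1.26, 1.58, 2.0, 2.51, ...
```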
https://en.wikipedia.org/wiki/List%20of%20honey%20bee%20pheromones
The pheromones of the honey bee are mixtures of chemical substances released by individual bees into the hive or environment that cause changes in the physiology and behaviour of other bees. Introduction Honey bees (Apis mellifera) have one of the most complex pheromonal communication systems found in nature, possessing 15 known glands that produce an array of compounds. These chemical messengers are secreted by a queen, drone, worker bee or laying worker bee to elicit a response in other bees. The chemical messages are received by the bee's antennae and other body parts. They are produced as a volatile or non-volatile liquid and transmitted by direct contact as a liquid or vapor. Honey bee pheromones can be grouped into releaser pheromones which temporarily affect the recipient's behavior, and primer pheromones which have a long-term effect on the physiology of the recipient. Releaser pheromones trigger an almost immediate behavioral response from the receiving bee. Under certain conditions a pheromone can act as both a releaser and primer pheromone. The pheromones may either be single chemicals or a complex mixture of numerous chemicals in different percentages. Types of honey bee pheromones Alarm pheromone Two main alarm pheromones have been identified in honeybee workers. One is released by the Koschevnikov gland, near the sting shaft, and consists of more than 40 chemical compounds, including isopentyl acetate (IPA), butyl acetate, 1-hexanol, n-butanol, 1-octanol, hexyl acetate, octyl acetate, n-pentyl acetate and 2-nonanol. These chemical compounds have low molecular weights, are highly volatile, and appear to be the least specific of all pheromones. Alarm pheromones are released when a bee stings another animal, and attract other bees to the location and cause the other bees to behave defensively, i.e. sting or charge. The alarm pheromone emitted when a bee stings another animal smells like bananas. Smoke can mask the bees' alarm pheromone. The other al
https://en.wikipedia.org/wiki/Vertebrate%20zoology
Vertebrate zoology is the biological discipline that consists of the study of vertebrate animals, i.e., animals with a backbone, such as fish, amphibians, reptiles, birds and mammals. Many natural history museums have departments named Vertebrate Zoology. In some cases whole museums bear this name, e.g. the Museum of Vertebrate Zoology at the University of California, Berkeley. Subdivisions This subdivision of zoology has many further subdivisions, including: Ichthyology - the study of fishes. Mammalogy - the study of mammals. Chiropterology - the study of bats. Primatology - the study of primates. Ornithology - the study of birds. Herpetology - the study of reptiles. Batrachology - the study of amphibians. These divisions are sometimes further divided into more specific specialties. References External links Vertebrate Zoology (journal published by the Museum of Zoology Dresden, Germany, Senckenberg Natural History Collections) Subfields of zoology
https://en.wikipedia.org/wiki/Spelunker%20%28video%20game%29
Spelunker is a 1983 platform video game developed by Timothy G. Martin of MicroGraphic Image. It is set in a colossal cave, with the player starting at the cave's entrance at the top, and the objective is to get to the treasure at the bottom. Originally released by MicroGraphic Image for the Atari 8-bit family in 1983, the game was later ported to the Commodore 64 and re-released by Broderbund in 1984, with European publishing rights licensed to Ariolasoft. It was released in arcades in 1985, on the Nintendo Entertainment System on December 6, 1985 in Japan and September 1987 in North America, and on the MSX in 1986. A sequel was released in arcades in 1986 called Spelunker II: 23 no Kagi, and a different sequel for the NES on September 18, 1987 called Spelunker II: Yūsha e no Chōsen, both by Irem and in Japan only. Gameplay The player must walk and jump through increasingly challenging parts of the cave, all while working with a finite supply of fresh air, which can be replenished at various points. The cave's hazards include bats, which drop deadly guano on the player; and a ghost haunting the cave, randomly appearing to take the player to the shadow world. The player character can send a blast of air to push the ghost away. However, this renders the player's character immobile for a few seconds, thus vulnerable to other dangers and further depleting their air supply. Objects to collect include sticks of dynamite, flares, and keys. Precise positioning and jumping are key factors in successfully completing the game. The cave is divided into six levels. Although the levels connect seamlessly to each other, forming one large map, the game clearly signals a level change at certain points by showing the name of the next level and giving the player a bonus, consisting of an extra life and a varying number of points. Once the player completes all six levels, a new cave is started with the same layout but with increased difficulty. There are six caves total. While
https://en.wikipedia.org/wiki/Holographic%20Versatile%20Disc
The Holographic Versatile Disc (HVD) is an optical disc technology that was expected to store up to several terabytes of data on an optical disc 10 cm or 12 cm in diameter. Its development commenced in April 2004, but it never reached the market due to lack of funding. The company responsible for HVD went bankrupt in 2010. The reduced radius reduces cost and materials used. It employs a technique known as collinear holography, whereby blue-green and red laser beams are collimated into a single beam. The blue-green laser reads data encoded as laser interference fringes from a holographic layer near the top of the disc. A red laser is used as the reference beam to read servoinformation from a regular CD-style aluminium layer near the bottom. Servoinformation is used to monitor the position of the read head over the disc, similar to the head, track, and sector information on a conventional hard disk drive. On a CD or DVD this servoinformation is interspersed among the data. A dichroic mirror layer between the holographic data and the servo data reflects the blue-green laser while letting the red laser pass through. This prevents interference from refraction of the blue-green laser off the servo data pits and is an advance over past holographic storage media, which either experienced too much interference, or lacked the servo data entirely, making them incompatible with current CD and DVD drive technology. Standards for 100 GB read-only holographic discs and 200 GB recordable cartridges were published by ECMA in 2007, but no holographic disc product has ever appeared in the market. A number of release dates were announced, all of which have since passed, likely due to the high cost of the drives and discs themselves, lack of compatibility with existing or new standards, and competition from the more established Blu-ray optical disc format and from video streaming. Technology Current optical storage saves one bit per pulse, and the HVD alliance hoped to improve this efficiency with capabilities of around 60,000 bits pe
https://en.wikipedia.org/wiki/Commodity%20computing
Commodity computing (also known as commodity cluster computing) involves the use of large numbers of already-available computing components for parallel computing, to get the greatest amount of useful computation at low cost. It is computing done in commodity computers as opposed to in high-cost superminicomputers or in boutique computers. Commodity computers are computer systems - manufactured by multiple vendors - incorporating components based on open standards. Characteristics Such systems are said to be based on standardized computer components, since the standardization process promotes lower costs and less differentiation among vendors' products. Standardization and decreased differentiation lower the switching or exit cost from any given vendor, increasing purchasers' leverage and preventing lock-in. A governing principle of commodity computing is that it is preferable to have more low-performance, low-cost hardware working in parallel (scalar computing) (e.g. AMD x86 CISC) than to have fewer high-performance, high-cost hardware items (e.g. IBM POWER7 or Sun-Oracle's SPARC RISC). At some point, the number of discrete systems in a cluster will be greater than the mean time between failures (MTBF) for any hardware platform, no matter how reliable, so fault tolerance must be built into the controlling software. Purchases should be optimized on cost-per-unit-of-performance, not just on absolute performance-per-CPU at any cost. History The mid-1960s to early 1980s The first computers were large, expensive and proprietary. The move towards commodity computing began when DEC introduced the PDP-8 in 1965. This was a computer that was relatively small and inexpensive enough that a department could purchase one without convening a meeting of the board of directors. The entire minicomputer industry sprang up to supply the demand for 'small' computers like the PDP-8. Unfortunately, each of the many different brands of minicomputers had to stand on its own bec
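To put the fault-tolerance argument above in concrete terms (the figures are illustrative, not taken from the source): if each node in a cluster has a mean time between failures of about 3 years, roughly 26,000 hours, then a cluster of 1,000 such nodes experiences on average one node failure about every 26,000 / 1,000 = 26 hours. At that scale, hardware failure is a routine operating condition rather than an exception, which is why recovery has to be handled automatically by the controlling software.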
https://en.wikipedia.org/wiki/Repairable%20component
A repairable component is a component of a finished good that can be designated for repair. Overview Repairable components tend to be more expensive than non-repairable components (consumables). This is because for items that are inexpensive to procure, it is often more cost-effective not to maintain (repair) them. Repairs can be expensive; costs include the labor to remove the broken or worn-out part (described as unserviceable), the cost of replacing it with a working (serviceable) item from inventory, and the cost of the actual repair itself, including possible shipping costs to a repair vendor. At maintenance facilities, such as might be found at Main Operating Bases, inventory is controlled by site personnel. Maintenance personnel will formally "turn-in" unserviceable items for repair, receiving a funding credit in the process. These "turn-ins" will be fixed, reconditioned, or replaced. Maintenance personnel can also be issued repaired or new items back from inventory. These processes are assisted by automated logistics management systems. In the Navy/Marine Corps supply system repairable items are identified with certain two character cognizance symbols (COGs) and one character Material Control Codes (MCCs). In United States Marine Corps Aviation, repairables are managed by the Repairables Management Division of the Aviation Supply Department. In the United States Air Force, repairables can be identified by their ERRC designation or SMR code. See also Level of Repair Analysis Repairability Military logistics
https://en.wikipedia.org/wiki/Root%20test
In mathematics, the root test is a criterion for the convergence (a convergence test) of an infinite series. It depends on the quantity lim sup_{n→∞} |a_n|^{1/n}, where a_n are the terms of the series, and states that the series converges absolutely if this quantity is less than one, but diverges if it is greater than one. It is particularly useful in connection with power series. Root test explanation The root test was developed first by Augustin-Louis Cauchy who published it in his textbook Cours d'analyse (1821). Thus, it is sometimes known as the Cauchy root test or Cauchy's radical test. For a series ∑ a_n, the root test uses the number C = lim sup_{n→∞} |a_n|^{1/n}, where "lim sup" denotes the limit superior, possibly +∞. Note that if lim_{n→∞} |a_n|^{1/n} converges then it equals C and may be used in the root test instead. The root test states that: if C < 1 then the series converges absolutely, if C > 1 then the series diverges, if C = 1 and the limit approaches strictly from above then the series diverges, otherwise the test is inconclusive (the series may diverge, converge absolutely or converge conditionally). There are some series for which C = 1 and the series converges, e.g. ∑ 1/n^2, and there are others for which C = 1 and the series diverges, e.g. ∑ 1/n. Application to power series This test can be used with a power series ∑ c_n (z − p)^n, where the coefficients c_n and the center p are complex numbers and the argument z is a complex variable. The terms of this series would then be given by a_n = c_n(z − p)^n. One then applies the root test to the a_n as above. Note that sometimes a series like this is called a power series "around p", because the radius of convergence is the radius R of the largest interval or disc centred at p such that the series will converge for all points z strictly in the interior (convergence on the boundary of the interval or disc generally has to be checked separately). A corollary of the root test applied to such a power series is the Cauchy–Hadamard theorem: the radius of convergence is exactly R = 1 / lim sup_{n→∞} |c_n|^{1/n}, taking care that we really m
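The following small Python sketch (an illustration added here, not part of the article) estimates the root-test quantity |a_n|^{1/n} numerically at a large n; the logarithm is used to avoid overflow and underflow. Numbers close to 1 stay inconclusive no matter what the numerics suggest, exactly as the test itself says.

```python
# Numerical illustration of the root test (a sketch, not a proof).
import math

def root_test_estimate(log_abs_term, n):
    """|a_n|^(1/n), computed from log|a_n| to avoid overflow/underflow."""
    return math.exp(log_abs_term(n) / n)

n = 1_000_000
print(root_test_estimate(lambda n: n * math.log(0.5), n))                 # 0.5  : a_n = (1/2)^n, C < 1, converges
print(root_test_estimate(lambda n: n * math.log(2) - 3 * math.log(n), n)) # ~2.0 : a_n = 2^n / n^3, C > 1, diverges
print(root_test_estimate(lambda n: -2 * math.log(n), n))                  # ~1.0 : a_n = 1 / n^2, C = 1, inconclusive
```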
https://en.wikipedia.org/wiki/Linear%20complementarity%20problem
In mathematical optimization theory, the linear complementarity problem (LCP) arises frequently in computational mechanics and encompasses the well-known quadratic programming as a special case. It was proposed by Cottle and Dantzig in 1968. Formulation Given a real matrix M and vector q, the linear complementarity problem LCP(q, M) seeks vectors z and w which satisfy the following constraints: w = Mz + q, w ≥ 0, z ≥ 0 (that is, each component of these two vectors is non-negative), and z^T w = 0, or equivalently ∑_i w_i z_i = 0. This is the complementarity condition, since it implies that, for all i, at most one of z_i and w_i can be positive. A sufficient condition for existence and uniqueness of a solution to this problem is that M be symmetric positive-definite. If M is such that LCP(q, M) has a solution for every q, then M is a Q-matrix. If M is such that LCP(q, M) has a unique solution for every q, then M is a P-matrix. Both of these characterizations are sufficient and necessary. The vector w is a slack variable, and so is generally discarded after z is found. As such, the problem can also be formulated as: Mz + q ≥ 0, z ≥ 0, z^T (Mz + q) = 0 (the complementarity condition). Convex quadratic-minimization: Minimum conditions Finding a solution to the linear complementarity problem is associated with minimizing the quadratic function f(z) = z^T (Mz + q) subject to the constraints Mz + q ≥ 0 and z ≥ 0. These constraints ensure that f is always non-negative. The minimum of f is 0 at z if and only if z solves the linear complementarity problem. If M is positive definite, any algorithm for solving (strictly) convex QPs can solve the LCP. Specially designed basis-exchange pivoting algorithms, such as Lemke's algorithm and a variant of the simplex algorithm of Dantzig, have been used for decades. Besides having polynomial time complexity, interior-point methods are also effective in practice. Also, a quadratic-programming problem stated as minimize f(x) = c^T x + (1/2) x^T Q x subject to Ax ≥ b as well as x ≥ 0, with Q symmetric, is equivalent to solving an LCP whose data q and M are assembled from c, b, Q and A. This is because the Karush–Kuhn–Tucker conditions of the QP problem
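An added Python sketch for intuition (this is brute-force enumeration, not Lemke's algorithm or any other method named above): it solves a small LCP(q, M) by guessing which components of z may be positive, solving the resulting linear system, and checking the sign and complementarity conditions. The matrix and vector are illustrative choices.

```python
# Brute-force LCP sketch: feasible only for a handful of dimensions.
import itertools
import numpy as np

def solve_lcp_bruteforce(M, q, tol=1e-9):
    n = len(q)
    for support in itertools.product([0, 1], repeat=n):
        z = np.zeros(n)
        idx = [i for i in range(n) if support[i]]
        if idx:
            # on the support, w_i = (M z + q)_i must be 0  ->  solve M[idx, idx] z_idx = -q_idx
            try:
                z[idx] = np.linalg.solve(M[np.ix_(idx, idx)], -q[idx])
            except np.linalg.LinAlgError:
                continue
        w = M @ z + q
        if (z >= -tol).all() and (w >= -tol).all() and abs(z @ w) <= tol:
            return z, w
    return None

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric positive definite -> unique solution
q = np.array([-5.0, -6.0])
z, w = solve_lcp_bruteforce(M, q)
print(z, w)                               # z ~ [1.333, 2.333], w ~ [0, 0]
```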
https://en.wikipedia.org/wiki/Counting%20problem%20%28complexity%29
In computational complexity theory and computability theory, a counting problem is a type of computational problem. If R is a search problem then c_R(x) = |{y : R(x, y)}| is the corresponding counting function and #R = {(x, y) : y ≤ c_R(x)} denotes the corresponding decision problem. Note that c_R is a search problem while #R is a decision problem; however, c_R can be C Cook-reduced to #R (for appropriate C) using a binary search (the reason #R is defined the way it is, rather than being the graph of c_R, is to make this binary search possible). Counting complexity class If NX is a complexity class associated with non-deterministic machines then #X = {#R | R ∈ NX} is the set of counting problems associated with each search problem in NX. In particular, #P is the class of counting problems associated with NP search problems. Just as NP has NP-complete problems via many-one reductions, #P has complete problems via parsimonious reductions, problem transformations that preserve the number of solutions. See also GapP External links Computational problems
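A toy Python sketch of the binary-search reduction mentioned above (an added illustration): the decision oracle is read as "does x have at least k solutions?", and binary search over k recovers the count. The relation used here, counting the divisors of x, and the simulated oracle are illustrative choices, not taken from the article.

```python
# Recovering the counting function c_R(x) with binary search over a #R-style oracle.

def c_R(x):                       # toy counting function: number of divisors of x
    return sum(1 for y in range(1, x + 1) if x % y == 0)

def sharp_R_oracle(x, k):         # decision problem: does x have at least k solutions?
    return c_R(x) >= k

def count_via_binary_search(x, upper_bound):
    lo, hi = 0, upper_bound       # invariant: c_R(x) lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if sharp_R_oracle(x, mid):
            lo = mid              # at least mid solutions exist
        else:
            hi = mid - 1          # fewer than mid solutions exist
    return lo

print(count_via_binary_search(36, upper_bound=36))   # 9 (divisors of 36), using ~log2(36) oracle calls
print(c_R(36))                                        # 9, matches
```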
https://en.wikipedia.org/wiki/Promise%20problem
In computational complexity theory, a promise problem is a generalization of a decision problem where the input is promised to belong to a particular subset of all possible inputs. Unlike decision problems, the yes instances (the inputs for which an algorithm must return yes) and no instances do not exhaust the set of all inputs. Intuitively, the algorithm has been promised that the input does indeed belong to the set of yes instances or no instances. There may be inputs which are neither yes nor no. If such an input is given to an algorithm for solving a promise problem, the algorithm is allowed to output anything, and may even not halt. Formal definition A decision problem can be associated with a language L ⊆ {0,1}*, where the problem is to accept all inputs in L and reject all inputs not in L. For a promise problem, there are two languages, L_YES and L_NO, which must be disjoint, which means L_YES ∩ L_NO = ∅, such that all the inputs in L_YES are to be accepted and all inputs in L_NO are to be rejected. The set L_YES ∪ L_NO is called the promise. There are no requirements on the output if the input does not belong to the promise. If the promise equals {0,1}*, then this is also a decision problem, and the promise is said to be trivial. Examples Many natural problems are actually promise problems. For instance, consider the following problem: Given a directed acyclic graph, determine if the graph has a path of length 10. The yes instances are directed acyclic graphs with a path of length 10, whereas the no instances are directed acyclic graphs with no path of length 10. The promise is the set of directed acyclic graphs. In this example, the promise is easy to check. In particular, it is very easy to check if a given graph is cyclic. However, the promised property could be difficult to evaluate. For instance, consider the problem "Given a Hamiltonian graph, determine if the graph has a cycle of size 4." Now the promise is NP-hard to evaluate, yet the promise problem is easy to solve since checking for cycles of size 4 can be don
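A toy Python sketch of the first example above (added for illustration; the graph encoding and the convention of measuring path length in edges are assumptions made here): under the promise that the input is acyclic, the longest path follows from simple memoized recursion, so "is there a path of length 10?" is easy to answer.

```python
# Longest path in a DAG via memoized recursion; easy *under the promise* of acyclicity.
from functools import lru_cache

def has_path_of_length(adj, target_len):
    @lru_cache(maxsize=None)
    def longest_from(v):                  # longest path (counted in edges) starting at v
        return max((1 + longest_from(w) for w in adj.get(v, [])), default=0)
    return any(longest_from(v) >= target_len for v in adj)

chain = {i: [i + 1] for i in range(12)}   # 0 -> 1 -> ... -> 12, a 12-edge path
print(has_path_of_length(chain, 10))                 # True
print(has_path_of_length({0: [1], 1: []}, 10))       # False

# If the promise is violated (the input has a cycle), this sketch gives no guarantees:
# the recursion never bottoms out and Python raises RecursionError -- exactly the
# freedom a promise problem leaves to the algorithm.
```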
https://en.wikipedia.org/wiki/Novell%20S-Net
S-Net (aka ShareNet) was a network operating system and the set of network protocols it used to talk to client machines on the network. Released by Novell in 1983, the S-Net operating system was an entirely proprietary operating system written for the Motorola 68000 processor. It used a star network topology. S-Net has also been called NetWare 68, with the 68 denoting the 68000 processor. It was superseded by NetWare 86, which was written for the Intel 8086 processor, in 1985. References Network operating systems S-Net Proprietary operating systems
https://en.wikipedia.org/wiki/Non-linear%20sigma%20model
In quantum field theory, a nonlinear σ model describes a scalar field Σ which takes on values in a nonlinear manifold called the target manifold T. The non-linear σ-model was introduced by Gell-Mann and Lévy (1960), who named it after a field corresponding to a spinless meson called σ in their model. This article deals primarily with the quantization of the non-linear sigma model; please refer to the base article on the sigma model for general definitions and classical (non-quantum) formulations and results. Description The target manifold T is equipped with a Riemannian metric g. Σ is a differentiable map from Minkowski space M (or some other space) to T. The Lagrangian density in contemporary chiral form is given by L = (1/2) g(∂^μ Σ, ∂_μ Σ) − V(Σ), where we have used a + − − − metric signature, the partial derivative ∂Σ is given by a section of the jet bundle of T×M, and V is the potential. In the coordinate notation, with the coordinates Σ^a, a = 1, ..., n, where n is the dimension of T, this reads L = (1/2) g_{ab}(Σ) ∂^μ Σ^a ∂_μ Σ^b − V(Σ). In more than two dimensions, nonlinear σ models contain a dimensionful coupling constant and are thus not perturbatively renormalizable. Nevertheless, they exhibit a non-trivial ultraviolet fixed point of the renormalization group both in the lattice formulation and in the double expansion originally proposed by Kenneth G. Wilson. In both approaches, the non-trivial renormalization-group fixed point found for the O(n)-symmetric model is seen to simply describe, in dimensions greater than two, the critical point separating the ordered from the disordered phase. In addition, the improved lattice or quantum field theory predictions can then be compared to laboratory experiments on critical phenomena, since the O(n) model describes physical Heisenberg ferromagnets and related systems. The above results point therefore to a failure of naive perturbation theory in describing correctly the physical behavior of the O(n)-symmetric model above two dimensions, and to the need for more sophisticated non-perturbative methods such as the lattice f
https://en.wikipedia.org/wiki/Kraft%E2%80%93McMillan%20inequality
In coding theory, the Kraft–McMillan inequality gives a necessary and sufficient condition for the existence of a prefix code (in Leon G. Kraft's version) or a uniquely decodable code (in Brockway McMillan's version) for a given set of codeword lengths. Its applications to prefix codes and trees often find use in computer science and information theory. Kraft's inequality was published in Kraft (1949). However, Kraft's paper discusses only prefix codes, and attributes the analysis leading to the inequality to Raymond Redheffer. The result was independently discovered in McMillan (1956). McMillan proves the result for the general case of uniquely decodable codes, and attributes the version for prefix codes to a spoken observation in 1955 by Joseph Leo Doob. Applications and intuitions Kraft's inequality limits the lengths of codewords in a prefix code: if one takes an exponential of the length of each valid codeword, the resulting set of values must look like a probability mass function, that is, it must have total measure less than or equal to one. Kraft's inequality can be thought of in terms of a constrained budget to be spent on codewords, with shorter codewords being more expensive. Among the useful properties following from the inequality are the following statements: If Kraft's inequality holds with strict inequality, the code has some redundancy. If Kraft's inequality holds with equality, the code in question is a complete code. If Kraft's inequality does not hold, the code is not uniquely decodable. For every uniquely decodable code, there exists a prefix code with the same length distribution. Formal statement Let each source symbol from the alphabet S = {s_1, s_2, ..., s_n} be encoded into a uniquely decodable code over an alphabet of size r with codeword lengths ℓ_1, ℓ_2, ..., ℓ_n. Then ∑_{i=1}^{n} r^(−ℓ_i) ≤ 1. Conversely, for a given set of natural numbers ℓ_1, ℓ_2, ..., ℓ_n satisfying the above inequality, there exists a uniquely decodable code over an alphabet of size r with those codeword lengths. Example: binary trees Any binary tree can be viewed
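An illustrative Python sketch (added here, not from the article): it checks the Kraft sum for a set of binary codeword lengths and, when the inequality holds, builds one prefix code with those lengths using the canonical assignment. The particular lengths and the resulting codewords are just one valid choice.

```python
# Kraft inequality check plus canonical prefix-code construction (binary alphabet).
from fractions import Fraction

def kraft_sum(lengths, r=2):
    return sum(Fraction(1, r ** l) for l in lengths)

def prefix_code_from_lengths(lengths):
    assert kraft_sum(lengths) <= 1, "Kraft inequality violated: no prefix code exists"
    code, next_val, prev_len = [], 0, 0
    for l in sorted(lengths):
        next_val <<= (l - prev_len)            # extend the current codeword to the new length
        code.append(format(next_val, f"0{l}b"))
        next_val += 1                          # skip the whole subtree just used
        prev_len = l
    return code

lengths = [1, 2, 3, 3]
print(kraft_sum(lengths))                      # 1/2 + 1/4 + 1/8 + 1/8 = 1 (a complete code)
print(prefix_code_from_lengths(lengths))       # ['0', '10', '110', '111']
```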
https://en.wikipedia.org/wiki/Search%20problem
In the mathematics of computational complexity theory, computability theory, and decision theory, a search problem is a type of computational problem represented by a binary relation. Intuitively, the problem consists in finding structure "y" in object "x". An algorithm is said to solve the problem if, whenever at least one corresponding structure exists, one occurrence of this structure is produced as output; otherwise, the algorithm stops with an appropriate output ("not found" or some similar message). Every search problem also has a corresponding decision problem, namely L(R) = {x | there exists some y such that R(x, y)}. This definition may be generalized to n-ary relations using any suitable encoding which allows multiple strings to be compressed into one string (for instance by listing them consecutively with a delimiter). A relation R can be viewed as a search problem, and a Turing machine which calculates R is also said to solve it. More formally, if R is a binary relation such that field(R) ⊆ Γ+ and T is a Turing machine, then T calculates R if: If x is such that there is some y such that R(x, y) then T accepts x with output z such that R(x, z) (there may be multiple y, and T need only find one of them) If x is such that there is no y such that R(x, y) then T rejects x (Note that the graph of a partial function is a binary relation, and if T calculates a partial function then there is at most one possible output.) Such problems occur very frequently in graph theory and combinatorial optimization, for example, where searching for structures such as particular matchings, optional cliques, particular stable sets, etc. are subjects of interest. Definition A search problem is often characterized by: A set of states A start state A goal state or goal test: a boolean function which tells us whether a given state is a goal state A successor function: a mapping from a state to a set of new states Objective Find a solution when not given an algorithm to solve a problem, but only a specificati
https://en.wikipedia.org/wiki/Metabolic%20ecology
Metabolic ecology is a field of ecology that treats constraints on metabolic organization as key to understanding almost all life processes. Its main focus is on the metabolism of individuals, the intra- and inter-specific patterns that emerge from it, and the evolutionary perspective. Two main metabolic theories that have been applied in ecology are Kooijman's Dynamic energy budget (DEB) theory and the West, Brown, and Enquist (WBE) theory of ecology. Both theories have an individual-based metabolic underpinning, but have fundamentally different assumptions. Models of an individual's metabolism follow energy uptake and allocation, and can focus on mechanisms and constraints of energy transport (transport models), or on dynamic use of stored metabolites (energy budget models). References Ecology Metabolism
https://en.wikipedia.org/wiki/J%20Strother%20Moore
J Strother Moore (his first name is the alphabetic character "J" – not an abbreviated "J.") is a computer scientist. He is a co-developer of the Boyer–Moore string-search algorithm, Boyer–Moore majority vote algorithm, and the Boyer–Moore automated theorem prover, Nqthm. He made pioneering contributions to structure sharing including the piece table data structure and early logic programming. An example of the workings of the Boyer–Moore string search algorithm is given in Moore's website. Moore received his Bachelor of Science (BS) in mathematics at Massachusetts Institute of Technology in 1970 and his Doctor of Philosophy (Ph.D.) in computational logic at the University of Edinburgh in Scotland in 1973. In addition, Moore is a co-author of the ACL2 automated theorem prover and its predecessors including Nqthm, for which he received, with Robert S. Boyer and Matt Kaufmann, the 2005 ACM Software System Award. He and others used ACL2 to prove the correctness of the floating point division operations of the AMD K5 microprocessor in the wake of the Pentium FDIV bug. For his contributions to automated deduction, Moore received the 1999 Herbrand Award with Robert S. Boyer, and in 2006 he was inducted as a Fellow in the Association for Computing Machinery. Moore was elected a member of the National Academy of Engineering in 2007 for contributions to automated reasoning about computing systems. He is also a Fellow of the AAAI. He was elected a Corresponding Fellow of the Royal Society of Edinburgh in 2015. He is currently the Admiral B.R. Inman Centennial Chair in Computing Theory at The University of Texas at Austin, and was chair of the Department of Computer Science from 2001–2009. Before joining the Department of Computer Sciences as the chair, he formed a company, Computational Logic Inc., along with others including his close friend at the University of Texas at Austin and one of the highly regarded professors in the field of automated reasoning, Robert S. Boyer.
https://en.wikipedia.org/wiki/AMD%20K9
The AMD K9 represents a microarchitecture by AMD designed to replace the K8 processors, featuring dual-core processing. Development K9 appears originally to have been an ambitious 8-issue-per-clock-cycle redesign of the K7 or K8 processor core. At one point, K9 was the Greyhound project at AMD, and was worked on by the K7 design team beginning in early 2001, with tape-out revision A0 scheduled for 2003. The L1 instruction cache was said to hold decoded instructions, essentially the same as Intel's trace cache. The existence of a massively parallel CPU design concept for heavily multithreaded applications, planned as a successor to K8, has also been revealed. This was reportedly canceled in the conceptualization phase, after about 6 months' work. At one time K9 was the internal codename for the dual-core AMD64 processors marketed under the Athlon 64 X2 brand; however, AMD has distanced itself from the old K series naming convention, and now seeks to talk about a portfolio of products tailored to different markets. References K09 AMD microarchitectures X86 microarchitectures
https://en.wikipedia.org/wiki/Variable-gain%20amplifier
A variable-gain amplifier (VGA) or voltage-controlled amplifier (VCA) is an electronic amplifier that varies its gain depending on a control voltage (often abbreviated CV). VCAs have many applications, including audio level compression, synthesizers and amplitude modulation. A crude example is a typical inverting op-amp configuration with a light-dependent resistor (LDR) in the feedback loop. The gain of the amplifier then depends on the light falling on the LDR, which can be provided by an LED (an optocoupler). The gain of the amplifier is then controllable by the current through the LED. This is similar to the circuits used in optical audio compressors. A voltage-controlled amplifier can be realised by first creating a voltage-controlled resistor (VCR), which is used to set the amplifier gain. The VCR is one of the numerous interesting circuit elements that can be produced by using a JFET (junction field-effect transistor) with simple biasing. VCRs manufactured in this way can be obtained as discrete devices, e.g. VCR2N. Another type of circuit uses operational transconductance amplifiers. In audio applications logarithmic gain control is used to emulate how the ear hears loudness. David E. Blackmer's dbx 202 VCA, based on the Blackmer gain cell, was among the first successful implementations of a logarithmic VCA. Analog multipliers are a type of VCA designed to have accurate linear characteristics; the two inputs are identical and often work in all four voltage quadrants, unlike most other VCAs. In sound mixing consoles Some mixing consoles come equipped with VCAs in each channel for console automation. The fader, which traditionally controls the audio signal directly, becomes a DC control voltage for the VCA. The maximum voltage available to a fader can be controlled by one or more master faders called VCA groups. The VCA master fader then controls the overall level of all of the channels assigned to it. Typically VCA groups are used to control various part
https://en.wikipedia.org/wiki/Kaik%C5%8D%20ROV
Kaikō was a remotely operated underwater vehicle (ROV) built by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) for exploration of the deep sea. Kaikō was the second of only five vessels ever to reach the bottom of the Challenger Deep, as of 2019. Between 1995 and 2003, this 10.6 ton unmanned submersible conducted more than 250 dives, collecting 350 biological species (including 180 different bacteria), some of which could prove to be useful in medical and industrial applications. On 29 May 2003, Kaikō was lost at sea off the coast of Shikoku Island during Typhoon Chan-Hom, when a secondary cable connecting it to its launcher at the ocean surface broke. Another ROV, Kaikō7000II, served as the replacement for Kaikō until 2007. At that time, JAMSTEC researchers began sea trials for the permanent replacement ROV, ABISMO (Automatic Bottom Inspection and Sampling Mobile). Challenger Deep Bathymetric data obtained during the course of the expedition (December 1872 – May 1876) of the British Royal Navy survey ship HMS Challenger enabled scientists to draw maps, which provided a rough outline of certain major submarine terrain features, such as the edge of the continental shelves and the Mid-Atlantic Ridge. This discontinuous set of data points was obtained by the simple technique of taking soundings by lowering long lines from the ship to the seabed. Among the many discoveries of the Challenger expedition was the identification of the Challenger Deep. This depression, located at the southern end of the Mariana Trench near the Mariana Islands group, is the deepest surveyed point of the World Ocean. The Challenger scientists made the first recordings of its depth on 23 March 1875 at station 225. The reported depth was 4,475 fathoms (8184 meters) based on two separate soundings. On 23 January 1960, Don Walsh and Jacques Piccard were the first men to descend to the bottom of the Challenger Deep in the Trieste bathyscaphe. Though the initial report claimed the bathyscaphe h
https://en.wikipedia.org/wiki/Pixar%20Image%20Computer
The Pixar Image Computer is a graphics computer originally developed by the Graphics Group, the computer division of Lucasfilm, which was later renamed Pixar. Aimed at commercial and scientific high-end visualization markets, such as medicine, geophysics and meteorology, the original machine was advanced for its time, but sold poorly. History Creation When George Lucas recruited people from NYIT in 1979 to start their Computer Division, the group was set to develop digital optical printing, digital audio, digital non-linear editing and computer graphics. Computer graphics quality was just not good enough due to technological limitations at the time. The team then decided to solve the problem by starting a hardware project, building what they would call the Pixar Image Computer, a machine with more computational power that was able to produce images with higher resolution. Availability About three months after their acquisition by Steve Jobs on February 3, 1986, the computer became commercially available for the first time, and was aimed at commercial and scientific high-end visualization markets, such as medical imaging, geophysics, and meteorology. The machine sold for $135,000, but also required a $35,000 workstation from Sun Microsystems or Silicon Graphics (in total, ). The original machine was well ahead of its time and generated many single sales, for labs and research. However, the system did not sell in quantity. In 1987, Pixar redesigned the machine to create the P-II second generation machine, which sold for $30,000. In an attempt to gain a foothold in the medical market, Pixar donated ten machines to leading hospitals and sent marketing people to doctors' conventions. However, this had little effect on sales, despite the machine's ability to render CAT scan data in 3D to show perfect images of the human body. Pixar did get a contract with the manufacturer of CAT Scanners, which sold 30 machines. By 1988 Pixar had only sold 120 Pixar Image Computers.
https://en.wikipedia.org/wiki/Animal%20embryonic%20development
In developmental biology, animal embryonic development, also known as animal embryogenesis, is the developmental stage of an animal embryo. Embryonic development starts with the fertilization of an egg cell (ovum) by a sperm cell, (spermatozoon). Once fertilized, the ovum becomes a single diploid cell known as a zygote. The zygote undergoes mitotic divisions with no significant growth (a process known as cleavage) and cellular differentiation, leading to development of a multicellular embryo after passing through an organizational checkpoint during mid-embryogenesis. In mammals, the term refers chiefly to the early stages of prenatal development, whereas the terms fetus and fetal development describe later stages. The main stages of animal embryonic development are as follows: The zygote undergoes a series of cell divisions (called cleavage) to form a structure called a morula. The morula develops into a structure called a blastula through a process called blastulation. The blastula develops into a structure called a gastrula through a process called gastrulation. The gastrula then undergoes further development, including the formation of organs (organogenesis). The embryo then transforms into the next stage of development, the nature of which varies between different animal species (examples of possible next stages include a fetus and a larva). Fertilization and the zygote The egg cell is generally asymmetric, having an animal pole (future ectoderm). It is covered with protective envelopes, with different layers. The first envelope – the one in contact with the membrane of the egg – is made of glycoproteins and is known as the vitelline membrane (zona pellucida in mammals). Different taxa show different cellular and acellular envelopes englobing the vitelline membrane. Fertilization is the fusion of gametes to produce a new organism. In animals, the process involves a sperm fusing with an ovum, which eventually leads to the development of an embryo. Depen
https://en.wikipedia.org/wiki/Concrete%20Roman
Concrete Roman is a slab serif typeface designed by Donald Knuth using his METAFONT program. It was intended to accompany the Euler mathematical font which it partners in Knuth's book Concrete Mathematics. It has a darker appearance than its more famous sibling, Computer Modern. Some favour it for use on the computer screen because of this, as the thinner strokes of Computer Modern can make it hard to read at low resolutions. External links Computer Modern family, for general use select .otf fonts Typefaces designed by Donald Knuth Slab serif typefaces TeX
https://en.wikipedia.org/wiki/Heterothallism
Heterothallic species have sexes that reside in different individuals. The term is applied particularly to distinguish heterothallic fungi, which require two compatible partners to produce sexual spores, from homothallic ones, which are capable of sexual reproduction from a single organism. In heterothallic fungi, two different individuals contribute nuclei to form a zygote. Examples of heterothallism are included for Saccharomyces cerevisiae, Aspergillus fumigatus, Aspergillus flavus, Penicillium marneffei and Neurospora crassa. The heterothallic life cycle of N. crassa is given in some detail, since similar life cycles are present in other heterothallic fungi. Life cycle of Saccharomyces cerevisiae The yeast Saccharomyces cerevisiae is heterothallic. This means that each yeast cell is of a certain mating type and can only mate with a cell of the other mating type. During vegetative growth that ordinarily occurs when nutrients are abundant, S. cerevisiae reproduces by mitosis as either haploid or diploid cells. However, when starved, diploid cells undergo meiosis to form haploid spores. Mating occurs when haploid cells of opposite mating type, MATa and MATα, come into contact. Ruderfer et al. pointed out that such contacts are frequent between closely related yeast cells for two reasons. The first is that cells of opposite mating type are present together in the same ascus, the sac that contains the tetrad of cells directly produced by a single meiosis, and these cells can mate with each other. The second reason is that haploid cells of one mating type, upon cell division, often produce cells of the opposite mating type with which they may mate. Katz Ezov et al. presented evidence that in natural S. cerevisiae populations clonal reproduction and a type of “self-fertilization” (in the form of intratetrad mating) predominate. Ruderfer et al. analyzed the ancestry of natural S. cerevisiae strains and concluded that outcrossing occurs only about once every
https://en.wikipedia.org/wiki/Minkowski%27s%20question-mark%20function
In mathematics, Minkowski's question-mark function, denoted ?(x), is a function with unusual fractal properties, defined by Hermann Minkowski in 1904. It maps quadratic irrational numbers to rational numbers on the unit interval, via an expression relating the continued fraction expansions of the quadratics to the binary expansions of the rationals, given by Arnaud Denjoy in 1938. It also maps rational numbers to dyadic rationals, as can be seen by a recursive definition closely related to the Stern–Brocot tree. Definition and intuition One way to define the question-mark function involves the correspondence between two different ways of representing fractional numbers using finite or infinite binary sequences. Most familiarly, a string of 0's and 1's with a single point mark ".", like "11.001001000011111..." can be interpreted as the binary representation of a number. In this case this number is 2 + 1 + 1/8 + 1/64 + ..., the start of the binary expansion of π. There is a different way of interpreting the same sequence, however, using continued fractions. Interpreting the fractional part "0.001001000011111..." as a binary number in the same way, replace each consecutive block of 0's or 1's by its run length (or, for the first block of zeroes, its run length plus one), in this case generating the sequence 3, 1, 2, 1, 4, 5, .... Then, use this sequence as the coefficients of a continued fraction: [0; 3, 1, 2, 1, 4, 5, ...]. The question-mark function reverses this process: it translates the continued-fraction of a given real number into a run-length encoded binary sequence, and then reinterprets that sequence as a binary number. For instance, for the example above, . To define this formally, if an irrational number x has the (non-terminating) continued-fraction representation x = [a_0; a_1, a_2, ...] then the value of the question-mark function on x is defined as the value of the infinite series ?(x) = a_0 + 2 ∑_{k=1}^{∞} (−1)^{k+1} / 2^(a_1 + a_2 + ... + a_k). In the same way, if a rational number x has the terminating continued-fraction representation [a_0; a_1, a_2, ..., a_m] then the value of the question-mark function on x is a finite sum, ?(x) = a_0 + 2 ∑_{k=1}^{m} (−1)^{k+1} / 2^(a_1 + a_2 + ... + a_k). Analogously to the way the question-mark function r
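A small Python sketch (added for illustration) that evaluates ?(x) by computing a truncated continued-fraction expansion in floating point and summing the series given above. Known sample values come out as expected, e.g. ?(1/3) = 1/4 and ?((√5 − 1)/2) = 2/3; the floating-point truncation is an approximation, not an exact evaluation.

```python
# Evaluate Minkowski's ?(x) from a (truncated) continued-fraction expansion.
from math import floor

def continued_fraction(x, max_terms=30):
    terms = []
    for _ in range(max_terms):
        a = floor(x)
        terms.append(a)
        frac = x - a
        if frac < 1e-12:           # terminate on (numerically) exact rationals
            break
        x = 1.0 / frac
    return terms

def question_mark(x, max_terms=30):
    a = continued_fraction(x, max_terms)
    result, prefix_sum, sign = float(a[0]), 0, 1.0
    for ak in a[1:]:
        prefix_sum += ak
        result += sign * 2.0 / (2.0 ** prefix_sum)   # the series ?(x) = a0 + 2*sum (-1)^(k+1) 2^-(a1+...+ak)
        sign = -sign
    return result

print(question_mark(0.5))                   # 0.5   (rationals map to dyadic rationals)
print(question_mark(1 / 3))                 # 0.25
print(question_mark((5 ** 0.5 - 1) / 2))    # ~0.6667, i.e. 2/3 for the golden-ratio conjugate
```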
https://en.wikipedia.org/wiki/Glossary%20of%20game%20theory
Game theory is the branch of mathematics in which games are studied: that is, models describing human behaviour. This is a glossary of some terms of the subject. Definitions of a game Notational conventions Real numbers R. The set of players N. Strategy space Σ = Σ_1 × ... × Σ_N, where Player i's strategy space Σ_i is the space of all possible ways in which player i can play the game. A strategy for player i, σ_i, is an element of Σ_i. Complements σ_{-i}, an element of Σ_{-i} = Σ_1 × ... × Σ_{i-1} × Σ_{i+1} × ... × Σ_N, is a tuple of strategies for all players other than i. Outcome space Γ is in most textbooks identical to Σ. Payoffs π, a vector in R^N describing how much gain (money, pleasure, etc.) the players are allocated by the end of the game. Normal form game A game in normal form is a function π: Σ_1 × ... × Σ_N → R^N. Given the tuple of strategies chosen by the players, one is given an allocation of payments (given as real numbers). A further generalization can be achieved by splitting the game into a composition of two functions: the outcome function of the game, Σ → Γ (some authors call this function "the game form"), and the allocation of payoffs (or preferences) to players, Γ → R^N, for each outcome of the game. Extensive form game This is given by a tree, where at each vertex of the tree a different player has the choice of choosing an edge. The outcome set of an extensive form game is usually the set of tree leaves. Cooperative game A game in which players are allowed to form coalitions (and to enforce coalitionary discipline). A cooperative game is given by stating a value for every coalition: ν: 2^N → R. It is always assumed that the empty coalition gains nil. Solution concepts for cooperative games usually assume that the players are forming the grand coalition N, whose value ν(N) is then divided among the players to give an allocation. Simple game A simple game is a simplified form of a cooperative game, where the possible gain is assumed to be either '0' or '1'. A simple game is a couple (N, W), where W is the list of "winning" coalitions, capable of gaining the loot ('1
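As a concrete instance of the normal-form definition above (an added illustration; the payoff numbers are a conventional textbook choice for the Prisoner's Dilemma, not taken from the glossary), here is a two-player game written in Python as a function from strategy tuples to payoff vectors.

```python
# A game in normal form: a map from strategy tuples to payoff vectors (one payoff per player).
STRATEGIES = ("cooperate", "defect")

def payoff(s1, s2):
    table = {
        ("cooperate", "cooperate"): (-1, -1),
        ("cooperate", "defect"):    (-3,  0),
        ("defect",    "cooperate"): ( 0, -3),
        ("defect",    "defect"):    (-2, -2),
    }
    return table[(s1, s2)]

# enumerate the whole strategy space and its payoff allocations
for s1 in STRATEGIES:
    for s2 in STRATEGIES:
        print((s1, s2), "->", payoff(s1, s2))
```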
https://en.wikipedia.org/wiki/Polar%20motion
Polar motion of the Earth is the motion of the Earth's rotational axis relative to its crust. This is measured with respect to a reference frame in which the solid Earth is fixed (a so-called Earth-centered, Earth-fixed or ECEF reference frame). This variation is a few meters on the surface of the Earth. Analysis Polar motion is defined relative to a conventionally defined reference axis, the CIO (Conventional International Origin), being the pole's average location over the year 1900. It consists of three major components: a free oscillation called Chandler wobble with a period of about 435 days, an annual oscillation, and an irregular drift in the direction of the 80th meridian west, which has lately been less extremely west. Causes The slow drift, about 20 m since 1900, is partly due to motions in the Earth's core and mantle, and partly to the redistribution of water mass as the Greenland ice sheet melts, and to isostatic rebound, i.e. the slow rise of land that was formerly burdened with ice sheets or glaciers. The drift is roughly along the 80th meridian west. Since about 2000, the pole has found a less extreme drift, which is roughly along the central meridian. This less dramatically westward drift of motion is attributed to the global scale mass transport between the oceans and the continents. Major earthquakes cause abrupt polar motion by altering the volume distribution of the Earth's solid mass. These shifts are quite small in magnitude relative to the long-term core/mantle and isostatic rebound components of polar motion. Principle In the absence of external torques, the vector of the angular momentum M of a rotating system remains constant and is directed toward a fixed point in space. If the earth were perfectly symmetrical and rigid, M would remain aligned with its axis of symmetry, which would also be its axis of rotation. In the case of the Earth, it is almost identical with its axis of rotation, with the discrepancy due to shifts of mass on the
https://en.wikipedia.org/wiki/Jacobi%20triple%20product
In mathematics, the Jacobi triple product is the mathematical identity: for complex numbers x and y, with |x| < 1 and y ≠ 0. It was introduced by in his work Fundamenta Nova Theoriae Functionum Ellipticarum. The Jacobi triple product identity is the Macdonald identity for the affine root system of type A1, and is the Weyl denominator formula for the corresponding affine Kac–Moody algebra. Properties The basis of Jacobi's proof relies on Euler's pentagonal number theorem, which is itself a specific case of the Jacobi Triple Product Identity. Let and . Then we have The Jacobi Triple Product also allows the Jacobi theta function to be written as an infinite product as follows: Let and Then the Jacobi theta function can be written in the form Using the Jacobi Triple Product Identity we can then write the theta function as the product There are many different notations used to express the Jacobi triple product. It takes on a concise form when expressed in terms of q-Pochhammer symbols: where is the infinite q-Pochhammer symbol. It enjoys a particularly elegant form when expressed in terms of the Ramanujan theta function. For it can be written as Proof Let Substituting for and multiplying the new terms out gives Since is meromorphic for , it has a Laurent series which satisfies so that and hence Evaluating Showing that is technical. One way is to set and show both the numerator and the denominator of are weight 1/2 modular under , since they are also 1-periodic and bounded on the upper half plane the quotient has to be constant so that . Other proofs A different proof is given by G. E. Andrews based on two identities of Euler. For the analytic case, see Apostol. References Peter J. Cameron, Combinatorics: Topics, Techniques, Algorithms, (1994) Cambridge University Press, Elliptic functions Theta functions Mathematical identities Theorems in number theory Infinite products
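A quick numerical check of the identity is easy to set up. The sketch below (Python) uses the standard statement of the triple product, prod_{m>=1} (1 - x^(2m)) (1 + x^(2m-1) y^2) (1 + x^(2m-1) y^(-2)) = sum_n x^(n^2) y^(2n), since the formulas themselves are stripped from this text; the sample values and truncation depths are arbitrary:

    # Truncated check of the Jacobi triple product for |x| < 1 and y != 0.
    x, y = 0.3, 1.7
    M, N = 50, 50                       # truncation depths (chosen arbitrarily)

    prod = 1.0
    for m in range(1, M + 1):
        prod *= (1 - x**(2*m)) * (1 + x**(2*m - 1) * y**2) * (1 + x**(2*m - 1) / y**2)

    s = sum(x**(n*n) * y**(2*n) for n in range(-N, N + 1))

    print(prod, s)                      # the two values agree to many decimal places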
https://en.wikipedia.org/wiki/Plant%20virus
Plant viruses are viruses that affect plants. Like all other viruses, plant viruses are obligate intracellular parasites that do not have the molecular machinery to replicate without a host. Plant viruses can be pathogenic to vascular plants ("higher plants"). Most plant viruses are rod-shaped, with protein discs forming a tube surrounding the viral genome; isometric particles are another common structure. They rarely have an envelope. The great majority have an RNA genome, which is usually small and single stranded (ss), but some viruses have double-stranded (ds) RNA, ssDNA or dsDNA genomes. Although plant viruses are not as well understood as their animal counterparts, one plant virus has become very recognizable: tobacco mosaic virus (TMV), the first virus to be discovered. This and other viruses cause an estimated US$60 billion loss in crop yields worldwide each year. Plant viruses are grouped into 73 genera and 49 families. However, these figures relate only to cultivated plants, which represent only a tiny fraction of the total number of plant species. Viruses in wild plants have not been well-studied, but the interactions between wild plants and their viruses often do not appear to cause disease in the host plants. To transmit from one plant to another and from one plant cell to another, plant viruses must use strategies that are usually different from animal viruses. Most plants do not move, and so plant-to-plant transmission usually involves vectors (such as insects). Plant cells are surrounded by solid cell walls, therefore transport through plasmodesmata is the preferred path for virions to move between plant cells. Plants have specialized mechanisms for transporting mRNAs through plasmodesmata, and these mechanisms are thought to be used by RNA viruses to spread from one cell to another. Plant defenses against viral infection include, among other measures, the use of siRNA in response to dsRNA. Most plant viruses encode a protein to suppress this respo
https://en.wikipedia.org/wiki/Service%20Provisioning%20Markup%20Language
Service Provisioning Markup Language (SPML) is an XML-based framework, being developed by OASIS, for exchanging user, resource and service provisioning information between cooperating organizations. The Service Provisioning Markup language is the open standard for the integration and interoperation of service provisioning requests. SPML is an OASIS standard based on the concepts of Directory Service Markup Language. SPML version 1.0 was approved in October 2003. SPML version 2.0 was approved in April 2006. Security Assertion Markup Language exchanges the authorization data. Definition The OASIS Provisioning Services Technical Committee uses the following definition of "provisioning": Goal of SPML The goal of SPML is to allow organizations to securely and quickly set up user interfaces for Web services and applications, by letting enterprise platforms such as Web portals, application servers, and service centers generate provisioning requests within and across organizations. This can lead to automation of user or system access and entitlement rights to electronic services across diverse IT infrastructures, so that customers are not locked into proprietary solutions. SPML Functionality SPML version 2.0 defines the following functionality: Core functions listTargets - Enables a requestor to determine the set of targets that a provider makes available for provisioning. add - The add operation enables a requestor to create a new object on a target. lookup - The lookup operation enables a requestor to obtain the XML that represents an object on a target. modify - The modify operation enables a requestor to change an object on a target. delete - The delete operation enables a requestor to remove an object from a target. Async capability cancel - The cancel operation enables a requestor to stop the execution of an asynchronous operation. status - The status operation enables a requestor to determine whether an asynchronous operation has completed successfull
https://en.wikipedia.org/wiki/Napierian%20logarithm
The term Napierian logarithm or Naperian logarithm, named after John Napier, is often used to mean the natural logarithm. Napier did not introduce this natural logarithmic function, although it is named after him. However, if it is taken to mean the "logarithms" as originally produced by Napier, it is a function given by (in terms of the modern natural logarithm): The Napierian logarithm satisfies identities quite similar to the modern logarithm, such as or In Napier's 1614 Mirifici Logarithmorum Canonis Descriptio, he provides tables of logarithms of sines for 0 to 90°, where the values given (columns 3 and 5) are Properties Napier's "logarithm" is related to the natural logarithm by the relation and to the common logarithm by Note that and Napierian logarithms are essentially natural logarithms with decimal points shifted 7 places rightward and with sign reversed. For instance the logarithmic values would have the corresponding Napierian logarithms: For further detail, see history of logarithms. References . . . External links Denis Roegel (2012) Napier’s Ideal Construction of the Logarithms, from the Loria Collection of Mathematical Tables. Logarithms
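The relation between Napier's logarithm and the modern natural logarithm, omitted by the extraction above, is usually written NapLog(x) = -10^7 ln(x / 10^7) = 10^7 ln(10^7 / x). A small numerical sketch (Python), under that reading:

    import math

    def naplog(x):
        # Napier's logarithm expressed through the modern natural logarithm:
        # NapLog(x) = 10^7 * ln(10^7 / x), a decreasing function of x.
        return 1e7 * math.log(1e7 / x)

    print(naplog(1e7))            # 0.0      -- Napier's logarithm of 10^7 is zero
    print(naplog(1e7 / math.e))   # ~1e7     -- dividing the argument by e adds 10^7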
https://en.wikipedia.org/wiki/Audio%20normalization
Audio normalization is the application of a constant amount of gain to an audio recording to bring the amplitude to a target level (the norm). Because the same amount of gain is applied across the entire recording, the signal-to-noise ratio and relative dynamics are unchanged. Normalization is one of the functions commonly provided by a digital audio workstation. Two principal types of audio normalization exist. Peak normalization adjusts the recording based on the highest signal level present in the recording. Loudness normalization adjusts the recording based on perceived loudness. Normalization differs from dynamic range compression, which applies varying levels of gain over a recording to fit the level within a minimum and maximum range. Normalization adjusts the gain by a constant value across the entire recording. Peak normalization One type of normalization is peak normalization, wherein the gain is changed to bring the highest PCM sample value or analog signal peak to a given levelusually 0 dBFS, the loudest level allowed in a digital system. Since it searches only for the highest level, peak normalization alone does not account for the apparent loudness of the content. As such, peak normalization is generally used to change the volume in such a way to ensure optimal use of available dynamic range during the mastering stage of a digital recording. When combined with compression/limiting, however, peak normalization becomes a feature that can provide a loudness advantage over non–peak-normalized material. This feature of digital-recording systems, compression and limiting followed by peak normalization, enables contemporary trends in program loudness. Loudness normalization Another type of normalization is based on a measure of loudness, wherein the gain is changed to bring the average loudness to a target level. This average may be approximate, such as a simple measurement of average power (e.g. RMS), or more accurate, such as a measure that addresses h
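Peak normalization as described above amounts to finding the largest absolute sample value and applying one constant gain to the whole recording. A minimal sketch (Python; the -1 dBFS target and the sample data are arbitrary illustrative choices):

    # Peak-normalize a block of floating-point samples to a target peak level.
    def peak_normalize(samples, target_dbfs=-1.0):
        peak = max(abs(s) for s in samples)
        if peak == 0:
            return list(samples)              # silence: nothing to scale
        target_linear = 10 ** (target_dbfs / 20.0)
        gain = target_linear / peak           # one constant gain for the whole recording
        return [s * gain for s in samples]

    audio = [0.02, -0.5, 0.25, -0.1]
    print(peak_normalize(audio))              # loudest sample becomes ~0.891 (-1 dBFS)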
https://en.wikipedia.org/wiki/Airtime%20%28software%29
Airtime is a radio management application for remote broadcast automation (via web-based scheduler), and program exchange between radio stations. Airtime was developed and released as free and open-source software, subject to the requirements of the GNU General Public License until it was changed to GNU Affero General Public License. History The initial concept for Airtime, originally named LiveSupport, and then Campcaster was developed in 2003 under GPL-2.0-or-later by Micz Flor, a German new-media developer. The concept was further developed by Ákos Maróy, a software developer and then-member of Tilos Radio, Robert Klajn, a radio producer at Radio B92, and Douglas Arellanes and Sava Tatić from the Media Development Loan Fund (MDLF). The initial development was financed from a grant from the Open Society Institute's Information Program, through its ICT Toolsets initiative. The development was originally coordinated by MDLF through its Campware.org initiative, now spun off as the independent not-for-profit organisation Sourcefabric. In January 2011, Sourcefabric announced a rewrite of Campcaster, beginning with the 1.6 beta release. The new product, known as Airtime, replaced the C++ scheduler of Campcaster with Liquidsoap, and includes a drag and drop web interface based on jQuery. 1.6 was released in February 2011 under GPL-3.0-only. Airtime 1.8.1 was released on May 3, 2011 following up on releases 1.7 and 1.8 in April. The ability to edit shows was introduced, show repeat and rebroadcast made possible, and the calendar improved with reported loading times five to eight times faster. Airtime's default output stream became Ogg, rather than MP3. SoundCloud support, allowing users to automatically upload recorded shows, was announced in May 2011. Airtime 1.8.2 was released on June 14, 2011, with improvements to installation, upgrade, file upload limit and the interface. Airtime 1.9 was released on August 10, 2011, with a new file storage system that allowed
https://en.wikipedia.org/wiki/Flagship%20species
In conservation biology, a flagship species is a species chosen to raise support for biodiversity conservation in a given place or social context. Definitions have varied, but they have tended to focus on the strategic goals and the socio-economic nature of the concept, to support the marketing of a conservation effort. The species need to be popular, to work as symbols or icons, and to stimulate people to provide money or support. Species selected since the idea was developed in 1980s include widely recognised and charismatic species like the black rhinoceros, the Bengal tiger, and the Asian elephant. Some species such as the Chesapeake blue crab and the Pemba flying fox, the former of which is locally significant to Northern America, have suited a cultural and social context. Utilizing a flagship species has limitations. It can skew management and conservation priorities, which may conflict. Stakeholders may be negatively affected if the flagship species is lost. The use of a flagship may have limited effect, and the approach may not protect the species from extinction: all of the top ten charismatic groups of animal, including tigers, lions, elephants and giraffes, are endangered. Definitions The term flagship is linked to the metaphor of representation. In its popular usage, flagships are viewed as ambassadors or icons for a conservation project or movement. The geographer Maan Barua noted that metaphors influence what people understand and how they act; that mammals are disproportionately chosen; and that biologists need to come to grips with language to improve the public's knowledge of conservation. Several definitions have been advanced for the flagship species concept and for some time there has been confusion even in the academic literature. Most of the latest definitions focus on the strategic, socio-economic, and marketing character of the concept. Some definitions are: "a species used as the focus of a broader conservation marketing campaign ba
https://en.wikipedia.org/wiki/External%20sorting
External sorting is a class of sorting algorithms that can handle massive amounts of data. External sorting is required when the data being sorted do not fit into the main memory of a computing device (usually RAM) and instead they must reside in the slower external memory, usually a disk drive. Thus, external sorting algorithms are external memory algorithms and thus applicable in the external memory model of computation. External sorting algorithms generally fall into two types, distribution sorting, which resembles quicksort, and external merge sort, which resembles merge sort. External merge sort typically uses a hybrid sort-merge strategy. In the sorting phase, chunks of data small enough to fit in main memory are read, sorted, and written out to a temporary file. In the merge phase, the sorted subfiles are combined into a single larger file. Model External sorting algorithms can be analyzed in the external memory model. In this model, a cache or internal memory of size and an unbounded external memory are divided into blocks of size , and the running time of an algorithm is determined by the number of memory transfers between internal and external memory. Like their cache-oblivious counterparts, asymptotically optimal external sorting algorithms achieve a running time (in Big O notation) of . External merge sort One example of external sorting is the external merge sort algorithm, which is a K-way merge algorithm. It sorts chunks that each fit in RAM, then merges the sorted chunks together. The algorithm first sorts items at a time and puts the sorted lists back into external memory. It then recursively does a -way merge on those sorted lists. To do this merge, elements from each sorted list are loaded into internal memory, and the minimum is repeatedly outputted. For example, for sorting 900 megabytes of data using only 100 megabytes of RAM: Read 100 MB of the data in main memory and sort by some conventional method, like quicksort. Write the s
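The 900 MB / 100 MB walk-through above condenses to a two-phase sketch. The version below (Python) sorts runs that fit a notional memory budget and then performs a k-way merge with heapq; the runs are kept as in-memory lists rather than temporary files, so it illustrates the strategy rather than a production external sort:

    import heapq

    def external_merge_sort(data, run_size):
        # Phase 1: sort chunks ("runs") that each fit within the memory budget.
        runs = [sorted(data[i:i + run_size]) for i in range(0, len(data), run_size)]
        # Phase 2: k-way merge of the sorted runs, consuming one element at a time.
        return list(heapq.merge(*runs))

    data = [9, 1, 7, 3, 8, 2, 6, 5, 4, 0]
    print(external_merge_sort(data, run_size=3))   # [0, 1, 2, ..., 9]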
https://en.wikipedia.org/wiki/Match%20moving
In visual effects, match moving is a technique that allows the insertion of 2D elements, other live action elements or CG computer graphics into live-action footage with correct position, scale, orientation, and motion relative to the photographed objects in the shot. It also allows for the removal of live action elements from the live action shot. The term is used loosely to describe several different methods of extracting camera motion information from a motion picture. Sometimes referred to as motion tracking or camera solving, match moving is related to rotoscoping and photogrammetry. Match moving is sometimes confused with motion capture, which records the motion of objects, often human actors, rather than the camera. Typically, motion capture requires special cameras and sensors and a controlled environment (although recent developments such as the Kinect camera and Apple's Face ID have begun to change this). Match moving is also distinct from motion control photography, which uses mechanical hardware to execute multiple identical camera moves. Match moving, by contrast, is typically a software-based technology, applied after the fact to normal footage recorded in uncontrolled environments with an ordinary camera. Match moving is primarily used to track the movement of a camera through a shot so that an identical virtual camera move can be reproduced in a 3-D animation program. When new animated elements are composited back into the original live-action shot, they will appear in perfectly matched perspective and therefore appear seamless. As it is mostly software-based, match moving has become increasingly affordable as the cost of computer power has declined; it is now an established visual-effects tool and is even used in live television broadcasts as part of providing effects such as the yellow virtual down-line in American football. Principle The process of match moving can be broken down into two steps. Tracking The first step is identifying and tr
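The 2-D tracking step described above is commonly done with generic feature-tracking routines. A minimal sketch (Python with OpenCV; it assumes two consecutive grayscale frames are available as arrays, and the parameter values are arbitrary rather than those of any particular match-moving package):

    import cv2

    def track_features(prev_gray, next_gray):
        # Pick trackable corner features in the first frame...
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=8)
        # ...and follow them into the next frame with pyramidal Lucas-Kanade optical flow.
        new_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
        good = status.ravel() == 1
        return pts[good].reshape(-1, 2), new_pts[good].reshape(-1, 2)

    # prev_gray, next_gray = ...  # two consecutive frames as 8-bit grayscale arrays
    # old_points, new_points = track_features(prev_gray, next_gray)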
https://en.wikipedia.org/wiki/Systema%20Naturae
Systema Naturae (originally written in Latin with the ligature æ, as Systema Naturæ) is one of the major works of the Swedish botanist, zoologist and physician Carl Linnaeus (1707–1778) and introduced the Linnaean taxonomy. Although the system, now known as binomial nomenclature, was partially developed by the Bauhin brothers, Gaspard and Johann, Linnaeus was first to use it consistently throughout his book. The first edition was published in 1735. The full title of the 10th edition (1758), which was the most important one, was Systema naturae per regna tria naturae, secundum classes, ordines, genera, species, cum characteribus, differentiis, synonymis, locis, or translated: "System of nature through the three kingdoms of nature, according to classes, orders, genera and species, with characters, differences, synonyms, places". The tenth edition of this book (1758) is considered the starting point of zoological nomenclature. In 1766–1768 Linnaeus published the much enhanced 12th edition, the last under his authorship. Another again enhanced work in the same style and titled "Systema Naturae" was published by Johann Friedrich Gmelin between 1788 and 1793. Since at least the early 20th century, zoologists have commonly recognized this as the last edition belonging to this series. Overview Linnaeus (later known as "Carl von Linné", after his ennoblement in 1761) published the first edition of Systema Naturae in the year 1735, during his stay in the Netherlands. As was customary for the scientific literature of its day, the book was published in Latin. In it, he outlined his ideas for the hierarchical classification of the natural world, dividing it into the animal kingdom (Regnum animale), the plant kingdom (Regnum vegetabile), and the "mineral kingdom" (Regnum lapideum). Linnaeus's Systema Naturae lists only about 10,000 species of organisms, of which about 6,000 are plants and 4,236 are animals. According to the historian of botany William T. Stearn, "Even in 1753 he believed that the number of species of plants in the whole world would hardly reach 10,000; in his whole career he named about 7,700 species of flowering plants." Linnaeus developed his classification of the plant kingdom in an attempt to
https://en.wikipedia.org/wiki/Peripheral%20Interface%20Adapter
A Peripheral Interface Adapter (PIA) is a peripheral integrated circuit providing parallel I/O interfacing for microprocessor systems. Description Common PIAs include the Motorola MC6820 and MC6821, and the MOS Technology MCS6520, all of which are functionally identical but have slightly different electrical characteristics. The PIA is most commonly packaged in a 40 pin DIP package. The PIA is designed for glueless connection to the Motorola 6800 style bus, and provides 20 I/O lines, which are organised into two 8-bit bidirectional ports (or 16 general-purpose I/O lines) and 4 control lines (for handshaking and interrupt generation). The directions for all 16 general lines (PA0-7, PB0-7) can be programmed independently. The control lines can be programmed to generate interrupts, automatically generate handshaking signals for devices on the I/O ports, or output a plain high or low signal. In 1976 Motorola switched the MC6800 family to a depletion-mode technology to improve the manufacturing yield and to operate at a faster speed. The Peripheral Interface Adapter had a slight change in the electrical characteristics of the I/O pins so the MC6820 became the MC6821. The MC6820 was used in the Apple I to interface the ASCII keyboard and the display. It was also deployed in the 6800-powered first generation of Bally electronic pinball machines (1977-1985), such as Flash Gordon and Kiss. The MCS6520 was used in the Atari 400 and 800 and Commodore PET family of computers (for example, to provide four joystick ports to the machine). The Tandy Color Computer uses two MC6821s to provide I/O access to the video, audio and peripherals. References Leventhal, Lance A. (1986). 6502 Assembly Language Programming 2nd Edition. Osborne/McGraw-Hill. . Input/output integrated circuits
https://en.wikipedia.org/wiki/Hyperspace
In science fiction, hyperspace (also known as nulspace, subspace, overspace, jumpspace and similar terms) is a concept relating to higher dimensions as well as parallel universes and a faster-than-light (FTL) method of interstellar travel. Its use in science fiction originated in the magazine Amazing Stories Quarterly in 1931 and within several decades it became one of the most popular tropes of science fiction, popularized by its use in the works of authors such as Isaac Asimov and E. C. Tubb, and media franchises such as Star Wars. One of the main reasons for the concept's popularity is the impossibility of faster-than-light travel in ordinary space, which hyperspace allows writers to bypass. In most works, hyperspace is described as a higher dimension through which the shape of our three-dimensional space can be distorted to bring distant points close to each other, similar to the concept of a wormhole; or a shortcut-enabling parallel universe that can be travelled through. Usually it can be traversed – the process often known as "jumping" – through a gadget known as a "hyperdrive"; rubber science is sometimes used to explain it. Many works rely on hyperspace as a convenient background tool enabling FTL travel necessary for the plot, with a small minority making it a central element in their storytelling. While most often used in the context of interstellar travel, a minority of works focus on other plot points, such as the inhabitants of hyperspace, hyperspace as an energy source, or even hyperspace as the afterlife. The term occasionally appears in scientific works in related contexts. Concept The basic premise of hyperspace is that vast distances through space can be traversed quickly by taking a kind of shortcut. There are two common models used to explain this shortcut: folding and mapping. In the folding model, hyperspace is a place of higher dimension through which the shape of our three-dimensional space can be distorted to bring distant points close
https://en.wikipedia.org/wiki/Software%20transactional%20memory
In computer science, software transactional memory (STM) is a concurrency control mechanism analogous to database transactions for controlling access to shared memory in concurrent computing. It is an alternative to lock-based synchronization. STM is a strategy implemented in software, rather than as a hardware component. A transaction in this context occurs when a piece of code executes a series of reads and writes to shared memory. These reads and writes logically occur at a single instant in time; intermediate states are not visible to other (successful) transactions. The idea of providing hardware support for transactions originated in a 1986 paper by Tom Knight. The idea was popularized by Maurice Herlihy and J. Eliot B. Moss. In 1995 Nir Shavit and Dan Touitou extended this idea to software-only transactional memory (STM). Since 2005, STM has been the focus of intense research and support for practical implementations is growing. Performance Unlike the locking techniques used in most modern multithreaded applications, STM is often very optimistic: a thread completes modifications to shared memory without regard for what other threads might be doing, recording every read and write that it is performing in a log. Instead of placing the onus on the writer to make sure it does not adversely affect other operations in progress, it is placed on the reader, who after completing an entire transaction verifies that other threads have not concurrently made changes to memory that it accessed in the past. This final operation, in which the changes of a transaction are validated and, if validation is successful, made permanent, is called a commit. A transaction may also abort at any time, causing all of its prior changes to be rolled back or undone. If a transaction cannot be committed due to conflicting changes, it is typically aborted and re-executed from the beginning until it succeeds. The benefit of this optimistic approach is increased concurrency: no thread need
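The optimistic read-log / validate / commit cycle described above can be sketched in a few lines. The toy below (Python; all names are invented for illustration, and it models the bookkeeping for a single thread of control only, so it is not a usable STM):

    class Cell:
        def __init__(self, value):
            self.value, self.version = value, 0

    class Transaction:
        def __init__(self):
            self.reads, self.writes = {}, {}        # cell -> version seen / cell -> new value

        def read(self, cell):
            if cell in self.writes:
                return self.writes[cell]
            self.reads.setdefault(cell, cell.version)   # log the version we observed
            return cell.value

        def write(self, cell, value):
            self.writes[cell] = value                # buffered until commit

        def commit(self):
            # Validate: abort if any cell we read was modified since we read it.
            if any(cell.version != v for cell, v in self.reads.items()):
                return False
            for cell, value in self.writes.items():
                cell.value, cell.version = value, cell.version + 1
            return True

    a, b = Cell(10), Cell(0)
    t = Transaction()
    t.write(b, t.read(a) + 5)
    print(t.commit(), b.value)   # True 15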
https://en.wikipedia.org/wiki/Stieltjes%20constants
In mathematics, the Stieltjes constants are the numbers that occur in the Laurent series expansion of the Riemann zeta function: The constant is known as the Euler–Mascheroni constant. Representations The Stieltjes constants are given by the limit (In the case n = 0, the first summand requires evaluation of 00, which is taken to be 1.) Cauchy's differentiation formula leads to the integral representation Various representations in terms of integrals and infinite series are given in works of Jensen, Franel, Hermite, Hardy, Ramanujan, Ainsworth, Howell, Coppo, Connon, Coffey, Choi, Blagouchine and some other authors. In particular, Jensen-Franel's integral formula, often erroneously attributed to Ainsworth and Howell, states that where δn,k is the Kronecker symbol (Kronecker delta). Among other formulae, we find see. As concerns series representations, a famous series implying an integer part of a logarithm was given by Hardy in 1912 Israilov gave semi-convergent series in terms of Bernoulli numbers Connon, Blagouchine and Coppo gave several series with the binomial coefficients where Gn are Gregory's coefficients, also known as reciprocal logarithmic numbers (G1=+1/2, G2=−1/12, G3=+1/24, G4=−19/720,... ). More general series of the same nature include these examples and or where are the Bernoulli polynomials of the second kind and are the polynomials given by the generating equation respectively (note that ). Oloa and Tauraso showed that series with harmonic numbers may lead to Stieltjes constants Blagouchine obtained slowly-convergent series involving unsigned Stirling numbers of the first kind as well as semi-convergent series with rational terms only where m=0,1,2,... In particular, series for the first Stieltjes constant has a surprisingly simple form where Hn is the nth harmonic number. More complicated series for Stieltjes constants are given in works of Lehmer, Liang, Todd, Lavrik, Israilov, Stankus, Keiper, Nan-
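The limit referred to above (its formula is stripped from this text) is usually written gamma_n = lim_{m->inf} [ sum_{k=1..m} (ln k)^n / k - (ln m)^(n+1) / (n+1) ]. A slow but direct numerical sketch (Python; the cutoff m is arbitrary), under that reading:

    import math

    def stieltjes(n, m=200000):
        # gamma_n ~ sum_{k=1..m} (ln k)^n / k  -  (ln m)^(n+1) / (n + 1), for large m.
        # For n = 0 the k = 1 term uses the convention 0^0 = 1, as noted in the text.
        total = 0.0
        for k in range(1, m + 1):
            total += (1.0 if (n == 0 and k == 1) else math.log(k) ** n) / k
        return total - math.log(m) ** (n + 1) / (n + 1)

    print(stieltjes(0))   # ~0.57722  (gamma_0, the Euler-Mascheroni constant)
    print(stieltjes(1))   # ~-0.0728  (the first Stieltjes constant gamma_1)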
https://en.wikipedia.org/wiki/Balloon%20help
Balloon help is a help system introduced by Apple Computer in their 1991 release of System 7.0. The name referred to the way the help text was displayed, in "speech balloons", like those containing words in a comic strip. The name has since been used by many to refer to any sort of pop-up help text. The problem During the leadup to System 7, Apple studied the problem of getting help in depth. They identified a number of common questions, such as Where am I?, How do I get to...?, or worse, Why is that item "grayed out"?. In the context of computer use they identified two main types of questions users asked: What is this thing? and How do I accomplish...?. Existing help systems typically didn't provide useful information on either of these topics, and were often nothing more than the paper manual copied into an electronic form. One of the particularly thorny problems was the What is this thing? question. In an interface that often included non-standard widgets or buttons labeled with an indecipherable icon, many functions required the end user referring to their manual. Users generally refused to do this, and ended up not using the full power of their applications since many of their functions were "hidden". It was this problem that Apple decided to attack, and after extensive testing, settled on Balloon Help as the solution. Apple's solution for How do I accomplish...? was Apple Guide, which would be added to System 7.5 in 1994. Mechanism Balloon help was activated by choosing Show Balloon Help from System 7's new Help menu (labelled with a Balloon Help icon in System 7, the Apple Guide icon in System 7.5, and the word Help in Mac OS 8). While balloon help was active, moving the mouse over an item would display help for that item. Balloon help was deactivated by choosing Hide Balloon Help from the same menu. The underlying system was based on a set of resources included in application software, holding text that would appear in the balloons. The balloon graphics
https://en.wikipedia.org/wiki/Phytic%20acid
Phytic acid is a six-fold dihydrogenphosphate ester of inositol (specifically, of the myo isomer), also called inositol hexaphosphate, inositol hexakisphosphate (IP6) or inositol polyphosphate. At physiological pH, the phosphates are partially ionized, resulting in the phytate anion. The (myo) phytate anion is a colorless species that has significant nutritional role as the principal storage form of phosphorus in many plant tissues, especially bran and seeds. It is also present in many legumes, cereals, and grains. Phytic acid and phytate have a strong binding affinity to the dietary minerals, calcium, iron, and zinc, inhibiting their absorption in the small intestine. The lower inositol polyphosphates are inositol esters with less than six phosphates, such as inositol penta- (IP5), tetra- (IP4), and triphosphate (IP3). These occur in nature as catabolites of phytic acid. Significance in agriculture Phytic acid was discovered in 1903. Generally, phosphorus and inositol in phytate form are not bioavailable to non-ruminant animals because these animals lack the enzyme phytase required to hydrolyze the inositol-phosphate linkages. Ruminants are able to digest phytate because of the phytase produced by rumen microorganisms. In most commercial agriculture, non-ruminant livestock, such as swine, fowl, and fish, are fed mainly grains, such as maize, legumes, and soybeans. Because phytate from these grains and beans is unavailable for absorption, the unabsorbed phytate passes through the gastrointestinal tract, elevating the amount of phosphorus in the manure. Excess phosphorus excretion can lead to environmental problems, such as eutrophication. The use of sprouted grains may reduce the quantity of phytic acids in feed, with no significant reduction of nutritional value. Also, viable low-phytic acid mutant lines have been developed in several crop species in which the seeds have drastically reduced levels of phytic acid and concomitant increases in inorganic phosph
https://en.wikipedia.org/wiki/Internet%20Public%20Library
The Internet Public Library (IPL, ipl2) was a non-profit, largely student-run website managed by a consortium, headed by Drexel University. Visitors could ask reference questions, and volunteer librarians and graduate students in library and information science formed collections and answered questions. The IPL opened on March 17, 1995. On January 1, 2010 it merged with the Librarians' Internet Index to become ipl2. It ceased operations completely on June 30, 2015. The digital collections on the site were divided into five broad categories, and included Resources by Subject, Newspapers & Magazines, Special Collections Created By the ipl2, and Special Collections for Kids and Teens. As of March 2011 it had about 40,000 searchable resources. As of 2020 IPL has been purchased by Barnes and Noble Education and the above layout has been replaced; it now resembles an essay repository for students. History The IPL originated at the University of Michigan’s School of Information. Michigan SI students almost exclusively generated its content. They also managed the Ask a Question reference service. In 2006 the University of Michigan opened up management of the IPL to other information science and library schools. They stopped hosting the IPL and moved the servers and staff positions to Drexel University, and by January 2007 the "IPL Consortium" that ran the IPL comprised a group of 15 colleges, including the University of Michigan. Drexel's College of Computing and Informatics hosted the site. With a grant from the Institute of Museum and Library Services, Drexel also used the site as a "'technological training center' for digital librarians." In 2009 the Internet Public Library merged with the Librarians' Internet Index, a publicly funded website that until then was managed by the Califa Library group; the new web presence, which continued to be hosted by Drexel University, was dubbed "ipl2". According to Joseph Janes of the University of Washington, ipl2 would no longer be su
https://en.wikipedia.org/wiki/Menelaus%27s%20theorem
In Euclidean geometry, Menelaus's theorem, named for Menelaus of Alexandria, is a proposition about triangles in plane geometry. Suppose we have a triangle , and a transversal line that crosses at points respectively, with distinct from . A weak version of the theorem states that where "| |" denotes absolute value (i.e., all segment lengths are positive). The theorem can be strengthened to a statement about signed lengths of segments, which provides some additional information about the relative order of collinear points. Here, the length is taken to be positive or negative according to whether is to the left or right of in some fixed orientation of the line; for example, is defined as having positive value when is between and and negative otherwise. The signed version of Menelaus's theorem states Equivalently, Some authors organize the factors differently and obtain the seemingly different relation but as each of these factors is the negative of the corresponding factor above, the relation is seen to be the same. The converse is also true: If points are chosen on respectively so that then are collinear. The converse is often included as part of the theorem. (Note that the converse of the weaker, unsigned statement is not necessarily true.) The theorem is very similar to Ceva's theorem in that their equations differ only in sign. By re-writing each in terms of cross-ratios, the two theorems may be seen as projective duals. Proofs A standard proof First, the sign of the left-hand side will be negative since either all three of the ratios are negative, the case where the line misses the triangle (lower diagram), or one is negative and the other two are positive, the case where crosses two sides of the triangle. (See Pasch's axiom.) To check the magnitude, construct perpendiculars from to the line and let their lengths be respectively. Then by similar triangles it follows that Therefore, For a simpler, if less symmetrical way to check
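A numerical check of the signed relation is straightforward. The sketch below (Python; the triangle and the transversal are arbitrary choices) computes the three signed ratios at the points where a line meets the sides of a triangle (one of them on an extension) and confirms that their product is -1:

    def cross(u, v):
        return u[0] * v[1] - u[1] * v[0]

    def sub(p, q):
        return (p[0] - q[0], p[1] - q[1])

    def dot(u, v):
        return u[0] * v[0] + u[1] * v[1]

    def signed_ratio(p, q, x):
        # Signed ratio PX / XQ, measured along the line through p and q.
        d = sub(q, p)
        return dot(sub(x, p), d) / dot(sub(q, x), d)

    def intersect(p1, p2, p3, p4):
        # Intersection point of line p1p2 with line p3p4.
        a1, a2 = sub(p2, p1), sub(p4, p3)
        t = cross(sub(p3, p1), a2) / cross(a1, a2)
        return (p1[0] + t * a1[0], p1[1] + t * a1[1])

    A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
    P, Q = (-1.0, 1.0), (5.0, 2.0)           # two points defining a transversal line

    D = intersect(B, C, P, Q)                # transversal meets line BC at D
    E = intersect(C, A, P, Q)                # ... line CA at E
    F = intersect(A, B, P, Q)                # ... line AB (extended) at F

    print(signed_ratio(A, B, F) * signed_ratio(B, C, D) * signed_ratio(C, A, E))  # -1.0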
https://en.wikipedia.org/wiki/Domain/OS
Domain/OS is the discontinued operating system used by the Apollo/Domain line of workstations manufactured by Apollo Computer. It was originally launched in 1981 as AEGIS, and was rebranded to Domain/OS in 1988 when Unix environments were added to the operating system. It is one of the early distributed operating systems. Hewlett-Packard supported the operating system for a short time after they purchased Apollo, but they later ended the product line in favor of HP-UX. HP ended final support for Domain/OS on January 1, 2001. AEGIS AEGIS is distinctive mainly for being designed for the networked computer, as distinct from its competitors, which are essentially standalone systems with added network features. The prime examples of this are the file system, which is fully integrated across machines, as opposed to Unix which draws a distinction between file systems on the host system and on others, and the user administration system, which is fundamentally network-based. So basic is this orientation that even a standalone Apollo machine cannot be configured without a network card. Domain/OS implements functionality derived from both System V and early BSD Unix systems. It improves on AEGIS by providing a core OS upon which the user can install any or all of three environments: AEGIS, System V Unix, and BSD Unix. This was done in order to provide greater compatibility with Unix; AEGIS version SR9, which immediately preceded Domain/OS (itself numbered SR10) has an optional product called Domain/IX available, which provides a similar capability, but with some drawbacks, principally the fact that core administrative tasks still require AEGIS commands. Also, the SR9 permissions system is not fully compatible with Unix behaviour. Domain/OS provides new administrative commands and a more complex permissions system which can be configured to behave properly under any of the three environments. Domain/OS also provides an improved version of the X Window System, complete with VU
https://en.wikipedia.org/wiki/Tumor%20marker
A tumor marker is a biomarker found in blood, urine, or body tissues that can be elevated by the presence of one or more types of cancer. There are many different tumor markers, each indicative of a particular disease process, and they are used in oncology to help detect the presence of cancer. An elevated level of a tumor marker can indicate cancer; however, there can also be other causes of the elevation (false positive values). Tumor markers can be produced directly by the tumor or by non-tumor cells as a response to the presence of a tumor. Although mammography, ultrasonography, computed tomography, magnetic resonance imaging scans, and tumor marker assays help in the staging and treatment of the cancer, they are usually not definitive diagnostic tests. The diagnosis is mostly confirmed by biopsy. Classification On the basis of their chemical nature, tumor markers can be proteins, conjugated proteins, peptides and carbohydrates. Proteins or conjugated proteins may be enzymes, hormones or fragments of proteins. Sequencing of genes for diagnostic purposes is mostly classified under the biomarker heading and is not discussed here. Uses Tumor markers may be used for the following purposes: Screening for common cancers on a population basis. Broad screening for all or most types of cancer was originally suggested, but has since been shown not to be a realistic goal. Screening for specific cancer types or locations requires a level of specificity and sensitivity that has so far been reached by only a few markers. Example: elevated prostate specific antigen is used in some countries to screen for prostate cancer. Monitoring of cancer survivors after treatment, detection of recurrent disease. Example: elevated AFP in a child previously treated for teratoma suggests relapse with endodermal sinus tumor. Diagnosis of specific tumor types, particularly in certain brain tumors and other instances where biopsy is not feasible. Confirmation of diagnosis to verify the char
https://en.wikipedia.org/wiki/Cournot%20competition
Cournot competition is an economic model used to describe an industry structure in which companies compete on the amount of output they will produce, which they decide on independently of each other and at the same time. It is named after Antoine Augustin Cournot (1801–1877) who was inspired by observing competition in a spring water duopoly. It has the following features: There is more than one firm and all firms produce a homogeneous product, i.e., there is no product differentiation; Firms do not cooperate, i.e., there is no collusion; Firms have market power, i.e., each firm's output decision affects the good's price; The number of firms is fixed; Firms compete in quantities rather than prices; and The firms are economically rational and act strategically, usually seeking to maximize profit given their competitors' decisions. An essential assumption of this model is the "not conjecture" that each firm aims to maximize profits, based on the expectation that its own output decision will not have an effect on the decisions of its rivals. Price is a commonly known decreasing function of total output. All firms know , the total number of firms in the market, and take the output of the others as given. The market price is set at a level such that demand equals the total quantity produced by all firms. Each firm takes the quantity set by its competitors as a given, evaluates its residual demand, and then behaves as a monopoly. History Antoine Augustin Cournot (1801–1877) first outlined his theory of competition in his 1838 volume Recherches sur les Principes Mathématiques de la Théorie des Richesses as a way of describing the competition with a market for spring water dominated by two suppliers (a duopoly). The model was one of a number that Cournot set out "explicitly and with mathematical precision" in the volume. Specifically, Cournot constructed profit functions for each firm, and then used partial differentiation to construct a function representing a fir
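For the common textbook case of linear inverse demand P = a - b(q1 + q2) and constant marginal cost c, each firm's best response is q_i = (a - c - b*q_j) / (2b) and the symmetric Cournot–Nash equilibrium is q_i = (a - c) / (3b). A small sketch (Python; the demand and cost numbers are arbitrary illustrative values, not from the article) finds the equilibrium by iterating best responses:

    # Linear Cournot duopoly: P = a - b*(q1 + q2), constant marginal cost c.
    a, b, c = 100.0, 1.0, 10.0

    def best_response(q_other):
        # Maximizes (P - c) * q_i while taking the rival's quantity as given.
        return max(0.0, (a - c - b * q_other) / (2 * b))

    q1 = q2 = 0.0
    for _ in range(100):                 # iterate best responses to a fixed point
        q1, q2 = best_response(q2), best_response(q1)

    print(q1, q2)                        # both ~30 = (a - c) / (3b)
    print(a - b * (q1 + q2))             # equilibrium price ~40, above marginal cost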
https://en.wikipedia.org/wiki/Edgeworth%20paradox
To solve the Bertrand paradox, the Irish economist Francis Ysidro Edgeworth put forward the Edgeworth Paradox in his paper "The Pure Theory of Monopoly", published in 1897. In economics, the Edgeworth paradox describes a situation in which two players cannot reach a state of equilibrium with pure strategies, i.e. each charging a stable price. A fact of the Edgeworth Paradox is that in some cases, even if the direct price impact is negative and exceeds the conditions, an increase in cost proportional to the quantity of an item provided may cause a decrease in all optimal prices. Because enterprises in reality have limited production capacity, if the total output that one enterprise can supply cannot meet social demand, another enterprise can charge a price that exceeds marginal cost for the residual social demand. Example Suppose two companies, A and B, sell an identical commodity product, and that customers choose the product solely on the basis of price. Each company faces capacity constraints, in that on its own it cannot satisfy demand at its zero-profit price, but together they can more than satisfy such demand. The assumptions of the Edgeworth Paradox, as compared with the Cournot model, are as follows: 1. The production capacity of the two manufacturers is limited. Under a certain price level, the output of a particular oligopolist cannot meet the market demand at this price level, so that another manufacturer can obtain the residual market demand. 2. In a certain period, two prices can exist in the market at the same time. 3. When a particular oligopolist chooses a certain price level, another oligopolist will not immediately respond to the price. Edgeworth model Edgeworth's model follows Bertrand's hypothesis, where each seller assumes that the price of its competitor, not its output, remains constant. Suppose there are two sellers, A and B, facing the same demand curve in the market. To explain Edgeworth's model, let us first assume that A is the only seller in
https://en.wikipedia.org/wiki/Tornado%20code
In coding theory, Tornado codes are a class of erasure codes that support error correction. Tornado codes require a constant C more redundant blocks than the more data-efficient Reed–Solomon erasure codes, but are much faster to generate and can fix erasures faster. Software-based implementations of tornado codes are about 100 times faster on small lengths and about 10,000 times faster on larger lengths than Reed–Solomon erasure codes. Since the introduction of Tornado codes, many other similar erasure codes have emerged, most notably Online codes, LT codes and Raptor codes. Tornado codes use a layered approach. All layers except the last use an LDPC error correction code, which is fast but has a chance of failure. The final layer uses a Reed–Solomon correction code, which is slower but is optimal in terms of failure recovery. Tornado codes dictates how many levels, how many recovery blocks in each level, and the distribution used to generate blocks for the non-final layers. Overview The input data is divided into blocks. Blocks are sequences of bits that are all the same size. Recovery data uses the same block size as the input data. The erasure of a block (input or recovery) is detected by some other means. (For example, a block from disk does not pass a CRC check or a network packet with a given sequence number never arrived.) The number of recovery blocks is given by the user. Then the number of levels is determined along with the number of blocks in each level. The number in each level is determined by a factor B which is less than one. If there are N input blocks, the first recovery level has B*N blocks, the second has B*B*N, the third has B*B*B*N, and so on. All levels of recovery except the final one use an LDPC, which works by xor (exclusive-or). Xor operates on binary values, 1s and 0s. A xor B is 1 if A and B have different values and 0 if A and B have the same values. If you are given result of (A xor B) and A, you can determine the
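The XOR property described above is what makes the LDPC-style recovery levels work: a recovery block holding the XOR of several input blocks can restore any one of them provided the others survive. A minimal sketch (Python; the block contents and the single-recovery-block layout are a toy example, not the actual Tornado level structure):

    def xor_blocks(blocks):
        # Bytewise XOR of equal-sized blocks.
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    data = [b"ABCD", b"EFGH", b"IJKL"]          # input blocks (all the same size)
    recovery = xor_blocks(data)                 # one XOR recovery block

    # Suppose the middle block is erased; XOR of the recovery block with the
    # surviving blocks restores it.
    restored = xor_blocks([recovery, data[0], data[2]])
    print(restored)                             # b'EFGH'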
https://en.wikipedia.org/wiki/Anisole
Anisole, or methoxybenzene, is an organic compound with the formula . It is a colorless liquid with a smell reminiscent of anise seed, and in fact many of its derivatives are found in natural and artificial fragrances. The compound is mainly made synthetically and is a precursor to other synthetic compounds. Structurally, it is an ether () with a methyl () and phenyl () group attached. Anisole is a standard reagent of both practical and pedagogical value. It can be prepared by the Williamson ether synthesis; sodium phenoxide is reacted with a methyl halide to yield anisole. Reactivity Anisole undergoes electrophilic aromatic substitution reaction at a faster speed than benzene, which in turn reacts more quickly than nitrobenzene. The methoxy group is an ortho/para directing group, which means that electrophilic substitution preferentially occurs at these three sites. The enhanced nucleophilicity of anisole vs. benzene reflects the influence of the methoxy group, which renders the ring more electron-rich. The methoxy group strongly affects the pi cloud of the ring as a mesomeric electron donor, more so than as an inductive electron withdrawing group despite the electronegativity of the oxygen. Stated more quantitatively, the Hammett constant for para-substitution of anisole is –0.27. Illustrative of its nucleophilicity, anisole reacts with acetic anhydride to give Unlike most acetophenones, but reflecting the influence of the methoxy group, methoxyacetophenone undergoes a second acetylation. Many related reactions have been demonstrated. For example, phosphorus pentasulfide () converts anisole to Lawesson's reagent, . Also indicating an electron-rich ring, anisole readily forms π-complexes with metal carbonyls, e.g. . The ether linkage is highly stable, but the methyl group can be removed with hydroiodic acid: Birch reduction of anisole gives 1-methoxycyclohexa-1,4-diene. Preparation Anisole is prepared by methylation of sodium phenoxide with dimethyl
https://en.wikipedia.org/wiki/Scott%20Kim
Scott Kim is an American puzzle and video game designer, artist, and author of Korean descent. He started writing an occasional "Boggler" column for Discover magazine in 1990, and became an exclusive columnist in 1999, and created hundreds of other puzzles for magazines such as Scientific American and Games, as well as thousands of puzzles for computer games. He was the holder of the Harold Keables chair at Iolani School in 2008. Kim was born in 1955 in Washington, D.C., and grew up in Rolling Hills Estates, California. He had an early interest in mathematics, education, and art, and attended Stanford University, receiving a BA in music, and a PhD in Computers and Graphic Design under Donald Knuth. In 1981, he created a book called Inversions, words that can be read in more than one way. His first puzzles appeared in Scientific American in Martin Gardner's "Mathematical Games" column and he said that the column inspired his own career as a puzzle designer. Kim is one of the best-known masters of the art of ambigrams. Kim designed logos for Silicon Graphics, Inc., GOES, The Hackers Conference, the Computer Game Developers Conference, and Dylan. Kim is a regular speaker on puzzle design, such as at the International Game Developers Conference and Casual Games Conference. His wife, Amy Jo Kim, is the author of Community Building on the Web. He lives in Burlingame, California with his wife Amy Jo Kim, son Gabriel and daughter Lila Rose. Works Inversions, 1981, Byte Books, , a book of 60 original ambigrams "Letterforms & Illusion", 1989, W. H. Freeman & Co., created with Robin Samelson, accompanies the book, Inversions. Quintapaths, 1969 (tiling puzzle), published by Kadon since 1999. Heaven and Earth, Buena Vista / Disney (computer game) Obsidian, SegaSoft (computer game) MetaSquares, 1996 (computer game, created with Kai Krause, Phil Clevenger, and Ian Gilman). The Next Tetris, Hasbro Interactive, PlayStation Railroad Rush Hour, ThinkFun (toy) Charlie
https://en.wikipedia.org/wiki/Marginal%20value%20theorem
The marginal value theorem (MVT) is an optimality model that usually describes the behavior of an optimally foraging individual in a system where resources (often food) are located in discrete patches separated by areas with no resources. Due to the resource-free space, animals must spend time traveling between patches. The MVT can also be applied to other situations in which organisms face diminishing returns. The MVT was first proposed by Eric Charnov in 1976. In his original formulation: "The predator should leave the patch it is presently in when the marginal capture rate in the patch drops to the average capture rate for the habitat." Definition All animals must forage for food in order to meet their energetic needs, but doing so is energetically costly. It is assumed that evolution by natural selection results in animals utilizing the most economic and efficient strategy to balance energy gain and consumption. The Marginal Value Theorem is an optimality model that describes the strategy that maximizes gain per unit time in systems where resources, and thus rate of returns, decrease with time. The model weighs benefits and costs and is used to predict giving up time and giving up density. Giving up time (GUT) is the interval of time between when the animal last feeds and when it leaves the patch. Giving up density (GUD) is the food density within a patch when the animal will choose to move on to other food patches. When an animal is foraging in a system where food sources are patchily distributed, the MVT can be used to predict how much time an individual will spend searching for a particular patch before moving on to a new one. In general, individuals will stay longer if (1) patches are farther apart or (2) current patches are poor in resources. Both situations increase the ratio of travel cost to foraging benefit. Modeling As animals forage in patchy systems, they balance resource intake, traveling time, and foraging time. Resource intake within a patch
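The rule "leave when the marginal gain rate falls to the average rate for the habitat" can be recovered numerically. The sketch below (Python; the diminishing-returns gain curve g(t) = 1 - e^(-t) and the travel time T are arbitrary illustrative choices) searches for the residence time t that maximizes the long-run rate g(t) / (T + t):

    import math

    T = 2.0                                   # travel time between patches (assumed)
    g = lambda t: 1.0 - math.exp(-t)          # cumulative energy gain within a patch (assumed)

    # Long-term rate of gain when staying t in each patch, then travelling T to the next.
    rate = lambda t: g(t) / (T + t)

    best_t = max((i * 0.001 for i in range(1, 20000)), key=rate)
    print(best_t, rate(best_t))               # optimal residence time ~1.5
    # At the optimum the marginal gain g'(t) = exp(-t) equals the average rate g(t)/(T+t).
    print(math.exp(-best_t))                  # approximately equal to rate(best_t)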
https://en.wikipedia.org/wiki/Copper%28II%29%20chloride
Copper(II) chloride, also known as cupric chloride, is an inorganic compound with the chemical formula CuCl2. The monoclinic yellowish-brown anhydrous form slowly absorbs moisture to form the orthorhombic blue-green dihydrate CuCl2·2H2O, with two water molecules of hydration. It is industrially produced for use as a co-catalyst in the Wacker process. Both the anhydrous and the dihydrate forms occur naturally as the rare minerals tolbachite and eriochalcite, respectively. Structure Anhydrous copper(II) chloride adopts a distorted cadmium iodide structure. In this structure, the copper centers are octahedral. Most copper(II) compounds exhibit distortions from idealized octahedral geometry due to the Jahn-Teller effect, which in this case describes the localization of one d-electron into a molecular orbital that is strongly antibonding with respect to a pair of chloride ligands. In CuCl2·2H2O, the copper again adopts a highly distorted octahedral geometry, the Cu(II) centers being surrounded by two water ligands and four chloride ligands, which bridge asymmetrically to other Cu centers. Copper(II) chloride is paramagnetic. Of historical interest, CuCl2·2H2O was used in the first electron paramagnetic resonance measurements by Yevgeny Zavoisky in 1944. Properties and reactions Aqueous solutions prepared from copper(II) chloride contain a range of copper(II) complexes depending on concentration, temperature, and the presence of additional chloride ions. These species include the blue color of [Cu(H2O)6]2+ and the yellow or red color of the halide complexes of the formula [CuCl2+x]x−. Hydrolysis When copper(II) chloride solutions are treated with a base, a precipitation of copper(II) hydroxide occurs: CuCl2 + 2 NaOH → Cu(OH)2 + 2 NaCl Partial hydrolysis gives dicopper chloride trihydroxide, Cu2(OH)3Cl, a popular fungicide. When an aqueous solution of copper(II) chloride is left in the air and isn't stabilized by a small amount of acid, it is prone to under
https://en.wikipedia.org/wiki/Real%20coordinate%20space
In mathematics, the real coordinate space of dimension , denoted or is the set of the -tuples of real numbers, that is the set of all sequences of real numbers. Special cases are called the real line and the real coordinate plane . With component-wise addition and scalar multiplication, it is a real vector space, and its elements are called coordinate vectors. The coordinates over any basis of the elements of a real vector space form a real coordinate space of the same dimension as that of the vector space. Similarly, the Cartesian coordinates of the points of a Euclidean space of dimension form a real coordinate space of dimension . These one to one correspondences between vectors, points and coordinate vectors explain the names of coordinate space and coordinate vector. It allows using geometric terms and methods for studying real coordinate spaces, and, conversely, to use methods of calculus in geometry. This approach of geometry was introduced by René Descartes in the 17th century. It is widely used, as it allows locating points in Euclidean spaces, and computing with them. Definition and structures For any natural number , the set consists of all -tuples of real numbers (). It is called the "-dimensional real space" or the "real -space". An element of is thus a -tuple, and is written where each is a real number. So, in multivariable calculus, the domain of a function of several real variables and the codomain of a real vector valued function are subsets of for some . The real -space has several further properties, notably: With componentwise addition and scalar multiplication, it is a real vector space. Every -dimensional real vector space is isomorphic to it. With the dot product (sum of the term by term product of the components), it is an inner product space. Every -dimensional real inner product space is isomorphic to it. As every inner product space, it is a topological space, and a topological vector space. It is a Euclidean space and
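The vector-space and inner-product structure described above is easy to spell out on tuples. A tiny sketch (Python):

    # R^n modelled as n-tuples with componentwise operations and the dot product.
    def add(u, v):
        return tuple(a + b for a, b in zip(u, v))

    def scale(c, u):
        return tuple(c * a for a in u)

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    u, v = (1.0, 2.0, 3.0), (4.0, -1.0, 0.5)
    print(add(u, v))        # (5.0, 1.0, 3.5)
    print(scale(2.0, u))    # (2.0, 4.0, 6.0)
    print(dot(u, v))        # 3.5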
https://en.wikipedia.org/wiki/Euclidean%20topology
In mathematics, and especially general topology, the Euclidean topology is the natural topology induced on n-dimensional Euclidean space R^n by the Euclidean metric. Definition The Euclidean norm on R^n is the non-negative function ‖·‖ defined by ‖(p1, …, pn)‖ = √(p1² + ⋯ + pn²). Like all norms, it induces a canonical metric defined by d(p, q) = ‖p − q‖. The metric induced by the Euclidean norm is called the Euclidean metric or the Euclidean distance, and the distance between points p = (p1, …, pn) and q = (q1, …, qn) is d(p, q) = √((p1 − q1)² + ⋯ + (pn − qn)²). In any metric space, the open balls form a base for a topology on that space. The Euclidean topology on R^n is the topology generated by these balls. In other words, the open sets of the Euclidean topology on R^n are given by (arbitrary) unions of the open balls B_r(p) defined as B_r(p) = {x ∈ R^n : d(p, x) < r}, for all real r > 0 and all p ∈ R^n, where d is the Euclidean metric. Properties When endowed with this topology, the real line R is a T5 space. Given two separated subsets A and B of R (that is, cl(A) ∩ B = A ∩ cl(B) = ∅, where cl(A) denotes the closure of A), there exist open sets S_A and S_B with A ⊆ S_A and B ⊆ S_B such that S_A ∩ S_B = ∅. See also References Topology Euclid
https://en.wikipedia.org/wiki/Structured%20program%20theorem
The structured program theorem, also called the Böhm–Jacopini theorem, is a result in programming language theory. It states that a class of control-flow graphs (historically called flowcharts in this context) can compute any computable function if it combines subprograms in only three specific ways (control structures). These are Executing one subprogram, and then another subprogram (sequence) Executing one of two subprograms according to the value of a boolean expression (selection) Repeatedly executing a subprogram as long as a boolean expression is true (iteration) The structured chart subject to these constraints, particularly the loop constraint implying a single exit (as described later in this article), may however use additional variables in the form of bits (stored in an extra integer variable in the original proof) in order to keep track of information that the original program represents by the program location. The construction was based on Böhm's programming language P′′. The theorem forms the basis of structured programming, a programming paradigm which eschews goto commands and exclusively uses subroutines, sequences, selection and iteration. Origin and variants The theorem is typically credited to a 1966 paper by Corrado Böhm and Giuseppe Jacopini. David Harel wrote in 1980 that the Böhm–Jacopini paper enjoyed "universal popularity", particularly with proponents of structured programming. Harel also noted that "due to its rather technical style [the 1966 Böhm–Jacopini paper] is apparently more often cited than read in detail" and, after reviewing a large number of papers published up to 1980, Harel argued that the contents of the Böhm–Jacopini proof were usually misrepresented as a folk theorem that essentially contains a simpler result, a result which itself can be traced to the inception of modern computing theory in the papers of von Neumann and Kleene. Harel also writes that the more generic name was proposed by H.D. Mills as "The Structure
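As a quick illustration (a sketch, not taken from the original proof), the "folk theorem" construction mentioned above simulates an arbitrary control-flow graph using only sequence, selection, and iteration plus one extra variable that plays the role of the program counter; here the hypothetical flowchart computes a greatest common divisor:

```python
# Simulating a goto-style flowchart with only the three structured constructs
# and an extra "state" variable standing in for the program location.
def gcd_structured(a, b):
    state = "test"                    # extra variable replacing the program counter
    while state != "done":            # iteration
        if state == "test":           # selection between flowchart nodes
            state = "step" if b != 0 else "done"
        elif state == "step":         # sequence of assignments inside one node
            a, b = b, a % b
            state = "test"
    return a

assert gcd_structured(252, 105) == 21
```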
https://en.wikipedia.org/wiki/CyberCash
CyberCash, Inc. was an internet payment service for electronic commerce, headquartered in Reston, Virginia. It was founded in August 1994 by Daniel C. Lynch (who served as chairman), William N. Melton (who served as president and CEO, and later chairman), Steve Crocker (Chief Technology Officer), and Bruce G. Wilson. The company initially provided an online wallet software to consumers and provided software to merchants to accept credit card payments. Later, they additionally offered "CyberCoin," a micropayment system modeled after the NetBill research project at Carnegie Mellon University, which they later licensed. At the time, the U.S. government had a short-lived restriction on the export of cryptography, making it illegal to provide encryption technology outside the United States. CyberCash obtained an exemption from the Department of State, which concluded that it would be easier to create encryption technology from scratch than to extract it out of Cyber-Cash's software. In 1995, the company proposed RFC 1898, CyberCash Credit Card Protocol Version 0.8. The company went public on February 19, 1996, with the symbol "CYCH" and its shares rose 79% on the first day of trading. In 1998, CyberCash bought ICVerify, makers of computer-based credit card processing software, and in 1999 added another software company to their lineup, purchasing Tellan Software. In January 2000, a teenage Russian hacker nicknamed "Maxus" announced that he had cracked CyberCash's ICVerify application; the company denied this, stating that ICVerify was not even in use by the purportedly hacked organization. On January 1, 2000, many users of CyberCash's ICVerify application fell victim to the Y2K Bug, causing double recording of credit card payments through their system. Although CyberCash had already released a Y2K-compliant update to the software, many users had not installed it. Bankruptcy The company filed for Chapter 11 bankruptcy on March 11, 2001. VeriSign acquired the Cyber
https://en.wikipedia.org/wiki/Reproductive%20success
Reproductive success is an individual's production of offspring per breeding event or lifetime. It is not limited to the number of offspring produced by one individual; it also includes the reproductive success of those offspring themselves. Reproductive success is different from fitness in that an individual's success is not necessarily a good measure of the adaptive strength of its genotype, since the effects of chance and of the environment on that individual's success reveal nothing about the underlying genes. Reproductive success becomes a component of fitness when the offspring are actually recruited into the breeding population. If offspring quantity is correlated with quality this holds up, but if it is not, then reproductive success must be adjusted by traits that predict juvenile survival in order to be measured effectively. Balancing quality and quantity is about finding the right trade-off between reproduction and maintenance. The disposable soma theory of aging tells us that a longer lifespan will come at the cost of reproduction, and thus longevity is not always correlated with high fecundity. Parental investment is a key factor in reproductive success, since taking better care of offspring often gives them a fitness advantage later in life. Mate choice and sexual selection are likewise important factors in reproductive success, which is another reason why reproductive success differs from fitness, as individual choices and outcomes can matter more than genetic differences. As reproductive success is measured over generations, longitudinal studies are the preferred study type, as they follow a population or an individual over a long period of time in order to monitor its progression. These long-term studies are preferable since they cancel out the effects of variation in any single year or breeding season. Nutritional contribution Nutrition is one of the factors that influences reproductive success. For example, different amounts of consumption and more specifically c
https://en.wikipedia.org/wiki/Ranking
A ranking is a relationship between a set of items such that, for any two items, the first is either "ranked higher than", "ranked lower than", or "ranked equal to" the second. In mathematics, this is known as a weak order or total preorder of objects. It is not necessarily a total order of objects because two different objects can have the same ranking. The rankings themselves are totally ordered. For example, materials are totally preordered by hardness, while degrees of hardness are totally ordered. If two items are the same in rank it is considered a tie. By reducing detailed measures to a sequence of ordinal numbers, rankings make it possible to evaluate complex information according to certain criteria. Thus, for example, an Internet search engine may rank the pages it finds according to an estimation of their relevance, making it possible for the user quickly to select the pages they are likely to want to see. Analysis of data obtained by ranking commonly requires non-parametric statistics. Strategies for handling ties It is not always possible to assign rankings uniquely. For example, in a race or competition two (or more) entrants might tie for a place in the ranking. When computing an ordinal measurement, two (or more) of the quantities being ranked might measure equal. In these cases, one of the strategies below for assigning the rankings may be adopted. A common shorthand way to distinguish these ranking strategies is by the ranking numbers that would be produced for four items, with the first item ranked ahead of the second and third (which compare equal) which are both ranked ahead of the fourth. These names are also shown below. Standard competition ranking ("1224" ranking) In competition ranking, items that compare equal receive the same ranking number, and then a gap is left in the ranking numbers. The number of ranking numbers that are left out in this gap is one less than the number of items that compared equal. Equivalently, each item's ra
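A minimal sketch of standard competition ("1224") ranking, assuming simple numeric scores where higher is better (the function name and example scores are illustrative, not from the article): each item's rank is one plus the number of items that strictly beat it, so tied items share a rank and a gap follows.

```python
# Standard competition ranking: rank = 1 + number of strictly better scores.
def competition_ranking(scores):
    return [1 + sum(other > s for other in scores) for s in scores]

# Four items where the second and third compare equal:
print(competition_ranking([9, 8, 8, 7]))   # [1, 2, 2, 4]
```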
https://en.wikipedia.org/wiki/Xoom%20%28web%20hosting%29
Xoom was an early dot-com company that provided free unlimited space web hosting, similar to GeoCities. The domain "xoom.com" is now held by the Xoom Corporation, an international-focused money transfer website run by PayPal. History Xoom was founded by Chris Kitze in September 1996 as a download website offering free clipart and a productivity suite including a word-processing application, centering on a word processor based on Wordstar. In March 1997, Xoom became a web hosting (offering 100 MB) and an e-mail hosting website. The company acquired several small service providers in 1997 and 1998, including Paralogic, creator of ParaChat, which was the largest chat network on the web at the time, and PageCount, a web counter service. The main revenue sources for the company were direct marketing via email to members and banner advertising. Funding The company was funded by a former Lycos executive who had previously started Creative Multimedia (Portland, OR), Aris Multimedia (Marina del Rey, CA) and Point Communications (New York, NY), and angel investors who invested a total of $10M in common stock. No venture capital was raised and the company went public in December 1998 (ticker symbol: XMCM). Around that time, it was ranked as the 13th most popular site on the web by Media Metrix. In May 1999, a deal was announced to use Xoom.com as a vehicle for NBC's internet ventures, that combined Snap.com (owned by CNET and NBC), and various NBC internet assets plus $400M of NBC on-air promotion to form NBC Internet (NBCi). At that time, the combined entity was ranked as the 7th most popular site on the web by Media Metrix. Reception Xoom was both criticized and praised for its strict policies on violations of terms of service. A short-lived experiment in franchising and licensing the Xoom software platform and business model in 1998 led to an Italian website, xoom.it, which is still in operation and owned by Virgilio. References Motley Fool article on Xoom.com W
https://en.wikipedia.org/wiki/Host%20Identity%20Protocol
The Host Identity Protocol (HIP) is a host identification technology for use on Internet Protocol (IP) networks, such as the Internet. The Internet has two main name spaces, IP addresses and the Domain Name System. HIP separates the end-point identifier and locator roles of IP addresses. It introduces a Host Identity (HI) name space, based on a public key security infrastructure. The Host Identity Protocol provides secure methods for IP multihoming and mobile computing. In networks that implement the Host Identity Protocol, all occurrences of IP addresses in applications are eliminated and replaced with cryptographic host identifiers. The cryptographic keys are typically, but not necessarily, self-generated. The effect of eliminating IP addresses in application and transport layers is a decoupling of the transport layer from the internetworking layer (Internet Layer) in TCP/IP. HIP was specified in the IETF HIP working group. An Internet Research Task Force (IRTF) HIP research group looks at the broader impacts of HIP. The working group is chartered to produce Requests for Comments on the "Experimental" track, but it is understood that their quality and security properties should match the standards track requirements. The main purpose for producing Experimental documents instead of standards track ones are the unknown effects that the mechanisms may have on applications and on the Internet in the large. RFC references - Host Identity Protocol (HIP) Architecture (early "informational" snapshot) - Host Identity Protocol base (Obsoleted by RFC 7401) - Using the Encapsulating Security Payload (ESP) Transport Format with the Host Identity Protocol (HIP) (Obsoleted by RFC 7402) - Host Identity Protocol (HIP) Registration Extension (obsoleted by RFC 8003) - Host Identity Protocol (HIP) Rendezvous Extension (obsoleted by RFC 8004) - Host Identity Protocol (HIP) Domain Name System (DNS) Extension (obsoleted by RFC 8005) - End-Host Mobility and Multihomin
https://en.wikipedia.org/wiki/Nutmeg%20oil
Nutmeg oil is a volatile essential oil from nutmeg (Myristica fragrans). The oil is colorless or light yellow and smells and tastes of nutmeg. It contains numerous components of interest to the oleochemical industry. The essential oil consists of approximately 90% terpene hydrocarbons. Prominent components are sabinene, α-pinene, β-pinene, and limonene. A major oxygen-containing component is terpinen-4-ol. The oil also contains small amounts of various phenolic compounds and aromatic ethers, e.g. myristicin, elemicin, safrole, and methyl eugenol. The phenolic fraction is considered main contributor to the characteristic nutmeg odor. However, in spite of the low oil content, the characteristic composition of nutmeg oil makes it a valuable product for food, cosmetic and pharmaceutical industries. Therefore, an improved process for its extraction would be of industrial interest. General uses The essential oil is obtained by the steam distillation of ground nutmeg and is used heavily in the perfumery and pharmaceutical industries. The nutmeg essential oil is used as a natural food flavoring in baked goods, syrups, beverages (e.g. Coca-Cola), sweets, etc. It can then be used to replace ground nutmeg, as it leaves no particles in the food. The essential oil is also used in the cosmetic and pharmaceutical industries for instance in toothpaste and as a major ingredient in some cough syrups. References Essential oils Flavors
https://en.wikipedia.org/wiki/Verbatim%20%28brand%29
Verbatim is a brand for storage media and flash memory products currently owned by CMC Magnetics Corporation (CMC), a Taiwanese company that is known for optical disc manufacturing. Formerly a subsidiary of Mitsubishi Chemical, the global business and assets of Verbatim were sold to CMC Magnetics in 2019 at an estimated price of $32 million USD. Originally an American company and known for its floppy disks in the 1970s and 1980s, Verbatim is now known for its recordable optical media. History The original Verbatim first started in Mountain View, California, in 1969, under the name Information Terminals Corporation, founded by Reid Anderson. It grew quickly and became a leading manufacturer of floppy disks by the end of the 1970s, and it was soon renamed Verbatim. In 1982, it formed a floppy disk joint venture with Japanese company Mitsubishi Kasei Corporation (forerunner of Mitsubishi Chemical Corporation), with the joint venture called Kasei Verbatim. Verbatim mostly struggled in the decade and was purchased by Eastman Kodak in 1985, while its floppy partnership with Mitsubishi Kasei Corporation was still intact. It was eventually purchased fully by Mitsubishi Kasei Corporation in March 1990, after eight years in a joint venture. Many new products were launched under the new Japanese ownership, and the brand saw immense growth in the decade. Mitsubishi Kagaku Media was founded in October 1994 as a subsidiary through the merger of Mitsubishi Kasei and Mitsubishi Petrochemical, resulting in Mitsubishi Chemical. The new company absorbed the former American company and created a new Japanese entity, whilst the old Verbatim brand lived on. In addition, Mitubishi Kagaku Media sold products under the Freecom brand. Freecom was founded in Berlin, Germany, in 1989 and had been based in the Netherlands when it was purchased by Mitsubishi Chemical Holdings in September 2009. The company was selling products under the Mitsubishi brand in Japan from 1994 to 2010, when Ver
https://en.wikipedia.org/wiki/Rail%20directions
Railroad directions are used to describe train directions on rail systems. The terms used may be derived from such sources as compass directions, altitude directions, or other directions. However, the railroad directions frequently vary from the actual directions, so that, for example, a "northbound" train may really be headed west over some segments of its trip, or a train going "down" may actually be increasing its elevation. Railroad directions are often specific to system, country, or region. Radial directions Many rail systems use the concept of a centre (usually a major city) to define rail directions. Up and down In British practice, railway directions are usually described as "up" and "down", with "up" being towards a major location. This convention is applied not only to the trains and the tracks, but also to items of lineside equipment and to areas near a track. Since British trains run on the left, the "up" side of a line is usually on the left when proceeding in the "up" direction. On most of the network, "up" is the direction towards London. In most of Scotland, with the exception of the West and East Coast Main Lines , and the Borders Railway, "up" is towards Edinburgh. The Valley Lines network around Cardiff has its own peculiar usage, relating to the literal meaning of travelling "up" and "down" the valley. On the former Midland Railway "up" was towards Derby. On the Northern Ireland Railways network, "up" generally means toward Belfast (the specific zero milepost varying from line to line); except for cross-border services to Dublin, where Belfast is "down". Mileposts normally increase in the "down" direction, but there are exceptions, such as the Trowbridge line between Bathampton Junction and Hawkeridge Junction, where mileage increases in the "up" direction. Individual tracks will have their own names, such as Up Main or Down Loop. Trains running towards London are normally referred to as "up" trains, and those away from London as "down". He
https://en.wikipedia.org/wiki/Dependency%20injection
In software engineering, dependency injection is a programming technique in which an object or function receives other objects or functions that it requires, as opposed to creating them internally. Dependency injection aims to separate the concerns of constructing objects and using them, leading to loosely coupled programs. The pattern ensures that an object or function which wants to use a given service should not have to know how to construct those services. Instead, the receiving 'client' (object or function) is provided with its dependencies by external code (an 'injector'), which it is not aware of. Dependency injection makes implicit dependencies explicit and helps solve the following problems: How can a class be independent from the creation of the objects it depends on? How can an application, and the objects it uses support different configurations? How can the behavior of a piece of code be changed without editing it directly? Dependency injection is often used to keep code in-line with the dependency inversion principle. In statically-typed languages using dependency injection means a client only needs to declare the interfaces of the services it uses, rather than their concrete implementations, making it easier to change which services are used at runtime without recompiling. Application frameworks often combine dependency injection with Inversion of Control. Under inversion of control, the framework first constructs an object (such as a controller), then passes control flow to it. With dependency injection, the framework also instantiates the dependencies declared by the application object (often in the constructor method's parameters), and passes the dependencies into the object. Dependency injection implements the idea of "inverting control over the implementations of dependencies", which is why certain Java frameworks generically name the concept "inversion of control" (not to be confused with inversion of control flow). Roles Dependency inj
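A short sketch of constructor injection (all class and method names here are hypothetical, chosen only to illustrate the pattern): the client declares the interface it needs and receives a concrete service from external wiring code instead of constructing one itself.

```python
from typing import Protocol

class MessageSender(Protocol):            # the interface the client depends on
    def send(self, to: str, body: str) -> None: ...

class SmtpSender:                         # one concrete implementation (hypothetical)
    def send(self, to: str, body: str) -> None:
        print(f"SMTP -> {to}: {body}")

class ConsoleSender:                      # a test double, swapped in without editing the client
    def send(self, to: str, body: str) -> None:
        print(f"console -> {to}: {body}")

class WelcomeService:
    def __init__(self, sender: MessageSender) -> None:
        self._sender = sender             # dependency is injected, not constructed internally

    def greet(self, user: str) -> None:
        self._sender.send(user, "Welcome!")

# The "injector" role is played here by plain wiring code.
WelcomeService(ConsoleSender()).greet("alice")
```

Swapping SmtpSender for ConsoleSender changes the behaviour without editing WelcomeService, which is the separation of construction from use described above.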
https://en.wikipedia.org/wiki/Primer%20walking
Primer walking is a technique used to clone a gene (e.g., a disease gene) from its known closest markers (e.g., a known gene), and it is employed, with minor alterations, in cloning and sequencing efforts in plants, fungi, and mammals. This technique, also known as "directed sequencing," employs a series of Sanger sequencing reactions either to confirm the sequence of a known plasmid or PCR product against a reference sequence (sequence confirmation service) or to discover the unknown sequence of a full plasmid or PCR product by designing primers to sequence overlapping sections (sequence discovery service). Primer walking: a DNA sequencing method Primer walking is a method to determine the sequence of DNA up to the 1.3–7.0 kb range, whereas chromosome walking is used to produce clones of already known sequences of a gene. Fragments that are too long cannot be sequenced in a single read using the chain-termination method, so this method works by dividing the long sequence into several consecutive short ones. The DNA of interest may be a plasmid insert, a PCR product or a fragment representing a gap when sequencing a genome. The term "primer walking" is used where the main aim is to sequence the genome; the term "chromosome walking" is used instead when the sequence is known but there is no clone of a gene. For example, the gene for a disease may be located near a specific marker such as an RFLP on the sequence. Chromosome walking is a technique used to clone a gene (e.g., a disease gene) from its known closest markers (e.g., a known gene) and hence is used, with moderate modifications, in cloning and sequencing projects in plants, fungi, and animals. To put it another way, it is used to find, isolate, and clone a specific sequence lying near the gene to be mapped. Libraries of large fragments, mainly bacterial artificial chromosome libraries, are mostly used in genomic projects. To identify the desired colony and to select a particular clone the l
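A schematic sketch of the walking loop (the read length, primer length, and function names are illustrative assumptions, not values from the article): sequence a stretch from the current primer, design the next primer from the end of that read, and repeat until the insert is covered.

```python
# Schematic primer walking over a toy template; real reads would come from a
# sequencer rather than string slicing.
def primer_walk(template, read_len=800, primer_len=20):
    assembled, primers, position = "", [], 0
    while position < len(template):
        read = template[position:position + read_len]   # one Sanger read from the current primer
        assembled += read
        primers.append(read[-primer_len:])              # design the next primer from the read's end
        position += len(read)                           # "walk" forward along the template
    return assembled, primers

template = "ACGT" * 1000                                 # a 4 kb toy insert
sequence, primers = primer_walk(template)
assert sequence == template                              # the walk covers the whole insert
```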
https://en.wikipedia.org/wiki/Beveridge%20curve
A Beveridge curve, or UV curve, is a graphical representation of the relationship between unemployment and the job vacancy rate, the number of unfilled jobs expressed as a proportion of the labour force. It typically has vacancies on the vertical axis and unemployment on the horizontal. The curve, named after William Beveridge, is hyperbolic-shaped and slopes downward, as a higher rate of unemployment normally occurs with a lower rate of vacancies. If it moves outward over time, a given level of vacancies would be associated with higher and higher levels of unemployment, which would imply decreasing efficiency in the labour market. Inefficient labour markets are caused by mismatches between available jobs and the unemployed and an immobile labour force. The position on the curve can indicate the current state of the economy in the business cycle. For example, recessionary periods are indicated by high unemployment and low vacancies, corresponding to a position on the lower side of the 45° line, and high vacancies and low unemployment indicate the expansionary periods on the upper side of the 45° line. In the United States, following the Great Recession, there was a marked shift in the Beveridge curve. A 2012 International Monetary Fund (IMF) said the shift can be explained in part by "extended unemployment insurance benefits" and "skill mismatch" between unemployment and vacancies. History The Beveridge curve, or UV curve, was developed in 1958 by Christopher Dow and Leslie Arthur Dicks-Mireaux. They were interested in measuring excess demand in the goods market for the guidance of Keynesian fiscal policies and took British data on vacancies and unemployment in the labour market as a proxy, since excess demand is unobservable. By 1958, they had 12 years of data available since the British government had started collecting data on unfilled vacancies from notification at labour exchanges in 1946. Dow and Dicks-Mireaux presented the unemployment and vacancy data in
https://en.wikipedia.org/wiki/Ralph%20Henstock
Ralph Henstock (2 June 1923 – 17 January 2007) was an English mathematician and author. As an Integration theorist, he is notable for Henstock–Kurzweil integral. Henstock brought the theory to a highly developed stage without ever having encountered Jaroslav Kurzweil's 1957 paper on the subject. Early life He was born in the coal-mining village of Newstead, Nottinghamshire, the only child of mineworker and former coalminer William Henstock and Mary Ellen Henstock (née Bancroft). On the Henstock side he was descended from 17th century Flemish immigrants called Hemstok. Because of his early academic promise it was expected that Henstock would attend the University of Nottingham where his father and uncle had received technical education, but as it turned out he won scholarships which enabled him to study mathematics at St John's College, Cambridge from October 1941 until November 1943, when he was sent for war service to the Ministry of Supply's department of Statistical Method and Quality Control in London. This work did not satisfy him, so he enrolled at Birkbeck College, London where he joined the weekly seminar of Professor Paul Dienes which was then a focus for mathematical activity in London. Henstock wanted to study divergent series but Dienes prevailed upon him to get involved in the theory of integration, thereby setting him on course for his life's work. A devoted Methodist, the lasting impression he made was one of gentle sincerity and amiability. Henstock married Marjorie Jardine in 1949. Their son John was born 10 July 1952. Ralph Henstock died on 17 January 2007 after a short illness. Work He was awarded the Cambridge B.A. in 1944 and began research for the PhD in Birkbeck College, London, under the supervision of Paul Dienes. His PhD thesis, entitled Interval Functions and their Integrals, was submitted in December 1948. His Ph.D. examiners were Burkill and H. Kestelman. In 1947 he returned briefly to Cambridge to complete the undergraduate mathem
https://en.wikipedia.org/wiki/Mathematical%20Reviews
Mathematical Reviews is a journal published by the American Mathematical Society (AMS) that contains brief synopses, and in some cases evaluations, of many articles in mathematics, statistics, and theoretical computer science. The AMS also publishes an associated online bibliographic database called MathSciNet which contains an electronic version of Mathematical Reviews and additionally contains citation information for over 3.5 million items Reviews Mathematical Reviews was founded by Otto E. Neugebauer in 1940 as an alternative to the German journal Zentralblatt für Mathematik, which Neugebauer had also founded a decade earlier, but which under the Nazis had begun censoring reviews by and of Jewish mathematicians. The goal of the new journal was to give reviews of every mathematical research publication. As of November 2007, the Mathematical Reviews database contained information on over 2.2 million articles. The authors of reviews are volunteers, usually chosen by the editors because of some expertise in the area of the article. It and Zentralblatt für Mathematik are the only comprehensive resources of this type. (The Mathematics section of Referativny Zhurnal is available only in Russian and is smaller in scale and difficult to access.) Often reviews give detailed summaries of the contents of the paper, sometimes with critical comments by the reviewer and references to related work. However, reviewers are not encouraged to criticize the paper, because the author does not have an opportunity to respond. The author's summary may be quoted when it is not possible to give an independent review, or when the summary is deemed adequate by the reviewer or the editors. Only bibliographic information may be given when a work is in an unusual language, when it is a brief paper in a conference volume, or when it is outside the primary scope of the Reviews. Originally the reviews were written in several languages, but later an "English only" policy was introduced. Selected
https://en.wikipedia.org/wiki/Overscan
Overscan is a behaviour in certain television sets, in which part of the input picture is cut off by the visible bounds of the screen. It exists because cathode-ray tube (CRT) television sets from the 1930s to the early 2000s were highly variable in how the video image was positioned within the borders of the screen. It then became common practice to have video signals with black edges around the picture, which the television was meant to discard in this way. Origins Early analog televisions varied in the displayed image because of manufacturing tolerance problems. There were also effects from the early design limitations of power supplies, whose DC voltage was not regulated as well as in later power supplies. This could cause the image size to change with normal variations in the AC line voltage, as well as a process called blooming, where the image size increased slightly when a brighter overall picture was displayed due to the increased electron beam current causing the CRT anode voltage to drop. Because of this, TV producers could not be certain where the visible edges of the image would be. In order to compensate, they defined three areas: Title safe: An area visible by all reasonably maintained sets, where text was certain not to be cut off. Action safe: A larger area that represented where a "perfect" set (with high precision to allow less overscanning) would cut the image off. Underscan: The full image area to the electronic edge of the signal with additional black borders which weren't part of the original image. Fullscan: The full image area to the electronic edge of the signal (with the black borders of the image if they exist). Observable fullscan: An overscan image area which dismisses only the additional black borders of the image (if they exist). A significant number of people would still see some of the overscan area, so while nothing important in a scene would be placed there, it also had to be kept free of microphones, stage hands, and othe
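As a rough illustration, a sketch computing centred safe rectangles for a frame; the 90% (action safe) and 80% (title safe) figures are commonly quoted rules of thumb and are assumptions here, not values given above.

```python
# Compute a centred "safe" rectangle covering a given fraction of the frame.
def safe_area(width, height, fraction):
    margin_x = width * (1 - fraction) / 2
    margin_y = height * (1 - fraction) / 2
    # (left, top, right, bottom) of the centred safe rectangle
    return (margin_x, margin_y, width - margin_x, height - margin_y)

frame = (720, 576)                      # e.g. a PAL SD frame
print(safe_area(*frame, 0.90))          # assumed action-safe rectangle
print(safe_area(*frame, 0.80))          # assumed title-safe rectangle
```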
https://en.wikipedia.org/wiki/Chuckie%20Egg
Chuckie Egg is a video game released by A&F Software in 1983 initially for the ZX Spectrum, BBC Micro, and Dragon 32/64. It was ported to the Commodore 64, Acorn Electron, MSX, Tatung Einstein, Amstrad CPC, and Atari 8-bit family. It was later updated for the Amiga, Atari ST, and IBM PC compatibles. The game was written by Nigel Alderton, then 16 or 17 years old. After a month or two of development, Nigel took a pre-release version of his Spectrum code to the two-year-old software company A&F, co-founded by Doug Anderson and Mike Fitzgerald (the "A" and "F", respectively). Doug took on the simultaneous development of the BBC Micro version, whilst Mike Webb, an A&F employee, completed the Dragon port. The versions fall broadly into two groups: those with realistic physics (e.g., BBC Micro and Amstrad CPC) and those without (e.g., ZX Spectrum). Although there is a substantial difference in play between the two, levels remain largely the same and all the 8-bit versions have been cited as classics. Gameplay As Hen-House Harry, the player must collect the twelve eggs positioned in each level, before a countdown timer reaches zero. In addition there are piles of seed which may be collected to increase points and stop the countdown timer for a while, but will otherwise be eaten by hens that patrol the level, causing them to pause. If the player touches a hen or falls through a gap in the bottom of the level, they lose a life. Each level is made of solid platforms, ladders, and occasionally lift platforms that move upwards and when they reach the top of the screen wrap around to the bottom. Hitting the top of the screen while on one of these lifts, however, will also cause the player to lose a life. Eight levels are defined and are played initially under the watch of a giant caged duck. Upon completion of all eight the levels are played again without hens, but Harry is now pursued by the freed duck flying around the screen and homing in on him. A second completion of
https://en.wikipedia.org/wiki/235%20%28number%29
235 (two hundred [and] thirty-five) is the integer following 234 and preceding 236. Additionally, 235 is: a semiprime. a heptagonal number. a centered triangular number. therefore a figurate number in two ways. palindromic in bases 4 (3223_4), 7 (454_7), 8 (353_8), 13 (151_13), and 46 (55_46). a Harshad number in bases 6, 47, 48, 95, 116, 189 and 231. a Smarandache–Wellin number Also: There are 235 different trees with 11 unlabeled nodes. If an equilateral triangle is subdivided into smaller equilateral triangles whose side length is 1/9 that of the original, the resulting "matchstick arrangement" will have exactly 235 different equilateral triangles of varying sizes in it. References Integers
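A quick sketch verifying three of the listed properties (the formulas for heptagonal and centered triangular numbers are the standard ones, not quoted from the article):

```python
n = 235
# semiprime: the only proper divisors besides 1 are the primes 5 and 47
assert [d for d in range(2, n) if n % d == 0] == [5, 47]
# heptagonal number: H_k = k(5k - 3)/2, here with k = 10
assert n in {k * (5 * k - 3) // 2 for k in range(1, 50)}
# centered triangular number: C_k = (3k^2 + 3k + 2)/2, here with k = 12
assert n in {(3 * k * k + 3 * k + 2) // 2 for k in range(50)}
```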
https://en.wikipedia.org/wiki/Telepen
Telepen is the name of a barcode symbology designed to express all 128 ASCII characters without using shift characters for code switching, and using only two different widths for bars and spaces. (Unlike Code 128, which uses shifts and four different element widths.) The symbology was devised by George Sims of SB Electronic Systems Ltd. Telepen was originally designed in the UK in 1972. Unlike most linear barcodes, Telepen does not define independent encodings for each character, but instead operates on a stream of bits. It is able to represent any bit stream containing an even number of 0 bits, and is applied to ASCII bytes with even parity, which satisfies that rule. Bytes are encoded in little-endian bit order. The string of bits is divided into single 1 bits and blocks of the form 01*0, that is, blocks beginning and ending with a 0 bit, with any number of 1 bits in between. These are then encoded as follows: "1" is encoded as a narrow bar-narrow space "00" is encoded as a wide bar-narrow space "010" is encoded as a wide bar-wide space Otherwise, the leading "01" and trailing "10" are both encoded as narrow bar-wide space, with any additional 1 bits in between encoded as described above. Wide elements are 3 times the width of narrow elements, so every bit occupies 2 narrow elements of space. Barcodes always start with ASCII _ (underscore). This has code 0x5F, so the (lsbit-first) bit stream is 11111010. Thus, it is represented as 5 narrow bar/narrow space pairs, followed by a wide bar/wide space. Barcodes always end with ASCII z. This has (including parity) code 0xFA, so the (lsbit-first) bit stream is 01011111. This is encoded as a wide bar/wide space, followed by 5 narrow bar/narrow space pairs. Each end of the bar code consists of repeated narrow elements terminated by a pair of wide elements, but the start has a wide bar first, while if the code is read in reverse, the wide space will be encountered first. In addition to per-character parity bits, a Telepen
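A sketch of the element-encoding rules as described above (the function names and the B/b/S/s notation for wide/narrow bars and spaces are illustrative choices, not part of the Telepen specification). Running it on the start character "_" reproduces the pattern given above: five narrow bar/narrow space pairs followed by a wide bar/wide space.

```python
# Encode one ASCII character into Telepen-style bar/space elements:
# "B"/"b" = wide/narrow bar, "S"/"s" = wide/narrow space.
def even_parity_bits(ch):
    code = ord(ch)
    if bin(code).count("1") % 2:          # add a parity bit so the byte has even parity
        code |= 0x80
    return format(code, "08b")[::-1]      # little-endian (lsbit-first) bit order

def encode_bits(bits):
    # Assumes a well-formed stream, i.e. one containing an even number of 0 bits.
    out, i = [], 0
    while i < len(bits):
        if bits[i] == "1":                # lone 1 bit
            out.append("bs")              # narrow bar, narrow space
            i += 1
        else:                             # block of the form 0 1...1 0
            j = i + 1
            while bits[j] == "1":
                j += 1
            ones = j - i - 1
            if ones == 0:
                out.append("Bs")          # "00": wide bar, narrow space
            elif ones == 1:
                out.append("BS")          # "010": wide bar, wide space
            else:                         # "01...10": narrow bar/wide space at each end,
                out.append("bS")          # inner 1 bits as narrow bar/narrow space
                out.extend("bs" for _ in range(ones - 2))
                out.append("bS")
            i = j + 1
    return out

print(encode_bits(even_parity_bits("_")))   # ['bs', 'bs', 'bs', 'bs', 'bs', 'BS']
```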
https://en.wikipedia.org/wiki/Melanie%20Wood
Melanie Matchett Wood (born 1981) is an American mathematician at Harvard University who was the first woman to qualify for the U.S. International Mathematical Olympiad Team. She completed her PhD in 2009 at Princeton University (under Manjul Bhargava) and is currently Professor of Mathematics at Harvard University, after being Chancellor's Professor of Mathematics at UC Berkeley and Vilas Distinguished Achievement Professor of Mathematics at the University of Wisconsin, and spending 2 years as Szegö Assistant Professor at Stanford University. She is a number theorist; more specifically, her research centers on arithmetic statistics, with excursions into related questions in arithmetic geometry and probability theory. Early life Wood was born in Indianapolis, Indiana, to Sherry Eggers and Archie Wood, both middle school teachers. Her father, a mathematics teacher, died of cancer when Wood was six weeks old. While a high school student at Park Tudor School in Indianapolis, Wood (then aged 16) became the first, and until 2004 the only female American to make the U.S. International Mathematical Olympiad Team, receiving silver medals in the 1998 and 1999 International Mathematical Olympiad. Wood was also a cheerleader and student newspaper editor at her school. Awards In 2002, she received the Alice T. Schafer Prize from the Association for Women in Mathematics. In 2003, Wood graduated from Duke University where she won a Gates Cambridge Scholarship, Fulbright fellowship, and a National Science Foundation graduate fellowship, in addition to becoming the first American woman and second woman overall to be named a Putnam Fellow in 2002. During the 2003–2004 year she studied at Cambridge University. She was also named the Deputy Leader of the U.S. team that finished second overall at the 2005 International Mathematical Olympiad. In 2004, she won the Morgan Prize for work in two topics, Belyi-extending maps and P-orderings, making her the first woman to win this
https://en.wikipedia.org/wiki/Poincar%C3%A9%20metric
In mathematics, the Poincaré metric, named after Henri Poincaré, is the metric tensor describing a two-dimensional surface of constant negative curvature. It is the natural metric commonly used in a variety of calculations in hyperbolic geometry or Riemann surfaces. There are three equivalent representations commonly used in two-dimensional hyperbolic geometry. One is the Poincaré half-plane model, defining a model of hyperbolic space on the upper half-plane. The Poincaré disk model defines a model for hyperbolic space on the unit disk. The disk and the upper half plane are related by a conformal map, and isometries are given by Möbius transformations. A third representation is on the punctured disk, where relations for q-analogues are sometimes expressed. These various forms are reviewed below. Overview of metrics on Riemann surfaces A metric on the complex plane may be generally expressed in the form where λ is a real, positive function of and . The length of a curve γ in the complex plane is thus given by The area of a subset of the complex plane is given by where is the exterior product used to construct the volume form. The determinant of the metric is equal to , so the square root of the determinant is . The Euclidean volume form on the plane is and so one has A function is said to be the potential of the metric if The Laplace–Beltrami operator is given by The Gaussian curvature of the metric is given by This curvature is one-half of the Ricci scalar curvature. Isometries preserve angles and arc-lengths. On Riemann surfaces, isometries are identical to changes of coordinate: that is, both the Laplace–Beltrami operator and the curvature are invariant under isometries. Thus, for example, let S be a Riemann surface with metric and T be a Riemann surface with metric . Then a map with is an isometry if and only if it is conformal and if . Here, the requirement that the map is conformal is nothing more than the statement that is, Metric a
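In the usual notation, the quantities referred to above take the following standard forms (a reconstruction using common conventions, which may differ by factors of 2 from the original article):

```latex
% Metric, length, area and curvature for a conformal metric on a plane domain.
ds^2 = \lambda^2(z,\bar z)\, dz\, d\bar z, \qquad
\operatorname{length}(\gamma) = \int_\gamma \lambda(z,\bar z)\, |dz|, \qquad
\operatorname{area}(M) = \int_M \lambda^2(z,\bar z)\, \tfrac{i}{2}\, dz \wedge d\bar z, \qquad
K = -\frac{1}{\lambda^2}\,\Delta \log \lambda .

% The two classical models of constant curvature -1:
ds^2_{\mathbf H} = \frac{dx^2 + dy^2}{y^2} \quad (\text{upper half-plane}, \; y > 0), \qquad
ds^2_{\mathbf D} = \frac{4\,(dx^2 + dy^2)}{\bigl(1 - (x^2 + y^2)\bigr)^2} \quad (\text{unit disk}).
```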
https://en.wikipedia.org/wiki/Isolecithal
Isolecithal (Greek iso = equal, lekithos = yolk) refers to the even distribution of yolk in the cytoplasm of ova of mammals and other vertebrates, notably fishes of the families Petromyzontidae, Amiidae, and Lepisosteidae. Isolecithal cells have two equal hemispheres of yolk. However, during cellular development, normally under the influence of gravity, some of the yolk settles to the bottom of the egg, producing an uneven distribution of yolky hemispheres. Such uneven cells are known as telolecithal and are common where there is sufficient yolk mass. In the absence of a large concentration of yolk, four major cleavage types can be observed in isolecithal cells: radial holoblastic, spiral holoblastic, bilateral holoblastic, and rotational holoblastic cleavage. These holoblastic cleavage planes pass all the way through isolecithal zygotes during the process of cytokinesis. Coeloblastula is the next stage of development for eggs that undergo this radial cleavage. In mammals, because the isolecithal cells have only a small amount of yolk, they require immediate implantation onto the uterine wall to receive nutrients. See also Cell cycle Centrolecithal Telolecithal References Cell biology
https://en.wikipedia.org/wiki/Schwarz%E2%80%93Ahlfors%E2%80%93Pick%20theorem
In mathematics, the Schwarz–Ahlfors–Pick theorem is an extension of the Schwarz lemma for hyperbolic geometry, such as the Poincaré half-plane model. The Schwarz–Pick lemma states that every holomorphic function from the unit disk U to itself, or from the upper half-plane H to itself, will not increase the Poincaré distance between points. The unit disk U with the Poincaré metric has negative Gaussian curvature −1. In 1938, Lars Ahlfors generalised the lemma to maps from the unit disk to other negatively curved surfaces: Theorem (Schwarz–Ahlfors–Pick). Let U be the unit disk with Poincaré metric ρ; let S be a Riemann surface endowed with a Hermitian metric σ whose Gaussian curvature is ≤ −1; let f : U → S be a holomorphic function. Then for all z1, z2 in U, dσ(f(z1), f(z2)) ≤ dρ(z1, z2), where dρ and dσ denote the distances induced by the respective metrics. A generalization of this theorem was proved by Shing-Tung Yau in 1973. References Hyperbolic geometry Riemann surfaces Theorems in complex analysis Theorems in differential geometry
https://en.wikipedia.org/wiki/Sodium%20acetate
Sodium acetate, CH3COONa, also abbreviated NaOAc, is the sodium salt of acetic acid. This colorless deliquescent salt has a wide range of uses. Applications Biotechnological Sodium acetate is used as the carbon source for culturing bacteria. Sodium acetate is also useful for increasing yields of DNA isolation by ethanol precipitation. Industrial Sodium acetate is used in the textile industry to neutralize sulfuric acid waste streams and also as a photoresist while using aniline dyes. It is also a pickling agent in chrome tanning and helps to impede vulcanization of chloroprene in synthetic rubber production. In processing cotton for disposable cotton pads, sodium acetate is used to eliminate the buildup of static electricity. Concrete longevity Sodium acetate is used to mitigate water damage to concrete by acting as a concrete sealant, while also being environmentally benign and cheaper than the commonly used epoxy alternative for sealing concrete against water permeation. Food Sodium acetate may be added to food as a seasoning, sometimes in the form of sodium diacetate, a one-to-one complex of sodium acetate and acetic acid, given the E-number E262. It is often used to give potato chips a salt and vinegar flavour, and may be used as a substitute for vinegar itself on potato chips as it doesn't add moisture to the final product. Sodium acetate (anhydrous) is widely used as a shelf-life extending agent and pH control agent. It is safe to eat at low concentration. Buffer solution A solution of sodium acetate (a basic salt of acetic acid) and acetic acid can act as a buffer to keep a relatively constant pH level. This is useful especially in biochemical applications where reactions are pH-dependent in a mildly acidic range (pH 4–6). Heating pad Sodium acetate is also used in heating pads, hand warmers, and hot ice. A supersaturated solution of sodium acetate in water is supplied with a device to initiate crystallization, a process that releases substantial heat
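As an illustration of the buffering behaviour, the standard Henderson–Hasselbalch relation (not quoted in the excerpt above), with the textbook pKa of acetic acid of about 4.76 at 25 °C, shows that equal concentrations of acetate and acetic acid hold the pH near 4.76, squarely in the mildly acidic range mentioned:

```latex
% Henderson--Hasselbalch relation for an acetate buffer (pKa value is the
% textbook figure for acetic acid at 25 degrees C).
\mathrm{pH} = \mathrm{p}K_a + \log_{10}\frac{[\mathrm{CH_3COO^-}]}{[\mathrm{CH_3COOH}]}
\approx 4.76 + \log_{10}\frac{[\mathrm{CH_3COO^-}]}{[\mathrm{CH_3COOH}]} .
```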
https://en.wikipedia.org/wiki/Panbiogeography
Panbiogeography, originally proposed by the French-Italian scholar Léon Croizat (1894–1982) in 1958, is a cartographical approach to biogeography that plots distributions of a particular taxon or group of taxa on maps, and connects the disjunct distribution areas or collection localities together with lines called tracks , regarding vicariance as the primary mechanism for the distribution of organisms rather than dispersal. While Panbiogeography influenced development of modern biogeography, the ideas in their original form are not considered mainstream biogeographical theory, and the theory was described in 2007 as "almost moribund". Tracks A track is a representation of the spatial form of a species distribution and can give insights into the spatial processes that generated that distribution. Crossing of an ocean or sea basin or any other major tectonic structure (e.g. a fault zone) by an individual track constitutes a baseline. Individual tracks are superimposed, and if they coincide according to a specified criterion (e.g. shared baselines or compatible track geometries), the resulting summary lines are considered generalized (or standard) tracks. Generalized tracks suggest the pre-existence of ancestral biotas, which subsequently become fragmented by tectonic and/or climate change. The area where two or more generalized tracks intersect is called node. It means that different ancestral biotic and geological fragments interrelate in space/time, as a consequence of terrain collision, docking, or suturing, thus constituting a composite area. A concentration of numerical, genetical or morphological diversity within a taxon in a given area constitutes a main massing. Panbiogeography was first conceived by Croizat and further applied by researchers in New Zealand and Latin America. Panbiogeography provides a method for analyzing the geographic (spatial) structure of distributions in order to generate predictions about the evolution of species and other taxa in s
https://en.wikipedia.org/wiki/Cabin%20pressurization
Cabin pressurization is a process in which conditioned air is pumped into the cabin of an aircraft or spacecraft in order to create a safe and comfortable environment for humans flying at high altitudes. For aircraft, this air is usually bled off from the gas turbine engines at the compressor stage, and for spacecraft, it is carried in high-pressure, often cryogenic, tanks. The air is cooled, humidified, and mixed with recirculated air by one or more environmental control systems before it is distributed to the cabin. The first experimental pressurization systems saw use during the 1920s and 1930s. In the 1940s, the first commercial aircraft with a pressurized cabin entered service. The practice would become widespread a decade later, particularly with the introduction of the British de Havilland Comet jetliner in 1949. However, two catastrophic failures in 1954 temporarily grounded the Comet worldwide. The causes were investigated and found to be a combination of progressive metal fatigue and aircraft skin stresses caused from pressurization. Improved testing involved multiple full scale pressurization cycle tests of the entire fuselage in a water tank, and the key engineering principles learned were applied to the design of subsequent jet airliners. Certain aircraft have unusual pressurization needs. For example, the supersonic airliner Concorde had a particularly high pressure differential due to flying at unusually high altitude: up to while maintaining a cabin altitude of . This increased airframe weight and saw the use of smaller cabin windows intended to slow the decompression rate if a depressurization event occurred. The Aloha Airlines Flight 243 incident, involving a Boeing 737-200 that suffered catastrophic cabin failure mid-flight, was primarily caused by the aircraft's continued operation despite having accumulated more than twice the number of flight cycles that the airframe was designed to endure. For increased passenger comfort, several modern a
https://en.wikipedia.org/wiki/Hyponastic%20response
In plant biology, the hyponastic response is a nastic movement characterized by an upward bending of leaves or other plant parts, resulting from accelerated growth of the lower side of the petiole in comparison to its upper part. This can be observed in many terrestrial plants and is linked to the plant hormone ethylene. The plant’s root senses the water excess and produces 1-Aminocyclopropane-1-carboxylic acid which then is converted into ethylene, regulating this process. Submerged plants often show the hyponastic response, where the upward bending of the leaves and the elongation of the petioles might help the plant to restore normal gas exchange with the atmosphere. Plants that are exposed to elevated ethylene levels in experimental set-ups also show the hyponastic response. References Plant physiology Botany
https://en.wikipedia.org/wiki/MPEG%20Industry%20Forum
The MPEG Industry Forum (MPEGIF) is a non-profit consortium dedicated to "further the adoption of MPEG Standards, by establishing them as well accepted and widely used standards among creators of content, developers, manufacturers, providers of services, and end users." The group is involved in many tasks, which include promotion of MPEG standards (particularly MPEG-4, MPEG-4 AVC / H.264, MPEG-7 and MPEG-21); developing MPEG certification for products; organising educational events; and collaborating on development of new de facto MPEG standards. MPEGIF, founded in 2000, has played a significant role in facilitating the widespread adoption and deployment of MPEG-4 AVC/H.264 as the industry's standard video compression technology, powering next generation television, most mainstream content delivery and consumption applications including packaged media. MPEGIF serves as a single point of information on technology, products and services for these standards, offers interoperability testing, a conformance program, marketing activities and is supporting over 50 international trade shows and conferences per year. The key activities of the forum are structured via three main Committees: Technology & Engineering Interoperability & Compliance Marketing & Communication 2009–2010 focus areas 3DTV Addressable advertising: extension and adoption of CableLabs SCTE-104 for all multimedia MPEG-4/Scalable Video Coding (SVC) Simplifying competitive licensing Quality of Experience / Quality of Service metrics Royalty free DRM initiatives Online Video / Internet Streaming IPTV ecosystem Ultra HD (7680x4320) MPEG/High-Performance Video Coding (HVC, H.265) MPEG-7 / MPEG-21 MPEGIF is also running the MPEGIF Logo Qualification Program, which is designed to help guide interoperability among products and technology. The program, based on a self-certification process, is free of charge and open to all companies using MPEG technology, not just members of MPEGIF although, me
https://en.wikipedia.org/wiki/Electroactive%20polymer
An electroactive polymer (EAP) is a polymer that exhibits a change in size or shape when stimulated by an electric field. The most common applications of this type of material are in actuators and sensors. A typical characteristic property of an EAP is that they will undergo a large amount of deformation while sustaining large forces. The majority of historic actuators are made of ceramic piezoelectric materials. While these materials are able to withstand large forces, they commonly will only deform a fraction of a percent. In the late 1990s, it has been demonstrated that some EAPs can exhibit up to a 380% strain, which is much more than any ceramic actuator. One of the most common applications for EAPs is in the field of robotics in the development of artificial muscles; thus, an electroactive polymer is often referred to as an artificial muscle. History The field of EAPs emerged back in 1880, when Wilhelm Röntgen designed an experiment in which he tested the effect of an electrostatic field on the mechanical properties of a stripe of natural rubber. The rubber stripe was fixed at one end and was attached to a mass at the other. Electric charges were then sprayed onto the rubber, and it was observed that the length changed. It was in 1925 that the first piezoelectric polymer was discovered (Electret). Electret was formed by combining carnauba wax, rosin and beeswax, and then cooling the solution while it is subject to an applied DC electrical bias. The mixture would then solidify into a polymeric material that exhibited a piezoelectric effect. Polymers that respond to environmental conditions, other than an applied electric current, have also been a large part of this area of study. In 1949 Katchalsky et al. demonstrated that when collagen filaments are dipped in acid or alkali solutions, they would respond with a change in volume. The collagen filaments were found to expand in an acidic solution and contract in an alkali solution. Although other stimuli (such
https://en.wikipedia.org/wiki/H.248
The Gateway Control Protocol (Megaco, H.248) is an implementation of the media gateway control protocol architecture for providing telecommunication services across a converged internetwork consisting of the traditional public switched telephone network (PSTN) and modern packet networks, such as the Internet. H.248 is the designation of the recommendations developed by the ITU Telecommunication Standardization Sector (ITU-T) and Megaco is a contraction of media gateway control protocol used by the earliest specifications by the Internet Engineering Task Force (IETF). The standard published in March 2013 by ITU-T is entitled H.248.1: Gateway control protocol: Version 3. Megaco/H.248 follows the guidelines published in RFC 2805 in April 2000, entitled Media Gateway Control Protocol Architecture and Requirements. The protocol performs the same functions as the Media Gateway Control Protocol (MGCP), is however a formal standard while MGCP has only informational status. Using different syntax and symbolic representation, the two protocols are not directly interoperable. They are both complementary to H.323 and the Session Initiation Protocol (SIP) protocols. H.248 was the result of collaboration of the MEGACO working group of the Internet Engineering Task Force (IETF) and the International Telecommunication Union Telecommunication Study Group 16. The IETF originally published the standard as RFC 3015, which was superseded by RFC 3525. The term Megaco is the IETF designation. Megaco combines concepts from MGCP and the Media Device Control Protocol (MDCP). MGCP originated from a combination of the Simple Gateway Control Protocol (SGCP) with the Internet Protocol Device Control (IPDC). After the ITU took responsibility of the protocol maintenance, the IETF reclassified its publications as historic in RFC 5125. The ITU has published three versions of H.248, the most recent in September 2005. H.248 encompasses not only the base protocol specification in H.248.1, but many
https://en.wikipedia.org/wiki/Perturbation%20%28astronomy%29
In astronomy, perturbation is the complex motion of a massive body subjected to forces other than the gravitational attraction of a single other massive body. The other forces can include a third (fourth, fifth, etc.) body, resistance, as from an atmosphere, and the off-center attraction of an oblate or otherwise misshapen body. Introduction The study of perturbations began with the first attempts to predict planetary motions in the sky. In ancient times the causes were unknown. Isaac Newton, at the time he formulated his laws of motion and of gravitation, applied them to the first analysis of perturbations, recognizing the complex difficulties of their calculation. Many of the great mathematicians since then have given attention to the various problems involved; throughout the 18th and 19th centuries there was demand for accurate tables of the position of the Moon and planets for marine navigation. The complex motions of gravitational perturbations can be broken down. The hypothetical motion that the body follows under the gravitational effect of one other body only is a conic section, and can be described in geometrical terms. This is called a two-body problem, or an unperturbed Keplerian orbit. The differences between that and the actual motion of the body are perturbations due to the additional gravitational effects of the remaining body or bodies. If there is only one other significant body then the perturbed motion is a three-body problem; if there are multiple other bodies it is an n-body problem. A general analytical solution (a mathematical expression to predict the positions and motions at any future time) exists for the two-body problem; when more than two bodies are considered analytic solutions exist only for special cases. Even the two-body problem becomes insoluble if one of the bodies is irregular in shape. Most systems that involve multiple gravitational attractions present one primary body which is dominant in its effects (for example, a star,
https://en.wikipedia.org/wiki/Crosby%20system
The Crosby system was an FM stereophonic broadcasting standard developed by Murray G. Crosby. In the United States, it competed with, and ultimately lost to, the Zenith/GE system, which the FCC chose as the standard in 1961. While both systems used multiplexing to transmit the L-R stereo signal, the Crosby system used a frequency-modulated 50 kHz subcarrier, whereas the competing Zenith/GE system used an amplitude-modulated 38 kHz subcarrier. As FM is less susceptible to interference and noise than AM, the Crosby system had the better frequency response and lower noise of the two systems, especially under weak-signal conditions. However, the Crosby system was incompatible with existing subsidiary communications authorization (SCA) services, which used subcarrier frequencies including 41 and 67 kHz. These SCA services had been used by many FM stations since the mid-1950s for subscription-based "storecasting" to raise revenue and for other non-broadcast purposes. Those stations consequently lobbied the FCC to adopt the Zenith/GE system. FCC tests in 1960 confirmed that the Zenith/GE stereo system was compatible with 67 kHz SCA operation, although not with 41 kHz. According to Jack Hannold: On April 19, 1961, the FCC released its Final Order selecting the Zenith/GE system as the FM stereophonic broadcasting standard. At 9:59 AM that day, Crosby-Teletronics stock was worth $15 a share; by 2:00 PM it was down to less than $2.50. Another (albeit relatively minor) factor in the FCC choosing the Zenith/GE system was the widespread use of vacuum tubes in radios at the time; the additional tubes for an all-FM system would have increased the size, weight, and cost of each tuner or receiver, as well as the heat it generated. References Nichols, Roger: I Can't Keep Up With All The Formats, 2003 (copy at the Internet Archive). Schoenherr, Steven E.: Stereophonic Sound, 1999-2001 (on the website of the Audio Engineering Society) External links Beaubien, William H.: A Report of FM Stereo at the CCIR Study Group X C
https://en.wikipedia.org/wiki/Grim%20trigger
In game theory, grim trigger (also called the grim strategy or just grim) is a trigger strategy for a repeated game. Initially, a player using grim trigger will cooperate, but as soon as the opponent defects (thus satisfying the trigger condition), the player using grim trigger will defect for the remainder of the iterated game. Since a single defection by the opponent triggers defection forever, grim trigger is the most strictly unforgiving of strategies in an iterated game. In Robert Axelrod's book The Evolution of Cooperation, grim trigger is called "Friedman", for a 1971 paper by James Friedman, which uses the concept. The infinitely repeated prisoners' dilemma The infinitely repeated prisoners' dilemma is a well-known example of the grim trigger strategy. The normal game for two prisoners is as follows: In the prisoners' dilemma, each player has two choices in each stage: cooperate, or defect for an immediate gain. If a player defects, he will be punished for the remainder of the game. In fact, both players are better off staying silent (cooperating) than betraying the other, so playing (C, C) is the cooperative profile, while playing (D, D), also the unique Nash equilibrium in this game, is the punishment profile. In the grim trigger strategy, a player cooperates in the first round and in the subsequent rounds as long as his opponent does not defect from the agreement. Once the player finds that the opponent betrayed in a previous round, he will defect forever. To evaluate the subgame perfect equilibrium (SPE) for the grim trigger strategy of the game, define the strategy S* for players i and j as follows: Play C in every period unless someone has ever played D in the past; Play D forever if someone has played D in the past. Then the strategy is an SPE only if the discount factor δ satisfies δ ≥ 1/2. In other words, neither Player 1 nor Player 2 is incentivized to defect from the cooperation profile if the discount factor is greater than one half. To pro
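A minimal Python sketch of the strategy described above, assuming an illustrative payoff table (the text states the one-half threshold but not the payoffs themselves): grim trigger cooperates until its opponent defects once, then defects for every remaining round.

# Assumed payoffs per round: 3 for unilateral defection, 2 for mutual cooperation,
# 1 for mutual defection, 0 for being betrayed.
PAYOFF = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
          ("D", "C"): (3, 0), ("D", "D"): (1, 1)}

def grim_trigger(my_history, opponent_history):
    """Cooperate until the opponent has ever defected, then defect forever."""
    return "D" if "D" in opponent_history else "C"

def defects_once(my_history, opponent_history):
    """An opponent that defects exactly once, in the third round."""
    return "D" if len(my_history) == 2 else "C"

def play(strategy_a, strategy_b, rounds=8):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return hist_a, hist_b, score_a, score_b

print(play(grim_trigger, defects_once))

With these assumed payoffs, cooperating forever is worth 2/(1 - d) per player, while a one-shot deviation yields 3 immediately and 1 in every later round, i.e. 3 + d/(1 - d); the deviation is unprofitable exactly when the discount factor d is at least one half, matching the threshold stated above.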
https://en.wikipedia.org/wiki/Session%20border%20controller
A session border controller (SBC) is a network element deployed to protect SIP-based voice over Internet Protocol (VoIP) networks. Early deployments of SBCs were focused on the borders between two service provider networks in a peering environment. This role has now expanded to include significant deployments between a service provider's access network and a backbone network to provide service to residential and/or enterprise customers. The term "session" refers to a communication between two or more parties – in the context of telephony, this would be a call. Each call consists of one or more call signaling message exchanges that control the call, and one or more call media streams which carry the call's audio, video, or other data along with information about call statistics and quality. Together, these streams make up a session. It is the job of a session border controller to exert influence over the data flows of sessions. The term "border" refers to a point of demarcation between one part of a network and another. As a simple example, at the edge of a corporate network, a firewall demarcates the local network (inside the corporation) from the rest of the Internet (outside the corporation). A more complex example is that of a large corporation where different departments have their own security needs for each location and perhaps for each kind of data. In this case, filtering routers or other network elements are used to control the flow of data streams. It is the job of a session border controller to assist policy administrators in managing the flow of session data across these borders. The term "controller" refers to the influence that session border controllers have on the data streams that comprise sessions, as they traverse borders between one part of a network and another. Additionally, session border controllers often provide measurement, access control, and data conversion facilities for the calls they control. Functions SBCs commonly maintain full session sta
https://en.wikipedia.org/wiki/Primosome
In molecular biology, a primosome is a protein complex responsible for creating RNA primers on single-stranded DNA during DNA replication. The primosome consists of seven proteins: DnaG primase, DnaB helicase, DnaC helicase assistant, DnaT, PriA, PriB, and PriC. At each replication fork, the primosome is utilized once on the leading strand of DNA and repeatedly, initiating each Okazaki fragment, on the lagging DNA strand. Initially, the complex formed by PriA, PriB, and PriC binds to DNA. Then the DnaB-DnaC helicase complex attaches along with DnaT. This structure is referred to as the pre-primosome. Finally, DnaG binds to the pre-primosome, forming a complete primosome. The primosome attaches 1–10 RNA nucleotides to the single-stranded DNA, creating a DNA-RNA hybrid. This sequence of RNA is used as a primer to initiate DNA polymerase III. The RNA bases are ultimately replaced with DNA bases by RNase H nuclease (eukaryotes) or DNA polymerase I nuclease (prokaryotes). DNA ligase then acts to join the two ends together. Assembly of the Escherichia coli primosome requires six proteins, PriA, PriB, PriC, DnaB, DnaC, and DnaT, acting at a primosome assembly site (pas) on an SSB-coated single-stranded (ss) DNA. Assembly is initiated by interactions of PriA and PriB with ssDNA and the pas. PriC, DnaB, DnaC, and DnaT then act on the PriA-PriB-DNA complex to yield the primosome. Primosomes are nucleoprotein assemblies that activate DNA replication forks. Their primary role is to recruit the replicative helicase onto single-stranded DNA. The "replication restart" primosome, defined in Escherichia coli, is involved in the reactivation of arrested replication forks. Binding of the PriA protein to forked DNA triggers its assembly. PriA is conserved in bacteria, but its primosomal partners are not. In Bacillus subtilis, genetic analysis has revealed three primosomal proteins, DnaB, DnaD, and DnaI, that have no obvious homologues in E. coli. They are involved in primosom
https://en.wikipedia.org/wiki/Sigil%20%28computer%20programming%29
In computer programming, a sigil () is a symbol affixed to a variable name, showing the variable's datatype or scope, usually a prefix, as in $foo, where $ is the sigil. The word sigil, from the Latin sigillum, meaning a "little sign", denotes a sign or image supposedly having magical power. Sigils can be used to separate and demarcate namespaces that possess different properties or behaviors. Historical context The use of sigils was popularized by the BASIC programming language. The best-known example of a sigil in BASIC is the dollar sign ("$") appended to the names of all strings. Consequently, programmers outside America tend to pronounce $ as "string" instead of "dollar". Many BASIC dialects use other sigils (like "%") to denote integers and floating-point numbers and their precision, and sometimes other types as well. Larry Wall adopted shell scripting's use of sigils for his Perl programming language. In Perl, the sigils do not specify fine-grained data types like strings and integers, but the more general categories of scalars (using a prefixed "$"), arrays (using "@"), hashes (using "%"), and subroutines (using "&"). Raku also uses secondary sigils, or twigils, to indicate the scope of variables. Prominent examples of twigils in Raku include "^" (caret), used with self-declared formal parameters ("placeholder variables"), and ".", used with object attribute accessors (i.e., instance variables). Sigil use in some languages In CLIPS, scalar variables are prefixed with a "?" sigil, while multifield (e.g., a 1-level list) variables are prefixed with "$?". In Common Lisp, special variables (with dynamic scope) are typically surrounded with * in what is called the "earmuff convention". While this is only convention, and not enforced, the language itself adopts the practice (e.g., *standard-output*). Similarly, some programmers surround constants with +. In CycL, variables are prefixed with a "?" sigil. Similarly, constant names are prefixed with "#$" (pronounced "ha
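A minimal Python sketch of the demarcation idea, using a made-up sigil table in the spirit of Perl's scalar/array/hash/subroutine categories; the table and the names below are illustrative, not the behaviour of any real interpreter.

# Illustrative map from a leading sigil to the broad category it advertises.
SIGILS = {"$": "scalar", "@": "array", "%": "hash", "&": "subroutine"}

def classify(name):
    """Return the category a Perl-style sigil would imply for this name."""
    return SIGILS.get(name[:1], "bareword")

for name in ("$foo", "@items", "%config", "&frobnicate", "plain"):
    print(name, "->", classify(name))

The point is only that the sigil itself, rather than a declaration elsewhere, tells the reader (and this toy classifier) which namespace or category the name belongs to.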
https://en.wikipedia.org/wiki/Centrolecithal
Centrolecithal (Greek kentron = center of a circle, lekithos = yolk) describes the placement of the yolk in the centre of the cytoplasm of ova. Many arthropod eggs are centrolecithal. During cytokinesis, centrolecithal zygotes undergo meroblastic cleavage, where the cleavage plane extends only to the accumulated yolk and is superficial. This is due to the large, dense yolk found within centrolecithal eggs, and it results in delayed embryonic development. See also Cell cycle Isolecithal Telolecithal References Centrolecithal
https://en.wikipedia.org/wiki/Language-oriented%20programming
Language-oriented programming (LOP) is a software-development paradigm in which "language" is a software building block with the same status as objects, modules, and components; rather than solving problems in general-purpose programming languages, the programmer creates one or more domain-specific languages (DSLs) for the problem first, and solves the problem in those languages. Language-oriented programming was first described in detail in Martin Ward's 1994 paper Language Oriented Programming, published in Software - Concepts and Tools, Vol. 15, No. 4, pp. 147–161. Concept The concept of language-oriented programming takes the approach of capturing requirements in the user's terms and then trying to create an implementation language as isomorphic as possible to the user's descriptions, so that the mapping between requirements and implementation is as direct as possible. A measure of the closeness of this isomorphism is the "redundancy" of the language, defined as the number of editing operations needed to implement a stand-alone change in requirements. It is not assumed a priori which language is best for implementing the new language. Rather, the developer can choose among options created by analysis of the information flows — what information is acquired, what its structure is, when it is acquired, from whom, and what is done with it. Development The Racket programming language and RascalMPL were designed to support language-oriented programming from the ground up. Other language workbench tools such as JetBrains MPS, Kermeta, and Xtext provide the tools to design and implement DSLs and to practice language-oriented programming. See also Grammar-oriented programming Dialecting Domain-specific language Extensible programming Intentional programming Homoiconicity References External links Language Oriented Programming: The Next Programming Paradigm Sergey Dmitriev's paper that further explored the topic. The State of the Art in Language Workbenches. Conclus
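A minimal Python sketch of the approach, using a made-up toy domain (pricing rules) rather than any example from Ward's paper: the rule definitions read close to the user's own wording, so a stand-alone change in requirements, such as adding one more rule, maps to one small, local edit.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    description: str
    condition: Callable
    discount: float

def rule(description, *, when, give):
    """Build a Rule from near-natural-language keyword arguments (toy example)."""
    return Rule(description, when, give)

# The "language": each line mirrors how the user might phrase the requirement.
RULES = [
    rule("loyal customers get 10% off", when=lambda order: order["loyal"], give=0.10),
    rule("orders over 100 get 5% off", when=lambda order: order["total"] > 100, give=0.05),
]

def price(order):
    """Apply every rule whose condition holds and return the discounted total."""
    discount = sum(r.discount for r in RULES if r.condition(order))
    return order["total"] * (1 - discount)

print(price({"total": 120, "loyal": True}))  # both rules apply: 120 * 0.85 = 102.0

Measured by the "redundancy" notion described above, adding a new discount policy costs one new rule(...) line, which is the kind of near-isomorphic mapping between requirements and implementation the paradigm aims for.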