source | text
---|---|
https://en.wikipedia.org/wiki/Tait%20conjectures
|
The Tait conjectures are three conjectures made by 19th-century mathematician Peter Guthrie Tait in his study of knots. The Tait conjectures involve concepts in knot theory such as alternating knots, chirality, and writhe. All of the Tait conjectures have been solved, the most recent being the Flyping conjecture.
Background
Tait came up with his conjectures after his attempt to tabulate all knots in the late 19th century. As a founder of the field of knot theory, his work lacks a mathematically rigorous framework, and it is unclear whether he intended the conjectures to apply to all knots, or just to alternating knots. It turns out that most of them are only true for alternating knots. In the Tait conjectures, a knot diagram is called "reduced" if all the "isthmi", or "nugatory crossings" have been removed.
Crossing number of alternating knots
Tait conjectured that in certain circumstances, crossing number was a knot invariant, specifically:
Any reduced diagram of an alternating link has the fewest possible crossings.
In other words, the crossing number of a reduced, alternating link is an invariant of the knot. This conjecture was proved by Louis Kauffman, Kunio Murasugi (村杉 邦男), and Morwen Thistlethwaite in 1987, using the Jones polynomial.
A geometric proof, not using knot polynomials, was given in 2017 by Joshua Greene.
Writhe and chirality
A second conjecture of Tait:
An amphicheiral (or acheiral) alternating link has zero writhe.
This conjecture was also proved by Kauffman and Thistlethwaite.
Flyping
The Tait flyping conjecture can be stated:
Given any two reduced alternating diagrams D1 and D2 of an oriented, prime alternating link, D1 may be transformed to D2 by means of a sequence of certain simple moves called flypes.
The Tait flyping conjecture was proved by Thistlethwaite and William Menasco in 1991.
The Tait flyping conjecture implies some more of Tait's conjectures:
Any two reduced diagrams of the same alternating knot have the same writhe.
This follo
|
https://en.wikipedia.org/wiki/AMSDOS
|
AMSDOS is a disk operating system for the 8-bit Amstrad CPC computer (and various clones). The name is a contraction of Amstrad Disk Operating System.
AMSDOS first appeared in 1984 on the CPC 464 with an added 3-inch disk drive, and then on the CPC 664 and CPC 6128. It was relatively fast and efficient for its time, outperforming most of its contemporaries.
AMSDOS was provided built into ROM (either supplied with the external disk drive or in the machine ROM, depending on model) and was accessible through the built-in Locomotive BASIC as well as through firmware routines. Its main function was to map the cassette access routines (which were built into every CPC model) through to a disk drive. This enabled the majority of cassette-based programs to work with a disk drive with no modification. AMSDOS was able to support up to two connected disk drives.
Commands
AMSDOS extends the built-in Locomotive BASIC with a number of external commands, which are identified by a preceding ¦ (bar) symbol. The following is a list of external commands supported by AMSDOS.
¦A
¦B
¦CPM
¦DIR
¦DISC
¦DISC.IN
¦DISC.OUT
¦DRIVE
¦ERA
¦REN
¦TAPE
¦TAPE.IN
¦TAPE.OUT
¦USER
Alternatives
Other disk operating systems for the Amstrad range included CP/M (which was also bundled with an external disk drive, or built into ROM, depending on model); RAMDOS, which allowed the full 800 KB capacity of single-density 3½-inch disks to be used, provided a suitable drive was connected; and SymbOS.
References
Amstrad CPC
Disk operating systems
1984 software
|
https://en.wikipedia.org/wiki/Myzocytosis
|
Myzocytosis (from Greek myzein, meaning "to suck", and kytos, meaning "container", hence referring to "cell") is a method of feeding found in some heterotrophic organisms. It is also called "cellular vampirism", as the predatory cell pierces the cell wall and/or cell membrane of the prey cell with a feeding tube (the conoid), sucks out the cellular content and digests it.
Myzocytosis is found in Myzozoa and also in some species of Ciliophora (both comprise the alveolates). A classic example of myzocytosis is the feeding method of the infamous predatory ciliate Didinium, which is often depicted devouring a hapless Paramecium. The suctorian ciliates were originally thought to feed exclusively through myzocytosis, sucking out the cytoplasm of prey via pseudopodia that superficially resemble drinking straws. It is now understood that suctorians do not feed through myzocytosis, but instead manipulate and envenomate captured prey with their tentacle-like pseudopodia.
References
Further reading
"Endosymbiotic associations within protists". Phil. Trans. R. Soc. B 365 (1541): 699–712, 12 March 2010.
Alveolate biology
Ecology
Metabolism
|
https://en.wikipedia.org/wiki/Tarski%E2%80%93Kuratowski%20algorithm
|
In computability theory and mathematical logic, the Tarski–Kuratowski algorithm is a non-deterministic algorithm that produces an upper bound for the complexity of a given formula in the arithmetical hierarchy and analytical hierarchy.
The algorithm is named after Alfred Tarski and Kazimierz Kuratowski.
Algorithm
The Tarski–Kuratowski algorithm for the arithmetical hierarchy consists of the following steps:
Convert the formula to prenex normal form. (This is the non-deterministic part of the algorithm, as there may be more than one valid prenex normal form for the given formula.)
If the formula is quantifier-free, it is in $\Sigma^0_0$ and $\Pi^0_0$.
Otherwise, count the number of alternations of quantifiers; call this k.
If the first quantifier is ∃, the formula is in $\Sigma^0_{k+1}$.
If the first quantifier is ∀, the formula is in $\Pi^0_{k+1}$.
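As an illustration of the counting step, the following minimal sketch (not part of the original article) classifies a formula from its prenex quantifier prefix alone, given as a string of ∃/∀ characters; the conversion to prenex normal form is assumed to have been done already.

```python
def classify(prefix: str) -> str:
    """Classify a prenex formula in the arithmetical hierarchy from its
    quantifier prefix, e.g. "∃∀∀∃" -> "Sigma^0_3".
    Assumes the quantifier-free matrix has already been stripped off."""
    if not prefix:
        return "Sigma^0_0 = Pi^0_0"  # quantifier-free
    # k = number of alternations between ∃-blocks and ∀-blocks
    k = sum(1 for a, b in zip(prefix, prefix[1:]) if a != b)
    level = k + 1
    return f"Sigma^0_{level}" if prefix[0] == "∃" else f"Pi^0_{level}"

print(classify("∃∀∀∃"))  # two alternations, starts with ∃ -> Sigma^0_3
print(classify("∀∃"))    # one alternation, starts with ∀ -> Pi^0_2
```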
References
Rogers, Hartley, The Theory of Recursive Functions and Effective Computability, MIT Press.
Mathematical logic hierarchies
Computability theory
Theory of computation
|
https://en.wikipedia.org/wiki/Pantropical
|
A pantropical ("all tropics") distribution is one which covers tropical regions of both hemispheres. Examples include the caecilians, the modern sirenians, and the plant genera Acacia and Bacopa.
Neotropical is a zoogeographic term that covers a large part of the Americas, roughly from Mexico and the Caribbean southwards (including cold regions in southernmost South America).
Palaeotropical refers to geographical occurrence: for a distribution to be palaeotropical, a taxon must occur in tropical regions of the Old World.
According to Takhtajan (1978), the following families have a pantropical distribution:
Annonaceae, Hernandiaceae, Lauraceae, Piperaceae, Urticaceae, Dilleniaceae, Tetrameristaceae, Passifloraceae, Bombacaceae, Euphorbiaceae, Rhizophoraceae, Myrtaceae, Anacardiaceae, Sapindaceae, Malpighiaceae, Proteaceae, Bignoniaceae, Orchidaceae and Arecaceae.
See also
Afrotropical realm
Tropical Africa
Tropical Asia
References
Tropics
Biogeography
|
https://en.wikipedia.org/wiki/T.51/ISO/IEC%206937
|
T.51 / ISO/IEC 6937:2001, Information technology — Coded graphic character set for text communication — Latin alphabet, is a multibyte extension of ASCII, or more precisely of ISO/IEC 646-IRV. It was developed jointly with ITU-T (then CCITT) for telematic services under the name T.51, and first became an ISO standard in 1983. Certain byte codes are used as lead bytes for letters with diacritics (accents). The value of the lead byte indicates which diacritic the letter has, and the follow byte then holds the ASCII value of the letter that the diacritic is placed on.
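A rough sketch of that lead-byte/follow-byte scheme is shown below in Python; the particular lead-byte values in the table are assumptions made for illustration, not a verified excerpt of the standard's code chart.

```python
import unicodedata

# Assumed lead-byte values for a few non-spacing diacritics (illustrative only).
ASSUMED_DIACRITICS = {
    0xC1: "\u0300",  # combining grave accent
    0xC2: "\u0301",  # combining acute accent
    0xC3: "\u0302",  # combining circumflex accent
    0xC8: "\u0308",  # combining diaeresis
}

def decode_6937_like(data: bytes) -> str:
    """Decode bytes in an ISO/IEC 6937-like way: an assumed lead byte names a
    diacritic, and the following ASCII byte names the base letter."""
    out, i = [], 0
    while i < len(data):
        b = data[i]
        if b in ASSUMED_DIACRITICS and i + 1 < len(data):
            base = chr(data[i + 1])
            out.append(unicodedata.normalize("NFC", base + ASSUMED_DIACRITICS[b]))
            i += 2
        else:
            out.append(chr(b))
            i += 1
    return "".join(out)

print(decode_6937_like(bytes([0x65, 0xC2, 0x65])))  # "e" then acute+e -> "eé"
```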
ISO/IEC 6937's architects were Hugh McGregor Ross, Peter Fenwick, Bernard Marti and Loek Zeckendorf.
ISO 6937/2 defines 327 characters found in modern European languages that use the Latin alphabet. Non-Latin European scripts, such as Cyrillic and Greek, are not included in the standard. Some diacritics used with the Latin alphabet, such as the Romanian comma below, are also not included; the cedilla is used instead, as no distinction between cedilla and comma below was made at the time.
IANA has registered the charset names ISO_6937-2-25 and ISO_6937-2-add for two (older) versions of this standard (plus control codes). But in practice this character encoding is unused on the Internet.
Single byte characters
The primary set (first half) originally followed ISO 646-IRV before the ISO/IEC 646:1991 revision, that is, mostly following ASCII but with character 0x24 still denoted as an "international currency sign" (¤) instead of the dollar sign ($). The 1992 edition of ITU T.51 permits existing CCITT services to continue to interpret 0x24 as the international currency sign, but stipulates that new telecommunication applications should use it for the dollar sign (i.e. following the current ISO 646-IRV), and instead represent the international currency sign using the supplementary set.
The supplementary set (second half) contains a selection of spacing and non-spacing graphic characters, additional symbols and some loca
|
https://en.wikipedia.org/wiki/Multiple%20integral
|
In mathematics (specifically multivariable calculus), a multiple integral is a definite integral of a function of several real variables, for instance, $f(x, y)$ or $f(x, y, z)$. Integrals of a function of two variables over a region in $\mathbb{R}^2$ (the real-number plane) are called double integrals, and integrals of a function of three variables over a region in $\mathbb{R}^3$ (real-number 3D space) are called triple integrals. For multiple integrals of a single-variable function, see the Cauchy formula for repeated integration.
Introduction
Just as the definite integral of a positive function of one variable represents the area of the region between the graph of the function and the $x$-axis, the double integral of a positive function of two variables represents the volume of the region between the surface defined by the function (on the three-dimensional Cartesian plane where $z = f(x, y)$) and the plane which contains its domain. If there are more variables, a multiple integral will yield hypervolumes of multidimensional functions.
Multiple integration of a function of $n$ variables $f(x_1, x_2, \ldots, x_n)$ over a domain $D$ is most commonly represented by nested integral signs in the reverse order of execution (the leftmost integral sign is computed last), followed by the function and integrand arguments in proper order (the integral with respect to the rightmost argument is computed last). The domain of integration is either represented symbolically for every argument over each integral sign, or is abbreviated by a variable at the rightmost integral sign:
$$\int \cdots \int_D f(x_1, \ldots, x_n)\, dx_1 \cdots dx_n$$
Since the concept of an antiderivative is only defined for functions of a single real variable, the usual definition of the indefinite integral does not immediately extend to the multiple integral.
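Before the formal definition, the following small numerical sketch (not from the article; the integrand and bounds are chosen purely for illustration) shows the Riemann-sum picture that the definition in the next section makes precise.

```python
def double_integral(f, a, b, c, d, nx=400, ny=400):
    """Approximate the double integral of f over [a, b] x [c, d]
    with a midpoint Riemann sum on an nx-by-ny grid."""
    hx, hy = (b - a) / nx, (d - c) / ny
    total = 0.0
    for i in range(nx):
        x = a + (i + 0.5) * hx
        for j in range(ny):
            y = c + (j + 0.5) * hy
            total += f(x, y) * hx * hy
    return total

# The double integral of x*y over [0, 1] x [0, 2] equals (1/2) * 2 = 1.
print(double_integral(lambda x, y: x * y, 0.0, 1.0, 0.0, 2.0))
```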
Mathematical definition
For $n > 1$, consider a so-called "half-open" $n$-dimensional hyperrectangular domain $T$, defined as
$$T = [a_1, b_1) \times [a_2, b_2) \times \cdots \times [a_n, b_n) \subseteq \mathbb{R}^n.$$
Partition each interval $[a_j, b_j)$ into a finite family $I_j$ of non-overlapping subintervals, with each subinterval closed at the left end and open at the right end.
Then the finite family of subr
|
https://en.wikipedia.org/wiki/Boolean-valued%20function
|
A Boolean-valued function (sometimes called a predicate or a proposition) is a function of the type f : X → B, where X is an arbitrary set and where B is a Boolean domain, i.e. a generic two-element set (for example B = {0, 1}), whose elements are interpreted as logical values, for example, 0 = false and 1 = true, i.e., a single bit of information.
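A minimal illustration (the predicate and names below are chosen purely as an example): a parity test on the integers is a Boolean-valued function, and the same predicate can be written as an indicator function into {0, 1}.

```python
# A Boolean-valued function f : X -> B with X the integers and B = {False, True}.
def is_even(n: int) -> bool:
    return n % 2 == 0

# The same predicate viewed as an indicator (characteristic) function into {0, 1}.
def indicator_even(n: int) -> int:
    return 1 if is_even(n) else 0

print([is_even(n) for n in range(4)])         # [True, False, True, False]
print([indicator_even(n) for n in range(4)])  # [1, 0, 1, 0]
```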
In the formal sciences, mathematics, mathematical logic, statistics, and their applied disciplines, a Boolean-valued function may also be referred to as a characteristic function, indicator function, predicate, or proposition. In all of these uses, it is understood that the various terms refer to a mathematical object and not the corresponding semiotic sign or syntactic expression.
In formal semantic theories of truth, a truth predicate is a predicate on the sentences of a formal language, interpreted for logic, that formalizes the intuitive concept that is normally expressed by saying that a sentence is true. A truth predicate may have additional domains beyond the formal language domain, if that is what is required to determine a final truth value.
See also
Bit
Boolean data type
Boolean algebra (logic)
Boolean domain
Boolean logic
Propositional calculus
Truth table
Logic minimization
Indicator function
Predicate
Proposition
Finitary boolean function
Boolean function
References
Brown, Frank Markham (2003), Boolean Reasoning: The Logic of Boolean Equations, 1st edition, Kluwer Academic Publishers, Norwell, MA. 2nd edition, Dover Publications, Mineola, NY, 2003.
Kohavi, Zvi (1978), Switching and Finite Automata Theory, 1st edition, McGraw–Hill, 1970. 2nd edition, McGraw–Hill, 1978. 3rd edition, McGraw–Hill, 2010.
Korfhage, Robert R. (1974), Discrete Computational Structures, Academic Press, New York, NY.
Mathematical Society of Japan, Encyclopedic Dictionary of Mathematics, 2nd edition, 2 vols., Kiyosi Itô (ed.), MIT Press, Cambridge, MA, 1993. Cited as EDM.
Minsky, Marvin L., and Papert, Seymou
|
https://en.wikipedia.org/wiki/Crackme
|
A crackme (often abbreviated by cm) is a small program designed to test a programmer's reverse engineering skills.
They are programmed by other reversers as a legal way to crack software, since no intellectual property is being infringed upon.
Crackmes, reversemes and keygenmes generally have similar protection schemes and algorithms to those found in proprietary software. However, due to the wide use of packers/protectors in commercial software, many crackmes are actually more difficult as the algorithm is harder to find and track than in commercial software.
Keygenme
A keygenme is specifically designed for the reverser to not only find the protection algorithm used in the application, but also write a small keygen for it in the programming language of their choice.
Most keygenmes, when properly manipulated, can be self-keygenning. For example, when checking an entered key, they might internally generate the corresponding correct key and simply compare the expected and entered keys. This makes it easy to copy the key generation algorithm.
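A toy sketch of that situation (entirely invented for illustration, not taken from any real keygenme): because the check routine derives the expected key itself, the very same routine doubles as the keygen.

```python
# Toy "keygenme"-style check: the expected key is derived from the name and
# then compared with the entered key. The algorithm here is purely illustrative.
def expected_key(name: str) -> str:
    return format(sum(ord(c) for c in name) * 31 % 0xFFFF, "04X")

def check(name: str, key: str) -> bool:
    return key == expected_key(name)  # self-contained comparison

# Because the check computes the key itself, calling that routine directly
# already acts as a key generator.
name = "reverser"
print(expected_key(name))               # a valid key for this name
print(check(name, expected_key(name)))  # True
```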
Often anti-debugging and anti-disassemble routines are used to confuse debuggers or make the disassembly useless. Code-obfuscation is also used to make the reversing even harder.
References
External links
tdhack.com - Includes cryptographic riddles, hackmes and software applications to crack for both Windows and Linux. Polish and English languages are supported.
OllyDbg - A debugger used by both beginners and experienced reverse engineers.
Computer security
Software cracking
Reverse engineering
|
https://en.wikipedia.org/wiki/Inventory%20control
|
Inventory control or stock control can be broadly defined as "the activity of checking a shop's stock". It is the process of ensuring that the right amount of supply is available within a business. However, a more focused definition takes into account the more science-based, methodical practice of not only verifying a business's inventory but also maximising the amount of profit from the least amount of inventory investment without affecting customer satisfaction. Other facets of inventory control include forecasting future demand, supply chain management, production control, financial flexibility, purchasing data, loss prevention and turnover, and customer satisfaction.
An extension of inventory control is the inventory control system. This may come in the form of a technological system and its programmed software used for managing various aspects of inventory problems, or it may refer to a methodology (which may include the use of technological barriers) for handling loss prevention in a business. The inventory control system allows for companies to assess their current state concerning assets, account balances, and financial reports.
Inventory control management
An inventory control system is used to keep inventories in a desired state while continuing to adequately supply customers, and its success depends on maintaining clear records on a periodic or perpetual basis.
Inventory management software often plays an important role in the modern inventory control system, providing timely and accurate analytical, optimization, and forecasting techniques for complex inventory management problems. Typical features of this type of software include:
inventory tracking and forecasting tools that use selectable algorithms and review cycles to identify anomalies and other areas of concern
inventory optimization
purchase and replenishment tools that include automated and manual replenishment components, inventory calculations, and lot size optimization
lead time v
|
https://en.wikipedia.org/wiki/Control%20structure%20diagram
|
A control structure diagram (CSD) automatically documents the program flow within the source code and adds indentation with graphical symbols, so that the source code becomes visibly structured without sacrificing space.
See also
Data structure diagram
Diagram
Entity-relationship model
Hierarchy diagram
Unified Modeling Language
Visual programming language
External links
"The Control Structure Diagram (CSD)" - A chapter from jGRASP Tutorials
"Control Structure Diagrams for Ada 95"
Data modeling diagrams
Data modeling languages
Source code
|
https://en.wikipedia.org/wiki/Polyworld
|
Polyworld is a cross-platform (Linux, Mac OS X) program written by Larry Yaeger to evolve Artificial Intelligence through natural selection and evolutionary algorithms.
It uses the Qt graphics toolkit and OpenGL to display a graphical environment in which a population of trapezoid agents search for food, mate, have offspring, and prey on each other. The population is typically only in the hundreds, as each individual is rather complex and the environment consumes considerable computer resources. The graphical environment is necessary since the individuals actually move around the 2-D plane and must be able to "see." Since some basic abilities, like eating carcasses or randomly generated food, seeing other individuals, mating or fighting with them, etc., are possible, a number of interesting behaviours have been observed to spontaneously arise after prolonged evolution, such as cannibalism, predators and prey, and mimicry.
Each individual makes decisions based on a neural net using Hebbian learning; the neural net is derived from each individual's genome. The genome does not merely specify the wiring of the neural nets, but also determines their size, speed, color, mutation rate and a number of other factors. The genome is randomly mutated at a set probability, and the mutation rate itself can also change in descendant organisms.
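As a rough sketch of the learning rule involved (a generic Hebbian update, not Polyworld's actual network code), a synaptic weight is strengthened in proportion to the product of pre- and post-synaptic activity.

```python
# Generic Hebbian weight update: delta_w = eta * pre * post.
# This illustrates the rule only; it is not Polyworld's implementation.
def hebbian_step(weights, pre, post, eta=0.01):
    """Update one layer's weights given pre- and post-synaptic activations."""
    return [[w + eta * x * y for x, w in zip(pre, row)]
            for y, row in zip(post, weights)]

weights = [[0.0, 0.0], [0.0, 0.0]]   # 2 outputs x 2 inputs
pre, post = [1.0, 0.5], [0.8, 0.2]   # example activations
print(hebbian_step(weights, pre, post))
# -> [[0.008, 0.004], [0.002, 0.001]]
```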
External links
Github entry
Yaeger's page on Polyworld
Google TechTalk about Polyworld
Applications of artificial intelligence
Artificial life
Digital organisms
|
https://en.wikipedia.org/wiki/AppImage
|
AppImage (formerly known as klik and PortableLinuxApps) is a format for distributing portable software on Linux without needing superuser permissions to install the application. It aims to enable application developers to deploy binary software without being restricted to specific Linux distributions, a concept often referred to as upstream packaging. In this manner, software built once can run unmodified on a wide range of Linux distributions, such as Ubuntu, RHEL, or Arch.
Released first in 2004 under the name klik, it was continuously developed, then renamed in 2011 to PortableLinuxApps and later in 2013 to AppImage.
History
AppImage's predecessor klik was designed in 2004 by Simon Peter. The client-side software is GPL-licensed. klik integrated with web browsers on the user's computer. Users downloaded and installed software by typing a URL beginning with klik://. This downloaded a klik "recipe" file, which was used to generate a .cmg file. The main ingredients were usually pre-built .deb packages from Debian Stable repositories, which were fed into the recipe's .cmg generation process. In this way, one recipe could be used to supply packages to a wide variety of platforms. With klik, only eight programs could be run at once because of a limitation of mounting compressed images with the Linux kernel, unless FUSE was used. The file was remounted each time the program was run, and the user could remove the program by simply deleting the .cmg file. A successor version, klik2, which would natively incorporate the FUSE kernel module, was in development, but it never progressed past the beta stage. Around 2011, the klik project went dormant and its homepage went offline for some time.
Simon Peter started a successor project named PortableLinuxApps with similar goals around that time. The technology was adapted for instance by the "portablelinuxgames.org" repository, providing hundreds of mostly open-source video games.
Around 2013, the software was renamed again from portableLinux
|
https://en.wikipedia.org/wiki/Critical%20distance
|
Critical distance is, in acoustics, the distance at which the sound pressure level of the direct sound D and the reverberant sound R are equal when dealing with a directional source. Because the source is directional, the sound pressure as a function of the distance between source and sampling point (listener) varies with their relative position, so that for a particular room and source the set of points where direct and reverberant sound pressure are equal constitutes a surface rather than a single location in the room. In other words, it is the point in space at which the combined amplitude of all the reflected echoes is the same as the amplitude of the sound coming directly from the source (D = R). This distance, called the critical distance, depends on the geometry and absorption of the space in which the sound waves propagate, as well as on the dimensions and shape of the sound source.
A reverberant room generates a short critical distance and an acoustically dead (anechoic) room generates a longer critical distance.
Calculation
The critical distance for a diffuse approximation of the reverberant field can be calculated as
$$d_c = \sqrt{\frac{\gamma A}{16\pi}} \approx 0.057\,\sqrt{\frac{\gamma V}{T_{60}}}$$
where $\gamma$ is the degree of directivity of the source ($\gamma = 1$ for an omnidirectional source), $A$ the equivalent absorption surface in m², $V$ the room volume in m³ and $T_{60}$ the reverberation time of the room in seconds. The latter approximation uses Sabine's reverberation formula $T_{60} = 0.161\,V/A$.
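A brief numerical sketch of the Sabine-based approximation above (the room values below are purely illustrative):

```python
import math

def critical_distance(gamma: float, volume_m3: float, rt60_s: float) -> float:
    """Approximate critical distance in metres from source directivity,
    room volume and reverberation time (Sabine-based approximation)."""
    return 0.057 * math.sqrt(gamma * volume_m3 / rt60_s)

# Illustrative values: omnidirectional source, 200 m^3 room, RT60 of 0.6 s.
print(round(critical_distance(1.0, 200.0, 0.6), 2))  # ~1.04 m
```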
Sources
Acoustics
Audio effects
|
https://en.wikipedia.org/wiki/Data%20theft
|
Data theft is a growing phenomenon primarily caused by system administrators and office workers with access to technology such as database servers, desktop computers and a growing list of hand-held devices capable of storing digital information, such as USB flash drives, iPods and even digital cameras. Since employees often spend a considerable amount of time developing contacts and confidential and copyrighted information for the company they work for, they may feel they have some right to the information and be inclined to copy and/or delete part of it when they leave the company, or to misuse it while they are still employed. Information can be bought and sold and then used by criminals and criminal organizations. Alternatively, an employee may choose to deliberately abuse trusted access to information for the purpose of exposing misconduct by the employer. From the perspective of society, such an act of whistleblowing can be seen as positive and is protected by law in certain situations in some jurisdictions, such as the USA.
A common scenario is where a sales person makes a copy of the contact database for use in their next job. Typically, this is a clear violation of their terms of employment.
Notable acts of data theft include those by leaker Chelsea Manning and self-proclaimed whistleblowers Edward Snowden and Hervé Falciani.
Data theft methods
Thumbsucking
Thumbsucking, similar to podslurping, is the intentional or inadvertent use of a portable USB mass storage device, such as a USB flash drive (or "thumbdrive"), to illicitly download confidential data from a network endpoint.
A USB flash drive was allegedly used to remove, without authorization, highly classified documents about the design of U.S. nuclear weapons from a vault at Los Alamos.
The threat of thumbsucking has been amplified for a number of reasons, including the following:
The storage capacity of portable USB storage devices has increased.
The cost of high-capacity portable USB storag
|
https://en.wikipedia.org/wiki/Retroreflective%20sheeting
|
Retroreflective sheeting is flexible retroreflective material primarily used to increase the nighttime conspicuity of traffic signs, high-visibility clothing, and other items so they are safely and effectively visible in the light of an approaching driver's headlamps. It is also used as a material to increase the scanning range of barcodes in factory settings. The sheeting consists of retroreflective glass beads, microprisms, or encapsulated lenses sealed onto a fabric or plastic substrate. Many different colors and degrees of reflection intensity are provided by numerous manufacturers for various applications. As with any retroreflector, sheeting glows brightly when there is a small angle between the observer's eye and the light source directed toward the sheeting, but appears nonreflective when viewed from other directions.
Applications
Retroreflective sheeting is widely used in a variety of applications today, after early widespread use on road signs in the 1960s.
High-visibility clothing
High-visibility clothing frequently combines retroreflective sheeting with fluorescent fabrics in order to significantly increase the wearer's visibility from a distance, which in turn reduces the risk of traffic-related accidents. Such clothing is commonly worn as (often mandatory) PPE by professionals who work near road traffic or heavy machinery, often at night or in low-visibility weather conditions, such as construction workers, road workers and emergency service personnel. It is also commonly worn by cyclists or joggers to increase their nighttime visibility to road traffic.
For road signs
Retroreflective sheeting for road signs is categorized by construction and performance as specified by technical standards such as ASTM D4956-11a; various types give differing levels of retroreflection, effective view angles, and lifespan. Sheeting has replaced button copy as the predominant type of retroreflector used in roadway signs.
There are several grades of retroreflective
|
https://en.wikipedia.org/wiki/Stream%20%28computing%29
|
In computer science, a stream is a sequence of data elements made available over time. A stream can be thought of as items on a conveyor belt being processed one at a time rather than in large batches.
Streams are processed differently from batch data – normal functions cannot operate on streams as a whole, as they have potentially unlimited data, and formally, streams are codata (potentially unlimited), not data (which is finite). Functions that operate on a stream, producing another stream, are known as filters, and can be connected in pipelines, analogously to function composition. Filters may operate on one item of a stream at a time, or may base an item of output on multiple items of input, such as a moving average.
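A compact illustration of these ideas (functions and values chosen for the example) using Python generators: the moving-average filter below consumes several input items per output item, and the pipeline works even though the input stream is unbounded, because elements are produced lazily.

```python
from itertools import count, islice

def moving_average(stream, window=3):
    """A filter: turn a stream of numbers into a stream of moving averages."""
    buf = []
    for x in stream:
        buf.append(x)
        if len(buf) > window:
            buf.pop(0)
        if len(buf) == window:
            yield sum(buf) / window

# A pipeline over an infinite stream: only the items actually requested are produced.
naturals = count(1)                     # 1, 2, 3, ...
averages = moving_average(naturals, 3)  # 2.0, 3.0, 4.0, ...
print(list(islice(averages, 5)))        # [2.0, 3.0, 4.0, 5.0, 6.0]
```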
Examples
The term "stream" is used in a number of similar ways:
"Stream editing", as with sed, awk, and perl. Stream editing processes a file or files, in-place, without having to load the file(s) into a user interface. One example of such use is to do a search and replace on all the files in a directory, from the command line.
On Unix and related systems based on the C language, a stream is a source or sink of data, usually individual bytes or characters. Streams are an abstraction used when reading or writing files, or communicating over network sockets. The standard streams are three streams made available to all programs.
I/O devices can be interpreted as streams, as they produce or consume potentially unlimited data over time.
In object-oriented programming, input streams are generally implemented as iterators.
In the Scheme language and some others, a stream is a lazily evaluated or delayed sequence of data elements. A stream can be used similarly to a list, but later elements are only calculated when needed. Streams can therefore represent infinite sequences and series.
In the Smalltalk standard library and in other programming languages as well, a stream is an external iterator. As in Scheme, streams can represent finite or infinite
|
https://en.wikipedia.org/wiki/Walrasian%20auction
|
A Walrasian auction, introduced by Léon Walras, is a type of simultaneous auction where each agent calculates its demand for the good at every possible price and submits this to an auctioneer. The price is then set so that the total demand across all agents equals the total amount of the good. Thus, a Walrasian auction perfectly matches the supply and the demand.
Walras suggested that equilibrium would always be achieved through a process of tâtonnement (French for "trial and error"), a form of hill climbing. More recently, however, the Sonnenschein–Mantel–Debreu theorem proved that such a process would not necessarily reach a unique and stable equilibrium, even if the market is populated with perfectly rational agents.
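A highly simplified numerical sketch of such a tâtonnement process for a single good (the demand and supply functions are made up for illustration, and this is not Walras's own formulation): the auctioneer nudges the price in the direction of excess demand until the market clears.

```python
def tatonnement(demand, supply, price=1.0, eta=0.1, tol=1e-6, max_iter=10_000):
    """Adjust the price in proportion to excess demand until demand ~= supply."""
    for _ in range(max_iter):
        excess = demand(price) - supply(price)
        if abs(excess) < tol:
            break
        price += eta * excess
    return price

# Made-up linear demand and supply for one good; the market clears at p = 10/3.
demand = lambda p: 10.0 - p
supply = lambda p: 2.0 * p
print(round(tatonnement(demand, supply), 4))  # ~3.3333
```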
Walrasian auctioneer
The Walrasian auctioneer is the presumed auctioneer that matches supply and demand in a market of perfect competition. The auctioneer provides for the features of perfect competition: perfect information and no transaction costs. The process is called tâtonnement, or groping, relating to finding the market clearing price for all commodities and giving rise to general equilibrium.
The device is an attempt to avoid one of the deepest conceptual problems of perfect competition, which may, essentially, be defined by the stipulation that no agent can affect prices. But if no one can affect prices, no one can change them, so prices cannot change. However, involving as it does an artificial solution, the device is less than entirely satisfactory.
As a mistranslation
Until Walker and van Daal's 2014 translation (retitled Elements of Theoretical Economics), William Jaffé's Elements of Pure Economics (1954) was for many years the only English translation of Walras's Éléments d’économie politique pure.
Walker and van Daal argue that the idea of the Walrasian auction and Walrasian auctioneer resulted from Jaffé's mistranslation of the French word crieurs (criers) into auctioneers. Walker and van Daal call this "a momentous error that has mis
|
https://en.wikipedia.org/wiki/Combinatorial%20design
|
Combinatorial design theory is the part of combinatorial mathematics that deals with the existence, construction and properties of systems of finite sets whose arrangements satisfy generalized concepts of balance and/or symmetry. These concepts are not made precise so that a wide range of objects can be thought of as being under the same umbrella. At times this might involve the numerical sizes of set intersections as in block designs, while at other times it could involve the spatial arrangement of entries in an array as in sudoku grids.
Combinatorial design theory can be applied to the area of design of experiments. Some of the basic theory of combinatorial designs originated in the statistician Ronald Fisher's work on the design of biological experiments. Modern applications are also found in a wide gamut of areas including finite geometry, tournament scheduling, lotteries, mathematical chemistry, mathematical biology, algorithm design and analysis, networking, group testing and cryptography.
Example
Given a certain number n of people, is it possible to assign them to sets so that each person is in at least one set, each pair of people is in exactly one set together, every two sets have exactly one person in common, and no set contains everyone, all but one person, or exactly one person? The answer depends on n.
This has a solution only if n has the form q² + q + 1. It is less simple to prove that a solution exists if q is a prime power. It is conjectured that these are the only solutions. It has been further shown that if a solution exists for q congruent to 1 or 2 mod 4, then q is a sum of two square numbers. This last result, the Bruck–Ryser theorem, is proved by a combination of constructive methods based on finite fields and an application of quadratic forms.
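For the smallest nontrivial case q = 2 (so n = 7, the Fano plane discussed below), the structure can be written down explicitly and its defining properties checked directly; a small verification sketch, using one standard labelling of the lines:

```python
from itertools import combinations

# One standard labelling of the seven lines of the Fano plane (q = 2, n = 7).
lines = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},
         {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]
points = set().union(*lines)

# Every pair of points lies in exactly one line ...
assert all(sum(1 for ln in lines if {p, q} <= ln) == 1
           for p, q in combinations(points, 2))
# ... and every two lines meet in exactly one point.
assert all(len(a & b) == 1 for a, b in combinations(lines, 2))
print("Checks pass:", len(points), "points,", len(lines), "lines")
```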
When such a structure does exist, it is called a finite projective plane; thus showing how finite geometry and combinatorics intersect. When q = 2, the projective plane is called the Fano plan
|
https://en.wikipedia.org/wiki/Rolled%20oats
|
Rolled oats are a type of lightly processed whole-grain food. They are made from oat groats that have been dehusked and steamed, before being rolled into flat flakes under heavy rollers and then stabilized by being lightly toasted.
Thick-rolled oats usually remain unbroken during processing, while thin-rolled oats often become fragmented. Rolled whole oats, without further processing, can be cooked into a porridge and eaten as old-fashioned oats or Scottish oats; when the oats are rolled thinner and steam-cooked more in the factory, they will later absorb water much more easily and cook faster into a porridge, and when processed this way are sometimes called "quick" or "instant" oats.
Rolled oats are most often the main ingredient in granola and muesli. They can be further processed into a coarse powder, which breaks down to nearly a liquid consistency when boiled. Cooked oatmeal powder is often used as baby food.
Process
The oat, like other cereals, has a hard, inedible outer husk that must be removed before the grain can be eaten. After the outer husk (or chaff) has been removed from the still bran-covered oat grains, the remainder is called oat groats. Since the bran layer, though nutritious, makes the grains tougher to chew and contains an enzyme that can cause the oats to go rancid, raw oat groats are often further steam-treated to soften them for a quicker cooking time and to denature the enzymes for a longer shelf life.
Steel-cut or pinhead oats
Steel-cut oats (sometimes called "pinhead oats", especially if cut small) are oat groats that have been chopped by a sharp-bladed machine before any steaming, and thus retain bits of the bran layer.
Preparation
Rolled oats can be eaten without further heating or cooking, if they are soaked for 1–6 hours in water-based liquid, such as water, milk, or plant-based dairy substitutes. The required soaking duration depends on shape, size and pre-processing technique.
Whole oat groats can be cooked as a breakfast ce
|
https://en.wikipedia.org/wiki/STREAMS
|
In computer networking, STREAMS is the native framework in Unix System V for implementing character device drivers, network protocols, and inter-process communication. In this framework, a stream is a chain of coroutines that pass messages between a program and a device driver (or between a pair of programs). STREAMS originated in Version 8 Research Unix, as Streams (not capitalized).
STREAMS's design is a modular architecture for implementing full-duplex I/O between kernel and device drivers. Its most frequent uses have been in developing terminal I/O (line discipline) and networking subsystems. In System V Release 4, the entire terminal interface was reimplemented using STREAMS. An important concept in STREAMS is the ability to push drivers (custom code modules which can modify the functionality of a network interface or other device) together to form a stack. Several of these drivers can be chained together in order.
History
STREAMS was based on the Streams I/O subsystem introduced in the Eighth Edition Research Unix (V8) by Dennis Ritchie, where it was used for the terminal I/O subsystem and the Internet protocol suite. This version, not yet called STREAMS in capitals, fit the new functionality under the existing device I/O system calls (open, close, read, write, and ioctl), and its application was limited to terminal I/O and protocols providing pipe-like I/O semantics.
This I/O system was ported to System V Release 3 by Robert Israel, Gil McGrath, Dave Olander, Her-Daw Che, and Maury Bach as part of a wider framework intended to support a variety of transport protocols, including TCP, ISO Class 4 transport, SNA LU 6.2, and the AT&T NPACK protocol (used in RFS). It was first released with the Network Support Utilities (NSU) package of UNIX System V Release 3. This port added the putmsg, getmsg, and poll system calls, which are nearly equivalent in purpose to the send, recv, and select calls from Berkeley sockets. The putmsg and getmsg system calls were orig
|
https://en.wikipedia.org/wiki/Molecular%20motor
|
Molecular motors are natural (biological) or artificial molecular machines that are the essential agents of movement in living organisms. In general terms, a motor is a device that consumes energy in one form and converts it into motion or mechanical work; for example, many protein-based molecular motors harness the chemical free energy released by the hydrolysis of ATP in order to perform mechanical work. In terms of energetic efficiency, this type of motor can be superior to currently available man-made motors. One important difference between molecular motors and macroscopic motors is that molecular motors operate in the thermal bath, an environment in which the fluctuations due to thermal noise are significant.
Examples
Some examples of biologically important molecular motors:
Cytoskeletal motors
Myosins are responsible for muscle contraction, intracellular cargo transport, and producing cellular tension.
Kinesin moves cargo inside cells away from the nucleus along microtubules, in anterograde transport.
Dynein produces the axonemal beating of cilia and flagella and also transports cargo along microtubules towards the cell nucleus, in retrograde transport.
Polymerisation motors
Actin polymerization generates forces and can be used for propulsion. ATP is used.
Microtubule polymerization using GTP.
Dynamin is responsible for the separation of clathrin buds from the plasma membrane. GTP is used.
Rotary motors:
FoF1-ATP synthase family of proteins convert the chemical energy in ATP to the electrochemical potential energy of a proton gradient across a membrane or the other way around. The catalysis of the chemical reaction and the movement of protons are coupled to each other via the mechanical rotation of parts of the complex. This is involved in ATP synthesis in the mitochondria and chloroplasts as well as in pumping of protons across the vacuolar membrane.
The bacterial flagellum responsible for the swimming and tumbling of E. coli and other bacteria
|
https://en.wikipedia.org/wiki/Insect%20trap
|
Insect traps are used to monitor or directly reduce populations of insects or other arthropods, by trapping individuals and killing them. They typically use food, visual lures, chemical attractants and pheromones as bait and are installed so that they do not injure other animals or humans or result in residues in foods or feeds. Visual lures use light, bright colors and shapes to attract pests. Chemical attractants or pheromones may attract only a specific sex. Insect traps are sometimes used in pest management programs instead of pesticides but are more often used to look at seasonal and distributional patterns of pest occurrence. This information may then be used in other pest management approaches.
The trap mechanism or bait can vary widely. Flies and wasps are attracted by proteins. Mosquitoes and many other insects are attracted by bright colors, carbon dioxide, lactic acid, floral or fruity fragrances, warmth, moisture and pheromones. Synthetic attractants like methyl eugenol are very effective with tephritid flies.
Trap types
Insect traps vary widely in shape, size, and construction, often reflecting the behavior or ecology of the target species. Some common varieties are described below.
Light traps
Light traps, with or without ultraviolet light, attract certain insects. Light sources may include fluorescent lamps, mercury-vapor lamps, black lights, or light-emitting diodes.
Designs differ according to the behavior of the insects being targeted.
Light traps are widely used to survey nocturnal moths. Total species richness and abundance of trapped moths may be influenced by several factors such as night temperature, humidity and lamp type.
Grasshoppers and some beetles are attracted to lights at a long range but are repelled by it at short range. Farrow's light trap has a large base so that it captures insects that may otherwise fly away from regular light traps. Light traps can attract flying and terrestrial insects, and lights may be combined with
|
https://en.wikipedia.org/wiki/IMUnited
|
IMUnited was a coalition of instant messaging service providers, including Yahoo! and Microsoft, that wanted AOL to open its proprietary AIM network to them. It appears to have disappeared, possibly because both Yahoo!'s and Microsoft's instant messaging services started to gain popularity.
See also
IMUnified
Instant messaging
|
https://en.wikipedia.org/wiki/Five%20Equations%20That%20Changed%20the%20World
|
Five Equations That Changed the World: The Power and Poetry of Mathematics is a book by Michael Guillen, published in 1995.
It is divided into five chapters that talk about five different equations in physics and the people who have developed them.
The scientists and their equations are:
Isaac Newton (Universal Law of Gravity)
Daniel Bernoulli (Law of Hydrodynamic Pressure)
Michael Faraday (Law of Electromagnetic Induction)
Rudolf Clausius (Second Law of Thermodynamics)
Albert Einstein (Theory of Special Relativity)
The book is a light study in science and history, portraying the preludes to and times and settings of discoveries that have been the basis of further development, including space travel, flight and nuclear power. Each chapter of the book is divided into sections titled Veni, Vidi, Vici.
The reviews of the book have been mixed. Publishers Weekly called it "wholly accessible, beautifully written", Kirkus Reviews wrote that it is a "crowd-pleasing kind of book designed to make the science as palatable as possible", and Frank Mahnke wrote that Guillen "has a nice touch for the history of mathematics and physics and their impact on the world". However, in contrast, Charles Stephens panned "the superficiality of the author's treatment of scientific ideas", and the editors of The Capital Times called the book a "miserable failure" at its goal of helping the public appreciate the beauty of mathematics.
References
1995 non-fiction books
Popular physics books
Mathematical physics
Popular mathematics books
|
https://en.wikipedia.org/wiki/Gaisberg%20Transmitter
|
Gaisberg Transmitter is a facility for FM and TV transmission on the Gaisberg mountain near Salzburg, Austria. It was the first large transmitter in Austria completed after the war and began operation on 22 August 1956 (a provisional transmitter had, however, already been broadcasting a VHF radio signal with 1 kW since 1953). It used a lattice tower and broadcast Austria's first radio station on 99.0 MHz and third radio station on 94.8 MHz, each with 50 kW, as well as a TV station on channel 8 with 60/12 kW (picture/sound). During the 1980s a UHF antenna was put on top of the tower, bringing its height to 100 meters.
ALDIS (the Austrian Lightning Detection & Information System) maintains the Austrian Lightning Research Station Gaisberg next to the transmitter.
Towers in Austria
Broadcast transmitters
1956 establishments in Austria
Towers completed in 1956
20th-century architecture in Austria
|
https://en.wikipedia.org/wiki/Jos%C3%A9%20Luis%20Rodr%C3%ADguez%20Pitt%C3%AD
|
José Luis Rodríguez Pittí is a contemporary Panamanian writer, video artist and documentary photographer.
He is the author of short stories, poems and essays. Rodríguez Pittí is the author of the books Panamá Blues (2010), miniTEXTOS (2008), Sueños urbanos (2008) and Crónica de invisibles (1999). Most of his stories and essays were published in literary magazines and newspapers.
In 1994, the Universidad de Panamá awarded him the Premio "Darío Herrera". Other literary honors received include the accésit in the Premio Nacional "Signos" 1993 (Panama), the Concurso Nacional de Cuentos "José María Sánchez" 1998 (Panama), the Concurso "Amadís de Gaula" 1999 (Soria, Spain) and the Concurso "Maga" de Cuento Corto 2001 (Panama).
Early life and education
Rodríguez Pittí was born in Panama City on 29 March 1971. He grew up in Mexico City, Santiago de Veraguas and Panamá City. He is resident of Toronto, Canada.
He graduated from the Universidad Tecnológica de Panamá, and was a professor of Computer Vision, Programming Languages and Deep Learning at the Universidad Tecnológica de Panamá and the Universidad Santa María la Antigua.
Biography
He was President of the Writers Association of Panama from 2008 to 2010, and has been founder and President of Fundación El Hacedor since 2007.
From 1990 to 1995 he traveled extensively in the Panamanian region of Azuero to collect stories and photographs, material that became the body of three photo essays: "Viernes Santo en Pesé", "Cuadernos de Azuero", and "Noche de carnaval". Other photography essays are "De diablos, diablicos y otros seres de la mitología panameña" and "Regee Child". Some of his photographs appear as cover art of books published in Panama. His work has been exhibited in Panama, Mexico, Canada and Italy.
Awards and honors
1993, Finalist, Premio "Signos" de Joven Literatura 1993, awarded in Panama
1994, Premio "Darío Herrera" de Literatura, awarded by the Universidad de Panamá
1994, Premio Canon "Día de la Tierra"
1998, Accésit (runner-up), Premio Nacional de Cuento "José María Sánchez" 1998, awarded in Panama
|
https://en.wikipedia.org/wiki/Necking%20%28engineering%29
|
In engineering and materials science, necking is a mode of tensile deformation where relatively large amounts of strain localize disproportionately in a small region of the material. The resulting prominent decrease in local cross-sectional area provides the basis for the name "neck". Because the local strains in the neck are large, necking is often closely associated with yielding, a form of plastic deformation associated with ductile materials, often metals or polymers. Once necking has begun, the neck becomes the exclusive location of yielding in the material, as the reduced area gives the neck the largest local stress.
Formation
Necking results from an instability during tensile deformation when the cross-sectional area of the sample decreases by a greater proportion than the material strain hardens. Armand Considère published the basic criterion for necking in 1885, in the context of the stability of large scale structures such as bridges. Three concepts provide the framework for understanding neck formation.
Before deformation, all real materials have heterogeneities such as flaws or local variations in dimensions or composition that cause local fluctuations in stresses and strains. To determine the location of the incipient neck, these fluctuations need only be infinitesimal in magnitude.
During plastic tensile deformation the material decreases in cross-sectional area due to the incompressibility of plastic flow. (Not due to the Poisson effect, which is linked to elastic behaviour.)
During plastic tensile deformation the material strain hardens. The amount of hardening varies with extent of deformation.
The latter two effects determine the stability while the first effect determines the neck's location.
The Considère treatment
Instability (onset of necking) is expected to occur when an increase in the (local) strain produces no net increase in the load $F$. This will happen when
$$\mathrm{d}F = 0$$
This leads to
$$\frac{\mathrm{d}\sigma_T}{\mathrm{d}\varepsilon_T} = \sigma_T$$
with the subscript $T$ being used to emphasize that these are true (rather than nominal) stresses and strains.
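A hedged numerical illustration of the criterion (assuming Hollomon power-law hardening, σ = Kεⁿ, which is an added assumption rather than part of the treatment above): for such a material the condition dσ/dε = σ predicts necking onset at a true strain equal to the hardening exponent n.

```python
# Assumed Hollomon hardening sigma = K * eps**n (illustrative model only).
# The Considère condition d(sigma)/d(eps) = sigma then gives onset at eps = n.
def considere_onset_strain(K: float, n: float, d_eps: float = 1e-4) -> float:
    eps = d_eps
    while True:
        sigma = K * eps**n
        slope = K * n * eps ** (n - 1)  # d(sigma)/d(eps)
        if slope <= sigma:              # hardening no longer keeps up with stress
            return eps
        eps += d_eps

print(round(considere_onset_strain(K=500.0, n=0.2), 3))  # ~0.2, i.e. eps = n
```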
|
https://en.wikipedia.org/wiki/Mycoremediation
|
Mycoremediation (from ancient Greek μύκης (mukēs), meaning "fungus", and Latin remedium, meaning "restoring balance") is a form of bioremediation in which fungi-based remediation methods are used to decontaminate the environment. Fungi have been proven to be a cheap, effective and environmentally sound way of removing a wide array of contaminants from damaged environments or wastewater. These contaminants include heavy metals, organic pollutants, textile dyes, leather tanning chemicals and wastewater, petroleum fuels, polycyclic aromatic hydrocarbons, pharmaceuticals and personal care products, pesticides and herbicides, in land, fresh water, and marine environments.
The byproducts of the remediation can be valuable materials themselves, such as enzymes (like laccase), edible or medicinal mushrooms, making the remediation process even more profitable. Some fungi are useful in the biodegradation of contaminants in extremely cold or radioactive environments where traditional remediation methods prove too costly or are unusable due to the extreme conditions. Mycoremediation can even be used for fire management with the encapsulation method. This process consists of using fungal spores coated with agarose in a pellet form. This pellet is introduced to a substrate in the burnt forest, breaking down the toxins in the environment and stimulating growth.
Pollutants
Fungi, thanks to their non-specific enzymes, are able to break down many kinds of substances, including pharmaceuticals and fragrances that are normally recalcitrant to bacterial degradation, such as paracetamol (also known as acetaminophen). For example, using Mucor hiemalis, products that are problematic for traditional water treatment, such as phenols and pigments of wine distillery wastewater, X-ray contrast agents, and ingredients of personal care products, can be broken down in a non-toxic way.
Mycoremediation is a cheaper method of remediation, and it doesn't usually require expe
|
https://en.wikipedia.org/wiki/Baum%C3%A9%20scale
|
The Baumé scale is a pair of hydrometer scales developed by French pharmacist Antoine Baumé in 1768 to measure density of various liquids. The unit of the Baumé scale has been notated variously as degrees Baumé, B°, Bé° and simply Baumé (the accent is not always present). One scale measures the density of liquids heavier than water and the other, liquids lighter than water. The Baumé of distilled water is 0. The API gravity scale is based on errors in early implementations of the Baumé scale.
Definitions
Baumé degrees (heavy) originally represented the percent by mass of sodium chloride in water at a specified reference temperature. Baumé degrees (light) were calibrated with 0 °Bé (light) being the density of 10% NaCl in water by mass and 10 °Bé (light) set to the density of water.
Consider, at near room temperature:
+100°Bé (specific gravity, 3.325) would be among the densest fluids known (except some liquid metals), such as diiodomethane.
Near 0°Bé would be approximately the density of water.
−100°Bé (specific gravity, 0.615) would be among the lightest fluids known, such as liquid butane.
Thus, the system could be understood as representing a practical spectrum of the density of liquids between −100 and 100, with values near 0 being the approximate density of water.
Conversions
The relationship between specific gravity (s.g.; i.e., water-specific gravity, the density relative to water) and degrees Baumé is a function of the temperature. Different versions of the scale may use different reference temperatures. Different conversions formulae can therefore be found in various handbooks.
As an example, a 2008 handbook states the conversions between specific gravity (s.g.) and degrees Baumé, at a stated reference temperature, as:
For liquids heavier than water: s.g. = 145 / (145 − °Bé)
For liquids lighter than water: s.g. = 140 / (130 + °Bé)
The numerator in the specific gravity calculation (145 or 140) is commonly known as the "modulus".
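A small conversion sketch based on the moduli quoted above (the constants depend on the reference temperature, so treat the exact values as illustrative):

```python
def sg_from_baume_heavy(degrees_be: float) -> float:
    """Specific gravity from degrees Baumé for liquids denser than water."""
    return 145.0 / (145.0 - degrees_be)

def sg_from_baume_light(degrees_be: float) -> float:
    """Specific gravity from degrees Baumé for liquids lighter than water."""
    return 140.0 / (130.0 + degrees_be)

print(round(sg_from_baume_heavy(25.0), 4))  # ~1.2083
print(round(sg_from_baume_light(10.0), 4))  # 1.0: water sits at 10 °Bé (light)
```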
An older handbook gives different conversion formulae, with no reference temperature being mentioned.
Other scales
Because of vague instructions or errors in translation a large margin of error was introduced when
|
https://en.wikipedia.org/wiki/Motor%20drive
|
Motor drive means a system that includes a motor. An adjustable speed motor drive means a system that includes a motor that has multiple operating speeds. A variable speed motor drive is a system that includes a motor and is continuously variable in speed. If the motor is generating electrical energy rather than using it – this could be called a generator drive but is often still referred to as a motor drive.
A variable frequency drive (VFD) or variable speed drive (VSD) describes the electronic portion of the system that controls the speed of the motor. More generally, the term drive, describes equipment used to control the speed of machinery. Many industrial processes such as assembly lines must operate at different speeds for different products. Where process conditions demand adjustment of flow from a pump or fan, varying the speed of the drive may save energy compared with other techniques for flow control.
Where speeds may be selected from several different pre-set ranges, usually the drive is said to be adjustable speed. If the output speed can be changed without steps over a range, the drive is usually referred to as variable speed.
Adjustable and variable speed drives may be purely mechanical (termed variators), electromechanical, hydraulic, or electronic.
Sometimes motor drive refers to a drive used to control a motor and therefore gets interchanged with VFD or VSD.
Electric motors
AC electric motors can be run in fixed-speed operation determined by the number of stator pole pairs in the motor and the frequency of the alternating current supply. AC motors can be made for "pole changing" operation, reconnecting the stator winding to vary the number of poles so that two, sometimes three, speeds are obtained. For example, a machine with 8 physical poles could be connected to run with either 4 or 8 poles, giving two speeds: at 60 Hz, these would be 1800 RPM and 900 RPM. If speed changes are rare, the motor may be initially
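The quoted speeds follow from the standard synchronous-speed relation; a quick sketch (using the supply frequency and pole counts from the example above):

```python
def synchronous_speed_rpm(frequency_hz: float, poles: int) -> float:
    """Synchronous speed of an AC machine: 120 * f / number of poles."""
    return 120.0 * frequency_hz / poles

print(synchronous_speed_rpm(60.0, 4))  # 1800.0 RPM
print(synchronous_speed_rpm(60.0, 8))  # 900.0 RPM
```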
|
https://en.wikipedia.org/wiki/Stub%20%28electronics%29
|
In microwave and radio-frequency engineering, a stub or resonant stub is a length of transmission line or waveguide that is connected at one end only. The free end of the stub is either left open-circuit, or short-circuited (as is always the case for waveguides). Neglecting transmission line losses, the input impedance of the stub is purely reactive; either capacitive or inductive, depending on the electrical length of the stub, and on whether it is open or short circuit. Stubs may thus function as capacitors, inductors and resonant circuits at radio frequencies.
The behaviour of stubs is due to standing waves along their length. Their reactive properties are determined by their physical length in relation to the wavelength of the radio waves. Therefore, stubs are most commonly used in UHF or microwave circuits in which the wavelengths are short enough that the stub is conveniently small. They are often used to replace discrete capacitors and inductors, because at UHF and microwave frequencies lumped components perform poorly due to parasitic reactance. Stubs are commonly used in antenna impedance matching circuits, frequency selective filters, and resonant circuits for UHF electronic oscillators and RF amplifiers.
Stubs can be constructed with any type of transmission line: parallel conductor line (where they are called Lecher lines), coaxial cable, stripline, waveguide, and dielectric waveguide. Stub circuits can be designed using a Smith chart, a graphical tool which can determine what length line to use to obtain a desired reactance.
Short circuited stub
The input impedance of a lossless, short-circuited line is
$$Z_{SC} = j Z_0 \tan(\beta \ell)$$
where
$j$ is the imaginary unit ($j^2 = -1$),
$Z_0$ is the characteristic impedance of the line,
$\beta$ is the phase constant of the line, and
$\ell$ is the physical length of the line.
Thus, depending on whether $\tan(\beta \ell)$ is positive or negative, the short-circuited stub will be inductive or capacitive, respectively.
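A short numerical sketch of that relation (a lossless line is assumed, and the characteristic impedance, length, frequency and velocity factor below are illustrative values):

```python
import math

def short_stub_reactance(z0_ohm, length_m, freq_hz, velocity_factor=0.66):
    """Input reactance (ohms) of a lossless short-circuited stub:
    X = Z0 * tan(beta * l), with beta = 2*pi / wavelength in the line."""
    wavelength = velocity_factor * 299_792_458.0 / freq_hz
    beta = 2.0 * math.pi / wavelength
    return z0_ohm * math.tan(beta * length_m)

# A 50-ohm coaxial stub, 0.30 m long, at 100 MHz (illustrative values).
x = short_stub_reactance(50.0, 0.30, 100e6)
print(round(x, 1), "ohms,", "inductive" if x > 0 else "capacitive")
```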
The length of a stub to act as a capacitor at an angul
|
https://en.wikipedia.org/wiki/Paris%20meridian
|
The Paris meridian is a meridian line running through the Paris Observatory in Paris, France – now longitude 2°20′14.02500″ East. It was a long-standing rival to the Greenwich meridian as the prime meridian of the world. The "Paris meridian arc" or "French meridian arc" (French: la Méridienne de France) is the name of the meridian arc measured along the Paris meridian.
The French meridian arc was important for French cartography, since the triangulations of France began with the measurement of the French meridian arc. Moreover, the French meridian arc was important for geodesy as it was one of the meridian arcs which were measured to determine the figure of the Earth via the arc measurement method. The determination of the figure of the Earth was a problem of the highest importance in astronomy, as the diameter of the Earth was the unit to which all celestial distances had to be referred.
History
French cartography and the figure of the Earth
In 1634, France, ruled by Louis XIII and Cardinal Richelieu, decided that the Ferro meridian through the westernmost of the Canary Islands should be used as the reference on maps, since El Hierro (Ferro) was the westernmost position on Ptolemy's world map. It was also thought to be exactly 20 degrees west of Paris. The astronomers of the French Academy of Sciences, founded in 1666, managed to clarify the position of El Hierro relative to the meridian of Paris, which gradually supplanted the Ferro meridian. In 1666, Louis XIV of France had authorized the building of the Paris Observatory. On Midsummer's Day 1667, members of the Academy of Sciences traced the future building's outline on a plot outside town near the Port Royal abbey, with the Paris meridian exactly bisecting the site north–south. French cartographers would use it as their prime meridian for more than 200 years. Old maps from continental Europe often have a common grid with Paris degrees at the top and Ferro degrees offset by 20 at the bottom.
A Fr
|
https://en.wikipedia.org/wiki/OS-level%20virtualization
|
OS-level virtualization is an operating system (OS) virtualization paradigm in which the kernel allows the existence of multiple isolated user space instances, called containers (LXC, Solaris containers, AIX WPARs, HP-UX SRP Containers, Docker, Podman), zones (Solaris containers), virtual private servers (OpenVZ), partitions, virtual environments (VEs), virtual kernels (DragonFly BSD), or jails (FreeBSD jail or chroot jail). Such instances may look like real computers from the point of view of programs running in them. A computer program running on an ordinary operating system can see all resources (connected devices, files and folders, network shares, CPU power, quantifiable hardware capabilities) of that computer. However, programs running inside of a container can only see the container's contents and devices assigned to the container.
On Unix-like operating systems, this feature can be seen as an advanced implementation of the standard chroot mechanism, which changes the apparent root folder for the current running process and its children. In addition to isolation mechanisms, the kernel often provides resource-management features to limit the impact of one container's activities on other containers. Linux containers are all based on the virtualization, isolation, and resource management mechanisms provided by the Linux kernel, notably Linux namespaces and cgroups.
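As a minimal sketch of the chroot mechanism described above (not a real container runtime, and assuming a prepared root filesystem at the hypothetical path /srv/minimal-rootfs), the following Python fragment confines a child process to a new apparent root. Production systems such as LXC or Docker additionally use namespaces, cgroups and capability dropping, none of which are shown here.

```python
import os

def run_in_chroot(new_root, command):
    """Run `command` in a child process whose apparent root directory is `new_root`.
    The child cannot see files outside new_root.  Requires root privileges."""
    pid = os.fork()
    if pid == 0:                        # child process
        os.chroot(new_root)             # change the apparent root directory
        os.chdir("/")                   # make sure the working directory is inside it
        os.execvp(command[0], command)  # replace the child image with the command
        os._exit(1)                     # only reached if exec fails
    _, status = os.waitpid(pid, 0)      # parent waits for the confined child
    return status

if __name__ == "__main__":
    # '/srv/minimal-rootfs' is a hypothetical directory containing its own /bin/sh
    run_in_chroot("/srv/minimal-rootfs", ["/bin/sh", "-c", "ls /"])
```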
The term container, while most popularly referring to OS-level virtualization systems, is sometimes ambiguously used to refer to fuller virtual machine environments operating in varying degrees of concert with the host OS, e.g., Microsoft's Hyper-V containers. A more historic overview of virtualization in general since 1960 can be found in the Timeline of virtualization development.
Operation
On ordinary operating systems for personal computers, a computer program can see (even though it might not be able to access) all the system's resources. They include:
Hardware capabilities that can be emplo
|
https://en.wikipedia.org/wiki/List%20of%20games%20in%20game%20theory
|
Game theory studies strategic interaction between individuals in situations called games. Classes of these games have been given names. This is a list of the most commonly studied games.
Explanation of features
Games can have several features, a few of the most common are listed here.
Number of players: Each person who makes a choice in a game or who receives a payoff from the outcome of those choices is a player.
Strategies per player: In a game each player chooses from a set of possible actions, known as pure strategies. If the number is the same for all players, it is listed here.
Number of pure strategy Nash equilibria: A Nash equilibrium is a set of strategies which represents mutual best responses to the other strategies. In other words, if every player is playing their part of a Nash equilibrium, no player has an incentive to unilaterally change their strategy. Considering only situations where players play a single strategy without randomizing (a pure strategy), a game can have any number of Nash equilibria (a small computational sketch for finding them in a two-player game follows this list).
Sequential game: A game is sequential if one player performs their actions after another player; otherwise, the game is a simultaneous move game.
Perfect information: A game has perfect information if it is a sequential game and every player knows the strategies chosen by the players who preceded them.
Constant sum: A game is a constant sum game if the sum of the payoffs to every player is the same for every set of strategies. In these games, one player gains if and only if another player loses. A constant-sum game can be converted into a zero-sum game by subtracting a fixed value from all payoffs, leaving their relative order unchanged.
Move by nature: A game includes a random move by nature.
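The following sketch illustrates the pure-strategy Nash equilibrium feature by exhaustive search over a two-player bimatrix game; the Prisoner's Dilemma payoffs used in the example are a common textbook choice, not taken from the list below.

```python
from itertools import product

def pure_nash_equilibria(payoffs_a, payoffs_b):
    """Return all pure-strategy Nash equilibria (row, col) of a bimatrix game.
    payoffs_a[r][c] and payoffs_b[r][c] are the payoffs to players A and B."""
    rows, cols = len(payoffs_a), len(payoffs_a[0])
    equilibria = []
    for r, c in product(range(rows), range(cols)):
        best_for_a = all(payoffs_a[r][c] >= payoffs_a[r2][c] for r2 in range(rows))
        best_for_b = all(payoffs_b[r][c] >= payoffs_b[r][c2] for c2 in range(cols))
        if best_for_a and best_for_b:   # neither player gains by deviating alone
            equilibria.append((r, c))
    return equilibria

if __name__ == "__main__":
    # Prisoner's Dilemma: strategy 0 = cooperate, 1 = defect
    a = [[3, 0], [5, 1]]
    b = [[3, 5], [0, 1]]
    print(pure_nash_equilibria(a, b))   # [(1, 1)] -- mutual defection
```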
List of games
External links
List of games from gametheory.net
Notes
References
Arthur, W. Brian “Inductive Reasoning and Bounded Rationality”, American Economic Review (Papers and Proceedings), 84,406-411, 1994.
Bolton, Katok, Zwick 1998, "
|
https://en.wikipedia.org/wiki/Giuga%20number
|
A Giuga number is a composite number n such that for each of its distinct prime factors \(p_i\) we have \(p_i \mid \tfrac{n}{p_i} - 1\), or equivalently such that for each of its distinct prime factors \(p_i\) we have \(p_i^2 \mid n - p_i\).
The Giuga numbers are named after the mathematician Giuseppe Giuga, and relate to his conjecture on primality.
Definitions
Alternative definition for a Giuga number due to Takashi Agoh is: a composite number n is a Giuga number if and only if the congruence
\[ n B_{\varphi(n)} \equiv -1 \pmod{n} \]
holds true, where \(B_{\varphi(n)}\) is a Bernoulli number and \(\varphi\) is Euler's totient function.
An equivalent formulation due to Giuseppe Giuga is: a composite number n is a Giuga number if and only if the congruence
\[ \sum_{i=1}^{n-1} i^{\varphi(n)} \equiv -1 \pmod{n} \]
holds, and if and only if
\[ \sum_{p \mid n} \frac{1}{p} - \prod_{p \mid n} \frac{1}{p} \in \mathbb{N}. \]
All known Giuga numbers n in fact satisfy the stronger condition
\[ \sum_{p \mid n} \frac{1}{p} - \prod_{p \mid n} \frac{1}{p} = 1. \]
Examples
The sequence of Giuga numbers begins
30, 858, 1722, 66198, 2214408306, 24423128562, 432749205173838, … .
For example, 30 is a Giuga number since its prime factors are 2, 3 and 5, and we can verify that
30/2 - 1 = 14, which is divisible by 2,
30/3 - 1 = 9, which is 3 squared, and
30/5 - 1 = 5, the third prime factor itself.
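A short computational check of this definition (a sketch assuming the divisibility-based formulation above; trial-division factorization is only suitable for small n) reproduces the initial terms of the sequence:

```python
def prime_factors(n):
    """Distinct prime factors of n by trial division (fine for small n)."""
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def is_giuga(n):
    """True if n is composite and every distinct prime factor p divides n/p - 1."""
    ps = prime_factors(n)
    if n < 2 or n in ps:                 # exclude 1 and primes: n must be composite
        return False
    return all((n // p - 1) % p == 0 for p in ps)

if __name__ == "__main__":
    print([n for n in range(2, 2000) if is_giuga(n)])   # [30, 858, 1722]
```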
Properties
The prime factors of a Giuga number must be distinct. If \(p^2\) divides \(n\), then \(\tfrac{n}{p}\) is itself divisible by \(p\), so that \(\tfrac{n}{p} - 1 \equiv -1 \pmod{p}\). Hence, \(\tfrac{n}{p} - 1\) would not be divisible by \(p\), and thus \(n\) would not be a Giuga number.
Thus, only square-free integers can be Giuga numbers. For example, the factors of 60 are 2, 2, 3 and 5, and 60/2 - 1 = 29, which is not divisible by 2. Thus, 60 is not a Giuga number.
This rules out squares of primes, but semiprimes cannot be Giuga numbers either. For if \(n = pq\), with \(p < q\) primes, then
\(\tfrac{n}{q} - 1 = p - 1 < q\), so \(q\) will not divide \(\tfrac{n}{q} - 1\), and thus \(n\) is not a Giuga number.
All known Giuga numbers are even. If an odd Giuga number exists, it must be the product of at least 14 primes. It is not known if there are infinitely many Giuga numbers.
It has been conjectured by Paolo P. Lava (2009) that Giuga numbers are the solutions of the differential equation n' = n+1, where n' is the arithmetic derivative of n. (For square-free numbers ,
|
https://en.wikipedia.org/wiki/Temporal%20resolution
|
Temporal resolution (TR) refers to the discrete resolution of a measurement with respect to time.
Physics
Often there is a trade-off between the temporal resolution of a measurement and its spatial resolution, due to Heisenberg's uncertainty principle. In some contexts, such as particle physics, this trade-off can be attributed to the finite speed of light and the fact that it takes a certain period of time for the photons carrying information to reach the observer. In this time, the system might have undergone changes itself. Thus, the longer the light has to travel, the lower the temporal resolution.
Technology
Computing
In another context, there is often a tradeoff between temporal resolution and computer storage. A transducer may be able to record data every millisecond, but available storage may not allow this, and in the case of 4D PET imaging the resolution may be limited to several minutes.
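A back-of-the-envelope calculation makes this trade-off concrete; the 4-byte sample size and one-day duration below are assumptions chosen only for illustration.

```python
def storage_bytes(sample_rate_hz, bytes_per_sample, duration_s):
    """Raw storage needed to keep every sample: rate x sample size x time."""
    return sample_rate_hz * bytes_per_sample * duration_s

if __name__ == "__main__":
    # A transducer sampled every millisecond (1 kHz), 4-byte samples, for one day
    one_day = 24 * 60 * 60
    megabytes = storage_bytes(1_000, 4, one_day) / 1e6
    print(f"{megabytes:.1f} MB per channel per day")   # about 345.6 MB
```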
Electronic displays
In some applications, temporal resolution may instead be equated to the sampling period, or its inverse, the refresh rate, or update frequency in Hertz, of a TV, for example.
Temporal resolution is distinct from temporal uncertainty; conflating the two would be analogous to conflating image resolution with optical resolution. One is discrete, the other continuous.
Temporal resolution is, loosely, the 'time' dual of the 'space' resolution of an image. In a similar way, the sample rate is the counterpart of the pixel pitch on a display screen, whereas the optical resolution of a display screen is the counterpart of temporal uncertainty.
Note that both these spatial and temporal resolutions are orthogonal to measurement resolution, even though space and time are also orthogonal to each other. Both an image and an oscilloscope capture can have a signal-to-noise ratio, since both also have measurement resolution.
Oscilloscopy
An oscilloscope is the temporal equivalent of a microscope, and it is limited by temporal uncertainty the same way a m
|
https://en.wikipedia.org/wiki/King%27s%20Valley
|
King’s Valley is a platform game released by Konami for MSX in 1985. The game is considered a spiritual successor to Konami's earlier arcade game Tutankham (1982), employing similar concepts such as treasure hunting in Egyptian tombs and an identical end-level music tune. It also has similarities to Lode Runner (1983).
The game was initially released on ROM cartridge with 15 levels. It was also planned to be released on floppy disk with 60 levels but that version was shelved. The floppy disk version would ultimately be released a few years later in 1988 as part of Konami Game Collection Vol. 1 on MSX.
Gameplay
As an intrepid adventurer, the player's goal is to collect various gems, while evading angry mummies and other monsters long enough to find the exit to the next level. A port to MS-DOS, supporting monochrome and CGA graphics cards, was made by a Korean company named APROMAN.
Legacy
A sequel, King's Valley II, was released in two versions, designed specifically for the MSX and MSX2 respectively.
See also
Pharaoh's Revenge (1988)
References
1985 video games
MSX games
DOS games
Konami franchises
Konami games
Video games set in Egypt
Video game clones
Platformers
Video games developed in Japan
|
https://en.wikipedia.org/wiki/Pitch%20correction
|
Pitch correction is an electronic effects unit or audio software that changes the intonation (highness or lowness in pitch) of an audio signal so that all pitches will be notes from the equally tempered system (i.e., like the pitches on a piano). Pitch correction devices do this without affecting other aspects of the sound. Pitch correction first detects the pitch of an audio signal (using a live pitch detection algorithm), then calculates the desired change and modifies the audio signal accordingly. The widest use of pitch correction devices is in Western popular music on vocal lines.
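A toy sketch of these two steps, assuming a simple autocorrelation pitch detector and 12-tone equal temperament with A4 = 440 Hz, is shown below; it is not the algorithm of any particular product, and a real corrector would also have to resample or phase-vocode the audio by the computed ratio.

```python
import numpy as np

def detect_pitch(frame, sample_rate, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency of a frame via autocorrelation."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(sample_rate / fmax)                 # shortest plausible period (samples)
    hi = int(sample_rate / fmin)                 # longest plausible period
    lag = lo + int(np.argmax(corr[lo:hi]))       # strongest periodicity in range
    return sample_rate / lag

def nearest_equal_tempered(freq, a4=440.0):
    """Snap a frequency to the nearest note of 12-tone equal temperament."""
    semitones = round(12 * np.log2(freq / a4))
    return a4 * 2 ** (semitones / 12)

if __name__ == "__main__":
    sr = 44100
    t = np.arange(0, 0.05, 1 / sr)
    slightly_flat = np.sin(2 * np.pi * 435.0 * t)   # a tone roughly 20 cents below A4
    f0 = detect_pitch(slightly_flat, sr)
    target = nearest_equal_tempered(f0)
    ratio = target / f0                              # pitch-shift factor to apply
    print(round(f0, 1), "->", round(target, 1), "shift ratio", round(ratio, 4))
```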
History
Prior to the invention of pitch correction, errors in vocal intonation in recordings could only be corrected by re-recording the entire song (in the early era of recording) or, after the development of multitrack recording, by overdubbing the incorrect vocal pitches by re-recording those specific notes or sections. By the late 70s, engineers were fixing parts using the Eventide Harmonizer. Prior to the development of electronic pitch correction devices, there was no way to make "real time" corrections to a live vocal performance in a concert (although lip-syncing was used in some cases where a performer was not able to sing adequately in live performances).
Pitch correction was relatively uncommon before 1997 when Antares Audio Technology's Auto-Tune Pitch Correcting Plug-In was introduced. Developed by Dr. Andy Hildebrand, a geophysical engineer, the software leveraged auto-correlation algorithms originally used in seismic wave mapping for the oil industry. Andy Hildebrand adapted these algorithms for musical applications, offering a more efficient and precise way to correct vocal imperfections. This replaced slow studio techniques with a real-time process that could also be used in live performances.
Auto-Tune is still widely used, as are other pitch-correction algorithms including Celemony's Direct Note Access which allows adjustment of individual notes in a polyphonic au
|
https://en.wikipedia.org/wiki/Scott%20Yanoff
|
Scott Yanoff (born October 20, 1969) is an IT manager and web developer who was a key person in the early days of the internet, most notably for creating and maintaining the Yanoff List, an alphabetical list of internet sites.
Career
Yanoff authored the Inter-Network Mail Guide, a text written in 1997 documenting the different methods of sending email from one network to another. He was also a co-author of The Web Site Administrator's Survival Guide with Jerry Ablan, a book that explains how to set up, administer, care for, and feed your own Web server. Most of this work was accomplished as an undergraduate student at the University of Wisconsin–Milwaukee, while working as a mainframe/UNIX consultant for the university.
He has worked for SpectraCom, Inc., and the now-defunct Strong Capital Management in Menomonee Falls, Wisconsin and at Northwestern Mutual in Milwaukee, Wisconsin from February, 2004 to June, 2023.
The Yanoff List
In the early and mid-1990s, before the use of search engines, the Yanoff List became an important tool for internet users. The list consisted of internet sites listed alphabetically and grouped by subject acting as a type of internet yellow pages containing hundreds of FTP, gopher, and web locations relevant to each subject. Users of the internet in the early 1990s would eagerly await the latest version of this list. As a minor tribute to his service, a popular Palm-based newsreader, Yanoff, was named after him.
Additional work
Yanoff created a Visual Basic script called "iTunesStats" in 2008 that can be run on Windows-based computers to generate a file of statistics of one's listening habits based upon the user's iTunes library. Additionally, he transposed popular music guitar tablature in the 1990s, including that of The Beatles, R.E.M., Bruce Springsteen, and U2.
References
External links
Yanoff family website
iTunesStats script
Electronic Publishing on the Internet, Case Study - Yanoff List
Living people
Computer programmers
Univ
|
https://en.wikipedia.org/wiki/Axalto
|
See Gemalto for current company information.
Axalto was a smart card manufacturer that, during its brief independent existence, employed over 4,500 people in 60 countries and was one of the world's leading providers of microprocessor cards (Gartner, 2005), as well as a major supplier of point-of-sale terminals.
Axalto's business covered the telecommunications, public telephony, finance, retail, transport, entertainment, healthcare, personal identification, information technology and public sector markets. The company recorded sales of over $992 million in 2005 and was fully listed on Euronext, the pan-European market.
History
Starting business as the Smart Card and Terminal Department of Schlumberger, after Schlumberger purchased Sema Group, it was merged with the latter to form SchlumbergerSema.
When Schlumberger sold the IT services business of SchlumbergerSema to Atos Origin, the Smart Card and Terminal Department was again spun off to become Axalto, which went public in 2004, with its initial public offering.
On December 7, 2005, Axalto announced its merger plan with main competitor Gemplus International. On May 19, 2006, the European Commission approved the merger between Axalto and Gemplus, leading to the creation of the new company Gemalto, on June 2, 2006.
External links
Gemalto Official Site
Axalto Official Site
Gemplus Official Site
Smart cards
|
https://en.wikipedia.org/wiki/Crossover%20distortion
|
Crossover distortion is a type of distortion which is caused by switching between devices driving a load. It is most commonly seen in complementary, or "push-pull", Class-B amplifier stages, although it is occasionally seen in other types of circuits as well.
The term crossover signifies the "crossing over" of the signal between devices, in this case, from the upper transistor to the lower and vice versa. The term is not related to the audio loudspeaker crossover filter—a filtering circuit which divides an audio signal into frequency bands to drive separate drivers in multiway speakers.
Distortion mechanism
The image shows a typical class-B emitter-follower complementary output stage. Under no signal conditions, the output is exactly midway between the supplies (i.e., at 0 V). When this is the case, the base-emitter bias of both the transistors is zero, so they are in the cut-off region where the transistors are not conducting.
Consider a positive-going swing: As long as the input is less than the required forward VBE drop (≈ 0.65 V) of the upper NPN transistor, it will remain off or conduct very little. This is the same as a diode operation as far as the base circuit is concerned, and the output voltage does not follow the input (the lower PNP transistor is still off because its base-emitter diode is being reverse biased by the positive-going input). The same applies to the lower transistor but for a negative-going input. Thus, between about ±0.65 V of input, the output voltage is not a true replica or amplified version of the input, and we can see that as a "kink" in the output waveform near 0 V (or where one transistor stops conducting and the other starts). This kink is the most pronounced form of crossover distortion, and it becomes more evident and intrusive when the output voltage swing is reduced.
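The dead band can be illustrated with an idealized model (a sketch assuming unity gain outside the conduction region and the roughly 0.65 V base-emitter drop quoted above):

```python
import numpy as np

def class_b_output(v_in, v_be=0.65):
    """Idealized class-B emitter-follower: neither transistor conducts until the
    input exceeds its base-emitter drop, producing a dead band of +/- v_be."""
    out = np.zeros_like(v_in)
    out[v_in > v_be] = v_in[v_in > v_be] - v_be      # NPN handles positive swings
    out[v_in < -v_be] = v_in[v_in < -v_be] + v_be    # PNP handles negative swings
    return out

if __name__ == "__main__":
    t = np.linspace(0, 1e-3, 1000)
    v_in = 2.0 * np.sin(2 * np.pi * 1e3 * t)   # 1 kHz, 2 V peak test signal
    v_out = class_b_output(v_in)
    # The flat region around each zero crossing is the crossover "kink";
    # it becomes relatively larger as the input amplitude is reduced.
    print(float(v_out.max()), float(v_out[np.abs(v_in) < 0.65].max()))
```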
Less pronounced forms of distortion may be observed in this circuit as well. An emitter-follower will have a voltage gain of just under 1. In the circuit sho
|
https://en.wikipedia.org/wiki/Protein%20misfolding%20cyclic%20amplification
|
Protein misfolding cyclic amplification (PMCA) is an amplification technique (conceptually like PCR but not involving nucleotides) to multiply misfolded prions originally developed by Soto and colleagues. It is a test for spongiform encephalopathies like CWD or BSE.
Technique
The technique initially incubates a small amount of abnormal prion with an excess of normal protein, so that some conversion takes place. The growing chain of misfolded protein is then blasted with ultrasound, breaking it down into smaller chains and so rapidly increasing the amount of abnormal protein available to cause conversions. By repeating the cycle, the mass of normal protein is rapidly changed into the prion being tested for.
Development
PMCA was originally developed to, in vitro, mimic prion replication with a similar efficiency to the in vivo process, but with accelerated kinetics. PMCA is conceptually analogous to the polymerase chain reaction - in both systems a template grows at the expense of a substrate in a cyclic reaction, combining growing and multiplication of the template units.
Replication
PMCA has been applied to replicate the misfolded protein from diverse species. The newly generated protein exhibits the same biochemical, biological, and structural properties as brain-derived PrPSc and strikingly it is infectious to wild type animals, producing a disease with similar characteristics as the illness produced by brain-isolated prions.
Automation
The technology has been automated, leading to a dramatic increase in the efficiency of amplification. Now, a single cycle results in a 2500-fold increase in sensitivity of detection over western blotting, whereas 2 and 7 consecutive cycles result in 6 million and 3 billion-fold increases in sensitivity of detection over western blotting, a technique widely used in BSE surveillance in several countries.
Sensitivity
It has been shown that PMCA is capable of detecting as little as a single molecule of oligomeric infectious PrPSc.
|
https://en.wikipedia.org/wiki/Online%20producer
|
An online producer oversees the making of content for websites and other online properties. Online producers are sometimes called "web producers," "publishers," "content producers," or "online editors."
Online producers have a range of responsibilities. They are in charge of arranging, editing, and sometimes even creating content, which comes in various forms like writing, music, video, and Adobe Flash, for websites. Many online producers often, but not always, specialize in one particular form of web content.
The role is distinct from that of web designer, developer, or webmaster. Online producers define and maintain the character of a website, as opposed to running it from a technical standpoint. However, technical and design knowledge is imperative for an online producer to be effective at their job. Online producers are typically responsible for working with system engineers or web designers to design site features with a user-friendly interface for smooth navigation and transitions. This means that an online producer should be familiar with common web publishing technologies such as CSS and HTML to effectively communicate with the system engineers or web designers on their teams.
Online producers may also be responsible for finding ways to boost the popularity of a website and increase user activity, particularly if the website sells advertising space. Online producers will also work with web teams to conceive, design and launch new web products such as blogs, community forums and user profiles.
Online producer roles often feature a project management component. The producer will schedule resources to create content, ensure that the content has passed Q/A on a staging server, and publish the content to the production server; keeping to a pre-defined schedule or project plan.
Annual Pay
The estimated annual pay for an online producer in the United States ranges from around $42K USD to $98K USD a year; with the current reported, average salary being around
|
https://en.wikipedia.org/wiki/AS-Interface
|
Actuator Sensor Interface (AS-Interface or ASi) is an industrial networking solution (physical layer, data access method and protocol) used in PLC, DCS and PC-based automation systems. It is designed for connecting simple field I/O devices (e.g. binary ON/OFF devices such as actuators, sensors, rotary encoders, analog inputs and outputs, push buttons, and valve position sensors) in discrete manufacturing and process applications using a single two-conductor cable.
AS-Interface is an 'open' technology supported by a multitude of automation equipment vendors. The AS-Interface has been an international standard according to IEC 62026-2 since 1999.
AS-Interface is a networking alternative to the hard wiring of field devices. It can be used as a partner network for higher level fieldbus networks such as Profibus, DeviceNet, Interbus and Industrial Ethernet, for whom it offers a low-cost remote I/O solution. It is used in automation applications, including conveyor control, packaging machines, process control valves, bottling plants, electrical distribution systems, airport baggage carousels, elevators, bottling lines and food production lines. AS-Interface provides a basis for Functional Safety in machinery safety/emergency stop applications. Safety devices communicating over AS-Interface follow all the normal AS-Interface data rules. The AS-Interface specification is managed by AS-International, a member funded non-profit organization located in Gelnhausen/Germany. Several international subsidiaries exist around the world.
History
AS-Interface was developed during the late 1980s and early 1990s by a development partnership of 11 companies mostly known for their offering of industrial non-contact sensing devices like inductive sensors, photoelectric sensors, capacitive sensors and ultrasonic sensors. Once development was completed, the consortium was dissolved and a member organization, AS-International, was founded. The first operational system was shown at the 1994 Ha
|
https://en.wikipedia.org/wiki/Refinement%20calculus
|
The refinement calculus is a formalized approach to stepwise refinement for program construction. The required behaviour of the final executable program is specified as an abstract and perhaps non-executable "program", which is then refined by a series of correctness-preserving transformations into an efficiently executable program.
Proponents include Ralph-Johan Back, who originated the approach in his 1978 PhD thesis On the Correctness of Refinement Steps in Program Development, and Carroll Morgan, especially with his book Programming from Specifications (Prentice Hall, 2nd edition, 1994, ). In the latter case, the motivation was to link Abrial's specification notation Z, via a rigorous relation of behaviour-preserving program refinement, to an executable programming notation based on Dijkstra's language of guarded commands. Behaviour-preserving in this case means that any Hoare triple satisfied by a program should also be satisfied by any refinement of it, which notion leads directly to specification statements as pre- and postconditions standing, on their own, for any program that could soundly be placed between them.
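As a small illustration in the spirit of Morgan's specification-statement notation (the concrete program and refinement step below are an illustrative example, not taken from the cited books):

```latex
% A specification statement w:[pre, post] stands for any program that, started in
% a state satisfying pre, changes only the variables w and terminates in a state
% satisfying post.  A program S validly refines it exactly when the Hoare triple
% {pre} S {post} holds.
%
% Example refinement step (introducing an assignment):
\[
  y:\left[\, x \ge 0 ,\; y^2 \le x < (y+1)^2 \,\right]
  \;\sqsubseteq\;
  y := \lfloor \sqrt{x} \rfloor
\]
% since \{\, x \ge 0 \,\}\; y := \lfloor\sqrt{x}\rfloor \;\{\, y^2 \le x < (y+1)^2 \,\}
% is a valid Hoare triple.
```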
References
Formal methods
Formal specification languages
Logical calculi
|
https://en.wikipedia.org/wiki/Retromer
|
Retromer is a complex of proteins that has been shown to be important in recycling transmembrane receptors from endosomes to the trans-Golgi network (TGN) and directly back to the plasma membrane. Mutations in retromer and its associated proteins have been linked to Alzheimer's and Parkinson's diseases.
Background
Retromer is a heteropentameric complex, which in humans is composed of a less defined membrane-associated sorting nexin dimer (SNX1, SNX2, SNX5, SNX6), and a vacuolar protein sorting (Vps) heterotrimer containing Vps26, Vps29, and Vps35. Although the SNX dimer is required for the recruitment of retromer to the endosomal membrane, the cargo binding function of this complex is contributed by the core heterotrimer through the binding of Vps26 and Vps35 subunits to various cargo molecules including M6PR, wntless, SORL1 (which is also a receptor for other cargo proteins such as APP), and sortilin. Early study on sorting of acid hydrolases such as carboxypeptidase Y (CPY) in S. cerevisiae mutants has led to the identification of retromer in mediating the retrograde trafficking of the pro-CPY receptor (Vps10) from the endosomes to the TGN.
Structure
The retromer complex is highly conserved: homologs have been found in C. elegans, mouse and human. The retromer complex consists of 5 proteins in yeast: Vps35p, Vps26p, Vps29p, Vps17p, Vps5p. The mammalian retromer consists of Vps26, Vps29, Vps35, SNX1 and SNX2, and possibly SNX5 and SNX6. It is proposed to act in two subcomplexes: (1) a cargo recognition heterotrimeric complex that consists of Vps35, Vps29 and Vps26, and (2) SNX-BAR dimers, which consist of SNX1 or SNX2 and SNX5 or SNX6 and facilitate endosomal membrane remodelling and curvature, resulting in the formation of tubules/vesicles that transport cargo molecules to the trans-Golgi network (TGN). Humans have two orthologs of VPS26: VPS26A, which is ubiquitous, and VPS26B, which is found in the central nervous system, where it forms a unique retr
|
https://en.wikipedia.org/wiki/Ralph-Johan%20Back
|
Ralph-Johan Back is a Finnish computer scientist. Back originated the refinement calculus, an important approach to the formal development of programs using stepwise refinement, in his 1978 PhD thesis at the University of Helsinki, On the Correctness of Refinement Steps in Program Development. He has undertaken much subsequent research in this area. He has held positions at CWI Amsterdam, the Academy of Finland and the University of Tampere.
Since 1983, he has been Professor of Computer Science at the Åbo Akademi University in Turku. For 2002–2007, he was an Academy Professor at the Academy of Finland. He is Director of CREST (Center for Reliable Software Technology) at Åbo Akademi.
Back is a member of Academia Europaea.
References
External links
Ralph-Johan Back home page
Year of birth missing (living people)
Living people
University of Helsinki alumni
Academic staff of the University of Tampere
Academic staff of Åbo Akademi University
Finnish computer scientists
Formal methods people
|
https://en.wikipedia.org/wiki/Keynesian%20beauty%20contest
|
A Keynesian beauty contest describes a beauty contest where judges are rewarded for selecting the most popular faces among all judges, rather than those they may personally find the most attractive. This idea is often applied in financial markets, whereby investors could profit more by buying whichever stocks they think other investors will buy, rather than the stocks that have fundamentally the best value: when other people buy a stock, they bid up the price, allowing an earlier investor to cash out with a profit, regardless of whether the price increase is supported by the stock's fundamentals.
The concept was developed by John Maynard Keynes and introduced in Chapter 12 of his work, The General Theory of Employment, Interest and Money (1936), to explain price fluctuations in equity markets.
Overview
Keynes described the action of rational agents in a market using an analogy based on a fictional newspaper contest, in which entrants are asked to choose the six most attractive faces from a hundred photographs. Those who picked the most popular faces are then eligible for a prize.
A naive strategy would be to choose the face that, in the opinion of the entrant, is the most handsome. A more sophisticated contest entrant, wishing to maximize the chances of winning a prize, would think about what the majority perception of attractiveness is, and then make a selection based on some inference from their knowledge of public perceptions. This can be carried one step further to take into account the fact that other entrants would each have their own opinion of what public perceptions are. Thus the strategy can be extended to the next order and the next and so on, at each level attempting to predict the eventual outcome of the process based on the reasoning of other rational agents.
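This iterated, level-by-level reasoning is often formalized with the closely related "guess 2/3 of the average" game (a p-beauty contest). In the sketch below, the level-0 guess of 50 and the factor p = 2/3 are assumptions for illustration; successive levels of reasoning drive the guess toward the game's equilibrium.

```python
def level_k_guess(k, p=2/3, level0_guess=50.0):
    """Level-0 players guess level0_guess; a level-k player best-responds to a
    population of level-(k-1) players, i.e. guesses p times their guess."""
    guess = level0_guess
    for _ in range(k):
        guess *= p
    return guess

if __name__ == "__main__":
    for k in range(6):
        print(f"level-{k} reasoning guesses {level_k_guess(k):.2f}")
    # As k grows the guess approaches 0, the unique Nash equilibrium of this game.
```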
"It is not a case of choosing those [faces] that, to the best of one's judgment, are really the prettiest, nor even those that average opinion genuinely thinks the prettiest. We have reached
|
https://en.wikipedia.org/wiki/Adult%20stem%20cell
|
Adult stem cells are undifferentiated cells, found throughout the body after development, that multiply by cell division to replenish dying cells and regenerate damaged tissues. Also known as somatic stem cells (from Greek σωματικóς, meaning of the body), they can be found in juvenile and adult animals and in humans, unlike embryonic stem cells.
Scientific interest in adult stem cells is centered around two main characteristics. The first is their ability to divide or self-renew indefinitely, and the second is their ability to generate all the cell types of the organ from which they originate, potentially regenerating the entire organ from a few cells. Unlike embryonic stem cells, the use of human adult stem cells in research and therapy is not considered to be controversial, as they are derived from adult tissue samples rather than human embryos designated for scientific research. The main functions of adult stem cells are to replace cells that are at risk of dying as a result of disease or injury and to maintain a state of homeostasis within the cell. There are three main methods to determine if the adult stem cell is capable of becoming a specialized cell. The adult stem cell can be labeled in vivo and tracked, it can be isolated and then transplanted back into the organism, and it can be isolated in vivo and manipulated with growth hormones. They have mainly been studied in humans and model organisms such as mice and rats.
Structure
Defining properties
A stem cell possesses two properties:
Self-renewal is the ability to go through numerous cycles of cell division while still maintaining its undifferentiated state. Stem cells can replicate several times and can result in the formation of two stem cells, one stem cell more differentiated than the other, or two differentiated cells.
Multipotency or multidifferentiative potential is the ability to generate progeny of several distinct cell types, (for example glial cells and neurons) as opposed to u
|
https://en.wikipedia.org/wiki/Optimal%20maintenance
|
Optimal maintenance is the discipline within operations research concerned with maintaining a system in a manner that maximizes profit or minimizes cost. Cost functions depending on the reliability, availability and maintainability characteristics of the system of interest determine the parameters to minimize. Parameters often considered are the cost of failure, the cost per time unit of "downtime" (for example: revenue losses), the cost (per time unit) of corrective maintenance, the cost per time unit of preventive maintenance and the cost of repairable system replacement [Cassady and Pohl]. The foundation of any maintenance model relies on the correct description of the underlying deterioration process and failure behavior of the component, and on the relationships between maintained components in the product breakdown (system / sub-system / assembly / sub-assembly...).
Optimal Maintenance strategies are often constructed using stochastic models and focus on finding an optimal inspection time or the optimal acceptable degree of system degradation before maintenance and/or replacement. Cost considerations on an Asset scale may also lead to select a "run-to-failure" approach for specific components.
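One standard concrete instance is the age-replacement policy, sketched below: the unit is replaced preventively at age T or correctively on failure, and T is chosen to minimize the long-run cost rate. The Weibull lifetime parameters and cost figures are assumptions for illustration and are not drawn from the surveys cited below.

```python
import math

def age_replacement_cost_rate(T, beta, eta, c_preventive, c_failure, steps=2000):
    """Long-run cost per unit time when a unit is replaced preventively at age T
    or correctively on failure, for Weibull(beta, eta) lifetimes."""
    reliability = lambda t: math.exp(-((t / eta) ** beta))     # R(t)
    dt = T / steps
    expected_cycle = sum(reliability(i * dt) * dt for i in range(steps))  # E[min(X, T)]
    expected_cost = c_preventive * reliability(T) + c_failure * (1 - reliability(T))
    return expected_cost / expected_cycle

def optimal_replacement_age(beta, eta, c_preventive, c_failure):
    """Grid search for the replacement age T that minimizes the long-run cost rate."""
    candidates = [eta * i / 200 for i in range(1, 401)]
    return min(candidates, key=lambda T: age_replacement_cost_rate(
        T, beta, eta, c_preventive, c_failure))

if __name__ == "__main__":
    # Wear-out failures (beta > 1) make preventive replacement worthwhile
    T_star = optimal_replacement_age(beta=2.5, eta=1000.0,
                                     c_preventive=1.0, c_failure=10.0)
    print("replace preventively at age of roughly", round(T_star, 1), "hours")
```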
Four main survey papers cover the spectrum of optimal maintenance:
Y.S. Sherif, M.L. Smith, "Optimal maintenance models for systems subject to failure – a review", Naval Research Logistics Quarterly, 1981.
C. Valdez-Flores, R.M. Feldman, “A survey of preventive maintenance models for stochastically deteriorating single-unit systems”, Naval Research Logistics, vol 36, 1989 Aug, pp 419–446.
J.J. McCall, “Maintenance policies for stochastically failing equipment:a survey”, Management Science, vol 11, 1965 Mar, pp 493–524.
W.P. Pierskalla, J.A. Voelker, “A survey of maintenance models: The control and surveillance of deteriorating systems”, Naval Research Logistics Quarterly, vol 23, 1976 Sep, pp 353–388.
Operations research
|
https://en.wikipedia.org/wiki/Edward%20Dwelly
|
Edward Dwelly (1864–1939) was an English lexicographer and genealogist. He created the authoritative dictionary of Scottish Gaelic, and his work has had an influence on Irish Gaelic lexicography. He also practised as a professional genealogist and published transcripts of many original documents relating to Somerset.
Biography
Born in Twickenham, Middlesex, in England, he became interested in Scottish Gaelic after being stationed in Scotland with the army and working with the Ordnance Survey. He began collecting words at the age of seventeen and was also a keen bagpiper.
He released the dictionary in sections from 1901 onwards and the first full edition of his Illustrated Gaelic Dictionary in 1911 under the pen name of Eoghann MacDhòmhnaill (Ewen MacDonald) fearing that his work would not be well accepted under his own obviously English name.
He continued collating entries from older dictionaries and also recording thousands of new words, both from publications and from his travels in the Gaelic-speaking parts of Scotland. He illustrated, printed, bound and marketed his dictionary with help from his children and wife Mary McDougall (from Kilmadock) whom he had married in 1896, herself a native Gaelic speaker, teaching himself the skills required.
In 1912, Dwelly self-published his Compendium of Notes on the Dwelly Family, a 54-page genealogical work on the Dwelly family from a John Duelye in 1229, mainly covering Britain, but with an American section, and pedigrees and parish register extracts with supporting notes.
He subsequently gained a state pension from Edward VII for his work. In later life, alienated by the attitude of some people in Scotland, both Gaels and non-speakers, he returned to England, leaving behind his great legacy and dying in obscurity.
In 1991, the late Dr Douglas Clyne sourced several manuscripts in the National Library of Scotland which were published by him as Appendix to Dwelly's Gaelic-English Dictionary, over half of the entries be
|
https://en.wikipedia.org/wiki/Spin%20tensor
|
In mathematics, mathematical physics, and theoretical physics, the spin tensor is a quantity used to describe the rotational motion of particles in spacetime. The spin tensor has application in
general relativity and special relativity, as well as quantum mechanics, relativistic quantum mechanics, and quantum field theory.
The special Euclidean group SE(d) of direct isometries is generated by translations and rotations. Its Lie algebra is written \(\mathfrak{se}(d)\).
This article uses Cartesian coordinates and tensor index notation.
Background on Noether currents
The Noether current for translations in space is momentum, while the current for increments in time is energy. These two statements combine into one in spacetime: translations in spacetime, i.e. a displacement between two events, is generated by the four-momentum \(P^\mu\). Conservation of four-momentum is given by the continuity equation:
\[ \partial_\nu T^{\mu\nu} = 0, \]
where \(T^{\mu\nu}\) is the stress–energy tensor, and \(\partial\) are partial derivatives that make up the four-gradient (in non-Cartesian coordinates this must be replaced by the covariant derivative). Integrating over space:
\[ P^\mu(t) = \int \mathrm{d}^3x \; T^{\mu 0}(\vec{x}, t) \]
gives the four-momentum vector at time t.
The Noether current for a rotation about the point y is given by a tensor of 3rd order, denoted \(M_y^{\alpha\beta\mu}\). Because of the Lie algebra relations
\[ M_y^{\alpha\beta\mu}(x) = M_0^{\alpha\beta\mu}(x) - y^\alpha T^{\beta\mu}(x) + y^\beta T^{\alpha\mu}(x), \]
where the 0 subscript indicates the origin (unlike momentum, angular momentum depends on the origin), the integral:
\[ M_y^{\alpha\beta}(t) = \int \mathrm{d}^3x \; M_y^{\alpha\beta 0}(\vec{x}, t) \]
gives the angular momentum tensor at time t.
Definition
The spin tensor is defined at a point x to be the value of the Noether current at x of a rotation about x,
\[ S^{\alpha\beta\mu}(x) \;\equiv\; M_x^{\alpha\beta\mu}(x) \;=\; M_0^{\alpha\beta\mu}(x) - x^\alpha T^{\beta\mu}(x) + x^\beta T^{\alpha\mu}(x). \]
The continuity equation
\[ \partial_\mu M_0^{\alpha\beta\mu} = 0 \]
implies:
\[ \partial_\mu S^{\alpha\beta\mu} = T^{\alpha\beta} - T^{\beta\alpha} \neq 0, \]
and therefore, the stress–energy tensor is not a symmetric tensor.
The quantity S is the density of spin angular momentum (spin in this case is not only for a point-like particle, but also for an extended body), and M is the density of orbital angular momentum. The total angular momentum is always the sum of spin and orbital contributions.
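In the sign conventions used above, this decomposition can be written explicitly (a standard form, stated here as an illustration rather than quoted from a reference):

```latex
% Decomposition of the total angular momentum current (about the origin) into an
% orbital piece built from the stress-energy tensor and a spin piece:
\[
  M_0^{\alpha\beta\mu}(x)
    \;=\; x^{\alpha}\,T^{\beta\mu}(x) \;-\; x^{\beta}\,T^{\alpha\mu}(x)
    \;+\; S^{\alpha\beta\mu}(x).
\]
% Integrating the mu = 0 component over space gives the total angular momentum as
% the sum of orbital and spin contributions:
\[
  J^{\alpha\beta}(t) \;=\; L^{\alpha\beta}(t) + S^{\alpha\beta}(t)
  \;=\; \int \mathrm{d}^3x \,\bigl( x^{\alpha}T^{\beta 0} - x^{\beta}T^{\alpha 0} \bigr)
  \;+\; \int \mathrm{d}^3x \; S^{\alpha\beta 0}.
\]
```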
The relation:
gives the torque density showing the rate of con
|
https://en.wikipedia.org/wiki/Phytosociology
|
Phytosociology, also known as phytocoenology or simply plant sociology, is the study of groups of species of plant that are usually found together. Phytosociology aims to empirically describe the vegetative environment of a given territory. A specific community of plants is considered a social unit, the product of definite conditions, present and past, and can exist only when such conditions are met. In phytosociology, such a unit is known as a phytocoenosis (or phytocoenose). A phytocoenosis is more commonly known as a plant community, and consists of the sum of all plants in a given area. It is a subset of a biocoenosis, which consists of all organisms in a given area. More strictly speaking, a phytocoenosis is a set of plants in an area that are interacting with each other through competition or other ecological processes. Coenoses are not equivalent to ecosystems, which consist of organisms and the physical environment that they interact with. A phytocoenosis has a distribution which can be mapped. Phytosociology has a system for describing and classifying these phytocoenoses in a hierarchy, known as syntaxonomy, and this system has a nomenclature. The science is most advanced in Europe, Africa and Asia.
In the United States this concept was largely rejected in favour of studying environments in more individualistic terms regarding species, where specific associations of plants occur randomly because of individual preferences and responses to gradients, and there are no sharp boundaries between phytocoenoses. The terminology 'plant community' is usually used in the US for a habitat consisting of a number of specific plant species.
It has been a successful approach in the scope of contemporary vegetation science because of its highly descriptive and predictive powers, and its usefulness in nature management issues.
History
The term 'phytosociology' was coined in 1896 by Józef Paczoski. The term 'phytocoenology' was coined by Helmut Gams in 1918. While the termin
|
https://en.wikipedia.org/wiki/CCSID
|
A CCSID (coded character set identifier) is a 16-bit number that represents a particular encoding of a specific code page. For example, Unicode is a code page that has several character encoding schemes (referred to as "transformation forms")—including UTF-8, UTF-16 and UTF-32—but which may or may not actually be accompanied by a CCSID number to indicate that this encoding is being used.
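For illustration (using Python codec names as stand-ins for the corresponding IBM code pages; the CCSID numbers in the comments are the commonly cited identifiers and should be checked against IBM's registry), the same character maps to different code points under different encodings:

```python
# The same character is represented by different code points under different
# code pages.  Python's codec names stand in for the corresponding encodings.
char = "é"
print(char.encode("cp1252").hex())   # 'e9'   - Windows code page 1252
print(char.encode("cp037").hex())    # '51'   - EBCDIC code page 37 (CCSID 37)
print(char.encode("utf-8").hex())    # 'c3a9' - UTF-8 (CCSID 1208)
```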
Difference between a code page and a CCSID
The terms code page and CCSID are often used interchangeably, even though they are not synonymous. A code page may be only part of what makes up a CCSID. The following definitions from IBM help to illustrate this point:
A glyph is the actual physical pattern of pixels or ink that shows up on a display or printout.
A character is a concept that covers all glyphs associated with a certain symbol. For instance, a roman "F", a bold "F", an italic "F", an underlined "F", and "F"s rendered in different colors or fonts are all different glyphs, but they all represent the same character. The various modifiers (bold, italic, underline, color, and font) do not change the F's essential F-ness.
A character set contains the characters necessary to allow a particular human to carry on a meaningful interaction with the computer. It does not specify how those characters are represented in a computer. This level is the first one to separate characters into various alphabets (Latin, Arabic, Hebrew, Cyrillic, and so on) or ideographic groups (e.g., Chinese, Korean). It corresponds to a "character repertoire" in the Unicode encoding model.
A code page represents a particular assignment of code point values to characters. It corresponds to a "coded character set" in the Unicode encoding model. A code point for a character is the computer's internal representation of that character in a given code page. Many characters are represented by different code points in different code pages. Certain character sets can be adequately represented with single-byte code pages (which have a maximum 256 code points, hence a maximum of 256 characters), but m
|
https://en.wikipedia.org/wiki/Computer%20network%20diagram
|
A computer network diagram is a schematic depicting the nodes and connections amongst nodes in a computer network or, more generally, any telecommunications network. Computer network diagrams form an important part of network documentation.
Symbolization
Readily identifiable icons are used to depict common network appliances, e.g. routers, and the style of lines between them indicates the type of connection. Clouds are used to represent networks external to the one pictured for the purposes of depicting connections between internal and external devices, without indicating the specifics of the outside network. For example, in the hypothetical local area network pictured to the right, three personal computers and a server are connected to a switch; the server is further connected to a printer and a gateway router, which is connected via a WAN link to the Internet.
Depending on whether the diagram is intended for formal or informal use, certain details may be lacking and must be determined from context. For example, the sample diagram does not indicate the physical type of connection between the PCs and the switch, but since a modern LAN is depicted, Ethernet may be assumed. If the same style of line was used in a WAN (wide area network) diagram, however, it may indicate a different type of connection.
At different scales diagrams may represent various levels of network granularity. At the LAN level, individual nodes may represent individual physical devices, such as hubs or file servers, while at the WAN level, individual nodes may represent entire cities. In addition, when the scope of a diagram crosses the common LAN/MAN/WAN boundaries, representative hypothetical devices may be depicted instead of showing all actually existing nodes. For example, if a network appliance is intended to be connected through the Internet to many end-user mobile devices, only a single such device may be depicted for the purposes of showing the general relationship between the ap
|
https://en.wikipedia.org/wiki/Ethnomedicine
|
Ethnomedicine is the study or comparison of traditional medicine based on bioactive compounds in plants and animals and practiced by various ethnic groups, especially those with little access to western medicines, e.g., indigenous peoples. The word ethnomedicine is sometimes used as a synonym for traditional medicine.
Ethnomedical research is interdisciplinary; in its study of traditional medicines, it applies the methods of ethnobotany and medical anthropology. Often, the medicine traditions it studies are preserved only by oral tradition. In addition to plants, some of these traditions constitute significant interactions with insects on the Indian Subcontinent, in Africa, or elsewhere around the globe.
Scientific ethnomedical studies constitute either anthropological research or drug discovery research. Anthropological studies examine the cultural perception and context of a traditional medicine. Ethnomedicine has been used as a starting point in drug discovery, specifically those using reverse pharmacological techniques.
Ethnopharmacology
Ethnopharmacology is a related field which studies ethnic groups and their use of plant compounds. It is linked to pharmacognosy, phytotherapy (study of medicinal plants) use and ethnobotany, as this is a source of lead compounds for drug discovery. Emphasis has long been on traditional medicines, although the approach also has proven useful to the study of modern pharmaceuticals.
It involves studies of the:
identification and ethnotaxonomy (cognitive categorisation) of the (eventual) natural material, from which the candidate compound will be produced
traditional preparation of the pharmaceutical forms
bio-evaluation of the possible pharmacological action of such preparations (ethnopharmacology)
their potential for clinical effectiveness
socio-medical aspects implied in the uses of these compounds (medical anthropology).
See also
Ayurveda
Ethnobotany
Herbalism
Pharmacognosy
Shamanism
Traditional medicine
Refer
|
https://en.wikipedia.org/wiki/Behavior%20modification
|
Behavior modification is a treatment approach that uses respondent and operant conditioning to change behavior. Based on methodological behaviorism, overt behavior is modified with consequences, including positive and negative reinforcement contingencies to increase desirable behavior, or administering positive and negative punishment and/or extinction to reduce problematic behavior. It also uses flooding desensitization to combat phobias.
Applied behavior analysis (ABA)—the application of behavior analysis—is a contemporary application and is based on radical behaviorism, which refers to B. F. Skinner's viewpoint that cognition and emotions are covert behavior that are to be subjected to the same conditions as overt behavior.
Description and history
The first use of the term behavior modification appears to have been by Edward Thorndike in 1911. His article Provisional Laws of Acquired Behavior or Learning makes frequent use of the term "modifying behavior". Through early research in the 1940s and the 1950s the term was used by Joseph Wolpe's research group. The experimental tradition in clinical psychology used it to refer to psycho-therapeutic techniques derived from empirical research. In the 1960s, behavior modification operated on stimulus-response-reinforcement framework (S-R-SR), emphasizing the concept of 'transactional' explanations of behavior. It has since come to refer mainly to techniques for increasing adaptive behavior through reinforcement and decreasing maladaptive behavior through extinction or punishment (with emphasis on the former).
In recent years, the concept of punishment has had many critics, though these criticisms tend not to apply to negative punishment (time-outs) and usually apply to the addition of some aversive event. The use of positive punishment by board certified behavior analysts is restricted to extreme circumstances when all other forms of treatment have failed and when the behavior to be modified is a danger to the person
|
https://en.wikipedia.org/wiki/Dotfuscator
|
Dotfuscator is a tool performing a combination of code obfuscation, optimization, shrinking, and hardening on .NET, Xamarin and Universal Windows Platform apps. Ordinarily, .NET executables can easily be reverse engineered by free tools (such as ILSpy, dotPeek and JustDecompile), potentially exposing algorithms and intellectual property (trade secrets), licensing and security mechanisms. Also, code can be run through a debugger and its data inspected. Dotfuscator can make all of these things more difficult.
Dotfuscator was developed by PreEmptive Solutions. A free version of the .NET Obfuscator, called the Dotfuscator Community Edition, is distributed as part of Microsoft's Visual Studio. However, the current version is free for personal, non-commercial use only.
References
Further reading
"Why and how to use Obfuscation for .NET with Dotfuscator". Microsoft Visual Studio 2017 Documentation
"Obfuscation and .NET". The Journal of Object Technology. Vol. 4, No. 4, May–June 2005. pp. 79–83.
MSDN Magazine. Miller Freeman. pp. 11–12.
Reversing: Secrets of Reverse Engineering. John Wiley & Sons.
"Review: PreEmptive Way To Obfuscate .Net Apps". CRN Magazine
Windows Developer Power Tools. O'Reilly Media.
"Dotfuscator expands its functionality". InfoWorld.
Visual Basic 2008 For Dummies. John Wiley & Sons.
Professional Visual Studio 2010. John Wiley & Sons.
External links
https://news.microsoft.com/2004/07/19/preemptive-solutions-dotfuscator-will-ship-with-microsoft-visual-studio-2005/
https://msdn.microsoft.com/library/dd551417.aspx
http://www.dirkstrauss.com/visual-studio-2012-tips-part-5-protect-your-code-obfuscate/
http://www.drdobbs.com/windows/enhanced-dotfuscator-ce-for-visual-stud/199901475
https://web.archive.org/web/20110201004909/http://www.clevelandpress.com/dotfuscator2.htm
Software obfuscation
.NET programming tools
Microsoft Visual Studio extensions
|
https://en.wikipedia.org/wiki/Eggshell
|
An eggshell is the outer covering of a hard-shelled egg and of some forms of eggs with soft outer coats.
Worm eggs
Nematode eggs present a two layered structure: an external vitellin layer made of chitin that confers mechanical resistance and an internal lipid-rich layer that makes the egg chamber impermeable.
Insect eggs
Insects and other arthropods lay eggs in a large variety of styles and shapes. Some of them have gelatinous or skin-like coverings, others have hard eggshells. Softer shells are mostly protein and may be fibrous or quite liquid. Some arthropod eggs do not actually have shells; rather, their outer covering is the outermost embryonic membrane, the chorion, which protects inner layers. This can be a complex structure, and it may have different layers, including an outermost layer called an exochorion. Eggs which must survive in dry conditions usually have hard eggshells, made mostly of dehydrated or mineralized proteins with pore systems to allow respiration. Arthropod eggs can have extensive ornamentation on their outer surfaces.
Fish, amphibian and reptile eggs
Fish and amphibians generally lay eggs which are surrounded by the extraembryonic membranes but do not develop a shell, hard or soft, around these membranes. Some fish and amphibian eggs have thick, leathery coats, especially if they must withstand physical force or desiccation. These types of eggs can also be very small and fragile.
While many reptiles lay eggs with flexible, calcified eggshells, there are some that lay hard eggs. Eggs laid by snakes generally have leathery shells which often adhere to one another. Depending on the species, turtles and tortoises lay hard or soft eggs. Several species lay eggs which are nearly indistinguishable from bird eggs.
Bird eggs
The bird egg is a fertilized gamete (or, in the case of some birds, such as chickens, possibly unfertilized) located on the yolk surface and surrounded by albumen, or egg white. The albumen in turn is surro
|
https://en.wikipedia.org/wiki/Ergodic%20flow
|
In mathematics, ergodic flows occur in geometry, through the geodesic and horocycle flows of closed hyperbolic surfaces. Both of these examples have been understood in terms of the theory of unitary representations of locally compact groups: if Γ is the fundamental group of a closed surface, regarded as a discrete subgroup of the Möbius group G = PSL(2,R), then the geodesic and horocycle flow can be identified with the natural actions of the subgroups A of real positive diagonal matrices and N of lower unitriangular matrices on the unit tangent bundle G / Γ. The Ambrose-Kakutani theorem expresses every ergodic flow as the flow built from an invertible ergodic transformation on a measure space using a ceiling function. In the case of geodesic flow, the ergodic transformation can be understood in terms of symbolic dynamics; and in terms of the ergodic actions of Γ on the boundary S1 = G / AN and G / A = S1 × S1 \ diag S1. Ergodic flows also arise naturally as invariants in the classification of von Neumann algebras: the flow of weights for a factor of type III0 is an ergodic flow on a measure space.
Hedlund's theorem: ergodicity of geodesic and horocycle flows
The method using representation theory relies on the following two results:
If G = SL(2,ℝ) acts unitarily on a Hilbert space \(\mathcal{H}\) and ξ is a unit vector fixed by the subgroup N of upper unitriangular matrices, then ξ is fixed by G.
If G = SL(2,ℝ) acts unitarily on a Hilbert space \(\mathcal{H}\) and ξ is a unit vector fixed by the subgroup A of diagonal matrices of determinant 1, then ξ is fixed by G.
(1) As a topological space, the homogeneous space X = G/N can be identified with ℝ² \ {(0,0)}, with the standard action of G as 2 × 2 matrices. The subgroup N of G has two kinds of orbits: orbits parallel to the x-axis with y ≠ 0, and points on the x-axis. A continuous function on X that is constant on N-orbits must therefore be constant on the real axis with the origin removed. Thus the matrix coefficient f(g) = (gξ, ξ) satisfies f(gn) = f(g) for n in N. By unitarity, ‖gξ − ξ‖² = 2 − f(g) − \(\overline{f(g)}\), so that gξ = ξ for all g in G
|
https://en.wikipedia.org/wiki/Fluor%20Corporation
|
Fluor Corporation is an American multinational engineering and construction firm headquartered in Irving, Texas. It is a holding company that provides services through its subsidiaries in the following areas: oil and gas, industrial and infrastructure, government and power. It is the largest publicly traded engineering & construction company in the Fortune 500 rankings and is listed as 259th overall.
Fluor was founded in 1912 by John Simon Fluor as Fluor Construction Company. It grew quickly, predominantly by building oil refineries, pipelines, and other facilities for the oil and gas industry, at first in California, and then in the Middle East and globally. In the late 1960s, it began diversifying into oil drilling, coal mining and other raw materials like lead. A global recession in the oil and gas industry and losses from its mining operation led to restructuring and layoffs in the 1980s. Fluor sold its oil operations and diversified its construction work into a broader range of services and industries.
In the 1990s, Fluor introduced new services like equipment rentals and staffing. Nuclear waste cleanup projects and other environmental work became a significant portion of Fluor's revenues. The company also did projects related to the Manhattan Project, rebuilding after the Iraq War, recovering from Hurricane Katrina and building the Trans-Alaska Pipeline System.
Corporate history
Early history
Fluor Corporation's predecessor, Rudolph Fluor & Brother, was founded in 1890 by John Simon Fluor and his two brothers in Oshkosh, Wisconsin as a saw and paper mill. John Fluor acted as its president and contributed $100 in personal savings to help the business get started. The company was renamed Fluor Bros. Construction Co. in 1903.
In 1912 John Fluor moved to Santa Ana, California for health reasons without his brothers and founded Fluor Corporation out of his garage under the name Fluor Construction Company. By 1924 the business had annual revenues of $100,000 ($
|
https://en.wikipedia.org/wiki/List%20of%20engineering%20schools
|
Engineering schools provide engineering education at the higher education level, encompassing both undergraduate and graduate study. Schools which provide such education are typically part of a university, institute of technology, or polytechnic institute. Such scholastic divisions for engineering are generally referred to by several different names, the most common being College of Engineering or School of Engineering, and typically consist of several departments, each of which has its own faculty and teaches a certain branch of engineering. Students frequently specialize in specific branches of engineering, such as mechanical engineering, electrical engineering, chemical engineering, or civil engineering, among others.
Bangladesh
Belgium
Canada
Engineering Schools in Canada are accredited by the Canadian Engineering Accreditation Board and their provincial professional association partners.
France
Grandes écoles d'ingénieurs
India
Indian Institutes of Technology - 23 institutes
National Institutes of Technology - 31 institutes
Indian Institutes of Information Technology - 25 institutes
Italy
Polytechnic University of Milan
Polytechnic University of Turin
Sapienza University of Rome
Second University of Naples
University of Bologna
University of Catania
University of Naples Federico II
University of Padua
University of Palermo
University of Pisa
University of Salento
Kenya
University of Nairobi- College of Architecture and Engineering
Jomo Kenyatta University of Agriculture and Technology (JKUAT) - College of Engineering and Technology (COETEC)
Malaysia
Morocco
Nepal
Institute of Engineering - 4 institutes
Philippines
Russia
Moscow Aviation Institute (National Research University)
Bauman Moscow State Technical University
Far Eastern Federal University - Engineering School
Irkutsk State Technical University
Military Engineering-Technical University
MISA National University of Science and Technology or MISiS, Moscow Institute of
|
https://en.wikipedia.org/wiki/IBM%20Spectrum%20LSF
|
IBM Spectrum LSF (LSF, originally Platform Load Sharing Facility) is a workload management platform and job scheduler for distributed high performance computing (HPC), developed by IBM.
Details
It can be used to execute batch jobs on networked Unix and Windows systems on many different architectures. LSF was based on the Utopia research project at the University of Toronto.
In 2007, Platform released Platform Lava, a simplified version of LSF based on an older LSF release, licensed under the GNU General Public License v2. The project was discontinued in 2011 and succeeded by OpenLava.
In January, 2012, Platform Computing was acquired by IBM. The product is now called IBM Spectrum LSF.
IBM Spectrum LSF Community Edition is a no-charge community edition of the IBM Spectrum LSF workload management platform.
References
See also
Sun Grid Engine
HTCondor
Job scheduling
Grid computing
|
https://en.wikipedia.org/wiki/National%20Association%20of%20Biology%20Teachers
|
The National Association of Biology Teachers (NABT) is an incorporated association of biology educators in the United States. It was initially founded in response to the poor understanding of biology and the decline in the teaching of the subject in the 1930s. It has grown to become a national representative organisation which promotes the teaching of biology, supports the learning of biology based on scientific principles and advocates for biology within American society. The National Conference and the journal, The American Biology Teacher, are two mechanisms used to achieve those goals.
The NABT has also been an advocate for the teaching of evolution in the debate about creation and evolution in public education in the United States, playing a role in a number of court cases and hearings throughout the country.
History
The NABT was formed in 1938 in New York City. The journal of the organisation (The American Biology Teacher) was created in the same year.
In 1944, Helen Trowbridge, the first female president, was elected. The Outstanding Teacher Awards were first presented in 1960 and the first independent National Convention was held in 1968.
The seventies marked an era of activism in the teaching of evolution with legal action against a state code amendment in Tennessee which required equal amounts of time to teach evolution and creationism.
In 1987 NABT helped develop the first National High School Biology test which established a list of nine core principles in the teaching of biology.
In the year 2005, NABT was involved in the Kitzmiller v. Dover Area School District case which established the principle that Intelligent Design had no place in the Science Curriculum.
2017 was the Year of the March for Science, which the NABT endorsed, and in 2018, it held its annual four-day conference in San Diego, California.
Purpose
The purpose of the NABT is to "empower educators to provide the best possible biology and life science education for all students". The org
|
https://en.wikipedia.org/wiki/Rational%20mapping
|
In mathematics, in particular the subfield of algebraic geometry, a rational map or rational mapping is a kind of partial function between algebraic varieties. This article uses the convention that varieties are irreducible.
Definition
Formal definition
Formally, a rational map f from a variety V to a variety W is an equivalence class of pairs (f_U, U) in which f_U is a morphism of varieties from a non-empty open set U of V to W, and two such pairs (f_U, U) and (f_U', U') are considered equivalent if f_U and f_U' coincide on the intersection U ∩ U' (this is, in particular, vacuously true if the intersection is empty, but since V is assumed irreducible, this is impossible). The proof that this defines an equivalence relation relies on the following lemma:
If two morphisms of varieties are equal on some non-empty open set, then they are equal.
A rational map f is said to be birational if there exists a rational map from W to V which is its inverse, where the composition is taken in the above sense.
The importance of rational maps to algebraic geometry is in the connection between such maps and maps between the function fields of V and W. Even a cursory examination of the definitions reveals a similarity between that of rational map and that of rational function; in fact, a rational function is just a rational map whose range is the projective line. Composition of functions then allows us to "pull back" rational functions along a rational map, so that a single rational map from V to W induces a homomorphism of fields K(W) → K(V). In particular, the following theorem is central: the functor from the category of projective varieties with dominant rational maps (over a fixed base field, for example the complex numbers) to the category of finitely generated field extensions of the base field with reverse inclusion of extensions as morphisms, which associates each variety to its function field and each map to the associated map of function fields, is an equivalence of categories.
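For concreteness, the pullback homomorphism mentioned above can be written out explicitly. The notation K(V), K(W) and f* is the usual one and is assumed here, since the article's own notation was lost in extraction:

```latex
% Pullback of rational functions along a dominant rational map f : V --> W.
% For h in K(W), the composite h \circ f is defined on a non-empty open
% subset of V, hence defines an element of K(V).
\[
  f^{*} : K(W) \longrightarrow K(V), \qquad f^{*}(h) = h \circ f .
\]
% f is birational exactly when f^{*} is an isomorphism of extensions of the base field.
```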
Examples
Rational maps of projective spaces
There is a rational map from P2 to P1 sending a ratio [x : y : z] to [x : y]. Since the point [0 : 0 : 1] cannot h
|
https://en.wikipedia.org/wiki/HEPnet
|
HEPnet or the High-Energy Physics Network is a telecommunications network for researchers in high-energy physics. It originated in the United States but has since spread to most places involved in such research. Well-known sites include Argonne National Laboratory, Brookhaven National Laboratory and Lawrence Berkeley National Laboratory.
See also
Energy Sciences Network
External links
HEPnet site
Computational particle physics
|
https://en.wikipedia.org/wiki/Complex%20dimension
|
In mathematics, complex dimension usually refers to the dimension of a complex manifold or a complex algebraic variety. These are spaces in which the local neighborhoods of points (or of non-singular points in the case of a variety) are modeled on a Cartesian product of the form C^d for some d, and the complex dimension is the exponent d in this product. Because C can in turn be modeled by R^2, a space with complex dimension d will have real dimension 2d. That is, a smooth manifold of complex dimension d has real dimension 2d; and a complex algebraic variety of complex dimension d, away from any singular point, will also be a smooth manifold of real dimension 2d.
However, for a real algebraic variety (that is a variety defined by equations with real coefficients), its dimension refers commonly to its complex dimension, and its real dimension refers to the maximum of the dimensions of the manifolds contained in the set of its real points. The real dimension is not greater than the dimension, and equals it if the variety is irreducible and has real points that are nonsingular.
For example, the equation x^2 + y^2 + z^2 = 0 defines a variety of (complex) dimension 2 (a surface), but of real dimension 0 — it has only one real point, (0, 0, 0), which is singular.
The same considerations apply to codimension. For example a smooth complex hypersurface in complex projective space of dimension n will be a manifold of dimension 2(n − 1). A complex hyperplane does not separate a complex projective space into two components, because it has real codimension 2.
References
Complex manifolds
Algebraic geometry
Dimension
|
https://en.wikipedia.org/wiki/Infraspecific%20name
|
In botany, an infraspecific name is the scientific name for any taxon below the rank of species, i.e. an infraspecific taxon or infraspecies. A "taxon", plural "taxa", is a group of organisms to be given a particular name. The scientific names of botanical taxa are regulated by the International Code of Nomenclature for algae, fungi, and plants (ICN). This specifies a three part name for infraspecific taxa, plus a connecting term to indicate the rank of the name. An example of such a name is Astrophytum myriostigma subvar. glabrum, the name of a subvariety of the species Astrophytum myriostigma (bishop's hat cactus).
Names below the rank of species of cultivated kinds of plants and of animals are regulated by different codes of nomenclature and are formed somewhat differently.
Construction of infraspecific names
Article 24 of the ICN describes how infraspecific names are constructed. The order of the three parts of an infraspecific name is:
genus name, specific epithet, connecting term indicating the rank (not part of the name, but required), infraspecific epithet.
It is customary to italicize all three parts of such a name, but not the connecting term. For example:
Acanthocalycium klimpelianum var. macranthum
genus name = Acanthocalycium, specific epithet = klimpelianum, connecting term = var. (short for "varietas" or variety), infraspecific epithet = macranthum
Astrophytum myriostigma subvar. glabrum
genus name = Astrophytum, specific epithet = myriostigma, connecting term = subvar. (short for "subvarietas" or subvariety), infraspecific epithet = glabrum
The recommended abbreviations for ranks below species are:
subspecies - recommended abbreviation: subsp. (but "ssp." is also in use although not recognised by Art 26)
varietas (variety) - recommended abbreviation: var.
subvarietas (subvariety) - recommended abbreviation: subvar.
forma (form) - recommended abbreviation: f.
subforma (subform) - recommended abbreviation: subf.
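The construction rules and abbreviations above can be expressed as a small formatting helper. The sketch below is purely illustrative; the function name and rank table are mine, not part of the ICN:

```python
# Sketch: compose a botanical infraspecific name from its parts, following the
# order genus, specific epithet, connecting term (rank abbreviation),
# infraspecific epithet, with the abbreviations listed above.
RANK_ABBREVIATIONS = {
    "subspecies": "subsp.",
    "varietas": "var.",
    "subvarietas": "subvar.",
    "forma": "f.",
    "subforma": "subf.",
}

def infraspecific_name(genus, specific_epithet, rank, infraspecific_epithet):
    """Return the plain-text form of an infraspecific name.

    The three name parts would normally be italicised; the connecting term
    (the rank abbreviation) would not.
    """
    try:
        connecting_term = RANK_ABBREVIATIONS[rank]
    except KeyError:
        raise ValueError(f"unknown infraspecific rank: {rank!r}")
    return f"{genus} {specific_epithet} {connecting_term} {infraspecific_epithet}"

# Example from the article:
print(infraspecific_name("Astrophytum", "myriostigma", "subvarietas", "glabrum"))
# -> Astrophytum myriostigma subvar. glabrum
```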
Although the connecting t
|
https://en.wikipedia.org/wiki/Noetherian%20scheme
|
In algebraic geometry, a noetherian scheme is a scheme that admits a finite covering by open affine subsets Spec A_i, with each A_i a noetherian ring. More generally, a scheme is locally noetherian if it is covered by spectra of noetherian rings. Thus, a scheme is noetherian if and only if it is locally noetherian and quasi-compact. As with noetherian rings, the concept is named after Emmy Noether.
It can be shown that, in a locally noetherian scheme, if U = Spec A is an open affine subset, then A is a noetherian ring. In particular, Spec A is a noetherian scheme if and only if A is a noetherian ring. Let X be a locally noetherian scheme. Then the local rings O_{X,x} are noetherian rings.
A noetherian scheme is a noetherian topological space. But the converse is false in general; consider, for example, the spectrum of a non-noetherian valuation ring.
The definitions extend to formal schemes.
Properties and Noetherian hypotheses
Having a (locally) noetherian hypothesis in a statement about schemes generally makes many problems more accessible, because it sufficiently rigidifies many properties of the schemes involved.
Dévissage
One of the most important structure theorems about Noetherian rings and Noetherian schemes is the dévissage theorem. This theorem makes it possible to decompose arguments about coherent sheaves into inductive arguments. This is because, given a short exact sequence of coherent sheaves 0 → E' → E → E'' → 0, proving one of the sheaves has some property is equivalent to proving the other two have the property. In particular, given a fixed coherent sheaf E and a coherent subsheaf E', showing E has some property can be reduced to looking at E' and E/E'. Since this process can only be applied a finite number of times in a non-trivial manner, this makes many induction arguments possible.
Number of irreducible components
Every Noetherian scheme can have only finitely many irreducible components.
Morphisms from Noetherian schemes are quasi-compact
Every morphism from a Noetherian scheme is quasi-compact.
Homological properties
There are ma
|
https://en.wikipedia.org/wiki/Poincar%C3%A9%E2%80%93Bendixson%20theorem
|
In mathematics, the Poincaré–Bendixson theorem is a statement about the long-term behaviour of orbits of continuous dynamical systems on the plane, cylinder, or two-sphere.
Theorem
Given a differentiable real dynamical system defined on an open subset of the plane, every non-empty compact ω-limit set of an orbit, which contains only finitely many fixed points, is either
a fixed point,
a periodic orbit, or
a connected set composed of a finite number of fixed points together with homoclinic and heteroclinic orbits connecting these.
Moreover, there is at most one orbit connecting different fixed points in the same direction. However, there could be countably many homoclinic orbits connecting one fixed point.
A weaker version of the theorem was originally conceived by Henri Poincaré, although he lacked a complete proof, which was later given by Ivar Bendixson.
Discussion
The condition that the dynamical system be on the plane is necessary for the theorem. On a torus, for example, it is possible to have a recurrent non-periodic orbit.
In particular, chaotic behaviour can only arise in continuous dynamical systems whose phase space has three or more dimensions. However the theorem does not apply to discrete dynamical systems, where chaotic behaviour can arise in two- or even one-dimensional systems.
Applications
One important implication is that a two-dimensional continuous dynamical system cannot give rise to a strange attractor. If a strange attractor C did exist in such a system, then it could be enclosed in a closed and bounded subset of the phase space. By making this subset small enough, any nearby stationary points could be excluded. But then the Poincaré–Bendixson theorem says that C is not a strange attractor at all—it is either a limit cycle or it converges to a limit cycle.
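A small numerical illustration of this behaviour: the planar system below is a standard textbook example (not taken from the article) whose bounded orbits, as the theorem requires, settle onto a limit cycle rather than a strange attractor.

```python
# Planar system dx/dt = x - y - x(x^2 + y^2), dy/dt = x + y - y(x^2 + y^2).
# In polar coordinates r' = r(1 - r^2), theta' = 1, so every orbit starting
# away from the origin spirals onto the limit cycle r = 1; chaotic behaviour
# is impossible here, consistent with the Poincaré–Bendixson theorem.
import math

def step(x, y, dt):
    dx = x - y - x * (x * x + y * y)
    dy = x + y - y * (x * x + y * y)
    return x + dt * dx, y + dt * dy

x, y = 2.0, 0.0                  # start well outside the limit cycle
for _ in range(200000):          # integrate for 200 time units (Euler steps)
    x, y = step(x, y, 1e-3)

print(round(math.hypot(x, y), 4))  # orbit radius, converges to 1.0
```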
See also
Rotation number
References
Theorems in dynamical systems
|
https://en.wikipedia.org/wiki/Evolutionarily%20significant%20unit
|
An evolutionarily significant unit (ESU) is a population of organisms that is considered distinct for purposes of conservation. Delineating ESUs is important when considering conservation action.
This term can apply to any species, subspecies, geographic race, or population. Often the term "species" is used rather than ESU, even when an ESU is more technically considered a subspecies or variety rather than a biological species proper. In marine animals the term "stock" is often used as well.
Definition
Definitions of an ESU generally include at least one of the following criteria:
Current geographic separation,
Genetic differentiation at neutral markers among related ESUs caused by past restriction of gene flow, or
Locally adapted phenotypic traits caused by differences in selection.
Criterion 2 considers the gene flow between populations, measured by FST. A high degree of differentiation between two populations among genes that provide no adaptive advantage to either population (known as neutral markers) implies a lack of gene flow, showing that random drift has occurred in isolation from other populations. Very few migrants per generation are needed to prevent strong differentiation of neutral markers. Even a single migrant per generation may be enough for neutral markers to show gene flow between populations, making it difficult to differentiate the populations through neutral markers.
Criterion 3 does not consider neutral genetic markers, instead looking at locally adapted traits of the population. Local adaptations may be present even with some gene flow from other populations, and even when there is little differentiation at neutral markers among ESUs. Reciprocal transplantation experiments are necessary to test for genetic differentiation for phenotypic traits, and differences in selection gradients across habitats. Such experiments are generally more difficult than the fixation index tests of criterion 2, and may be impossible for very rare or endange
|
https://en.wikipedia.org/wiki/Reference%20design
|
Reference design refers to a technical blueprint of a system that is intended for others to copy. It contains the essential elements of the system; however, third parties may enhance or modify the design as required. When discussing computer designs, the concept is generally known as a reference platform.
The main purpose of a reference design is to support companies in the development of next-generation products using the latest technologies. The reference product is a proof of the platform concept and is usually targeted at specific applications. Reference design packages enable a fast track to market, thereby cutting costs and reducing risk in the customer's integration project.
As the predominant customers for reference designs are OEMs, many reference designs are created by technology component vendors, whether hardware or software, as a means to increase the likelihood that their product will be designed into the OEM's product, giving them a competitive advantage.
Examples
NanoBook, a reference design of a miniature laptop
Open source hardware (also :Category:Open source hardware)
RONJA, a free and open telecommunication technology ("free Internet")
VIA OpenBook, a free and open reference design of a laptop
References
Electronics manufacturing
|
https://en.wikipedia.org/wiki/Imbibition
|
Imbibition is a special type of diffusion that takes place when a liquid is absorbed by solids or colloids, causing an increase in volume. The water moves along a water potential gradient, and such a gradient between the absorbent and the liquid is essential for imbibition; this is why some dry materials absorb water readily. For a substance to imbibe a liquid, there must also be some attraction between them. Imbibition occurs when a wetting fluid displaces a non-wetting fluid, the opposite of drainage, in which a non-wetting phase displaces the wetting fluid; the two processes are governed by different mechanisms. Imbibition is also a type of diffusion, since water movement is along a concentration gradient; seeds and other such materials contain almost no water, hence they absorb water easily.
Examples
One example of imbibition in nature is the absorption of water by hydrophilic colloids. Matrix potential contributes significantly to water in such substances. Dry seeds germinate in part by imbibition. Imbibition can also control circadian rhythms in Arabidopsis thaliana and (probably) other plants. The Amott test employs imbibition.
Proteins have high imbibition capacities, so proteinaceous pea seeds swell more than starchy wheat seeds.
Imbibition of water increases imbibant volume, which results in imbibitional pressure (IP). The magnitude of such pressure can be demonstrated by the splitting of rocks by inserting dry wooden stalks in their crevices and soaking them in water, a technique used by early Egyptians to cleave stone blocks.
Skin grafts (split thickness and full thickness) receive oxygenation and nutrition via imbibition, maintaining cellular viability until the processes of inosculation and revascularisation have re-established a new blood supply within these tissues.
Germination
Examples include the absorption of water by seeds and dry wood. If there is no pre
|
https://en.wikipedia.org/wiki/Radio%20pack
|
A radio pack is mainly used by musicians such as guitarists and singers for live performances. It is a small radio transmitter that is either placed in the strap or carried in a pocket. The receiver is connected to an amplifier or PA system, and the user simply plugs the transmitter into the instrument. By using a wireless system, musicians are free to move around the stage. This has meant that more elaborate stage shows are now possible, with musicians performing a long way from the amplifier or speakers.
As with any radio device, interference is possible, although modern systems are more stable. An example of a performer who has made extensive use of a radio pack is AC/DC guitarist Angus Young, known for his energetic stage antics.
See also
Schaffer–Vega diversity system - one wireless guitar system
Sound production technology
Radio technology
|
https://en.wikipedia.org/wiki/RF%20front%20end
|
In a radio receiver circuit, the RF front end, short for radio frequency front end, is a generic term for all the circuitry between a receiver's antenna input up to and including the mixer stage. It consists of all the components in the receiver that process the signal at the original incoming radio frequency (RF), before it is converted to a lower intermediate frequency (IF). In microwave and satellite receivers it is often called the low-noise block downconverter (LNB) and is often located at the antenna, so that the signal from the antenna can be transferred to the rest of the receiver at the more easily handled intermediate frequency.
Superheterodyne receiver
For most superheterodyne architectures, the RF front end consists of:
A band-pass filter (BPF) to reduce image response. This removes any signals at the image frequency, which would otherwise interfere with the desired signal. It also prevents strong out-of-band signals from saturating the input stages.
An RF amplifier, often called the low-noise amplifier (LNA). Its primary responsibility is to increase the sensitivity of the receiver by amplifying weak signals without contaminating them with noise, so that they can stay above the noise level in succeeding stages. It must have a very low noise figure (NF). The RF amplifier may not be needed and is often omitted (or switched off) for frequencies below 30 MHz, where the signal-to-noise ratio is defined by atmospheric and human-made noise.
A local oscillator (LO) which generates a radio frequency signal at an offset from the incoming signal, which is mixed with the incoming signal.
The mixer, which mixes the incoming signal with the signal from the local oscillator to convert the signal to the intermediate frequency (IF).
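The mixer's frequency conversion can be illustrated numerically. The sketch below uses example RF and LO frequencies chosen only for illustration; the two spectral peaks it finds are the difference and sum frequencies, the lower of which would be kept as the IF:

```python
# Mixing: cos(2*pi*f_rf*t) * cos(2*pi*f_lo*t)
#       = 0.5*cos(2*pi*(f_rf - f_lo)*t) + 0.5*cos(2*pi*(f_rf + f_lo)*t)
# The IF filter that follows the mixer keeps the difference term.
import numpy as np

f_rf, f_lo = 10.0e6, 9.0e6           # example RF and LO frequencies (Hz)
fs = 100.0e6                          # sample rate (Hz)
t = np.arange(0, 1e-3, 1 / fs)        # 1 ms of samples

mixed = np.cos(2 * np.pi * f_rf * t) * np.cos(2 * np.pi * f_lo * t)

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
peaks = freqs[np.argsort(spectrum)[-2:]]      # two strongest components
print(np.sort(peaks) / 1e6)                   # -> [ 1. 19.] MHz
```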
Digital receiver
In digital receivers, particularly those in wireless devices such as cell phones and Wifi receivers, the intermediate frequency is digitized; sampled and converted to a binary digital form, and the rest of t
|
https://en.wikipedia.org/wiki/Microsoft%20Virtual%20Server
|
Microsoft Virtual Server was a virtualization solution that facilitated the creation of virtual machines on the Windows XP, Windows Vista and Windows Server 2003 operating systems. Originally developed by Connectix, it was acquired by Microsoft prior to release. Virtual PC is Microsoft's related desktop virtualization software package.
Virtual machines are created and managed through a Web-based interface that relies on Internet Information Services (IIS) or through a Windows client application tool called VMRCplus.
The last version using this name was Microsoft Virtual Server 2005 R2 SP1. New features in R2 SP1 include Linux guest operating system support, Virtual Disk Precompactor, SMP (but not for the guest OS), x64 host operating system support, the ability to mount virtual hard drives on the host machine and additional operating systems support, including Windows Vista. It also provides a Volume Shadow Copy writer that enables live backups of the Guest OS on a Windows Server 2003 or Windows Server 2008 host. A utility to mount VHD images has also been included since SP1. Virtual Machine Additions for Linux are available as a free download. Officially supported Linux guest operating systems include Red Hat Enterprise Linux versions 2.1-5.0, Red Hat Linux 9.0, SUSE Linux and SUSE Linux Enterprise Server versions 9 and 10.
Virtual Server has been discontinued and replaced by Hyper-V.
Differences from Virtual PC
VPC has multimedia support and Virtual Server does not (e.g. no sound driver support).
VPC uses a single thread whereas Virtual Server is multi-threaded.
VPC will install on Windows 7, but Virtual Server is restricted from installing on NT 6.1 or higher operating systems, i.e. Server 2008 R2 and Windows 7.
VPC is limited to a 127 GB .vhd (per the IDE CHS specification); however, Virtual Server can be made to access a .vhd of up to 2048 GB (the NTFS maximum file size).
Version history
Microsoft acquired an unreleased Virtual Server from Connectix in February 2003.
The initia
|
https://en.wikipedia.org/wiki/Bridge%20tap
|
Bridged tap or bridge tap is a long-used method of cabling for telephone lines. One cable pair (of wires) will "appear" in several different terminal locations (poles or pedestals). This allows the telephone company to use or "assign" that pair to any subscriber near those terminal locations. Once that customer disconnects, that pair becomes usable at any of the terminals. In the days of party lines, 2, 4, 6, or 8 users were commonly connected on the same pair which appeared at several different locations.
A bridge tap has no hybrid coil or other impedance matching components, just a “T” (or branch) in the cable. Thus the bridge presents an impedance mismatch. The unused branch of the T is usually left with no device connected to its end, thus has no electrical termination. Both the tap and its unterminated branch cause unwanted signal reflections, also called echoes.
Digital subscriber lines (DSL) can be affected by a bridged tap, depending on where the tap is bridged. DSL signals reflect from the discontinuities, sending the signal back through the cable pair, much like a tennis ball against a brick wall. The echoed signal is now out of phase and mixed with the original, creating, among other impairments, attenuation distortion. The modem receives both signals, gets confused and "takes errors" or cannot sync. If the bridged tap is long, the signal bounces back only in very attenuated form. Therefore, the modem will ignore the weaker signal and show no problem.
A line with a bridged tap can perform considerably worse because of the added attenuation and the extra length of the unused branch of cable.
A bridged tap does not add latency, but it does degrade the performance of the line in general, and it can reduce the usable bandwidth considerably.
A bridge tap can also be referred to as a "multiple" or a telephone pair "in multiple".
References
External links
Local loop
Signal cables
|
https://en.wikipedia.org/wiki/Western%20Russian%20fortresses
|
The Western Russian fortresses are a system of fortifications built by the Russian Empire in Eastern Europe in the early 19th century. The fortifications were constructed in three chains at strategic locations along Russia's western border, primarily to combat the threat of Prussia (later Germany) and Austria-Hungary, and to establish Russian rule in new western territories. By the late 19th century the fortifications were obsolete and the system became defunct by the collapse of the Russian Empire in 1917.
1830 Polish threat
During 1830–1831, the Russian Empire under the rule of Tsar Nicholas I crushed the November Uprising, a Polish revolt against Russian authority over the Kingdom of Poland, at the time Russia's westernmost territory that shared borders with other powerful European empires such as Austria-Hungary and Prussia. The Kingdom of Poland, which until then maintained a large degree of autonomy, had its constitution abolished and was placed under the direct rule of Russia. To maintain secure control over the lands and to suppress any future revolts that might occur here, Nicholas I assigned his prominent military engineers to design a reliable system of fortifications in this part of Europe. The endorsed project included construction of new fortifications and reconstruction of the old fortresses within 10 to 15 years.
Construction and development
The project included three lines of fortresses:
The first line, called the Defense line of the Kingdom of Poland, crossed Poland north-south, consisting of the Modlin fortress, the Warsaw Citadel, and the fortress in Ivangorod (presently Dęblin).
The second line along the Bug River included Brest-Litovsk fortress.
The third line ran north–south to the east of the first one, across present-day Latvia, Belarus and Ukraine; it consisted of the Dinaburg fortress in Dvinsk (now Daugavpils), the Babruysk fortress, and the Kyiv fortress.
The extensive size of the Russian system led to high costs of construc
|
https://en.wikipedia.org/wiki/Infraparticle
|
An infraparticle is an electrically charged particle and its surrounding cloud of soft photons—of which there are an infinite number, by virtue of the infrared divergence of quantum electrodynamics. That is, it is a dressed particle rather than a bare particle. Whenever electric charges accelerate they emit bremsstrahlung radiation, whereby an infinite number of the virtual soft photons become real particles. However, only a finite number of these photons are detectable, the remainder falling below the measurement threshold.
The form of the electric field at infinity, which is determined by the velocity of a point charge, defines superselection sectors for the particle's Hilbert space. This is unlike the usual Fock space description, where the Hilbert space includes particle states with different velocities.
Because of their infraparticle properties, charged particles do not have a sharp delta function density of states like an ordinary particle; instead the density of states rises like an inverse power at the mass of the particle. This collection of states, which are very close in mass to m, consists of the particle together with low-energy excitations of the electromagnetic field.
Noether's theorem for gauge transformations
In electrodynamics and quantum electrodynamics, in addition to the global U(1) symmetry related to the electric charge, there are also position dependent gauge transformations. Noether's theorem states that for every infinitesimal symmetry transformation that is local (local in the sense that the transformed value of a field at a given point only depends on the field configuration in an arbitrarily small neighborhood of that point), there is a corresponding conserved charge called the Noether charge, which is the space integral of a Noether density (assuming the integral converges and there is a Noether current satisfying the continuity equation).
If this is applied to the global U(1) symmetry, the result
Q = ∫ d^3x ρ(x) (over all of space)
is the conserved charge, where ρ is the charge density
|
https://en.wikipedia.org/wiki/Platform%20Computing
|
Platform Computing was a privately held software company primarily known for its job scheduling product, Load Sharing Facility (LSF). It was founded in 1992 in Toronto, Ontario, Canada and headquartered in Markham, Ontario with 11 branch offices across the United States, Europe and Asia.
In January 2012, Platform Computing was acquired by IBM.
History
Platform Computing was founded by Songnian Zhou, Jingwen Wang, and Bing Wu in 1992. Its first product, LSF, was based on the Utopia research project at the University of Toronto. The LSF software was developed partially with funding from CANARIE (Canadian Advanced Network and Research for Industry and Education).
Platform's revenue was approximately $300,000 in 1993, and reached $12 million in 1997. Revenue grew by 34% year over year to US$46.2 million in 2001, and reached US$50 million in 2003.
In 1999, Platform announced the SiteAssure suite to address the website availability and monitoring market.
On October 29, 2007, Platform Computing acquired the Scali Manage business from Norway-based Scali AS; Scali Manage was cluster management software. On August 1, 2008, Platform acquired the rest of the Scali business, taking on its implementation of the industry-standard Message Passing Interface (MPI), Scali MPI, and rebranding it Platform MPI.
On June 22, 2009, Platform Computing announced its first software to serve the cloud computing space. Platform ISF (Infrastructure Sharing Facility) enables organizations to set up and manage private clouds, controlling both physical and virtual resources.
In August 2009, Platform acquired HP-MPI from Hewlett-Packard.
In January 2012, Platform Computing was acquired by IBM.
Open-source participation
Platform joined the Hadoop project in 2011, and is focused on enhancing the Hadoop Distributed File System
Platform Lava - based on Platform LSF, licensed under GPLv2. The Lava scheduler is part of Red Hat HPC. Discontinued in 2011.
OpenLava - successor to Platform Lava.
Platform FTA - File Transfer Agent for HPC
|
https://en.wikipedia.org/wiki/Stream%20processing
|
In computer science, stream processing (also known as event stream processing, data stream processing, or distributed stream processing) is a programming paradigm which views streams, or sequences of events in time, as the central input and output objects of computation. Stream processing encompasses dataflow programming, reactive programming, and distributed data processing. Stream processing systems aim to expose parallel processing for data streams and rely on streaming algorithms for efficient implementation. The software stack for these systems includes components such as programming models and query languages, for expressing computation; stream management systems, for distribution and scheduling; and hardware components for acceleration including floating-point units, graphics processing units, and field-programmable gate arrays.
The stream processing paradigm simplifies parallel software and hardware by restricting the parallel computation that can be performed. Given a sequence of data (a stream), a series of operations (kernel functions) is applied to each element in the stream. Kernel functions are usually pipelined, and optimal local on-chip memory reuse is attempted, in order to minimize the loss in bandwidth, associated with external memory interaction. Uniform streaming, where one kernel function is applied to all elements in the stream, is typical. Since the kernel and stream abstractions expose data dependencies, compiler tools can fully automate and optimize on-chip management tasks. Stream processing hardware can use scoreboarding, for example, to initiate a direct memory access (DMA) when dependencies become known. The elimination of manual DMA management reduces software complexity, and an associated elimination for hardware cached I/O, reduces the data area expanse that has to be involved with service by specialized computational units such as arithmetic logic units.
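A minimal sketch of uniform streaming in ordinary Python, using generator pipelines to stand in for a real stream processing system (the kernel names and data are made up for illustration):

```python
# Uniform streaming: a kernel function is applied to every element of the
# stream. Generators keep the pipeline lazy, so each element flows through
# all kernels without the whole stream being materialised at once.
def kernel_scale(stream, factor):
    for x in stream:
        yield x * factor

def kernel_clip(stream, lo, hi):
    for x in stream:
        yield min(max(x, lo), hi)

def source(n):
    for i in range(n):
        yield float(i)

pipeline = kernel_clip(kernel_scale(source(10), 3.0), lo=0.0, hi=20.0)
print(list(pipeline))   # [0.0, 3.0, 6.0, 9.0, 12.0, 15.0, 18.0, 20.0, 20.0, 20.0]
```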
During the 1980s stream processing was explored within dataflow programming.
|
https://en.wikipedia.org/wiki/Use-case%20analysis
|
Use case analysis is a technique used to identify the requirements of a system (normally associated with software/process design) and the information used to define both the processes and the classes (which are collections of actors and processes) that will be used in the use case diagram and in the overall use case during the development or redesign of a software system or program. The use case analysis is the foundation upon which the system will be built.
Background
A use case analysis is the primary form for gathering usage requirements for a new software program or task to be completed. The primary goals of a use case analysis are: designing a system from the user's perspective, communicating system behavior in the user's terms, and specifying all externally visible behaviors. Another set of goals for a use case analysis is to clearly communicate: system requirements, how the system is to be used, the roles the user plays in the system, what the system does in response to the user stimulus, what the user receives from the system, and what value the customer or user will receive from the system.
Process
There are several steps involved in a use-case analysis.
Realization
A Use-case realization describes how a particular use case was realized within the design model, in terms of collaborating objects.
The Realization step sets up the framework within which an emerging system is analyzed. This is where the first, most general outline of what is required by the system is documented. This entails a rough breakdown of the processes, actors, and data required for the system. These are what comprise the classes of the analysis.
Description
Once the general outline is completed, the next step is to describe the behavior of the system visible to the potential user of the system. While internal behaviors can be described as well, this is more related to designing a system rather than gathering requirements for it. The benefit of briefly describing internal behaviors woul
|
https://en.wikipedia.org/wiki/Reciprocating%20motion
|
Reciprocating motion, also called reciprocation, is a repetitive up-and-down or back-and-forth linear motion. It is found in a wide range of mechanisms, including reciprocating engines and pumps. The two opposite motions that comprise a single reciprocation cycle are called strokes.
A crank can be used to convert circular motion into reciprocating motion, or conversely turn reciprocating motion into circular motion.
For example, inside an internal combustion engine (a type of reciprocating engine), the expansion of burning fuel in the cylinders periodically pushes the piston down, which, through the connecting rod, turns the crankshaft. The continuing rotation of the crankshaft drives the piston back up, ready for the next cycle. The piston moves in a reciprocating motion, which is converted into circular motion of the crankshaft, which ultimately propels the vehicle or does other useful work.
The reciprocating motion of a pump piston is close to, but different from, sinusoidal simple harmonic motion. Assuming the wheel is driven at a perfectly constant rotational velocity, the point on the crankshaft which connects to the connecting rod rotates smoothly at a constant velocity in a circle. Thus, the displacement of that point is indeed exactly sinusoidal by definition. However, during the cycle, the angle of the connecting rod changes continuously, so the horizontal displacement of the "far" end of the connecting rod (i.e., the end connected to the piston) differs slightly from sinusoidal. Circumstances where the wheel is not spinning with perfectly constant rotational velocity, such as a steam locomotive starting up from a stop, are very much not sinusoidal.
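The deviation from a pure sinusoid described above can be computed directly from the slider-crank geometry. The sketch below uses example crank and rod dimensions chosen only for illustration:

```python
# Piston position along the cylinder axis for crank radius r, connecting-rod
# length l and crank angle theta (measured from top dead centre):
#   x(theta) = r*cos(theta) + sqrt(l**2 - (r*sin(theta))**2)
# A pure sinusoid of the same stroke would be r*cos(theta) + l.
import math

r, l = 0.05, 0.15          # 50 mm crank radius, 150 mm rod (example values)

def piston_position(theta):
    return r * math.cos(theta) + math.sqrt(l**2 - (r * math.sin(theta))**2)

def sinusoidal_approximation(theta):
    return r * math.cos(theta) + l

max_dev = max(
    abs(piston_position(t) - sinusoidal_approximation(t))
    for t in (i * 2 * math.pi / 1000 for i in range(1000))
)
print(f"maximum deviation from sinusoid: {max_dev * 1000:.2f} mm")  # about 8.6 mm
```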
See also
References
Mechanical engineering
|
https://en.wikipedia.org/wiki/Software%20build
|
In software development, a build is the process of converting source code files into standalone software artifact(s) that can be run on a computer, or the result of doing so.
Functions
Building software is an end-to-end process that involves many distinct functions. Some of these functions are described below.
Version control
The version control function carries out activities such as workspace creation and updating, baselining and reporting. It creates an environment for the build process to run in and captures metadata about the inputs and output of the build process to ensure repeatability and reliability.
Tools such as Git, AccuRev or StarTeam help with these tasks by offering tools to tag specific points in history as being important, and more.
Code quality
Also known as static program analysis or static code analysis, this function is responsible for checking that developers have adhered to the seven axes of code quality: comments, unit tests, duplication, complexity, coding rules, potential bugs, and architecture & design.
Ensuring a project has high-quality code results in fewer bugs and influences nonfunctional requirements such as maintainability, extensibility and readability; which have a direct impact on the ROI for a business.
Compilation
This is only a small feature of managing the build process. The compilation function turns source files into directly executable or intermediate objects. Not every project will require this function.
While for simple programs the process consists of a single file being compiled, for complex software the source code may consist of many files and may be combined in different ways to produce many different versions.
Build tools
The process of building a computer program is usually managed by a build tool, a program that coordinates and controls other programs. Examples of such a program are make, Gradle, Meister by OpenMake Software, Ant, Maven, Rake, SCons and Phing. The build utility typically needs to com
|
https://en.wikipedia.org/wiki/Subscriber%20loop%20carrier
|
A subscriber loop carrier or subscriber line carrier (SLC) provides telephone exchange-like telephone interface functionality. SLC remote terminals are typically located in areas with a high density of telephone subscribers, such as a residential neighborhood, or very rural areas with widely dispersed customers, that are remote from the telephone company's central office (CO). Two or four T1 circuits (depending on the configuration) connect the SLC remote terminal to the central office terminal (COT), in the case of a universal subscriber loop carrier (USLC). An integrated subscriber loop carrier (ISLC) has its T-spans terminating directly in time division switching equipment in the telephone exchange.
One system serves up to 96 customers. This configuration is more efficient than the alternative of having separate copper pairs between each service termination point (the subscriber's location) and the central telephone exchange.
These systems are generally installed in cabinets that have some form of uninterruptible power supply or other backup battery arrangements, standby generators, and sometimes with additional equipment such as remote DSLAMs.
Reliability
SLCs have been criticized for reducing the reliability of local loops due to their increased reliance on utility power. Historically, all loop power was provided by the CO and was backed up by battery power and, for longer power outages, stand-by diesel generators housed at the office. However, telephone companies have increasingly been using SLCs, which are notorious for poorly functioning or short-lived battery backup systems, some lasting as little as four hours. Many do not have on-site standby generators, which requires the telephone company to bring out a portable generator before the battery power fails. This may not happen in time if there are obstructions caused by a natural or man-made disaster, causing service outages for anyone served by that unit. Often, the air conditioning units, sump p
|
https://en.wikipedia.org/wiki/Host%20model
|
In computer networking, a host model is an option of designing the TCP/IP stack of a networking operating system like Microsoft Windows or Linux. When a unicast packet arrives at a host, IP must determine whether the packet is locally destined (its destination matches an address that is assigned to an interface of the host). If the IP stack is implemented with a weak host model, it accepts any locally destined packet regardless of the network interface on which the packet was received. If the IP stack is implemented with a strong host model, it only accepts locally destined packets if the destination IP address in the packet matches an IP address assigned to the network interface on which the packet was received.
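The acceptance decision under the two models can be sketched in a few lines. The function and data structures below are illustrative only, not an actual network-stack implementation:

```python
# Decide whether a locally destined unicast packet is accepted, given the
# interface it arrived on and the addresses assigned to each local interface.
def accepts_packet(dest_ip, arrival_iface, iface_addrs, strong_host_model):
    """iface_addrs maps an interface name to the set of IP addresses assigned to it."""
    if strong_host_model:
        # Strong host model: the destination must match an address assigned
        # to the interface the packet actually arrived on.
        return dest_ip in iface_addrs.get(arrival_iface, set())
    # Weak host model: any address assigned to any local interface will do.
    return any(dest_ip in addrs for addrs in iface_addrs.values())

addrs = {"eth0": {"192.0.2.10"}, "eth1": {"198.51.100.7"}}
print(accepts_packet("198.51.100.7", "eth0", addrs, strong_host_model=False))  # True
print(accepts_packet("198.51.100.7", "eth0", addrs, strong_host_model=True))   # False
```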
The weak host model provides better network connectivity (for example, it can be easy to find any packet arriving at the host using ordinary tools), but it also makes hosts susceptible to multihome-based network attacks. For example, in some configurations when a system running a weak host model is connected to a VPN, other systems on the same subnet can compromise the security of the VPN connection. Systems running the strong host model are not susceptible to this type of attack.
The IPv4 implementation in Microsoft Windows versions prior to Windows Vista uses the weak host model. The Windows Vista and Windows Server 2008 TCP/IP stack supports the strong host model for both IPv4 and IPv6 and is configured to use it by default. However, it can also be configured to use a weak host model.
The IPv4 implementation in Linux defaults to the weak host model. Source validation by reverse path, as specified in RFC 1812, can be enabled (the rp_filter option), and some distributions do so by default. This is not quite the same as the strong host model, but it defends against the same class of attacks for typical multihomed hosts. The arp_ignore and arp_announce options can also be used to tweak this behaviour.
Modern BSDs (FreeBSD, NetBSD, OpenBSD, and DragonflyBSD) all defau
|
https://en.wikipedia.org/wiki/Raymond%20Paley
|
Raymond Edward Alan Christopher Paley (7 January 1907 – 7 April 1933) was an English mathematician who made significant contributions to mathematical analysis before dying young in a skiing accident.
Life
Paley was born in Bournemouth, England, the son of an artillery officer who died of tuberculosis before Paley was born. He was educated at Eton College as a King's Scholar and at Trinity College, Cambridge. He became a wrangler in 1928, and with J. A. Todd, he was one of two winners of the 1930 Smith's Prize examination.
He was elected a Research Fellow of Trinity College in 1930, edging out Todd for the position, and continued at Cambridge as a postgraduate student, advised by John Edensor Littlewood. After the 1931 return of G. H. Hardy to Cambridge he participated in weekly joint seminars with the other students of Hardy and Littlewood. He traveled to the US in 1932 to work with Norbert Wiener at the Massachusetts Institute of Technology and with George Pólya at Princeton University, and as part of the same trip also planned to work with Lipót Fejér at a seminar in Chicago organized as part of the Century of Progress exposition.
He was killed on 7 April 1933 on a skiing trip to the Canadian Rockies, by an avalanche on Deception Pass.
Contributions
Paley's contributions include the following.
His mathematical research with Littlewood began in 1929, with his work towards a fellowship at Trinity, and Hardy writes that "Littlewood's influence dominates nearly all his earliest work". Their work became the foundation for Littlewood–Paley theory, an application of real-variable techniques in complex analysis.
The Walsh–Paley numeration, a standard method for indexing the Walsh functions, came from a 1932 suggestion of Paley.
Paley collaborated with Antoni Zygmund on Fourier series, continuing the work on this topic that he had already done with Littlewood. His work in this area also led to the Paley–Zygmund inequality in probability theory.
In a 1933 paper, he pub
|
https://en.wikipedia.org/wiki/Schwartz%E2%80%93Zippel%20lemma
|
In mathematics, the Schwartz–Zippel lemma (also called the DeMillo–Lipton–Schwartz–Zippel lemma) is a tool commonly used in probabilistic polynomial identity testing, i.e. in the problem of determining whether a given multivariate polynomial is the 0-polynomial (or identically equal to 0). It was discovered independently by Jack Schwartz, Richard Zippel, and Richard DeMillo and Richard J. Lipton, although DeMillo and Lipton's version was shown a year prior to Schwartz and Zippel's result. The finite field version of this bound was proved by Øystein Ore in 1922.
Statement and proof of the lemma
Theorem 1 (Schwartz, Zippel). Let
P ∈ R[x_1, x_2, ..., x_n]
be a non-zero polynomial of total degree d ≥ 0 over an integral domain R. Let S be a finite subset of R and let r_1, r_2, ..., r_n be selected at random independently and uniformly from S. Then
Pr[P(r_1, r_2, ..., r_n) = 0] ≤ d/|S|.
Equivalently, the lemma states that for any finite subset S of R, if Z(P) is the zero set of P, then
|Z(P) ∩ S^n| ≤ d · |S|^(n−1).
Proof. The proof is by mathematical induction on n. For n = 1, as was mentioned before, P can have at most d roots. This gives us the base case.
Now, assume that the theorem holds for all polynomials in n − 1 variables. We can then consider P to be a polynomial in x_1 by writing it as
P(x_1, ..., x_n) = Σ_{i=0}^{d} x_1^i P_i(x_2, ..., x_n).
Since P is not identically 0, there is some i such that P_i is not identically 0. Take the largest such i. Then deg P_i ≤ d − i, since the degree of x_1^i P_i is at most d.
Now we randomly pick r_2, ..., r_n from S. By the induction hypothesis, Pr[P_i(r_2, ..., r_n) = 0] ≤ (d − i)/|S|.
If P_i(r_2, ..., r_n) ≠ 0, then P(x_1, r_2, ..., r_n) is of degree i (and thus not identically zero), so Pr[P(r_1, r_2, ..., r_n) = 0 | P_i(r_2, ..., r_n) ≠ 0] ≤ i/|S|.
If we denote the event P(r_1, r_2, ..., r_n) = 0 by A, the event P_i(r_2, ..., r_n) = 0 by B, and the complement of B by B^c, we have
Pr[A] = Pr[A ∩ B] + Pr[A ∩ B^c] ≤ Pr[B] + Pr[A | B^c] ≤ (d − i)/|S| + i/|S| = d/|S|.
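The lemma translates directly into a randomized identity-testing routine. The sketch below is illustrative only: polynomials are represented as plain Python callables, and the sample set S is taken to be a large range of integers.

```python
# Randomized polynomial identity test based on the Schwartz–Zippel lemma.
import random

def probably_zero(p, n_vars, degree, trials=20, sample_size=10**9):
    """Return True if every random evaluation of p is zero.

    By the lemma, a non-zero polynomial of total degree `degree` evaluates to
    zero at a uniformly random point of S**n_vars (|S| = sample_size) with
    probability at most degree / sample_size, so the chance of a false
    positive after `trials` independent rounds is at most
    (degree / sample_size) ** trials.
    """
    for _ in range(trials):
        point = [random.randrange(sample_size) for _ in range(n_vars)]
        if p(*point) != 0:
            return False          # certainly not the zero polynomial
    return True                   # identically zero with high probability

# Example: is (x + y)**2 - (x**2 + 2*x*y + y**2) identically zero?
p = lambda x, y: (x + y) ** 2 - (x * x + 2 * x * y + y * y)
print(probably_zero(p, n_vars=2, degree=2))   # True
```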
Applications
The importance of the Schwartz–Zippel theorem, and of polynomial identity testing, follows from the algorithms obtained for problems that can be reduced to the problem of polynomial identity testing.
Zero testing
For example, is a given product of polynomials identically equal to 0?
To solve this, we can multiply it out and check that all the coefficients are 0. However, this takes exponential time. In general, a polynomial can be algebraically represented by an arithmetic
|
https://en.wikipedia.org/wiki/Roof%20pitch
|
Roof pitch is the steepness of a roof expressed as a ratio of inch(es) rise per horizontal foot (or their metric equivalent), or as the angle in degrees its surface deviates from the horizontal. A flat roof has a pitch of zero in either instance; all other roofs are pitched.
A roof that rises 3 inches per foot, for example, would be described as having a pitch of 3 (or “3 in 12”).
Description
The pitch of a roof is its vertical 'rise' over its horizontal 'run’ (i.e. its span), also known as its 'slope'.
In the imperial measurement systems, "pitch" is usually expressed with the rise first and run second (in the US, run is held to number 12; e.g., 3:12, 4:12, 5:12). In metric systems either the angle in degrees or rise per unit of run, expressed as a '1 in _' slope (where a '1 in 1' equals 45°) is used. Where convenient, the least common multiple is used (e.g., a '3 in 4' slope, for a '9 in 12' or '1 in 1 1/3').
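The conversion between the rise-per-run form and the angle form is a single arctangent. A short sketch (the function names are my own):

```python
# Convert a rise-in-run roof pitch to an angle in degrees and back.
import math

def pitch_to_degrees(rise, run=12):
    return math.degrees(math.atan2(rise, run))

def degrees_to_rise(angle_deg, run=12):
    return math.tan(math.radians(angle_deg)) * run

print(round(pitch_to_degrees(3), 1))    # 3 in 12  -> 14.0 degrees
print(round(pitch_to_degrees(6), 1))    # 6 in 12  -> 26.6 degrees
print(round(pitch_to_degrees(12), 1))   # 12 in 12 -> 45.0 degrees
```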
Selection
Considerations involved in selecting a roof pitch include availability and cost of materials, aesthetics, ease or difficulty of construction, climatic factors such as wind and potential snow load, and local building codes.
The primary purpose of pitching a roof is to redirect wind and precipitation, whether in the form of rain or snow. Thus, pitch is typically greater in areas of high rain or snowfall, lower in areas of high wind. The steep roof of the tropical Papua New Guinea longhouse, for example, sweeps almost to the ground. The high, steeply-pitched gabled roofs of Northern Europe are typical in regions of heavy snowfall. In some areas building codes require a minimum slope. Buffalo, New York and Montreal, Quebec, Canada, specify 6 in 12, a pitch of approximately 26.6 degrees.
A flat roof includes pitches as low as 1/2:12 to 2:12 (1 in 24 to 1 in 6), which are barely capable of properly shedding water. Such low-slope roofs (up to 4:12 (1 in 3)) require special materials and techniques to avoid leaks. Conventional describes pitches fro
|
https://en.wikipedia.org/wiki/Anonymous%20matching
|
Anonymous matching is a matchmaking method facilitated by computer databases, in which each user confidentially selects people they are interested in dating and the computer identifies and reports matches to pairs of users who share a mutual attraction. Protocols for anonymous matchmaking date back to the 1980s, and one of the earliest papers on the topic is by Baldwin and Gramlich, published in 1985. From a technical perspective, the problem and solution are trivial and likely predate even this paper. The problem becomes interesting and requires more sophisticated cryptography when the matchmaker (central server) isn't trusted.
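The mutual-selection rule at the heart of the method can be sketched in a few lines, assuming a trusted central matchmaker (the function and data below are illustrative, not taken from any cited implementation):

```python
# Anonymous matching with a trusted matchmaker: each user privately submits
# the set of people they are interested in; only mutual selections are
# revealed, and only to the two users involved.
def mutual_matches(selections):
    """selections maps each user to the set of users they selected."""
    matches = set()
    for user, chosen in selections.items():
        for other in chosen:
            if user != other and user in selections.get(other, set()):
                matches.add(frozenset((user, other)))
    return matches

selections = {
    "alice": {"bob", "carol"},
    "bob": {"alice"},
    "carol": {"dave"},
    "dave": set(),
}
print(mutual_matches(selections))   # {frozenset({'alice', 'bob'})}
```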
The purpose of the protocol is to allow people to initiate romantic relationships while avoiding the risk of embarrassment, awkwardness, and other negative consequences associated with unwanted romantic overtures and rejection. The general concept was patented on September 7, 1999, by David J. Blumberg and DoYouDo chief executive officer Gil S. Sudai, but several websites were already employing the methodology by that date, and thus apparently were allowed to continue using it. United States Patent 5,950,200 points out several potential flaws in traditional courtship and in conventional dating systems in which strangers meet online, promoting anonymous matching of friends and acquaintances as a better alternative:
Implementations
Some of the most notable implementations of the idea have been:
Baldwin and Gramlich, as cited above.
eCRUSH, launched on Valentine's Day 1999, is the most successful implementation of the concept. Targeted at the teen market, it has more than 1.6 million users and claims more than 600,000 legitimate matches.
DoYOU2.com. The website's owner, DoYouDo, Inc., was incorporated 23 September 1999 and acquired by MatchNet in September 2000 in exchange for stock valued at $1,820,000. According to MatchNet's 2003 annual report, "The acquisition was made primarily for the purpose of acquiring the patent on this business mo
|
https://en.wikipedia.org/wiki/Rustle%20noise
|
Rustle noise is noise consisting of aperiodic pulses characterized by the average time between those pulses (such as the mean time interval between clicks of a Geiger counter), known as rustle time (Schouten ?). Rustle time is determined by the fineness of sand, seeds, or shot in rattles, contributes heavily to the sound of sizzle cymbals, drum snares, drum rolls, and string drums, and makes subtle differences in string instrument sounds. Rustle time in strings is affected by different weights and widths of bows and by types of hair and rosin in strings. The concept is also applicable to flutter-tonguing, brass and woodwind growls, resonated vocal fry in woodwinds, and eructation sounds in some woodwinds. Robert Erickson suggests the exploration of accelerando-ritardando scales producible on some acoustic instruments and further variations in rustle noise "because this apparently minor aspect of musical sounds has a disproportionately large importance for higher levels--textures, ensemble timbres, [and] contrasts between music events." (Erickson 1975, p. 71-72)
Sources
Erickson, Robert (1975). Sound Structure in Music. University of California Press.
Timbre
Noise (electronics)
|
https://en.wikipedia.org/wiki/Trusted%20Computing%20Group
|
The Trusted Computing Group is a group formed in 2003 as the successor to the Trusted Computing Platform Alliance which was previously formed in 1999 to implement Trusted Computing concepts across personal computers. Members include Intel, AMD, IBM, Microsoft, and Cisco.
The core idea of trusted computing is to give hardware manufacturers control over what software does and does not run on a system by refusing to run unsigned software.
History
On October 11, 1999, the Trusted Computing Platform Alliance (abbreviated as TCPA), a consortium of various technology companies including Compaq, Hewlett-Packard, IBM, Intel, and Microsoft, was formed in an effort to promote trust and security in the personal computing platform. In November 1999, the TCPA announced that over 70 leading hardware and software companies had joined the alliance in the first month. On January 30, 2001, version 1.0 of the Trusted Computing Platform Specifications was released. IBM was the first original equipment manufacturer to incorporate hardware features based on the specifications with the introduction of its ThinkPad T30 mobile computer in 2002.
In 2003, the TCPA was succeeded by the Trusted Computing Group, with an increased emphasis on mobile devices.
Membership fees vary by level. Promoters pay annual membership fees of $30,000, contributors pay $15,000, and depending upon company size, adopters pay annual membership fees of either $2,500 or $7,500.
Overview
TCG's most successful effort was the development of a Trusted Platform Module (TPM), a semiconductor intellectual property core or integrated circuit that conforms to the specification to enable trusted computing features in computers and mobile devices. Related efforts involved Trusted Network Connect, to bring trusted computing to network connections, and Storage Core Architecture / Security Subsystem Class, to bring trusted computing to disk drives and other storage devices. These efforts have not achieved the same level of widesp
|
https://en.wikipedia.org/wiki/Quintrix
|
Quintrix is a name given to a flat and wide television tube made by Panasonic. Quintrix tubes were first introduced to the market in 1974. The word originates from the Latin word "quintum", which means "fifth". So far there are three models of Quintrix available:
Quintrix,
Quintrix F, and
Quintrix SR (SR = Super Resolution)
The first Quintrix cathode ray tubes featured a prefocus lens that reduced beam diffusion, giving a sharper picture.
Manufactured in Malaysia and also in Wales with an MX-6 core, the Quintrix model was the standard television type for Hong Kong's etv project in 1999.
Panasonic products
Television technology
Vacuum tube displays
|
https://en.wikipedia.org/wiki/Madhava%20of%20Sangamagrama
|
Mādhava of Sangamagrāma (Mādhavan) () was an Indian mathematician and astronomer who is considered the founder of the Kerala school of astronomy and mathematics. One of the greatest mathematician-astronomers of the Late Middle Ages, Madhava made pioneering contributions to the study of infinite series, calculus, trigonometry, geometry, and algebra. He was the first to use infinite series approximations for a range of trigonometric functions, an advance that has been called the "decisive step onward from the finite procedures of ancient mathematics to treat their limit-passage to infinity".
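The best-known of these series today is the arctangent expansion that yields the Madhava–Leibniz series for π. A minimal Python sketch of the simplest case (omitting the correction terms he is reported to have used to speed convergence):

```python
def madhava_pi(terms: int) -> float:
    """Approximate pi with the Madhava-Leibniz series:
    pi/4 = 1 - 1/3 + 1/5 - 1/7 + ..."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)   # alternating reciprocals of odd numbers
    return 4 * total

print(madhava_pi(1_000_000))   # ~3.1415916..., converging slowly toward pi
```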
Biography
Little is known about Mādhava's life with certainty. However, from scattered references to Mādhava found in diverse manuscripts, historians of the Kerala school have pieced together information about the mathematician. In a manuscript preserved in the Oriental Institute, Baroda, Madhava has been referred to as Mādhavan vēṇvārōhādīnām karttā ... Mādhavan Ilaññippaḷḷi Emprān. It has been noted that the epithet 'Emprān' refers to the Emprāntiri community, to which Madhava might have belonged.
The term "Ilaññippaḷḷi" has been identified as a reference to the residence of Mādhava. This is corroborated by Mādhava himself. In his short work on the moon's positions titled Veṇvāroha, Mādhava says that he was born in a house named bakuḷādhiṣṭhita . . . vihāra. This is clearly Sanskrit for Ilaññippaḷḷi. Ilaññi is the Malayalam name of the evergreen tree Mimusops elengi and the Sanskrit name for the same is Bakuḷa. Palli is a term for village. The Sanskrit house name bakuḷādhiṣṭhita . . . vihāra has also been interpreted as a reference to the Malayalam house name Iraññi ninna ppaḷḷi and some historians have tried to identify it with one of two currently existing houses with names Iriññanavaḷḷi and Iriññārapaḷḷi both of which are located near Irinjalakuda town in central Kerala. This identification is far fetched because both names have neither phonetic similarity nor semantic equivalen
|
https://en.wikipedia.org/wiki/Emulex
|
Emulex Corporation is a provider of computer network connectivity, monitoring and management hardware and software. The company's I/O connectivity offerings, including its line of Ethernet and Fibre Channel-based connectivity products, are or were used in server and storage products from OEMs, including Cisco, Dell, EMC Corporation, Fujitsu, Hitachi, HP, Huawei, IBM, NetApp, and Oracle Corporation.
History
1979–1999
Emulex was founded in 1979 by Fred B. Cox "as a supplier of data storage products and data communications equipment for the computer industry." By 1983, Emulex was able to advertise its products as if they were grocery items: a 2-page spread headlined "One stop shopping for VAX users? Emulex, of course" showed 3 paper bags, each with the Emulex name and logo and each holding a large computer board. One bag said "Disk Controllers," the second said "Communication Controllers," and the third said "Tape Controllers."
In 1992, Emulex spun off what became QLogic.
Much of Emulex's early market was for Digital Equipment Corporation's VAX and PDP-11 systems. The Computer History Museum's collections include an Emulex disk drive.
2000 to present
Headquartered in Costa Mesa, California, Emulex employed more than 1,200 people in 2013. In 2000, Emulex acquired Giganet for $645 million, and in 2013, it acquired Endace, based in New Zealand. On April 21, 2009, Broadcom made a proposal to the Emulex board of directors to buy all existing shares of Emulex for $764 million, or $9.25 per share, a 40% premium over the stock's closing price on April 20, 2009. After Emulex's board of directors recommended against the sale, Broadcom increased its offer to $11 per share on June 30, which valued the company at $925 million. On July 9, 2009, it too was rejected, and Broadcom subsequently withdrew its offer.
In February 2015, Avago Technologies Limited announced it would acquire Emulex for $8 per share, in cash. Avago, a spinoff of Hewlett Packard, merged with Broadcom i
|
https://en.wikipedia.org/wiki/Circle%20bundle
|
In mathematics, a circle bundle is a fiber bundle where the fiber is the circle $S^1$.
Oriented circle bundles are also known as principal U(1)-bundles, or equivalently, as principal SO(2)-bundles. In physics, circle bundles are the natural geometric setting for electromagnetism. A circle bundle is a special case of a sphere bundle.
As 3-manifolds
Circle bundles over surfaces are an important example of 3-manifolds. A more general class of 3-manifolds is Seifert fiber spaces, which may be viewed as a kind of "singular" circle bundle, or as a circle bundle over a two-dimensional orbifold.
Relationship to electrodynamics
The Maxwell equations correspond to an electromagnetic field represented by a 2-form $F$, with $F$ being cohomologous to zero, i.e. exact. In particular, there always exists a 1-form $A$, the electromagnetic four-potential, (equivalently, the affine connection) such that
$$F = dA.$$
Given a circle bundle $P$ over $M$ and its projection
$$\pi : P \to M,$$
one has the homomorphism
$$\pi^{*} : H^{2}(M,\mathbb{Z}) \to H^{2}(P,\mathbb{Z}),$$
where $\pi^{*}$ is the pullback. Each homomorphism corresponds to a Dirac monopole; the integer cohomology groups correspond to the quantization of the electric charge. The Aharonov–Bohm effect can be understood as the holonomy of the connection on the associated line bundle describing the electron wave-function. In essence, the Aharonov–Bohm effect is not a quantum-mechanical effect (contrary to popular belief), as no quantization is involved or required in the construction of the fiber bundles or connections.
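As a worked illustration of that quantization statement (a standard computation sketched here under the usual conventions, not text from the article): for a monopole bundle restricted to a sphere surrounding the monopole, the total flux of $F$ is fixed by the integer classifying the bundle.

```latex
% The circle bundle over S^2 is classified by its first Chern number n;
% integrating the curvature 2-form F over the sphere recovers that integer,
% so the enclosed "magnetic charge" can only take discrete values.
\frac{1}{2\pi}\int_{S^{2}} F \;=\; c_{1}(P) \;=\; n \;\in\; \mathbb{Z}
```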
Examples
The Hopf fibration is an example of a non-trivial circle bundle.
The unit tangent bundle of a surface is another example of a circle bundle.
The unit tangent bundle of a non-orientable surface is a circle bundle that is not a principal bundle. Only orientable surfaces have principal unit tangent bundles.
Another method for constructing circle bundles is using a complex line bundle $L$ and taking the associated sphere (circle in this case) bundle. Since this bundle has an orientation induced from $L$ we hav
|
https://en.wikipedia.org/wiki/Subtelomere
|
Subtelomeres are segments of DNA between telomeric caps and chromatin.
Structure
Telomeres are specialized protein–DNA constructs present at the ends of eukaryotic chromosomes, which prevent them from degradation and end-to-end chromosomal fusion. Most vertebrate telomeric DNA consists of long (TTAGGG)n repeats of variable length, often around 3–20 kb. Subtelomeres are segments of DNA between telomeric caps and chromatin. In vertebrates, each chromosome has two subtelomeres immediately adjacent to the long (TTAGGG)n repeats. Subtelomeres are considered to be the most distal (farthest from the centromere) region of unique DNA on a chromosome, and they are unusually dynamic and variable mosaics of multichromosomal blocks of sequence. The subtelomeres of such diverse species as humans, Plasmodium falciparum, Drosophila melanogaster, and Saccharomyces cerevisiae are structurally similar in that they are composed of various repeated elements, but the extent of the subtelomeres and the sequence of the elements vary greatly among organisms. In yeast (S. cerevisiae), subtelomeres are composed of two domains: the proximal and distal (telomeric) domains. The two domains differ in sequence content and extent of homology to other chromosome ends, and they are often separated by a stretch of degenerate telomere repeats (TTAGGG) and an element called 'core X', which is found at all chromosome ends and contains an autonomously replicating sequence (ARS) and an ABF1 binding site. The proximal domain is composed of variable interchromosomal duplications (<1-30 kb); this region can contain genes such as Pho, Mel, and Mal. The distal domain is composed of 0-4 tandem copies of the highly conserved Y' element; the number and chromosomal distribution of Y′ elements vary among yeast strains. Between the core X and the Y' element, or the core X and the TTAGGG sequence, there is often a set of 4 subtelomeric repeat elements (STRs): STR-A, STR-B, STR-C and STR-D, which consist of multiple copies o
|
https://en.wikipedia.org/wiki/Exploded-view%20drawing
|
An exploded-view drawing is a diagram, picture, schematic or technical drawing of an object that shows the relationship or order of assembly of various parts.
It shows the components of an object slightly separated by distance, or suspended in surrounding space in the case of a three-dimensional exploded diagram. An object is represented as if there had been a small controlled explosion emanating from the middle of the object, causing the object's parts to be separated an equal distance away from their original locations.
The exploded-view drawing is used in parts catalogs, assembly and maintenance manuals and other instructional material.
The projection of an exploded view is usually shown from above and slightly diagonally from the left or right side of the drawing. (See the exploded-view drawing of a gear pump to the right: it is viewed slightly from above and diagonally from the left side of the drawing.)
Overview
An exploded-view drawing is a type of drawing that shows the intended assembly of mechanical or other parts. It shows all parts of the assembly and how they fit together. In mechanical systems, the component closest to the center is usually assembled first, or is the main part into which the other parts are assembled. The drawing can also help to represent the disassembly of parts, where the parts on the outside are normally removed first.
Exploded diagrams are common in descriptive manuals showing parts placement, or parts contained in an assembly or sub-assembly. Usually such diagrams have the part identification number and a label indicating which part fills the particular position in the diagram. Many spreadsheet applications can automatically create exploded diagrams, such as exploded pie charts.
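For the pie-chart case just mentioned, plotting libraries expose the same effect directly; a minimal matplotlib sketch (the category names and values below are made up for illustration):

```python
import matplotlib.pyplot as plt

sizes = [45, 30, 15, 10]                  # hypothetical part counts
labels = ["Housing", "Gears", "Seals", "Fasteners"]
explode = [0.1, 0, 0, 0]                  # offset the first wedge away from the centre

plt.pie(sizes, labels=labels, explode=explode, autopct="%1.0f%%", startangle=90)
plt.title("Exploded pie chart")
plt.show()
```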
In patent drawings, exploded views, with the separated parts embraced by a bracket to show the relationship or order of assembly of the various parts, are permissible (see image). When an exploded view is shown in a figure that is on the sam
|
https://en.wikipedia.org/wiki/Airborne%20wind%20turbine
|
An airborne wind turbine is a design concept for a wind turbine with a rotor supported in the air without a tower, thus benefiting from the higher velocity and persistence of wind at high altitudes, while avoiding the expense of tower construction or the need for slip rings or a yaw mechanism. An electrical generator may be on the ground or airborne. Challenges include safely suspending and maintaining turbines hundreds of meters off the ground in high winds and storms, transferring the harvested and/or generated power back to earth, and interference with aviation.
Airborne wind turbines may operate at low or high altitudes; they are part of a wider class of Airborne Wind Energy Systems (AWES) addressed by high-altitude wind power and crosswind kite power. When the generator is on the ground, the tethered aircraft need not carry the generator mass or have a conductive tether. When the generator is aloft, a conductive tether would be used to transmit energy to the ground, or the energy would be used aloft or beamed to receivers using microwave or laser. Kites and helicopters come down when there is insufficient wind; kytoons and blimps may resolve the matter, with other disadvantages. Also, bad weather, such as lightning or thunderstorms, could temporarily suspend use of the machines, probably requiring them to be brought back down to the ground and covered. Some schemes require a long power cable and, if the turbine is high enough, a prohibited airspace zone. As of 2022, few commercial airborne wind turbines are in regular operation.
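To see why the stronger winds aloft are attractive, note that the power available per unit swept area grows with the cube of wind speed, a standard result rather than anything specific to airborne designs; a quick Python sketch (the speeds are chosen only for illustration):

```python
def wind_power_density(v_mps: float, air_density: float = 1.225) -> float:
    """Power available per square metre of swept area: P/A = 0.5 * rho * v^3 (W/m^2).
    (Air density actually falls with altitude, which this sketch ignores.)"""
    return 0.5 * air_density * v_mps ** 3

# Doubling wind speed from 7 m/s (near the ground) to 14 m/s (plausible aloft)
# makes roughly eight times as much power available per unit area.
print(wind_power_density(7.0))    # ~210 W/m^2
print(wind_power_density(14.0))   # ~1681 W/m^2
```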
Aerodynamic variety
An aerodynamic airborne wind power system relies on the wind for support.
In one class, the generator is aloft; an aerodynamic structure resembling a kite, tethered to the ground, extracts wind energy by supporting a wind turbine. In another class of devices, such as crosswind kite power, generators are on the ground; one or more airfoils or kites exert force on a tether, and the resulting mechanical work is converted to electrical energy. An airborne tur
|