https://en.wikipedia.org/wiki/DCE%20Distributed%20File%20System
The DCE Distributed File System (DCE/DFS) is the remote file access protocol used with the Distributed Computing Environment. It was a variant of the Andrew File System (AFS), based on the AFS Version 3.0 protocol that was developed commercially by Transarc Corporation. AFS Version 3.0 was in turn based on the AFS Version 2.0 protocol (also used by the Coda disconnected file system) originally developed at Carnegie Mellon University. DCE/DFS consisted of multiple cooperative components that provided a network file system with strong file system semantics, attempting to mimic the behavior of POSIX local file systems while taking advantage of performance optimizations when possible. A DCE/DFS client system utilized a locally managed cache that would contain copies (or regions) of the original file. The client system would coordinate with a server system where the original copy of the file was stored to ensure that multiple clients accessing the same file would re-fetch a cached copy of the file data when the original file had changed. The advantage of this approach was that it provided very good performance even over slow network connections, because most of the file access was actually done to the local cached regions of the file. If the server failed, the client could continue making changes to the file locally, storing it back to the server when it became available again. DCE/DFS also divorced the concept of logical units of management (filesets) from the underlying volume on which the fileset was stored. In doing this it allowed administrative control of the location for the fileset in a manner that was transparent to the end user. To support this and other advanced DCE/DFS features, a local journaling file system (DCE/LFS, also known as Episode) was developed to provide the full range of support options. IBM has not maintained it since 2005 (https://web.archive.org/web/20071009171709/http://www-306.ibm.com/software/stormgmt/dfs/); at the time, IBM was working on a replacement.
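The cache-coordination idea can be sketched schematically. The toy classes below are purely illustrative (they are not the DCE/DFS protocol or its API); they only show the general pattern of a client that re-fetches a file when the server reports a newer version and otherwise serves reads from its local cache.

class Server:
    def __init__(self):
        self.files = {}                      # path -> (version, bytes)

    def write(self, path, data):
        version = self.files.get(path, (0, b""))[0] + 1
        self.files[path] = (version, data)

    def stat(self, path):
        return self.files[path][0]           # current version number only

    def read(self, path):
        return self.files[path]              # (version, bytes)


class CachingClient:
    def __init__(self, server):
        self.server = server
        self.cache = {}                       # path -> (version, bytes)

    def read(self, path):
        cached = self.cache.get(path)
        if cached and cached[0] == self.server.stat(path):
            return cached[1]                  # still valid: no data transfer
        version, data = self.server.read(path)  # changed on the server: re-fetch
        self.cache[path] = (version, data)
        return data


server = Server()
server.write("/fs/report.txt", b"v1")
client = CachingClient(server)
print(client.read("/fs/report.txt"))   # fetched from the server
print(client.read("/fs/report.txt"))   # served from the local cache
server.write("/fs/report.txt", b"v2")
print(client.read("/fs/report.txt"))   # re-fetched after the change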
https://en.wikipedia.org/wiki/TRAIL
In the field of cell biology, TNF-related apoptosis-inducing ligand (TRAIL) is a protein functioning as a ligand that induces the process of cell death called apoptosis. TRAIL is a cytokine that is produced and secreted by most normal tissue cells. It causes apoptosis primarily in tumor cells by binding to certain death receptors. TRAIL and its receptors have been used as the targets of several anti-cancer therapeutics since the mid-1990s, such as Mapatumumab. However, as of 2013, these have not shown significant survival benefit. TRAIL has also been implicated as a pathogenic or protective factor in various pulmonary diseases, particularly pulmonary arterial hypertension. TRAIL has also been designated CD253 (cluster of differentiation 253) and TNFSF10 (tumor necrosis factor (ligand) superfamily, member 10). Gene In humans, the gene that encodes TRAIL is located at chromosome 3q26, which is not close to other TNF family members. The genomic structure of the TRAIL gene spans approximately 20 kb and is composed of five exonic segments of 222, 138, 42, 106, and 1245 nucleotides and four introns of approximately 8.2, 3.2, 2.3 and 2.3 kb. The TRAIL gene lacks TATA and CAAT boxes and the promoter region contains putative response elements for transcription factors GATA, AP-1, C/EBP, SP-1, OCT-1, AP3, PEA3, CF-1, and ISRE. The TRAIL gene as a drug target TIC10 (which causes expression of TRAIL) was investigated in mice with various tumour types. Small molecule ONC201 causes expression of TRAIL, which kills some cancer cells. Structure TRAIL shows homology to other members of the tumor necrosis factor superfamily. It is composed of 281 amino acids and has characteristics of a type II transmembrane protein. The N-terminal cytoplasmic domain is not conserved across family members; however, the C-terminal extracellular domain is conserved and can be proteolytically cleaved from the cell surface. TRAIL forms a homotrimer that binds three receptor molecules. Function
https://en.wikipedia.org/wiki/Quantization%20%28music%29
In digital music processing technology, quantization is the studio-software process of transforming performed musical notes, which may have some imprecision due to expressive performance, to an underlying musical representation that eliminates the imprecision. The process results in notes being set on beats and on exact fractions of beats. The purpose of quantization in music processing is to provide a more beat-accurate timing of sounds. Quantization is frequently applied to a record of MIDI notes created by the use of a musical keyboard or drum machine. Additionally, the phrase "pitch quantization" can refer to pitch correction used in audio production, such as using Auto-Tune. Description A frequent application of quantization in this context lies within MIDI application software or hardware. MIDI sequencers typically include quantization in their manifest of edit commands. In this case, the dimensions of this timing grid are set beforehand. When one instructs the music application to quantize a certain group of MIDI notes in a song, the program moves each note to the closest point on the timing grid. Quantization in MIDI is usually applied to Note On messages and sometimes Note Off messages; some digital audio workstations shift the entire note by moving both messages together. Sometimes quantization is applied in terms of a percentage, to partially align the notes to a certain beat. Using a percentage of quantization allows for the subtle preservation of some natural human timing nuances. The most difficult problem in quantization is determining which rhythmic fluctuations are imprecise or expressive (and should be removed by the quantization process) and which should be represented in the output score. For instance, a simple children's song should probably have very coarse quantization, resulting in few different notes in output. On the other hand, quantizing a performance of a piano piece by Arnold Schoenberg, for instance, should result in many smaller no
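The grid-snapping step described above is easy to sketch in code. The example below is an illustration only (the function, its parameters, and the note times are invented for this sketch, not taken from any particular sequencer): note-on times are given in beats, grid is the grid spacing in beats, and strength is the percentage-style quantization that moves notes only part of the way to the grid.

def quantize(times, grid=0.25, strength=1.0):
    """Move each note-on time toward the nearest point on the timing grid.

    strength=1.0 snaps fully to the grid; strength=0.5 moves notes only
    halfway there, preserving some of the natural human timing nuance.
    """
    quantized = []
    for t in times:
        nearest = round(t / grid) * grid      # closest point on the grid
        quantized.append(t + strength * (nearest - t))
    return quantized

# Example: slightly rushed and dragged sixteenth notes in 4/4
performed = [0.02, 0.27, 0.49, 0.76, 1.01]
print(quantize(performed))                  # full quantization
print(quantize(performed, strength=0.5))    # 50% quantization keeps some feel

In a real digital audio workstation the same operation would be applied to MIDI Note On (and optionally Note Off) messages rather than bare numbers, as described above.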
https://en.wikipedia.org/wiki/Boustrophedon%20transform
In mathematics, the boustrophedon transform is a procedure which maps one sequence to another. The transformed sequence is computed by an "addition" operation, implemented as if filling a triangular array in a boustrophedon (zigzag or serpentine) manner, as opposed to a raster-scan (sawtooth-like) manner. Definition The boustrophedon transform is a numerical, sequence-generating transformation, which is determined by an "addition" operation. Generally speaking, given a sequence (a0, a1, a2, ...), the boustrophedon transform yields another sequence (b0, b1, b2, ...), where b0 is defined to be equal to a0. The entirety of the transformation itself can be visualized (or imagined) as being constructed by filling out the triangle as shown in Figure 1. Boustrophedon Triangle To fill out the numerical isosceles triangle (Figure 1), you start with the input sequence (an) and place one value (from the input sequence) per row, using the boustrophedon scan (zigzag or serpentine) approach. The top vertex of the triangle will be the input value a0, equivalent to output value b0, and we number this top row as row 0. The subsequent rows (going down to the base of the triangle) are numbered consecutively (from 0) as integers; let k denote the number of the row currently being filled. These rows are constructed according to the row number k as follows: For all rows, numbered k ≥ 1, there will be exactly k + 1 values in the row. If k is odd, then put the input value ak on the right-hand end of the row. Fill out the interior of this row from right to left, where each value is the result of "addition" between the value to its right and the value to its upper right. The output value bk will be on the left-hand end of an odd row (where k is odd). If k is even, then put the input value ak on the left-hand end of the row. Fill out the interior of this row from left to right, where each value is the result of "addition" between the value to its left and the value to its upper left
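The triangle-filling rule translates directly into a short program. The sketch below is illustrative only: it assumes the usual convention that row k of the triangle starts from the input value a_k, builds each later entry by adding the previous entry in the same row to an entry of the row above, and reads the output value b_k off the far end of the row (which is what the zigzag description amounts to when each row is stored left to right).

def boustrophedon_transform(a):
    """Return the boustrophedon transform b of the sequence a (both as lists)."""
    b = []
    prev = []                       # the previously completed row of the triangle
    for k, a_k in enumerate(a):
        row = [a_k]                 # row k begins with the input value a_k
        for j in range(1, k + 1):
            row.append(row[j - 1] + prev[k - j])
        b.append(row[-1])           # the output value b_k sits at the end of row k
        prev = row
    return b

# The all-ones sequence transforms to 1, 2, 4, 9, 24, 77, 294, ... and the
# sequence 1, 0, 0, 0, ... transforms to the Euler (secant-tangent) numbers
# 1, 1, 1, 2, 5, 16, 61, ...
print(boustrophedon_transform([1] * 8))
print(boustrophedon_transform([1] + [0] * 7))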
https://en.wikipedia.org/wiki/786%20%28number%29
786 (seven hundred [and] eighty-six) is the natural number following 785 and preceding 787. In mathematics 786 is: a sphenic number. a Harshad number in bases 4, 5, 7, 14 and 16. the aliquot sum of 510. part of the 321329-aliquot tree. The complete aliquot sequence starting at 498 is: 498, 510, 786, 798, 1122, 1470, 2634, 2646, 4194, 4932, 7626, 8502, 9978, 9990, 17370, 28026, 35136, 67226, 33616, 37808, 40312, 35288, 37072, 45264, 79728, 146448, 281166, 281178, 363942, 424638, 526338, 722961, 321329, 1, 0 50 can be partitioned into powers of two in 786 different ways. 786 might be the largest n for which the value of the central binomial coefficient is not divisible by an odd prime squared. If there is a larger such number, it would have to be at least 157450. Area code 786 is a United States telephone area code in Miami-Dade County. As an overlay area code, it shares the same geographic numbering plan area with other codes for a larger pool of telephone numbers. In other fields 80786 - 7th-generation x86 processors such as the AMD Athlon and Intel Pentium 4. The USSD code 786, typically dialed as ##786# or *#786#, opens the RTN dialog on some cell phones. "RTN" is 786 when dialed on an E.161 telephone pad. In the New General Catalogue, NGC 786 is a magnitude 13.5 spiral galaxy in the constellation Aries. Additionally, 786 Bredichina is an asteroid. In juggling, 786 as a four-handed siteswap is also known as French threecount. In Islam, 786 is often used to represent the Arabic phrase Bismillah. In films The number is often featured in films, mostly due to its auspiciousness in Islamic culture. Vijay Verma's (Amitabh Bachchan) coolie number in the 1975 Hindi film Deewaar. Raja's (Rajnikanth) coolie number in the 1981 Tamil film Thee, a remake of Deewaar. Iqbal Khan's (Amitabh Bachchan) coolie number in the 1983 Hindi film Coolie. Bachchan has indicated that he believes the number is auspicious, as he survived a serious injury while wearing this number during the shoot
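The partition claim can be verified with a standard counting routine. The snippet below is an illustrative check (the function name and the dynamic-programming approach are not from the article): it counts partitions of 50 into parts drawn from {1, 2, 4, 8, 16, 32}, where order does not matter and parts may repeat.

def partitions_into_powers_of_two(n):
    """Count the partitions of n whose parts are all powers of two."""
    powers = []
    p = 1
    while p <= n:
        powers.append(p)
        p *= 2
    ways = [0] * (n + 1)    # ways[m] = partitions of m using the powers seen so far
    ways[0] = 1
    for part in powers:
        for m in range(part, n + 1):
            ways[m] += ways[m - part]
    return ways[n]

print(partitions_into_powers_of_two(50))    # prints 786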
https://en.wikipedia.org/wiki/DejaGnu
DejaGnu is a software framework for testing other programs. It has a main script called runtest that goes through a directory looking at configuration files and then runs some tests with given criteria. The purpose of the DejaGnu package is to provide a single front end for all tests. It is a part of the GNU Project and is licensed under the GPL. It is based on Expect, which is in turn based on Tcl. The current maintainers are Rob Savoye and Ben Elliston. Testing DejaGnu has a very strong history in testing due to its Tcl base. Tcl is used extensively by companies such as Oracle and Sybase to test their products. DejaGnu allows this work to be much more structured. The tests can be grouped according to the tool they are testing. A test run is started by merely calling runtest in the root project directory: runtest --tool program_to_test This will look in the testsuite directory for any folders whose names start with the tool name and will run all .exp files in those folders. Embedded design One field for which DejaGnu is particularly well suited is that of embedded system design. It allows for testing to be done remotely on development boards; separate initialization files can be created for each operating system and board. This mainly focuses on embedded targets and remote hosts. DejaGnu is thus popular with many GNU projects, at universities, and for private companies. Files Essential Files Each directory in testsuite should contain tests for a specific tool. In this example, the tool being tested is the Apache webserver. This will be the file containing tests, which in this fictional case might change configuration options, and then connect to the network and check to make sure the changes have taken effect. This file will be run as a tool init file for the tool called toolname. Other Files This file is a directory-specific configuration file. Options can be placed in this file rather than retyped on each invocation; these options can include any variable passed as a command line arg
https://en.wikipedia.org/wiki/Rotronics%20Wafadrive
The Rotronics Wafadrive is a magnetic tape storage peripheral launched in late 1984 for the ZX Spectrum home computer. Each tape is a continuous loop, unlike cassette tape. It was intended to compete with Sinclair's ZX Interface 1 and ZX Microdrive. The Wafadrive comprises two continuous loop stringy floppy tape drives, an RS-232 interface and Centronics parallel port. The drives can run at two speeds: high speed (for seeking) and low speed (for reading/writing, which was significantly slower than that of Microdrives). The cartridges (or "wafers"), the same as those used in Entrepo stringy floppy devices for other microcomputers, are physically larger than Microdrive cartridges. They were available in three different capacities, nominally 16 kB, 64 kB or 128 kB. The larger sizes had the disadvantage of slower access, due to the longer length of tape. The same drive mechanism, manufactured by BSR, and cartridges were used in at least the following similar devices: Quick Data Drive (QDD), designed to connect to the cassette port of Commodore 64 and VIC-20 home computers. A&J Micro Drive System 100, for TRS-80 Model 100 and its clones (Kyotronic KC-85, NEC PC-8201 & PC-8300, Olivetti M10), connected via the RS-232 port. External links Rotronics Wafadrive User Manual meulie.net Rotronics Wafadrive User Manual archive.org/sincuser.f9.co.uk Review of Wafadrive in Sinclair User, December 1984 Review of Waferdrive in Your Sinclair, Issue 5, May 1986 Computer storage devices Home computer peripherals ZX Spectrum
https://en.wikipedia.org/wiki/Hasse%E2%80%93Witt%20matrix
In mathematics, the Hasse–Witt matrix H of a non-singular algebraic curve C over a finite field F is the matrix of the Frobenius mapping (p-th power mapping where F has q elements, q a power of the prime number p) with respect to a basis for the differentials of the first kind. It is a g × g matrix where C has genus g. The rank of the Hasse–Witt matrix is the Hasse or Hasse–Witt invariant. Approach to the definition This definition, as given in the introduction, is natural in classical terms, and is due to Helmut Hasse and Ernst Witt (1936). It provides a solution to the question of the p-rank of the Jacobian variety J of C; the p-rank is bounded by the rank of H; specifically, it is the rank of the Frobenius mapping composed with itself g times. It is also a definition that is in principle algorithmic. There has been substantial recent interest in this because of its practical application to cryptography, in the case of C a hyperelliptic curve. The curve C is superspecial if H = 0. That definition needs a couple of caveats, at least. Firstly, there is a convention about Frobenius mappings, and under the modern understanding what is required for H is the transpose of Frobenius (see arithmetic and geometric Frobenius for more discussion). Secondly, the Frobenius mapping is not F-linear; it is linear over the prime field Z/pZ in F. Therefore the matrix can be written down but does not represent a linear mapping in the straightforward sense. Cohomology The interpretation for sheaf cohomology is this: the p-power map acts on H1(C,OC), or in other words the first cohomology of C with coefficients in its structure sheaf. This is now called the Cartier–Manin operator (sometimes just Cartier operator), for Pierre Cartier and Yuri Manin. The connection with the Hasse–Witt definition is by means of Serre duality, which for a curve relates that group to H0(C, ΩC) where ΩC = Ω1C is the sheaf of Kähler differentials on C. Abelian varieties and their p-rank The p-rank of an abel
https://en.wikipedia.org/wiki/Topographical%20code
In medicine, "topographical codes" (or "topography codes") are codes that indicate a specific location in the body. Examples Only the first of these is a system dedicated only to topography. The others are more generalized systems that contain topographic axes. Nomina Anatomica (updated to Terminologia Anatomica) ICD-O SNOMED MeSH (the 'A' axis) See also Medical classification References Anatomy
https://en.wikipedia.org/wiki/Remote%20File%20Sharing
Remote File Sharing (RFS) is a Unix operating system component for sharing resources, such as files, devices, and file system directories, across a network, in a network-independent manner, similar to a distributed file system. It was developed at Bell Laboratories of AT&T in the 1980s, and was first delivered with UNIX System V Release 3 (SVR3). RFS relied on the STREAMS Transport Provider Interface feature of this operating system. It was also included in UNIX System V Release 4, but as that also included the Network File System (NFS) which was based on TCP/IP and more widely supported in the computing industry, RFS was little used. Some licensees of AT&T UNIX System V Release 4 did not include RFS support in SVR4 distributions, and Sun Microsystems removed it from Solaris 2.4. Features The basic application architecture of RFS is the client–server model, in which a participating host may be a server as well as a client, simultaneously. It was based on different design decisions, in comparison to the Network File System (NFS). Instead of focusing on reliable operation in the presence of failures, it focused on preserving UNIX file system semantics across the network. This enabled the system to provide remote access to hardware resources located on an RFS server. Unlike NFS (before version 4), the RFS server maintains state to keep track of how many times a file has been opened, or the locks established on a file or device. RFS provides complete UNIX/POSIX file semantics for all file types, including special devices, and named pipes. It supports access controls and record and file locking of remote files in a transparent manner as if the shared files are local. This permitted binary application compatibility when involving network resources. It allows the mounting of devices across the network. For example, /dev/cdrom can be accessed remotely, as if it were a local resource. Access to any specific file or a file system directory is transparent across the network,
https://en.wikipedia.org/wiki/Enriques%E2%80%93Kodaira%20classification
In mathematics, the Enriques–Kodaira classification is a classification of compact complex surfaces into ten classes. For each of these classes, the surfaces in the class can be parametrized by a moduli space. For most of the classes the moduli spaces are well understood, but for the class of surfaces of general type the moduli spaces seem too complicated to describe explicitly, though some components are known. Max Noether began the systematic study of algebraic surfaces, and Guido Castelnuovo proved important parts of the classification. Federigo Enriques described the classification of complex projective surfaces, and Kunihiko Kodaira later extended the classification to include non-algebraic compact surfaces. The analogous classification of surfaces in positive characteristic was begun by David Mumford and completed by Enrico Bombieri and Mumford; it is similar to the characteristic 0 projective case, except that one also gets singular and supersingular Enriques surfaces in characteristic 2, and quasi-hyperelliptic surfaces in characteristics 2 and 3. Statement of the classification The Enriques–Kodaira classification of compact complex surfaces states that every nonsingular minimal compact complex surface is of exactly one of the 10 types listed on this page; in other words, it is one of the rational, ruled (genus > 0), type VII, K3, Enriques, Kodaira, torus, hyperelliptic, properly elliptic, or general type surfaces. For the 9 classes of surfaces other than general type, there is a fairly complete description of what all the surfaces look like (which for class VII depends on the global spherical shell conjecture, still unproved in 2009). For surfaces of general type not much is known about their explicit classification, though many examples have been found. The classification of algebraic surfaces in positive characteristic is similar to that of algebraic surfaces in characteristic 0, except that there are no Kodaira surfaces or surfaces of type VII, and there are some extra families of Enriques surfaces in characterist
https://en.wikipedia.org/wiki/Acute%20interstitial%20pneumonitis
Acute interstitial pneumonitis (also known as acute interstitial pneumonia) is a rare, severe lung disease that usually affects otherwise healthy individuals. There is no known cause or cure. Acute interstitial pneumonitis is often categorized as both an interstitial lung disease and a form of acute respiratory distress syndrome (ARDS). In uncommon instances, if ARDS appears acutely, in the absence of known triggers, and follows a rapidly progressing clinical course, the term "acute interstitial pneumonia" is used. ARDS is distinguished from the chronic forms of interstitial pneumonia such as idiopathic pulmonary fibrosis. Symptoms and signs The most common symptoms of acute interstitial pneumonitis are highly productive cough with expectoration of thick mucus, fever, and difficulties breathing. These often occur over a period of one to two weeks before medical attention is sought. The presence of fluid means the person experiences a feeling similar to 'drowning'. Difficulties breathing can quickly progress to an inability to breathe without support (respiratory failure). Acute interstitial pneumonitis typically progresses rapidly, with hospitalization and mechanical ventilation often required only days to weeks after initial symptoms of cough, fever, and difficulties breathing develop. Diagnosis Rapid progression from initial symptoms to respiratory failure is a key feature. An X-ray that shows ARDS is necessary for diagnosis (fluid in the small air sacs (alveoli) in both lungs). In addition, a biopsy of the lung that shows organizing diffuse alveolar damage is required for diagnosis. This type of alveolar damage can be attributed to nonconcentrated and nonlocalized alveoli damage, marked alveolar septal edema with inflammatory cell infiltration, fibroblast proliferation, occasional hyaline membranes, and thickening of the alveolar walls. The septa are lined with atypical, hyperplastic type II pneumocytes, thus leading to the collapse of airspaces. Other diagn
https://en.wikipedia.org/wiki/Phoenix%20Technologies
Phoenix Technologies Ltd is an American company that designs, develops and supports core system software for personal computers and other computing devices. The company's products, commonly referred to as BIOS (Basic Input/Output System) or firmware, support and enable the compatibility, connectivity, security and management of the various components and technologies used in such devices. Phoenix Technologies and IBM developed the El Torito standard. Phoenix was incorporated in Massachusetts in September 1979, and its headquarters are in Campbell, California. History In 1979, Neil Colvin formed what was then called Phoenix Software Associates after his prior employer, Xitan, went out of business. Neil hired Dave Hirschman, a former Xitan employee. During 1980–1981, they rented office space for the first official Phoenix location at 151 Franklin Street, Boston, Massachusetts. In this same time period Phoenix purchased a non-exclusive license for Seattle Computer Products' 86-DOS. Phoenix developed customized versions of 86-DOS (sometimes called PDOS, for Phoenix DOS) for various microprocessor platforms. Phoenix also provided PMate as a replacement for Edlin as the DOS file editor. Phoenix also developed C language libraries, called PForCe, along with Plink-86/Plink-86plus, overlay linkers, and Pfix-86, a windowed debugger for DOS. These products only provided a small revenue stream to Phoenix during the early 1980s and the company did not significantly expand in size. Cloning the IBM PC BIOS After the success of the IBM PC, many companies began making PC clones. Some, like Compaq, developed their own compatible ROM BIOS, but others violated copyright by directly copying the PC's BIOS from the IBM PC Technical Reference Manual. After Apple Computer, Inc. v. Franklin Computer Corp., IBM sued companies that it claimed infringed IBM's copyright. Clone manufacturers needed a legal, fully compatible BIOS. To develop a legal BIOS, Phoenix used a clean room design. Eng
https://en.wikipedia.org/wiki/INT%2013H
INT 13h is shorthand for BIOS interrupt call 13hex, the 20th interrupt vector in an x86-based (IBM PC-descended) computer system. The BIOS typically sets up a real mode interrupt handler at this vector that provides sector-based hard disk and floppy disk read and write services using cylinder-head-sector (CHS) addressing. Modern PC BIOSes also include INT 13h extension functions, originated by IBM and Microsoft in 1992, that provide those same disk access services using 64-bit LBA addressing; with minor additions, these were quasi-standardized by Phoenix Technologies and others as the EDD (Enhanced Disk Drive) BIOS extensions. INT is an x86 instruction that triggers a software interrupt, and 13hex is the interrupt number (as a hexadecimal value) being called. Modern computers come with both BIOS INT 13h and UEFI functionality that provides the same services and more, with the exception of UEFI Class 3, which completely removes the CSM and thus lacks INT 13h and other interrupts. Typically, UEFI drivers use LBA addressing instead of CHS addressing. Overview Under real mode operating systems, such as DOS, calling INT 13h would jump into the computer's ROM-BIOS code for low-level disk services, which would carry out physical sector-based disk read or write operations for the program. In DOS, it serves as the low-level interface for the built-in block device drivers for hard disks and floppy disks. This allows INT 25h and INT 26h to provide absolute disk read/write functions for logical sectors to the FAT file system driver in the DOS kernel, which handles file-related requests through DOS API (INT 21h) functions. Under protected mode operating systems, such as Microsoft Windows NT derivatives (e.g. NT4, 2000, XP, and Server 2003) and Linux with dosemu, the OS intercepts the call and passes it to the operating system's native disk I/O mechanism. Windows 9x and Windows for Workgroups 3.11 also bypass BIOS routines when using 32-bit Disk Access. Besides performing low-level d
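As an illustration of the two addressing schemes, the sketch below converts between CHS tuples and LBA values using the conventional BIOS translation formula; the disk geometry chosen here (16 heads, 63 sectors per track) is an arbitrary example for the sketch, not something mandated by INT 13h.

HEADS_PER_CYLINDER = 16
SECTORS_PER_TRACK = 63    # CHS sector numbers start at 1, not 0

def chs_to_lba(c, h, s):
    """LBA = (C * heads + H) * sectors_per_track + (S - 1)."""
    return (c * HEADS_PER_CYLINDER + h) * SECTORS_PER_TRACK + (s - 1)

def lba_to_chs(lba):
    """Inverse of chs_to_lba for the same assumed geometry."""
    c, rem = divmod(lba, HEADS_PER_CYLINDER * SECTORS_PER_TRACK)
    h, s_minus_1 = divmod(rem, SECTORS_PER_TRACK)
    return c, h, s_minus_1 + 1

print(chs_to_lba(0, 0, 1))      # the first sector on the disk is LBA 0
print(lba_to_chs(2048))         # an arbitrary LBA, expressed as CHS
assert chs_to_lba(*lba_to_chs(2048)) == 2048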
https://en.wikipedia.org/wiki/Wine%20fault
A wine fault is a sensory-associated (organoleptic) characteristic of a wine that is unpleasant, and may include elements of taste, smell, or appearance, elements that may arise from a "chemical or a microbial origin", where particular sensory experiences (e.g., an off-odor) might arise from more than one wine fault. Wine faults may result from poor winemaking practices or storage conditions that lead to wine spoilage. In the case of a chemical origin, many compounds causing wine faults are already naturally present in wine, but at insufficient concentrations to be of issue, and in fact may impart positive characters to the wine; however, when the concentration of such compounds exceeds a sensory threshold, they replace or obscure desirable flavors and aromas that the winemaker wants the wine to express. The ultimate result is that the quality of the wine is reduced (less appealing, sometimes undrinkable), with consequent impact on its value (M. Baldy: "The University Wine Course", Third Edition, pp. 37-39, 69-80, 134-140, The Wine Appreciation Guild, 2009). There are many underlying causes of wine faults, including poor hygiene at the winery, excessive or insufficient exposure of the wine to oxygen, excessive or insufficient exposure of the wine to sulphur, overextended maceration of the wine either pre- or post-fermentation, faulty fining, filtering and stabilization of the wine, the use of dirty oak barrels, over-extended barrel aging and the use of poor quality corks. Outside of the winery, other factors within the control of the retailer or end user of the wine can contribute to the perception of flaws in the wine. These include poor storage of the wine that exposes it to excessive heat and temperature fluctuations as well as the use of dirty stemware during wine tasting that can introduce materials or aromas to what was previously a clean and fault-free wine. Differences between flaws and faults In wine tasting, there is
https://en.wikipedia.org/wiki/WWWJDIC
WWWJDIC is an online Japanese dictionary based on the electronic dictionaries compiled and collected by Australian academic Jim Breen. The main Japanese–English dictionary file (EDICT) contains over 180,000 entries, and the ENAMDICT dictionary contains over 720,000 Japanese surnames, first names, place names and product names. WWWJDIC also contains several specialized dictionaries covering topics such as life sciences, law, computing, engineering, etc. For example sentences with Japanese words, WWWJDIC makes use of a sentence database from the Tatoeba project, largely based on the Tanaka Corpus. Unlike the original Tanaka Corpus, the sentences from the Tatoeba project are not public domain, but are available under the non-restrictive CC-BY license. The sentence collection contains over 150,000 sentence pairs in Japanese and English. In addition to Japanese–English, the dictionary has Japanese paired with German, French, Russian, Hungarian, Swedish, Spanish and Dutch. However, currently there are no example sentences for these languages. The dictionary is updated freely and may be copied under its own licence arrangements. Several mirror sites of the main WWWJDIC also exist around the world. These sites update daily from the home site at the Electronic Dictionary Research and Development Group (EDRDG). See also Japanese language education References External links WWWJDIC – Main site (EDRDG) Japanese dictionaries Online dictionaries
https://en.wikipedia.org/wiki/Point%20of%20beginning
The point of beginning is a surveyor's mark at the beginning location for the wide-scale surveying of land. An example is the Beginning Point of the U.S. Public Land Survey that led to the opening of the Northwest Territory, and is the starting point of the surveys of almost all other lands to the west, reaching all the way to the Pacific Ocean. On September 30, 1785, Thomas Hutchins, first and only Geographer of the United States, began surveying the Seven Ranges at the point of beginning. Points of beginning Beginning Point of the U.S. Public Land Survey – East Liverpool, Ohio See also Initial point References External links Point of Beginning in Wisconsin Surveying
https://en.wikipedia.org/wiki/Edward%20Ginzton
Edward Leonard Ginzton (December 27, 1915 – August 13, 1998) was a Ukrainian-American engineer. Education Ginzton completed his B.S. (1936) and M.S. (1937) in Electrical Engineering at the University of California, Berkeley, and his Ph.D. in electrical engineering from Stanford University in 1941. Career As a student at Stanford University, Ginzton worked with William Hansen and brothers Russell and Sigurd Varian. In 1941 he became a member of the Varian–Hansen group at the Sperry Gyroscope Company. Ginzton was appointed assistant professor in physics at Stanford University in 1945 and remained on the faculty until 1961. In 1949, Ginzton and Marvin Chodorow developed the 1 BeV 220-foot accelerator at Stanford University. After completion of the 1 BeV accelerator, Ginzton became director of the Microwave Laboratory, which was later renamed the Ginzton Laboratory. Ginzton, along with Russell and Sigurd Varian, was one of the original board members of Varian Associates, founded in 1948. The nine initial directors of the company were Ginzton, Russell, Sigurd, and Dorothy Varian, H. Myrl Stearns, Stanford University faculty members William Webster Hansen and Leonard I. Schiff, legal counsel Richard M. Leonard, and patent attorney Paul B. Hunter. Ginzton became CEO and chairman of Varian Associates after Russell Varian died of a heart attack and Sigurd Varian died in a plane crash. Ginzton was awarded the IEEE Medal of Honor in 1969 for "his outstanding contributions in advancing the technology of high power klystrons and their application, especially to linear particle accelerators." Ginzton was a member of the National Academy of Engineering and of the National Academy of Sciences. Ginzton's biography is available online. Family Ginzton was born in Ukraine and lived in China before moving to California in 1929. On June 16, 1939, Ginzton and Artemas Alma McCann (1913–2000) married. Artemas was the daughter of James Arthur and Alma (Hawes) McCann. The Ginz
https://en.wikipedia.org/wiki/Ciprian%20Manolescu
Ciprian Manolescu (born December 24, 1978) is a Romanian-American mathematician, working in gauge theory, symplectic geometry, and low-dimensional topology. He is currently a professor of mathematics at Stanford University. Biography Manolescu completed his first eight classes at School no. 11 Mihai Eminescu and his secondary education at Ion Brătianu High School in Piteşti. He completed his undergraduate studies and PhD at Harvard University under the direction of Peter B. Kronheimer. He was the winner of the Morgan Prize, awarded jointly by AMS-MAA-SIAM, in 2002. His undergraduate thesis was on Finite dimensional approximation in Seiberg–Witten theory, and his PhD thesis topic was A spectrum valued TQFT from the Seiberg–Witten equations. In early 2013, he released a paper detailing a disproof of the triangulation conjecture for manifolds of dimension 5 and higher. For this paper, he received the E. H. Moore Prize from the American Mathematical Society. Awards and honors He was among the recipients of the Clay Research Fellowship (2004–2008). In 2012, he was awarded one of the ten prizes of the European Mathematical Society for his work on low-dimensional topology, and particularly for his role in the development of combinatorial Heegaard Floer homology. He was elected as a member of the 2017 class of Fellows of the American Mathematical Society "for contributions to Floer homology and the topology of manifolds". In 2018, he was an invited speaker at the International Congress of Mathematicians (ICM) in Rio de Janeiro. In 2020, he received a Simons Investigator Award. The citation reads: "Ciprian Manolescu works in low-dimensional topology and gauge theory. His research is centered on constructing new versions of Floer homology and applying them to questions in topology. With collaborators, he showed that many Floer-theoretic invariants are algorithmically computable. He also developed a new variant of Seiberg-Witten Floer homology, which he used to prove th
https://en.wikipedia.org/wiki/Electronic%20lock
An electronic lock (or electric lock) is a locking device which operates by means of electric current. Electric locks are sometimes stand-alone with an electronic control assembly mounted directly to the lock. Electric locks may be connected to an access control system, the advantages of which include: key control, where keys can be added and removed without re-keying the lock cylinder; fine access control, where time and place are factors; and transaction logging, where activity is recorded. Electronic locks can also be remotely monitored and controlled, both to lock and to unlock. Operation Electric locks use magnets, solenoids, or motors to actuate the lock by either supplying or removing power. Operating the lock can be as simple as using a switch, for example an apartment intercom door release, or as complex as a biometric-based access control system. There are two basic types of locks: "preventing mechanism" or operation mechanism. Types Electromagnetic lock The most basic type of electronic lock is a magnetic lock (informally called a "mag lock"). A large electromagnet is mounted on the door frame and a corresponding armature is mounted on the door. When the magnet is powered and the door is closed, the armature is held fast to the magnet. Mag locks are simple to install and are very attack-resistant. One drawback is that improperly installed or maintained mag locks can fall on people, and also that one must unlock the mag lock both to enter and to leave. This has caused fire marshals to impose strict rules on the use of mag locks and access control practice in general. Additionally, NFPA 101 (the Life Safety Code), as well as the ADA (Americans with Disabilities Act), require "no prior knowledge" and "one simple movement" to allow "free egress". This means that in an emergency, a person must be able to move to a door and immediately exit with one motion (requiring no push buttons, having another person unlock the door, reading a sign, or
https://en.wikipedia.org/wiki/GENERIC%20formalism
In non-equilibrium thermodynamics, GENERIC is an acronym for General Equation for Non-Equilibrium Reversible-Irreversible Coupling. It is the general form of the dynamic equation for a system with both reversible and irreversible dynamics (generated by energy and entropy, respectively). The GENERIC formalism is the theory built around the GENERIC equation, which was proposed in its final form in 1997 by Miroslav Grmela and Hans Christian Öttinger. GENERIC equation The GENERIC equation is usually written as dx/dt = L(x)·δE/δx + M(x)·δS/δx. Here: x denotes a set of variables used to describe the state space. The vector x can also contain variables depending on a continuous index, like a temperature field. In general, x is a function defined on an index set I, where the set I can contain both discrete and continuous indexes. Example: for a gas with nonuniform temperature, contained in a volume of space, the state variables are fields defined on that volume. E(x), S(x) are the system's total energy and entropy. For purely discrete state variables, these are simply real-valued functions of the state variables; for continuously indexed x, they are functionals. δE/δx, δS/δx are the derivatives of E and S. In the discrete case, this is simply the gradient; for continuous variables, it is the functional derivative (itself a function of the continuous indexes). The Poisson matrix L(x) is an antisymmetric matrix (possibly depending on the continuous indexes) describing the reversible dynamics of the system according to Hamiltonian mechanics. The related Poisson bracket fulfills the Jacobi identity. The friction matrix M(x) is a positive semidefinite (and hence symmetric) matrix describing the system's irreversible behaviour. In addition to the above equation and the properties of its constituents, systems that ought to be properly described by the GENERIC formalism are required to fulfill the degeneracy conditions L(x)·δS/δx = 0 and M(x)·δE/δx = 0, which express the conservation of entropy under reversible dynamics and of energy under irreversible dynamics, respectively. The conditions on L (antisymmetry and some others) express that the energy is reversibly conserved, and the condition on M (positive semidefiniteness
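As a concrete toy illustration (not from the article), the sketch below casts a damped harmonic oscillator with an internal thermal energy into GENERIC form. The choices of state x = (q, p, e), of E, S, L and M, and all numerical values are assumptions made for this sketch; the friction matrix is constructed so that both degeneracy conditions hold, and a simple explicit Euler integration then shows the total energy staying (approximately) constant while the entropy grows.

import numpy as np

m, k, gamma, c = 1.0, 1.0, 0.3, 1.0     # mass, spring constant, friction, heat capacity

def grad_E(x):
    q, p, e = x
    return np.array([k * q, p / m, 1.0])          # dE/dx for E = p^2/2m + k q^2/2 + e

def grad_S(x):
    q, p, e = x
    return np.array([0.0, 0.0, c / e])            # dS/dx for S = c * log(e)

L = np.array([[0.0, 1.0, 0.0],                    # canonical Poisson matrix on (q, p),
              [-1.0, 0.0, 0.0],                   # extended by zeros for e
              [0.0, 0.0, 0.0]])

def M(x):
    q, p, e = x
    v = np.array([0.0, 1.0, -p / m])              # chosen so that v . dE/dx = 0
    return (gamma * e / c) * np.outer(v, v)       # symmetric, positive semidefinite

x = np.array([1.0, 0.0, 1.0])
assert np.allclose(L @ grad_S(x), 0.0)            # entropy conserved by the reversible part
assert np.allclose(M(x) @ grad_E(x), 0.0)         # energy conserved by the irreversible part

dt = 1e-3
for _ in range(20000):
    x = x + dt * (L @ grad_E(x) + M(x) @ grad_S(x))

q, p, e = x
print("E =", p**2 / (2 * m) + k * q**2 / 2 + e)   # close to its initial value 1.5
print("S =", c * np.log(e))                       # has increased from 0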
https://en.wikipedia.org/wiki/Ganbare%20Goemon%21%20Karakuri%20D%C5%8Dch%C5%AB
Ganbare Goemon! Karakuri Dōchū is a video game produced by Konami. It is the second game in the Ganbare Goemon series (sometimes known in English as Mystical Ninja) and the first to be released on a video game console and home computer. It was initially released for the Family Computer on July 30, 1986 and was released for the MSX2 a year later. The Famicom version was re-released in Japan only for the Game Boy Advance under the Famicom Mini label and for the Wii, Nintendo 3DS and Wii U under the Virtual Console service. A direct sequel, Ganbare Goemon 2, was released for the Famicom on January 4, 1989. Gameplay The game revolves around the main character, Goemon, and his exploits. As the name suggests, his character was based on Ishikawa Goemon, the noble thief of Japanese folklore. Unlike its sequels, this game does not yet feature the comic situations and strange characters that define the series, and Goemon is portrayed as a noble thief rather than a plain hero. The game plays as a top-view action/adventure game (similar to The Legend of Zelda), though it is divided into stages. In each level Goemon must find three passes in order to advance. Some of these passes are found in boxes or secret passages, or can be bought. After finishing all the stages, the game will present the player with a new Japanese province (eight in total), but all the levels will remain the same. The ending, however, will be different. Like the rest of the series, Goemon can be powered up if certain items are found and/or bought; these power-ups can be lost after a few hits. The MSX version has the option to be played in turns by two players, with the second player playing as a ninja named Nezumi Kozō, who is the basis of Goemon's sidekick Ebisumaru. In addition, unlike the Family Computer version, the game has six more provinces with completely new levels after finishing the game once. References External links Ganbare Goemon! Karakuri Dōchū at MobyGames Ganbare Goemon 2 at MobyGames 1986 video games Game Boy Advance ga
https://en.wikipedia.org/wiki/.NET%20My%20Services
.NET My Services (codenamed Hailstorm) is an abandoned collection of XML-based Web services by Microsoft for storing and retrieving information. .NET My Services was announced on March 19, 2001 as part of Microsoft's .NET initiative and was intended to rely on what was then known as a Microsoft Passport, a single sign-in web service now referred to as a Microsoft account. .NET My Services was a platform intended to facilitate the storage and retrieval of user-related information, such as contacts, calendar information, and e-mail messages, by allowing it to be accessed from a centralized repository across various applications and device types, including traditional desktop PCs, and mobile devices such as laptops, mobile phones, PDAs, and tablet PCs; access to this stored information would be based solely on user discretion. The technology would rely on a subscription-based business model. Although the technology required a Microsoft Passport, it was based on cross-platform, open standard web services, including SOAP, UDDI, and WS-Discovery, which enabled interoperability with compatible systems without requiring Microsoft Windows. After .NET My Services was announced on March 19, 2001, Microsoft intended for it to reach broad developer availability at that year's Professional Developers Conference, with a subsequent end-user release scheduled for 2002. However, due to industry concerns related to anti-competitive behavior and end-user privacy, the company ultimately abandoned the initiative before it could fully materialize. See also Microsoft Office XP Smart tags Windows Communication Foundation WinFS References External links .NET My Services home page .NET Discontinued Microsoft products Microsoft Microsoft initiatives Web services
https://en.wikipedia.org/wiki/Weil%27s%20conjecture%20on%20Tamagawa%20numbers
In mathematics, the Weil conjecture on Tamagawa numbers is the statement that the Tamagawa number of a simply connected simple algebraic group defined over a number field is 1. In this case, simply connected means "not having a proper algebraic covering" in the algebraic group theory sense, which is not always the topologists' meaning. History Weil calculated the Tamagawa number in many cases of classical groups and observed that it is an integer in all considered cases and that it was equal to 1 in the cases when the group is simply connected. The first observation does not hold for all groups: Takashi Ono found examples where the Tamagawa numbers are not integers. The second observation, that the Tamagawa numbers of simply connected semisimple groups seem to be 1, became known as the Weil conjecture. Robert Langlands (1966) introduced harmonic analysis methods to show it for Chevalley groups. K. F. Lai (1980) extended the class of known cases to quasisplit reductive groups. Robert Kottwitz proved it for all groups satisfying the Hasse principle, which at the time was known for all groups without E8 factors. V. I. Chernousov (1989) removed this restriction, by proving the Hasse principle for the resistant E8 case (see strong approximation in algebraic groups), thus completing the proof of Weil's conjecture. In 2011, Jacob Lurie and Dennis Gaitsgory announced a proof of the conjecture for algebraic groups over function fields over finite fields. Applications The Weil conjecture has been used to calculate the Tamagawa numbers of all semisimple algebraic groups. For spin groups, the conjecture implies the known Smith–Minkowski–Siegel mass formula. See also Tamagawa number References Further reading Aravind Asok, Brent Doran and Frances Kirwan, "Yang-Mills theory and Tamagawa Numbers: the fascination of unexpected links in mathematics", February 22, 2013 J. Lurie, The Siegel Mass Formula, Tamagawa Numbers, and Nonabelian Poincaré Duality posted June 8, 2012. Conjectures Theorems in group the
https://en.wikipedia.org/wiki/List%20of%20eponyms%20of%20special%20functions
This is a list of special function eponyms in mathematics, to cover the theory of special functions, the differential equations they satisfy, named differential operators of the theory (but not intended to include every mathematical eponym). Named symmetric functions, and other special polynomials, are included. A Niels Abel: Abel polynomials - Abelian function - Abel–Gontscharoff interpolating polynomial Sir George Biddell Airy: Airy function Waleed Al-Salam (1926–1996): Al-Salam polynomial - Al Salam–Carlitz polynomial - Al Salam–Chihara polynomial C. T. Anger: Anger–Weber function Kazuhiko Aomoto: Aomoto–Gel'fand hypergeometric function - Aomoto integral Paul Émile Appell (1855–1930): Appell hypergeometric series, Appell polynomial, Generalized Appell polynomials Richard Askey: Askey–Wilson polynomial, Askey–Wilson function (with James A. Wilson) B Ernest William Barnes: Barnes G-function E. T. Bell: Bell polynomials Bender–Dunne polynomial Jacob Bernoulli: Bernoulli polynomial Friedrich Bessel: Bessel function, Bessel–Clifford function H. Blasius: Blasius functions R. P. Boas, R. C. Buck: Boas–Buck polynomial Böhmer integral Erland Samuel Bring: Bring radical de Bruijn function Buchstab function Burchnall, Chaundy: Burchnall–Chaundy polynomial C Leonard Carlitz: Carlitz polynomial Arthur Cayley, Capelli: Cayley–Capelli operator Celine's polynomial Charlier polynomial Pafnuty Chebyshev: Chebyshev polynomials Elwin Bruno Christoffel, Darboux: Christoffel–Darboux relation Cyclotomic polynomials D H. G. Dawson: Dawson function Charles F. Dunkl: Dunkl operator, Jacobi–Dunkl operator, Dunkl–Cherednik operator Dickman–de Bruijn function E Engel: Engel expansion Erdélyi Artúr: Erdelyi–Kober operator Leonhard Euler: Euler polynomial, Eulerian integral, Euler hypergeometric integral F V. N. Faddeeva: Faddeeva function (also known as the complex error function; see error function) G C. F. Gauss: Gaussian polynomial, Gaussian distribution, etc. Leopold Bernhar
https://en.wikipedia.org/wiki/Whittaker%20function
In mathematics, a Whittaker function is a special solution of Whittaker's equation, a modified form of the confluent hypergeometric equation introduced by Whittaker to make the formulas involving the solutions more symmetric. More generally, Jacquet introduced Whittaker functions of reductive groups over local fields, where the functions studied by Whittaker are essentially the case where the local field is the real numbers and the group is SL2(R). Whittaker's equation is d²w/dz² + (−1/4 + κ/z + (1/4 − μ²)/z²) w = 0. It has a regular singular point at 0 and an irregular singular point at ∞. Two solutions are given by the Whittaker functions Mκ,μ(z), Wκ,μ(z), defined in terms of Kummer's confluent hypergeometric functions M and U by Mκ,μ(z) = exp(−z/2) z^(μ+1/2) M(μ − κ + 1/2, 1 + 2μ, z) and Wκ,μ(z) = exp(−z/2) z^(μ+1/2) U(μ − κ + 1/2, 1 + 2μ, z). The Whittaker function Wκ,μ(z) is the same as the one with the opposite value of μ; in other words, considered as a function of μ at fixed κ and z, it is an even function. When κ and z are real, the functions give real values for real and for imaginary values of μ. These functions of μ play a role in so-called Kummer spaces. Whittaker functions appear as coefficients of certain representations of the group SL2(R), called Whittaker models. References Further reading Special hypergeometric functions E. T. Whittaker Special functions
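The relation between Mκ,μ and Kummer's M can be checked numerically. The sketch below assumes the mpmath library, whose whitm and hyp1f1 routines evaluate the Whittaker M function and Kummer's confluent hypergeometric function; the parameter values are arbitrary choices for the check.

import mpmath as mp

k, m, z = mp.mpf('0.3'), mp.mpf('0.7'), mp.mpf('1.5')

lhs = mp.whitm(k, m, z)
rhs = mp.exp(-z / 2) * z**(m + mp.mpf('0.5')) * mp.hyp1f1(m - k + mp.mpf('0.5'), 1 + 2 * m, z)

print(lhs, rhs)              # the two values should agree to working precision
assert mp.almosteq(lhs, rhs)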
https://en.wikipedia.org/wiki/List%20of%20organisms%20by%20chromosome%20count
The list of organisms by chromosome count describes ploidy or numbers of chromosomes in the cells of various plants, animals, protists, and other living organisms. This number, along with the visual appearance of the chromosome, is known as the karyotype, and can be found by looking at the chromosomes through a microscope. Attention is paid to their length, the position of the centromeres, banding pattern, any differences between the sex chromosomes, and any other physical characteristics. The preparation and study of karyotypes is part of cytogenetics. References Further reading (table with a compilation of haploid chromosome number of many algae and protozoa, in column "HAP"). (Supporting Data Set, with information on ploidy level and number of chromosomes of several protists) External links List of pages in English from Russian bionet site The dog through evolution (archived 1 March 2012) Shared synteny of human chromosome 17 loci in Canids (archived 24 September 2015) An atlas of the chromosome numbers in animals (1951); PDF downloads of each chapter on Internet Archive Chromosomes Classical genetics Chromosome
https://en.wikipedia.org/wiki/Gouy%20balance
The Gouy balance, invented by the French physicist Louis Georges Gouy, is a device for measuring the magnetic susceptibility of a sample. The Gouy balance operates on magnetic torque: the sample is placed on a horizontal arm or beam suspended by a thin fiber, and either a permanent magnet or an electromagnet is placed at the other end of the arm. A magnetic field applied to the system causes the coil to experience a torque that makes the arm twist or rotate, and the angle of rotation can then be calculated. Background Amongst a wide range of interests in optics, Brownian motion, and experimental physics, Gouy also had a strong interest in the phenomena of magnetism. In 1889, Gouy derived a mathematical expression showing that force is proportional to volume susceptibility for the interaction of material in a uniform magnetic field. From this derivation, Gouy proposed that balance measurements taken for tubes of material suspended in a magnetic field could evaluate his expression for volume susceptibility. Though Gouy never tested the scientific suggestion himself, this simple and inexpensive method became the foundation for measuring magnetic susceptibility and the blueprint for the Gouy balance. Procedure The Gouy balance measures the apparent change in the mass of the sample as it is repelled or attracted by the region of high magnetic field between the poles. Some commercially available balances have a port at their base for this application. In use, a long, cylindrical sample to be tested is suspended from a balance, partially entering between the poles of a magnet. The sample can be in solid or liquid form, and is often placed in a cylindrical container such as a test tube. Solid compounds are generally ground into a fine powder to allow for uniformity within the sample. The sample is suspended between the magnetic poles through an attached thread or string. The experimental procedure requires two separate readings to be performed. An initial balance re
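For orientation only (this relation is not given in the excerpt above), the standard working formula behind such measurements equates the magnetic force on a cylindrical sample, with one end in a uniform field B and the other in a negligible field, to the observed apparent change in mass: delta_m * g = (chi_sample - chi_air) * A * B^2 / (2 * mu0). The sketch below applies it with purely hypothetical numbers.

MU0 = 4e-7 * 3.141592653589793   # vacuum permeability, T*m/A
G = 9.81                         # standard gravity, m/s^2

def volume_susceptibility(delta_m_kg, area_m2, b_tesla, chi_air=3.6e-7):
    """Volume susceptibility (SI, dimensionless) from the apparent mass change."""
    force = delta_m_kg * G
    return 2 * MU0 * force / (area_m2 * b_tesla**2) + chi_air

# Hypothetical reading: a 15 mg apparent gain for a tube of 0.50 cm^2
# cross-section placed in a 0.50 T field.
chi = volume_susceptibility(delta_m_kg=15e-6, area_m2=0.5e-4, b_tesla=0.5)
print(f"volume susceptibility ~ {chi:.2e}")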
https://en.wikipedia.org/wiki/Spatial%20frequency
In mathematics, physics, and engineering, spatial frequency is a characteristic of any structure that is periodic across position in space. The spatial frequency is a measure of how often sinusoidal components (as determined by the Fourier transform) of the structure repeat per unit of distance. The SI unit of spatial frequency is the reciprocal metre (m⁻¹), although cycles per meter (c/m) is also common. In image-processing applications, spatial frequency is often expressed in units of cycles per millimeter (c/mm) or line pairs per millimeter (LP/mm). In wave propagation, the spatial frequency is also known as wavenumber. Ordinary wavenumber is defined as the reciprocal of the wavelength λ and is commonly denoted by ξ or sometimes ν: ξ = 1/λ. Angular wavenumber k, expressed in radians per metre (rad/m), is related to the ordinary wavenumber and the wavelength by k = 2πξ = 2π/λ. Visual perception In the study of visual perception, sinusoidal gratings are frequently used to probe the capabilities of the visual system, such as contrast sensitivity. In these stimuli, spatial frequency is expressed as the number of cycles per degree of visual angle. Sine-wave gratings also differ from one another in amplitude (the magnitude of difference in intensity between light and dark stripes), orientation, and phase. Spatial-frequency theory The spatial-frequency theory refers to the theory that the visual cortex operates on a code of spatial frequency, not on the code of straight edges and lines hypothesised by Hubel and Wiesel on the basis of early experiments on V1 neurons in the cat. In support of this theory is the experimental observation that the visual cortex neurons respond even more robustly to sine-wave gratings that are placed at specific angles in their receptive fields than they do to edges or bars. Most neurons in the primary visual cortex respond best when a sine-wave grating of a particular frequency is presented at a particular angle in a particular location in the visual field. (However, a
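The relations above, together with the cycles-per-degree convention used in vision research, can be illustrated in a few lines; numpy is assumed here, and the wavelength, field of view and sample count are arbitrary choices for the sketch.

import numpy as np

wavelength = 0.02                 # metres per cycle
xi = 1 / wavelength               # ordinary (spatial) wavenumber, cycles per metre
k = 2 * np.pi * xi                # angular wavenumber, rad per metre
print(xi, k)

def grating(cycles_per_degree, fov_degrees=4.0, n=512, phase=0.0):
    """Luminance profile of a sine-wave grating across the field of view."""
    x_deg = np.linspace(0.0, fov_degrees, n)      # visual angle in degrees
    return 0.5 + 0.5 * np.sin(2 * np.pi * cycles_per_degree * x_deg + phase)

g = grating(cycles_per_degree=8)  # 8 cycles per degree of visual angle
print(g.shape, float(g.min()), float(g.max()))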
https://en.wikipedia.org/wiki/Point%20groups%20in%20two%20dimensions
In geometry, a two-dimensional point group or rosette group is a group of geometric symmetries (isometries) that keep at least one point fixed in a plane. Every such group is a subgroup of the orthogonal group O(2), including O(2) itself. Its elements are rotations and reflections, and every such group containing only rotations is a subgroup of the special orthogonal group SO(2), including SO(2) itself. That group is isomorphic to R/Z and the first unitary group, U(1), a group also known as the circle group. The two-dimensional point groups are important as a basis for the axial three-dimensional point groups, with the addition of reflections in the axial coordinate. They are also important in symmetries of organisms, like starfish and jellyfish, and organism parts, like flowers. Discrete groups There are two families of discrete two-dimensional point groups, and they are specified with parameter n, which is the order of the group of the rotations in the group. Intl refers to Hermann–Mauguin notation or international notation, often used in crystallography. In the infinite limit, these groups become the one-dimensional line groups. If a group is a symmetry of a two-dimensional lattice or grid, then the crystallographic restriction theorem restricts the value of n to 1, 2, 3, 4, and 6 for both families. There are thus 10 two-dimensional crystallographic point groups: C1, C2, C3, C4, C6, D1, D2, D3, D4, D6 The groups may be constructed as follows: Cn. Generated by an element also called Cn, which corresponds to a rotation by angle 2π/n. Its elements are E (the identity), Cn, Cn2, ..., Cnn−1, corresponding to rotation angles 0, 2π/n, 4π/n, ..., 2(n − 1)π/n. Dn. Generated by element Cn and reflection σ. Its elements are the elements of group Cn, with elements σ, Cnσ, Cn2σ, ..., Cnn−1σ added. These additional ones correspond to reflections across lines with orientation angles 0, π/n, 2π/n, ..., (n − 1)π/n. Dn is thus a semidirect product of Cn and the group (E,
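The generator description above can be made concrete with rotation and reflection matrices. The sketch below is illustrative only (numpy assumed): it enumerates the n elements of Cn and the 2n elements of Dn, then checks that D4 maps the vertices of a square onto themselves.

import numpy as np

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def reflection(theta):
    # reflection across a line through the origin at angle theta
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]])

def C(n):
    return [rotation(2 * np.pi * j / n) for j in range(n)]

def D(n):
    return C(n) + [reflection(np.pi * j / n) for j in range(n)]

print(len(C(6)), len(D(6)))   # 6 rotations in C6, 12 elements in D6

square = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]]).T
for g in D(4):
    mapped = np.round(g @ square).astype(int)
    assert {tuple(v) for v in mapped.T} == {tuple(v) for v in square.T}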
https://en.wikipedia.org/wiki/Hamiltonian%20fluid%20mechanics
Hamiltonian fluid mechanics is the application of Hamiltonian methods to fluid mechanics. Note that this formalism only applies to nondissipative fluids. Irrotational barotropic flow Take the simple example of a barotropic, inviscid, vorticity-free fluid. Then, the conjugate fields are the mass density field ρ and the velocity potential φ. The Poisson bracket is given by {F, G} = ∫ d³r ( (δF/δρ)(δG/δφ) − (δG/δρ)(δF/δφ) ) and the Hamiltonian by H = ∫ d³r ( ½ ρ |∇φ|² + e(ρ) ), where e is the internal energy density, as a function of ρ. For this barotropic flow, the internal energy is related to the pressure p by e'' = (1/ρ) p', where an apostrophe (') denotes differentiation with respect to ρ. This Hamiltonian structure gives rise to the following two equations of motion: ∂ρ/∂t = +δH/δφ = −∇·(ρu), ∂φ/∂t = −δH/δρ = −½ |u|² − e'(ρ), where u = ∇φ is the velocity and is vorticity-free. The second equation leads to the Euler equations ∂u/∂t + (u·∇)u = −(1/ρ) ∇p after exploiting the fact that the vorticity is zero: ∇ × u = 0. As fluid dynamics is described by non-canonical dynamics, which possess an infinite amount of Casimir invariants, an alternative formulation of the Hamiltonian description of fluid dynamics can be introduced through the use of Nambu mechanics. See also Luke's variational principle Hamiltonian field theory Notes References Fluid dynamics Hamiltonian mechanics Dynamical systems
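The last step can be checked symbolically in one dimension. The sketch below assumes sympy and an illustrative polytropic law p = K ρ^γ (a choice made for this sketch, not from the article), for which e'(ρ) = K γ ρ^(γ−1)/(γ−1) satisfies e'' = p'/ρ; differentiating the equation for ∂φ/∂t with respect to x then reproduces the one-dimensional Euler equation.

import sympy as sp

x, t, K, gamma = sp.symbols('x t K gamma', positive=True)
phi = sp.Function('phi')(x, t)
rho = sp.Function('rho')(x, t)

p = K * rho**gamma                                     # assumed barotropic pressure law
e_prime = K * gamma / (gamma - 1) * rho**(gamma - 1)   # e'(rho), so that e'' = p'/rho

u = sp.diff(phi, x)                                    # velocity u = d(phi)/dx
phi_t = -sp.Rational(1, 2) * u**2 - e_prime            # the equation of motion for phi

lhs = sp.diff(phi_t, x) + u * sp.diff(u, x)            # du/dt + u du/dx (du/dt = d/dx of phi_t)
rhs = -sp.diff(p, x) / rho                             # -(1/rho) dp/dx
print(sp.simplify(lhs - rhs))                          # prints 0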
https://en.wikipedia.org/wiki/Victorian%20Web
The Victorian Web is a hypertext project derived from hypermedia environments, Intermedia and Storyspace, that anticipated the World Wide Web. Initially created between 1988 and 1990 with 1,500 documents, it has grown to over 128,500 items in July 2023. In contrast to archives and web-based libraries, the Victorian Web presents its images and documents, including entire books, as nodes in a network of complex connections. It emphasizes links rather than searches. In 2020 victorianweb.org became a 501(c)(3) non-profit corporation. The Victorian Web Foundation’s Board of Directors are Jacqueline Banerjee (President and Secretary); Noah M. Landow (Treasurer); Diane Josefowicz (Board Member); and Simon Cooke (Board Member). The Victorian Web has many contributors, but unlike wikis, it is edited. Originally conceived in 1987 as a means of helping scholars and students see connections between different fields, the site has expanded in its scope and vision. For example, commentary on the works of Charles Dickens is linked to his life and to contemporary social and political history, drama, religion, book illustration, and economics. Translations of this and earlier versions exist in Italian, Japanese, Korean, and Spanish. The Victorian Web incorporates primary and secondary texts (including book reviews) in the areas of economics, literature, philosophy, religion, political and social history, science, technology, and the visual arts. The visual arts section ranges widely over painting, photography, book design and illustration, sculpture, and the decorative arts, including ceramics, furniture, stained glass and metalwork. Jewelry, textiles, and costume are amongst other topics discussed and illustrated on its website. Awards indicate that it is particularly strong in literature, painting, architecture, sculpture, book illustration, history and religion. History The 1,500 or so documents that constitute its kernel were created in 1988–90 by its former webmaster and editor-
https://en.wikipedia.org/wiki/Four-tensor
In physics, specifically for special relativity and general relativity, a four-tensor is an abbreviation for a tensor in a four-dimensional spacetime. Generalities General four-tensors are usually written in tensor index notation as T^(α1α2...αn)_(β1β2...βm), with the indices taking integer values from 0 to 3, with 0 for the timelike components and 1, 2, 3 for spacelike components. There are n contravariant indices and m covariant indices. In special and general relativity, many four-tensors of interest are first order (four-vectors) or second order, but higher-order tensors occur. Examples are listed next. In special relativity, the vector basis can be restricted to being orthonormal, in which case all four-tensors transform under Lorentz transformations. In general relativity, more general coordinate transformations are necessary since such a restriction is not in general possible. Examples First-order tensors In special relativity, one of the simplest non-trivial examples of a four-tensor is the four-displacement X = (x0, x1, x2, x3) = (ct, x, y, z), a four-tensor with contravariant rank 1 and covariant rank 0. Four-tensors of this kind are usually known as four-vectors. Here the component x0 = ct gives the displacement of a body in time (coordinate time t is multiplied by the speed of light c so that x0 has dimensions of length). The remaining components of the four-displacement form the spatial displacement vector x = (x1, x2, x3). The four-momentum for massive or massless particles is P = (p0, p1, p2, p3) = (E/c, p), combining its energy (divided by c) p0 = E/c and 3-momentum p = (p1, p2, p3). For a particle with invariant mass m, also known as rest mass, four-momentum is defined by P = m dX/dτ, with τ the proper time of the particle. The relativistic mass is m_rel = γm, with Lorentz factor γ = 1/√(1 − v²/c²). Second-order tensors The Minkowski metric tensor with an orthonormal basis for the (−+++) convention is η = diag(−1, 1, 1, 1), used for calculating the line element and raising and lowering indices. The above applies to Cartesian coordinates. In general relativity, the metric tensor is given by much m
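The role of the metric in contracting indices can be made concrete with a short Python sketch (not from the article; the helper name and the numerical values are illustrative): contracting a four-momentum with the (−+++) Minkowski metric gives the Lorentz-invariant quantity −(mc)².

# Contract two contravariant four-vectors with the Minkowski metric (−+++ convention).
eta = [[-1, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 1]]

def minkowski_dot(a, b):
    """Return a^T eta b for two four-vectors given as length-4 sequences."""
    return sum(eta[i][j] * a[i] * b[j] for i in range(4) for j in range(4))

c = 299792458.0            # speed of light, m/s
m = 9.109e-31              # electron rest mass, kg (illustrative value)
p_rest = [m * c, 0.0, 0.0, 0.0]   # four-momentum of a particle at rest: (E/c, 0, 0, 0)
print(minkowski_dot(p_rest, p_rest))  # approximately -(m*c)**2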
https://en.wikipedia.org/wiki/Torsion%20%28algebra%29
In mathematics, specifically in ring theory, a torsion element is an element of a module that yields zero when multiplied by some non-zero-divisor of the ring. The torsion submodule of a module is the submodule formed by the torsion elements. A torsion module is a module that equals its torsion submodule. A module is torsion-free if its torsion submodule comprises only the zero element. This terminology is more commonly used for modules over a domain, that is, when the regular elements of the ring are all its nonzero elements. This terminology applies to abelian groups (with "module" and "submodule" replaced by "group" and "subgroup"). This is allowed by the fact that the abelian groups are the modules over the ring of integers (in fact, this is the origin of the terminology, that has been introduced for abelian groups before being generalized to modules). In the case of groups that are noncommutative, a torsion element is an element of finite order. Contrary to the commutative case, the torsion elements do not form a subgroup, in general. Definition An element m of a module M over a ring R is called a torsion element of the module if there exists a regular element r of the ring (an element that is neither a left nor a right zero divisor) that annihilates m, i.e., In an integral domain (a commutative ring without zero divisors), every non-zero element is regular, so a torsion element of a module over an integral domain is one annihilated by a non-zero element of the integral domain. Some authors use this as the definition of a torsion element, but this definition does not work well over more general rings. A module M over a ring R is called a torsion module if all its elements are torsion elements, and torsion-free if zero is the only torsion element. If the ring R is commutative then the set of all torsion elements forms a submodule of M, called the torsion submodule of M, sometimes denoted T(M). If R is not commutative, T(M) may or may not be a submodule. I
https://en.wikipedia.org/wiki/List%20of%20topology%20topics
In mathematics, topology (from the Greek words , and ) is concerned with the properties of a geometric object that are preserved under continuous deformations, such as stretching, twisting, crumpling and bending, but not tearing or gluing. A topological space is a set endowed with a structure, called a topology, which allows defining continuous deformation of subspaces, and, more generally, all kinds of continuity. Euclidean spaces, and, more generally, metric spaces are examples of a topological space, as any distance or metric defines a topology. The deformations that are considered in topology are homeomorphisms and homotopies. A property that is invariant under such deformations is a topological property. Basic examples of topological properties are: the dimension, which allows distinguishing between a line and a surface; compactness, which allows distinguishing between a line and a circle; connectedness, which allows distinguishing a circle from two non-intersecting circles. The ideas underlying topology go back to Gottfried Leibniz, who in the 17th century envisioned the and . Leonhard Euler's Seven Bridges of Königsberg problem and polyhedron formula are arguably the field's first theorems. The term topology was introduced by Johann Benedict Listing in the 19th century, although it was not until the first decades of the 20th century that the idea of a topological space was developed. This is a list of topology topics. See also: Topology glossary List of topologies List of general topology topics List of geometric topology topics List of algebraic topology topics List of topological invariants (topological properties) Publications in topology Topology and physics Quantum topology Topological defect Topological entropy in physics Topological order Topological quantum field theory Topological quantum number Topological string theory Topology of the universe Topology and dynamical systems Milnor–Thurston kneading theory Topological conjugacy Topological
https://en.wikipedia.org/wiki/Phenotypic%20plasticity
Phenotypic plasticity refers to some of the changes in an organism's behavior, morphology and physiology in response to a unique environment. Fundamental to the way in which organisms cope with environmental variation, phenotypic plasticity encompasses all types of environmentally induced changes (e.g. morphological, physiological, behavioural, phenological) that may or may not be permanent throughout an individual's lifespan. The term was originally used to describe developmental effects on morphological characters, but is now more broadly used to describe all phenotypic responses to environmental change, such as acclimation (acclimatization), as well as learning. The special case when differences in environment induce discrete phenotypes is termed polyphenism. Generally, phenotypic plasticity is more important for immobile organisms (e.g. plants) than mobile organisms (e.g. most animals), as mobile organisms can often move away from unfavourable environments. Nevertheless, mobile organisms also have at least some degree of plasticity in at least some aspects of the phenotype. One mobile organism with substantial phenotypic plasticity is Acyrthosiphon pisum of the aphid family, which exhibits the ability to interchange between asexual and sexual reproduction, as well as growing wings between generations when plants become too populated. Water fleas (Daphnia magna) have shown both phenotypic plasticity and the ability to genetically evolve to deal with the heat stress of warmer, urban pond waters. Examples Plants Phenotypic plasticity in plants includes the timing of transition from vegetative to reproductive growth stage, the allocation of more resources to the roots in soils that contain low concentrations of nutrients, the size of the seeds an individual produces depending on the environment, and the alteration of leaf shape, size, and thickness. Leaves are particularly plastic, and their growth may be altered by light levels. Leaves grown in the light ten
https://en.wikipedia.org/wiki/Sound%20Blaster%20X-Fi
Sound Blaster X-Fi is a lineup of sound cards in Creative Technology's Sound Blaster series. History The series was launched in August 2005 as a lineup of PCI sound cards, which served as the introduction for their X-Fi audio processing chip, with models ranging from XtremeMusic (lower end), to Platinum, Fatal1ty FPS, and Elite Pro (top of the range). The top-end Elite Pro model was aimed at musicians, bundled with the X-Fi external I/O box (offering phono with preamp inputs for turntables, high-impedance input for guitars,  inch mic input, headphone output, line-in, and full size MIDI I/O, as well as optical and RCA Coaxial digital inputs and outputs), and remote control. The Platinum and Fatal1ty FPS models both offer a front-panel drive-bay control unit and remote control, while the base model was supplied without any such accessories. All but the top model claimed 109 dB signal-to-noise ratio, while the Elite Pro model uses a higher-end DAC, with 116 dB claimed. The bottom two models feature 2 MB onboard X-RAM, while the top models offer 64 MB of X-RAM, designed for use in games to store sound samples for improved gaming performance. Launch reviews did not support Creative's claims of higher performance, however, with even the top-end 64 MB equipped model falling slightly behind the older Audigy cards. October 2006 saw a minor rebranding: the X-Fi XtremeMusic edition, which was in fact a highly capable gaming card, as it offers hardware decoding and EAX support, was replaced with the XtremeGamer model. The revised model featured half-width PCB, non-gold-plated connectors, optical out instead of the digital out and digital I/O module jack, and lacked the connector for users wishing to purchase a separate X-Fi I/O box. Functionality is otherwise the same. The market segment occupied by the XtremeMusic was moved downwards, with the introduction of the (cheaper) 'Xtreme Audio' and 'Xtreme Audio Notebook' products, which, despite the "X-Fi" label, are the only
https://en.wikipedia.org/wiki/American%20Institute%20of%20Chemical%20Engineers
The American Institute of Chemical Engineers (AIChE) is a professional organization for chemical engineers. AIChE was established in 1908 to distinguish chemical engineers as professionals independent of chemists and mechanical engineers. Currently, AIChE has over 60,000 members from over 110 countries. There are over 350 active student chapters at universities worldwide. Student chapters aim to provide networking opportunities in academia and industry as well as increase student involvement locally and nationally. History of formation This section consists of excerpts from a historical pamphlet written for the Silver Anniversary of the AICHE in 1932. In 1905, The Chemical Engineer rounded out its first year of publication with an editorial by its founder and prominent engineer, Richard K. Meade, that propounded the question: "Why not the American Society of Chemical Engineers?" He went on to say: "The profession is now a recognized one and there are probably at least five hundred chemical engineers in this country". The mechanical, civil, electrical, and mining engineers in the United States each had already established a national society, so Meade's editorial was quite pertinent. But it took time for the idea to take root and Meade kept promoting it for the next two years. Finally, in 1907, he issued a call for a preliminary meeting to be held in Atlantic City in June 1907. Some early leaders of the profession, Charles F. McKenna, William H. Walker, William Miller Booth, Samuel P. Sadtler, and Thorn Smith along with about a dozen others answered Meade's call and met in Atlantic City on June 21, 1907. The meeting concluded with the formation of an organizing committee of six members: Charles F. McKenna (chairman), Richard K. Meade, William M. Booth, J.C. Olsen, William H. Walker, and Arthur D. Little. The organizing committee sent a letter in September 1908 to 600 men in the chemical profession in the United States and Canada asking for their opinions about fo
https://en.wikipedia.org/wiki/International%20Mathematics%20Competition
The International Mathematics Competition (IMC) for University Students is an annual mathematics competition open to all undergraduate students of mathematics. Participating students are expected to be at most twenty three years of age at the time of the IMC. The IMC is primarily a competition for individuals, although most participating universities select and send one or more teams of students. The working language is English. The IMC is a residential competition and all student participants are required to stay in the accommodation provided by the organisers. It aims to provide a friendly, comfortable and secure environment for university mathematics students to enjoy mathematics with their peers from all around the world, to broaden their world perspective and to be inspired to set mathematical goals for themselves that might not have been previously imaginable or thought possible. Notably, in 2018 Caucher Birkar (born Fereydoun Derakhshani), an Iranian Kurdish mathematician, who participated in the 7th IMC held at University College London in 2000, received mathematics' most prestigious award, the Fields Medal. He is now a professor at Tsinghua University and at the University of Cambridge. In 2022 a Kyiv-born mathematician, Maryna Viazovska, was also awarded the Fields Medal. She participated in the IMC as a student four times, in 2002, 2003, 2004 and 2005. She is now a Professor and the Chair of Number Theory at the Institute of Mathematics of the École Polytechnique Fédérale de Lausanne in Switzerland. Students from over 200 universities from over 50 countries have participated over the first thirty competitions. At the 29th IMC in 2022 participants were awarded Individual Result Prizes, Fair Play Prizes and Most Efficient Team Leader Prizes. University College London has been involved in the organisation of the IMC and Professor John E. Jayne has served as the President from the beginning in 1994. The IMC runs over five or six days during which the compe
https://en.wikipedia.org/wiki/Fixed%20points%20of%20isometry%20groups%20in%20Euclidean%20space
A fixed point of an isometry group is a point that is a fixed point for every isometry in the group. For any isometry group in Euclidean space the set of fixed points is either empty or an affine space. For an object, any unique centre and, more generally, any point with unique properties with respect to the object is a fixed point of its symmetry group. In particular this applies for the centroid of a figure, if it exists. In the case of a physical body, if for the symmetry not only the shape but also the density is taken into account, it applies to the centre of mass. If the set of fixed points of the symmetry group of an object is a singleton then the object has a specific centre of symmetry. The centroid and centre of mass, if defined, are this point. Another meaning of "centre of symmetry" is a point with respect to which inversion symmetry applies. Such a point needs not be unique; if it is not, there is translational symmetry, hence there are infinitely many of such points. On the other hand, in the cases of e.g. C3h and D2 symmetry there is a centre of symmetry in the first sense, but no inversion. If the symmetry group of an object has no fixed points then the object is infinite and its centroid and centre of mass are undefined. If the set of fixed points of the symmetry group of an object is a line or plane then the centroid and centre of mass of the object, if defined, and any other point that has unique properties with respect to the object, are on this line or plane. 1D Line Only the trivial isometry group leaves the whole line fixed. Point The groups generated by a reflection leave a point fixed. 2D Plane Only the trivial isometry group C1 leaves the whole plane fixed. Line Cs with respect to any line leaves that line fixed. Point The point groups in two dimensions with respect to any point leave that point fixed. 3D Space Only the trivial isometry group C1 leaves the whole space fixed. Plane Cs with respect to a plane leaves that plane fix
https://en.wikipedia.org/wiki/Generic%20point
In algebraic geometry, a generic point P of an algebraic variety X is a point in a general position, at which all generic properties are true, a generic property being a property which is true for almost every point. In classical algebraic geometry, a generic point of an affine or projective algebraic variety of dimension d is a point such that the field generated by its coordinates has transcendence degree d over the field generated by the coefficients of the equations of the variety. In scheme theory, the spectrum of an integral domain has a unique generic point, which is the zero ideal. As the closure of this point for the Zariski topology is the whole spectrum, the definition has been extended to general topology, where a generic point of a topological space X is a point whose closure is X. Definition and motivation A generic point of the topological space X is a point P whose closure is all of X, that is, a point that is dense in X. The terminology arises from the case of the Zariski topology on the set of subvarieties of an algebraic set: the algebraic set is irreducible (that is, it is not the union of two proper algebraic subsets) if and only if the topological space of the subvarieties has a generic point. Examples The only Hausdorff space that has a generic point is the singleton set. Any integral scheme has a (unique) generic point; in the case of an affine integral scheme (i.e., the prime spectrum of an integral domain) the generic point is the point associated to the prime ideal (0). History In the foundational approach of André Weil, developed in his Foundations of Algebraic Geometry, generic points played an important role, but were handled in a different manner. For an algebraic variety V over a field K, generic points of V were a whole class of points of V taking values in a universal domain Ω, an algebraically closed field containing K but also an infinite supply of fresh indeterminates. This approach worked, without any need to deal dire
https://en.wikipedia.org/wiki/Series%2080%20%28software%20platform%29
Nokia's Series 80 (formerly Crystal) was a short-lived mobile software platform for their enterprise and professional level smartphones, introduced in 2000. It uses the Symbian OS. Common physical properties of this Symbian OS user interface platform are a screen resolution of 640×200 pixels and a full QWERTY keyboard. Series 80 used the large size of the Communicator screens to good effect, but software had to be developed specifically for it, for a relatively small market. The final Series 80 device was the Nokia 9300i, announced in 2005 and shipped in 2006. Nokia used S60 3rd Edition instead of the Series 80 platform on its final "Communicator" branded device, the Nokia E90 Communicator, released in 2007. Features Support for editing popular office documents Full QWERTY keyboard Integrated mouse for navigation SSL/TLS support Full web browser based on Opera VPN support Devices S80 v1.0: Jun 2001 – Nokia 9210 Communicator Jun 2001 – Nokia 9290 Communicator May 2002 – Nokia 9210i Communicator S80 v2.0: Feb 2005 – Nokia 9500 Communicator Jul 2005 – Nokia 9300 (not branded as "Communicator") Mar 2006 – Nokia 9300i (not branded as "Communicator") References Smartphones Mobile software Embedded operating systems Series 80
https://en.wikipedia.org/wiki/E350%20%28food%20additive%29
E350 is an EU-recognised food additive. It comes in two forms: E350 (i) Sodium malate E350 (ii) Sodium hydrogen malate Sodium malate is a sodium salt of malic acid (E296), a natural acid present in fruit; the malate is used as a buffer and flavouring in soft drinks, confectionery and other foods. The D,L- and D-isomers are not allowed for infants, who lack the enzymes to metabolise these compounds. References Food additives E-number additives
https://en.wikipedia.org/wiki/Riverstone%20Networks
Riverstone Networks, was a provider of networking switching hardware based in Santa Clara, California. Originally part of Cabletron Systems, and based on an early acquisition of YAGO, it was one of the many Gigabit Ethernet startups in the mid-1990s. It is now a part of Alcatel-Lucent and its operations are being wound down via a Chapter 11 filing by their current owners. Company history 7 February 2006 - Riverstone's partner Lucent Technologies signed an Asset Purchase agreement to acquire Riverstone Networks 21 March 2006 - Lucent Technologies wins the auction for Riverstone Networks over rival Ericsson. The final price was $207 million 18 April 2006 - Lucent Technologies are currently in the process of a merger of equals with Alcatel 1 December 2006 - Lucent Technologies completed the process of a merger of equals with Alcatel. Assets of Riverstone Networks are now part of Alcatel-Lucent Products All of Riverstone Networks products were geared towards IP over Ethernet, often for a Metro Ethernet solution. All the products were multilayer switches (or switch-routers) and specialized in MPLS VPNs. 15000 Family The 15000 Family (referred to as the 15K) differed from the RS family as the 15K is not flow-based. Flow-based routers use the main CPU to process new flows and packets through the switch. The 15K differed by letting the line card processors do the work for the network traffic, leaving the main CPU to work on the system itself. This type of network processing is similar to Cisco's dCEF. The 15K products were based on a different operating system than other Riverstone products, called ROS-X. It was designed to be modular and more like the common command line interface of Cisco. 15008 - The highest performance product from Riverstone. It supported a 96 port 10/100 Ethernet card, 12 or 24 port 1GB Ethernet cards and 1 or 2 port 10GB Ethernet cards. Support for ATM and PoS was planned. 15100/15200 - Designed with the same architecture and operating s
https://en.wikipedia.org/wiki/Semantic%20URL%20attack
In a semantic URL attack, a client manually adjusts the parameters of its request by maintaining the URL's syntax but altering its semantic meaning. This attack is primarily used against CGI driven websites. A similar attack involving web browser cookies is commonly referred to as cookie poisoning. Example Consider a web-based e-mail application where users can reset their password by answering the security question correctly, and allows the users to send the password to the e-mail address of their choosing. After they answer the security question correctly, the web page will arrive to the following web form where the users can enter their alternative e-mail address: <form action="resetpassword.php" method="GET"> <input type="hidden" name="username" value="user001" /> <p>Please enter your alternative e-mail address:</p> <input type="text" name="altemail" /><br /> <input type="submit" value="Submit" /> </form> The receiving page, resetpassword.php, has all the information it needs to send the password to the new e-mail. The hidden variable username contains the value user001, which is the username of the e-mail account. Because this web form is using the GET data method, when the user submits alternative@emailexample.com as the e-mail address where the user wants the password to be sent to, the user then arrives at the following URL: http://semanticurlattackexample.com/resetpassword.php?username=user001&altemail=alternative%40emailexample.com This URL appears in the location bar of the browser, so the user can identify the username and the e-mail address through the URL parameters. The user may decide to steal other people's (user002) e-mail address by visiting the following URL as an experiment: http://semanticurlattackexample.com/resetpassword.php?username=user002&altemail=alternative%40emailexample.com If the resetpassword.php accepts these values, it is vulnerable to a semantic URL attack. The new password of the user002 e-mail address will be ge
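A minimal Python sketch of the usual mitigation may help here (framework-free and illustrative; the function and helper names are not from the example above): the server should never trust the username taken from the URL, and should instead derive it from the authenticated session, or at least verify that the two match before acting.

def reset_password(session_username: str, requested_username: str, altemail: str) -> None:
    if session_username != requested_username:
        # The client edited the 'username' parameter in the URL: reject the request.
        raise PermissionError("cannot reset another user's password")
    send_new_password(requested_username, altemail)  # hypothetical helper

def send_new_password(username: str, altemail: str) -> None:
    print(f"new password for {username} sent to {altemail}")

reset_password("user001", "user001", "alternative@emailexample.com")  # allowed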
https://en.wikipedia.org/wiki/AMD%20Horus
The Horus system, designed by Newisys for AMD, was created to enable AMD Opteron machines to extend beyond the current limit of 8-way (CPU sockets) architectures. The Opteron CPUs feature a cache-coherent HyperTransport (ccHT) bus to permit glueless, multiprocessor interconnect between physical CPU packages but as there is a maximum of three ccHT interfaces per chip, the systems are limited to a maximum of 8 sockets. The HyperTransport bus is also distance restricted and does not permit off-system interconnect. The Horus system overcomes these limitations by creating a pseudo-Opteron, the Horus chip, which connects to four real Opterons via the HyperTransport bus. As far as the Opterons are concerned, they are in a five-way system; this is the basic Horus node (also called a 'quad'). The Horus chip then provides an additional off-board interface (based on the InfiniBand standards) which can link to additional Horus nodes (up to 8). The chip handles the necessary translation between local and off-board ccHT communications. By building the CPUs around the Horus chip with 12-bit lanes running at 3125 MHz with InfiniBand technology (8b/10b encoding), this system has an effective internal speed of 30 Gbit/s. With 8 'quads' connected together, each with the maximum of four Opteron sockets per node, the Horus system allows a total of 32 CPU sockets in a single machine. Dual and future quad-core chips will also be supported, allowing a single system to scale to over a hundred processing cores. See also Heterogeneous System Architecture External links Horus white paper. Google groups discussion by engineer. AMD technologies Computer buses
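A short back-of-the-envelope check in Python shows how the quoted 30 Gbit/s figure follows from the lane count, the signalling rate and the 8b/10b encoding (this is a reading of the numbers given above, not additional specification from the source).

lanes = 12
signal_rate_gbit = 3.125      # per lane, Gbit/s (3125 MHz signalling)
encoding_efficiency = 8 / 10  # 8b/10b carries 8 data bits per 10 line bits
effective_gbit = lanes * signal_rate_gbit * encoding_efficiency
print(effective_gbit)  # 30.0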
https://en.wikipedia.org/wiki/Incremental%20build%20model
The incremental build model is a method of software development where the product is designed, implemented and tested incrementally (a little more is added each time) until the product is finished. It involves both development and maintenance. The product is defined as finished when it satisfies all of its requirements. This model combines the elements of the waterfall model with the iterative philosophy of prototyping. According to the Project Management Institute, an incremental approach is an "adaptive development approach in which the deliverable is produced successively, adding functionality until the deliverable contains the necessary and sufficient capability to be considered complete." The product is decomposed into a number of components, each of which is designed and built separately (termed as builds). Each component is delivered to the client when it is complete. This allows partial utilization of the product and avoids a long development time. It also avoids a large initial capital outlay and subsequent long waiting period. This model of development also helps ease the traumatic effect of introducing a completely new system all at once. Incremental model The incremental model applies the waterfall model incrementally. The series of releases is referred to as “increments”, with each increment providing more functionality to the customers. After the first increment, a core product is delivered, which can already be used by the customer. Based on customer feedback, a plan is developed for the next increments, and modifications are made accordingly. This process continues, with increments being delivered until the complete product is delivered. The incremental philosophy is also used in the agile process model (see agile modeling). The Incremental model can be applied to DevOps. In DevOps it centers around the idea of minimizing risk and cost of a DevOps adoption whilst building the necessary in-house skillset and momentum. Characteristics of Increment
https://en.wikipedia.org/wiki/Credit%20card%20interest
Credit card interest is a way in which credit card issuers generate revenue. A card issuer is a bank or credit union that gives a consumer (the cardholder) a card or account number that can be used with various payees to make payments and borrow money from the bank simultaneously. The bank pays the payee and then charges the cardholder interest over the time the money remains borrowed. Banks suffer losses when cardholders do not pay back the borrowed money as agreed. As a result, optimal calculation of interest based on any information they have about the cardholder's credit risk is key to a card issuer's profitability. Before determining what interest rate to offer, banks typically check national, and international (if applicable), credit bureau reports to identify the borrowing history of the card holder applicant with other banks and conduct detailed interviews and documentation of the applicant's finances. Interest rates Interest rates vary widely. Some credit card loans are secured by real estate, and can be as low as 6 to 12% in the U.S. (2005). Typical credit cards have interest rates between 7 and 36% in the U.S., depending largely upon the bank's risk evaluation methods and the borrower's credit history. Brazil has much higher interest rates, about 50% over that of most developing countries, which average about 200% (Economist, May 2006). A Brazilian bank-issued Visa or MasterCard to a new account holder can have annual interest as high as 240% even though inflation seems to have gone up per annum (Economist, May 2006). Banco do Brasil offered its new checking account holders Visa and MasterCard credit accounts for 192% annual interest, with somewhat lower interest rates reserved for people with dependable income and assets (July 2005). These high-interest accounts typically offer very low credit limits (US$40 to $400). They also often offer a grace period with no interest until the due date, which makes them more popular for use as liquidity accounts, whic
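A rough worked example in Python may help convey what a rate of this magnitude means for a small carried balance (this is illustrative only: it assumes simple monthly accrual of one twelfth of the quoted annual rate and ignores fees, whereas real issuers differ in how they compound and accrue interest).

annual_rate = 1.92          # 192% per year, as quoted above
monthly_rate = annual_rate / 12
balance = 400.0             # a typical low credit limit mentioned above, in US$
interest_one_month = balance * monthly_rate
print(round(interest_one_month, 2))  # 64.0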
https://en.wikipedia.org/wiki/Gibbs%20algorithm
In statistical mechanics, the Gibbs algorithm, introduced by J. Willard Gibbs in 1902, is a criterion for choosing a probability distribution for the statistical ensemble of microstates of a thermodynamic system by minimizing the average log probability subject to the probability distribution satisfying a set of constraints (usually expectation values) corresponding to the known macroscopic quantities. In 1948, Claude Shannon interpreted the negative of this quantity, which he called information entropy, as a measure of the uncertainty in a probability distribution. In 1957, E.T. Jaynes realized that this quantity could be interpreted as missing information about anything, and generalized the Gibbs algorithm to non-equilibrium systems with the principle of maximum entropy and maximum entropy thermodynamics. Physicists call the result of applying the Gibbs algorithm the Gibbs distribution for the given constraints, most notably Gibbs's grand canonical ensemble for open systems when the average energy and the average number of particles are given. (See also partition function). This general result of the Gibbs algorithm is then a maximum entropy probability distribution. Statisticians identify such distributions as belonging to exponential families. References Statistical mechanics Particle statistics Entropy and information
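As a compact restatement of the procedure described above, the constrained optimization for a single energy constraint can be written (in standard textbook notation, not drawn from this excerpt) as:

\[
\max_{\{p_i\}} \Bigl( -\sum_i p_i \ln p_i \Bigr)
\quad \text{subject to} \quad
\sum_i p_i = 1, \qquad \sum_i p_i E_i = \langle E \rangle,
\]
\[
\text{with solution} \qquad
p_i = \frac{e^{-\beta E_i}}{Z}, \qquad Z = \sum_i e^{-\beta E_i},
\]

where \(\beta\) is the Lagrange multiplier fixed by the energy constraint; adding a particle-number constraint in the same way yields the grand canonical ensemble mentioned above.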
https://en.wikipedia.org/wiki/Moishezon%20manifold
In mathematics, a Moishezon manifold is a compact complex manifold such that the field of meromorphic functions on each component has transcendence degree equal to the complex dimension of the component. Complex algebraic varieties have this property, but the converse is not true: Hironaka's example gives a smooth 3-dimensional Moishezon manifold that is not an algebraic variety or scheme. Moishezon showed that a Moishezon manifold is a projective algebraic variety if and only if it admits a Kähler metric. Artin showed that any Moishezon manifold carries an algebraic space structure; more precisely, the category of Moishezon spaces (similar to Moishezon manifolds, but allowed to have singularities) is equivalent to the category of algebraic spaces that are proper over Spec C. References Algebraic geometry Analytic geometry
https://en.wikipedia.org/wiki/Konami%27s%20Ping%20Pong
Konami's Ping Pong is a sports arcade game created in 1985 by Konami. It is the first video game to accurately reflect the gameplay of table tennis, as opposed to earlier simplifications like Pong. It was ported to the Amstrad CPC, Commodore 64, Famicom Disk System, MSX, and ZX Spectrum. Gameplay Konami's Ping Pong can be played singleplayer or multiplayer, using 11-point scoring rules; the first player to attain a score of 11 or higher, leading by two points, wins the game (to a maximum of 14-14, at which point the next point wins). The player must win two out of three games in order to win the match. The playfield is shown from an isometric perspective with the players shown as disembodied hands; players placed on the far side of the table will find hitting the ball much more difficult. However, the player is always positioned on the near side during the single player mode. All the essential moves are represented: forehand, backhand, lob, and smash. The game includes the penguin protagonist from Konami's earlier title Antarctic Adventure on the title screen and as a member of the audience in the game. This penguin would later be known as Penta. In the introductory animation, a ping-pong ball bounces along the table, and finally hits Penta on the head, who appears to faint. Reception In Japan, Game Machine listed Konami's Ping Pong on their September 1, 1985 issue as being the nineteenth most-successful table arcade unit of the month. Ports In 1985 the game was released by Konami for MSX computers and in 1986 the game was ported to the Amstrad CPC, Commodore 64 and ZX Spectrum by Imagine Software and Bernie Duggs, under the name Ping Pong. Apart from scaled-down graphics and sound due to limited system capabilities, the ports perfectly replicate the arcade gameplay. In 1987 the game was ported to the Famicom Disk System as Smash Ping Pong and published by Nintendo. Nintendo's character Donkey Kong Jr. replaces Konami's Penta in the crowd. D
https://en.wikipedia.org/wiki/Kelly%20criterion
In probability theory, the Kelly criterion (or Kelly strategy or Kelly bet) is a formula for sizing a bet. The Kelly bet size is found by maximizing the expected value of the logarithm of wealth, which is equivalent to maximizing the expected geometric growth rate. It assumes that the expected returns are known and is optimal for a bettor who values their wealth logarithmically. J. L. Kelly Jr, a researcher at Bell Labs, described the criterion in 1956. Under the stated assumptions, the Kelly criterion leads to higher wealth than any other strategy in the long run (i.e., the theoretical maximum return as the number of bets goes to infinity). The practical use of the formula has been demonstrated for gambling, and the same idea was used to explain diversification in investment management. In the 2000s, Kelly-style analysis became a part of mainstream investment theory and the claim has been made that well-known successful investors including Warren Buffett and Bill Gross use Kelly methods. Also see Intertemporal portfolio choice. Gambling formula Where losing the bet involves losing the entire wager, the Kelly bet is: f* = p − q/b, where: f* is the fraction of the current bankroll to wager. p is the probability of a win. q is the probability of a loss (q = 1 − p). b is the proportion of the bet gained with a win. E.g., if betting $10 on a 2-to-1 odds bet (upon win you are returned $30, winning you $20), then b = 2. As an example, if a gamble has a 60% chance of winning (p = 0.60, q = 0.40), and the gambler receives 1-to-1 odds on a winning bet (b = 1), then to maximize the long-run growth rate of the bankroll, the gambler should bet 20% of the bankroll at each opportunity (f* = 0.60 − 0.40/1 = 0.20). If the gambler has zero edge, i.e., if bp = q, then the criterion recommends for the gambler to bet nothing. If the edge is negative (bp < q) the formula gives a negative result, indicating that the gambler should take the other side of the bet. For example, in American roulette, the bettor is offered an even money payoff (b = 1) on red, when there are
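A minimal Python illustration of the gambling formula above (not from the article; it simply re-uses the same symbols, with p the win probability, q = 1 − p, and b the net odds received on a win):

def kelly_fraction(p: float, b: float) -> float:
    """Fraction of bankroll to wager; negative values mean 'take the other side of the bet'."""
    q = 1.0 - p
    return p - q / b

print(kelly_fraction(0.60, 1.0))  # 0.2  -> bet 20% of the bankroll
print(kelly_fraction(0.50, 1.0))  # 0.0  -> zero edge, bet nothing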
https://en.wikipedia.org/wiki/Sodium%20trimetaphosphate
Sodium trimetaphosphate (also STMP), with formula Na3P3O9, is one of the metaphosphates of sodium. The anhydrous salt has this formula, but the hexahydrate Na3P3O9·6H2O is also well known. It is the sodium salt of trimetaphosphoric acid. It is a colourless solid that finds specialised applications in the food and construction industries. Although drawn with a particular resonance structure, the trianion has high symmetry. Synthesis and reactions Trisodium trimetaphosphate is produced industrially by heating sodium dihydrogen phosphate to 550 °C, a method first developed in 1955: 3 NaH2PO4 → Na3P3O9 + 3 H2O The trimetaphosphate dissolves in water and is precipitated by the addition of sodium chloride (common ion effect), affording the hexahydrate. STMP can also be prepared by heating samples of sodium polyphosphate, or by a thermal reaction of orthophosphoric acid and sodium chloride at 600 °C. Hydrolysis of the ring leads to the acyclic sodium triphosphate: Na3P3O9 + H2O → H2Na3P3O10 The analogous reaction of the metatriphosphate anion involves ring-opening by amine nucleophiles. References Food additives Sodium compounds Metaphosphates
https://en.wikipedia.org/wiki/Peppermint%20extract
Peppermint extract is a herbal extract of peppermint (Mentha × piperita) made from the essential oil of peppermint leaves. Peppermint is a hybrid of water mint and spearmint. The oil has been used for various purposes over centuries. Peppermint extract is commonly used in cooking, as a dietary supplement, as an herbal or alternative medicine, as a pest repellent, and a flavor or fragrance agent for cleaning products, cosmetics, mouthwash, chewing gum, and candies. Its active ingredient menthol causes a cold sensation when peppermint extract is consumed or used topically. There is insufficient evidence to conclude it is effective in treating any medical condition. Extraction Peppermint extract is obtained through steam distillation, solvent extraction, and soxhlet extraction. Uses Peppermint extract is commonly used as a flavoring agent; it is also used in alternative medical treatments, although there is no sufficient evidence that peppermint extract is effective in treating any medical condition. Moderate levels can be safely mixed into food items, or applied topically, sprayed on surfaces as a household cleaner, or inhaled using aromatherapy. However, the menthol in peppermint oil may cause serious side effects in children and infants if inhaled. Peppermint oil may have adverse interactions with prescription drugs. Uses in cooking Peppermint extract can be used to add a peppermint flavor to baked goods, desserts, and candy, particularly candy canes, mints, and peppermint patties. Extracts for cooking may be labeled as pure, natural, imitation, or artificial. While pure and natural extracts contain peppermint oil specifically, imitation and artificial extracts generally use a mix of ingredients to achieve a flavor resembling peppermint. Peppermint extract can be substituted in recipes with peppermint oil (a stronger ingredient primarily used in candy-making), crème de menthe, or peppermint schnapps. If the food is not heated, the alcoholic properties of liqu
https://en.wikipedia.org/wiki/Keychain%20%28software%29
Keychain is the password management system in macOS, developed by Apple. It was introduced with Mac OS 8.6, and has been included in all subsequent versions of the operating system, now known as macOS. A Keychain can contain various types of data: passwords (for websites, FTP servers, SSH accounts, network shares, wireless networks, groupware applications, encrypted disk images), private keys, certificates, and secure notes. Storage and access In macOS, keychain files are stored in ~/Library/Keychains/ (and subdirectories), /Library/Keychains/, and /Network/Library/Keychains/, and the Keychain Access GUI application is located in the Utilities folder in the Applications folder. It is free, open source software released under the terms of the APSL-2.0. The command line equivalent of Keychain Access is /usr/bin/security. The keychain database is encrypted per-table and per-row with AES-256-GCM. The time which each credential is decrypted, how long it will remain decrypted, and whether the encrypted credential will be synced to iCloud varies depending on the type of data stored, and is documented on the Apple support website. Locking and unlocking The default keychain file is the login keychain, typically unlocked on login by the user's login password, although the password for this keychain can instead be different from a user's login password, adding security at the expense of some convenience. The Keychain Access application does not permit setting an empty password on a keychain. The keychain may be set to be automatically "locked" if the computer has been idle for a time, and can be locked manually from the Keychain Access application. When locked, the password has to be re-entered next time the keychain is accessed, to unlock it. Overwriting the file in ~/Library/Keychains/ with a new one (e.g. as part of a restore operation) also causes the keychain to lock and a password is required at next access. Password synchronization If the login keychain is protect
https://en.wikipedia.org/wiki/NetInfo
NetInfo is the system configuration database in NeXTSTEP and Mac OS X versions up through Mac OS X v10.4 "Tiger". NetInfo replaces most of the Unix system configuration files, though they are still present for running the machine in single user mode; most Unix APIs wrap around NetInfo instead. NetInfo stores system wide network-type configuration information, such as users and groups, in binary databases; while Mac OS X machine and application specific settings are stored as plist files. History NetInfo was introduced in NeXTSTEP version 0.9, and replaced both the Unix system configuration files and Sun Microsystems' Network Information Service (Yellow Pages) on NeXT computers. It immediately caused controversy, much of it unfavorable. Not only was NetInfo unique to NeXT computers (although NeXT later licensed NetInfo to Xedoc, an Australian software company who produced NetInfo for other UNIX systems), but DNS queries also went through NetInfo. This led to a situation where basic tasks such as translating a UNIX UID to a user name string would not complete because NetInfo was stalled on a DNS lookup. At first, it was possible to disable NetInfo and use the Unix system files, but as of NeXTSTEP version 2 disabling NetInfo also disabled DNS support. Thus, NeXT computers became notorious for locking a user out of everyday tasks because a DNS server had stopped responding. The Mac OS X version of NetInfo remedied this (and many other problems), but due to the early problems, NetInfo never took over the world of Unix system configuration. Apple has moved away from using NetInfo towards LDAP, particularly in Mac OS X Server. Mac OS X v10.4 is the last version to support NetInfo. Beginning with Mac OS X v10.5, NetInfo has been completely phased out and replaced by a new local search node named dslocal, whose files are located in /var/db/dslocal/ and are standard property list (XML-based) files. Files The NetInfo Database is stored in , and can only be accessed by root. It ca
https://en.wikipedia.org/wiki/Gerald%20Sacks
Gerald Enoch Sacks (1933 – October 4, 2019) was a logician whose most important contributions were in recursion theory. Named after him is Sacks forcing, a forcing notion based on perfect sets and the Sacks Density Theorem, which asserts that the partial order of the recursively enumerable Turing degrees is dense. Sacks had a joint appointment as a professor at the Massachusetts Institute of Technology and at Harvard University starting in 1972 and became emeritus at M.I.T. in 2006 and at Harvard in 2012. Sacks was born in Brooklyn in 1933. He earned his Ph.D. in 1961 from Cornell University under the direction of J. Barkley Rosser, with his dissertation On Suborderings of Degrees of Recursive Insolvability. Among his notable students are Lenore Blum, Harvey Friedman, Sy Friedman, Leo Harrington, Richard Shore, Steve Simpson and Theodore Slaman. Selected publications Degrees of unsolvability, Princeton University Press 1963, 1966 Saturated Model Theory, Benjamin 1972; 2nd edition, World Scientific 2010 Higher Recursion theory, Springer 1990 Selected Logic Papers, World Scientific 1999 Mathematical Logic in the 20th Century, World Scientific 2003 References Mathematical logicians American logicians Cornell University alumni 20th-century American mathematicians 21st-century American mathematicians Massachusetts Institute of Technology School of Science faculty Harvard University faculty 1933 births 2019 deaths
https://en.wikipedia.org/wiki/Mesh%20generation
Mesh generation is the practice of creating a mesh, a subdivision of a continuous geometric space into discrete geometric and topological cells. Often these cells form a simplicial complex. Usually the cells partition the geometric input domain. Mesh cells are used as discrete local approximations of the larger domain. Meshes are created by computer algorithms, often with human guidance through a GUI , depending on the complexity of the domain and the type of mesh desired. A typical goal is to create a mesh that accurately captures the input domain geometry, with high-quality (well-shaped) cells, and without so many cells as to make subsequent calculations intractable. The mesh should also be fine (have small elements) in areas that are important for the subsequent calculations. Meshes are used for rendering to a computer screen and for physical simulation such as finite element analysis or computational fluid dynamics. Meshes are composed of simple cells like triangles because, e.g., we know how to perform operations such as finite element calculations (engineering) or ray tracing (computer graphics) on triangles, but we do not know how to perform these operations directly on complicated spaces and shapes such as a roadway bridge. We can simulate the strength of the bridge, or draw it on a computer screen, by performing calculations on each triangle and calculating the interactions between triangles. A major distinction is between structured and unstructured meshing. In structured meshing the mesh is a regular lattice, such as an array, with implied connectivity between elements. In unstructured meshing, elements may be connected to each other in irregular patterns, and more complicated domains can be captured. This page is primarily about unstructured meshes. While a mesh may be a triangulation, the process of meshing is distinguished from point set triangulation in that meshing includes the freedom to add vertices not present in the input. "Facetting" (triangul
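To make the structured-versus-unstructured distinction concrete, here is a small Python sketch (illustrative only; nothing below comes from the article) that builds a structured triangular mesh of the unit square from a regular lattice of vertices. Unstructured meshers produce the same kind of vertex and cell lists, but without the implied lattice connectivity.

def structured_triangle_mesh(n: int):
    # (n+1) x (n+1) lattice of vertices over the unit square.
    verts = [(i / n, j / n) for j in range(n + 1) for i in range(n + 1)]
    tris = []
    for j in range(n):
        for i in range(n):
            v0 = j * (n + 1) + i          # lower-left vertex of the grid cell
            v1, v2, v3 = v0 + 1, v0 + n + 1, v0 + n + 2
            tris.append((v0, v1, v3))     # split the quad along one diagonal
            tris.append((v0, v3, v2))
    return verts, tris

verts, tris = structured_triangle_mesh(4)
print(len(verts), len(tris))  # 25 vertices, 32 triangles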
https://en.wikipedia.org/wiki/Fredholm%20alternative
In mathematics, the Fredholm alternative, named after Ivar Fredholm, is one of Fredholm's theorems and is a result in Fredholm theory. It may be expressed in several ways, as a theorem of linear algebra, a theorem of integral equations, or as a theorem on Fredholm operators. Part of the result states that a non-zero complex number in the spectrum of a compact operator is an eigenvalue. Linear algebra If V is an n-dimensional vector space and T : V → V is a linear transformation, then exactly one of the following holds: either for each vector v in V there is a vector u in V so that T(u) = v (in other words, T is surjective, and so also bijective, since V is finite-dimensional), or there is a non-zero vector u in V with T(u) = 0 (that is, T is not injective). A more elementary formulation, in terms of matrices, is as follows. Given an m×n matrix A and an m×1 column vector b, exactly one of the following must hold: Either: A x = b has a solution x Or: A^T y = 0 has a solution y with y^T b ≠ 0. In other words, A x = b has a solution if and only if for any y such that A^T y = 0, it follows that y^T b = 0. Integral equations Let K(x, y) be an integral kernel, and consider the homogeneous equation, the Fredholm integral equation, φ(x) = λ ∫ K(x, y) φ(y) dy (with the integral taken over y from a to b), and the inhomogeneous equation φ(x) = λ ∫ K(x, y) φ(y) dy + f(x). The Fredholm alternative is the statement that, for every non-zero fixed complex number λ, either the first equation has a non-trivial solution, or the second equation has a solution for all f(x). A sufficient condition for this statement to be true is for K(x, y) to be square integrable on the rectangle [a, b] × [a, b] (where a and/or b may be minus or plus infinity). The integral operator defined by such a K is called a Hilbert–Schmidt integral operator. Functional analysis Results about Fredholm operators generalize these results to complete normed vector spaces of infinite dimensions; that is, Banach spaces. The integral equation can be reformulated in terms of operator notation as follows. Write (somewhat informally) T = λ − K to mean T(x, y) = λ δ(x − y) − K(x, y), with δ(x − y) the Dirac delta function, considered as a distribution, or generalized function, in two variables. Then by
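The matrix formulation can be checked numerically. The following Python/NumPy sketch (illustrative, not from the article) uses a rank-deficient matrix to show that, for each right-hand side, exactly one branch of the alternative applies.

import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])        # rank-1 matrix, so A x = b is not always solvable
y = np.array([2.0, -1.0])         # a left null vector: A^T y = 0
print(A.T @ y)                     # [0. 0.]

b_bad = np.array([1.0, 0.0])       # y^T b = 2 != 0, so A x = b_bad has no solution
b_good = np.array([1.0, 2.0])      # y^T b = 0,      so A x = b_good is solvable (x = [1, 0])
print(y @ b_bad, y @ b_good)       # 2.0 0.0
print(np.allclose(A @ np.array([1.0, 0.0]), b_good))  # True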
https://en.wikipedia.org/wiki/Pilot%20plant
A pilot plant is a pre-commercial production system that employs new production technology and/or produces small volumes of new technology-based products, mainly for the purpose of learning about the new technology. The knowledge obtained is then used for design of full-scale production systems and commercial products, as well as for identification of further research objectives and support of investment decisions. Other (non-technical) purposes include gaining public support for new technologies and questioning government regulations. Pilot plant is a relative term in the sense that pilot plants are typically smaller than full-scale production plants, but are built in a range of sizes. Also, as pilot plants are intended for learning, they typically are more flexible, possibly at the expense of economy. Some pilot plants are built in laboratories using stock lab equipment, while others require substantial engineering efforts, cost millions of dollars, and are custom-assembled and fabricated from process equipment, instrumentation and piping. They can also be used to train personnel for a full-scale plant. Pilot plants tend to be smaller compared to demonstration plants. Terminology A word similar to pilot plant is pilot line. Essentially, pilot plants and pilot lines perform the same functions, but 'pilot plant' is used in the context of (bio)chemical and advanced materials production systems, whereas 'pilot line' is used for new technology in general. The term 'kilo lab' is also used for small pilot plants referring to the expected output quantities. Risk management Pilot plants are used to reduce the risk associated with construction of large process plants. They do so in several ways: Computer simulations and semi-empirical methods are used to determine the limitations of the pilot scale system. These mathematical models are then tested in a physical pilot-scale plant. Various modeling methods are used for scale-up. These methods include: Chemical similitud
https://en.wikipedia.org/wiki/Homography
In projective geometry, a homography is an isomorphism of projective spaces, induced by an isomorphism of the vector spaces from which the projective spaces derive. It is a bijection that maps lines to lines, and thus a collineation. In general, some collineations are not homographies, but the fundamental theorem of projective geometry asserts that is not so in the case of real projective spaces of dimension at least two. Synonyms include projectivity, projective transformation, and projective collineation. Historically, homographies (and projective spaces) have been introduced to study perspective and projections in Euclidean geometry, and the term homography, which, etymologically, roughly means "similar drawing", dates from this time. At the end of the 19th century, formal definitions of projective spaces were introduced, which differed from extending Euclidean or affine spaces by adding points at infinity. The term "projective transformation" originated in these abstract constructions. These constructions divide into two classes that have been shown to be equivalent. A projective space may be constructed as the set of the lines of a vector space over a given field (the above definition is based on this version); this construction facilitates the definition of projective coordinates and allows using the tools of linear algebra for the study of homographies. The alternative approach consists in defining the projective space through a set of axioms, which do not involve explicitly any field (incidence geometry, see also synthetic geometry); in this context, collineations are easier to define than homographies, and homographies are defined as specific collineations, thus called "projective collineations". For sake of simplicity, unless otherwise stated, the projective spaces considered in this article are supposed to be defined over a (commutative) field. Equivalently Pappus's hexagon theorem and Desargues's theorem are supposed to be true. A large part of the res
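Since the article describes homographies as maps induced by linear isomorphisms of the underlying vector spaces, a small coordinate sketch may help (illustrative Python, not from the article): over the real projective plane, a homography is given by an invertible 3×3 matrix acting on homogeneous coordinates, followed by rescaling so the last coordinate is 1 whenever possible.

def apply_homography(H, point_xy):
    x, y = point_xy
    hx = H[0][0] * x + H[0][1] * y + H[0][2]
    hy = H[1][0] * x + H[1][1] * y + H[1][2]
    hw = H[2][0] * x + H[2][1] * y + H[2][2]
    return (hx / hw, hy / hw)   # undefined (a point at infinity) if hw == 0

# A projective transformation that is not affine: it sends the line x = -1 to infinity.
H = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [1.0, 0.0, 1.0]]
print(apply_homography(H, (1.0, 2.0)))  # (0.5, 1.0)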
https://en.wikipedia.org/wiki/Polyspermy
In biology, polyspermy describes the fertilization of an egg by more than one sperm. Diploid organisms normally contain two copies of each chromosome, one from each parent. The cell resulting from polyspermy, on the other hand, contains three or more copies of each chromosome—one from the egg and one each from multiple sperm. Usually, the result is an unviable zygote. This may occur because sperm are too efficient at reaching and fertilizing eggs due to the selective pressures of sperm competition. Such a situation is often deleterious to the female: in other words, the male–male competition among sperm spills over to create sexual conflict. Physiological polyspermy Physiological polyspermy happens when the egg normally accepts more than one sperm but only one of the multiple sperm will fuse its nucleus with the nucleus of the egg. Physiological polyspermy is present in some species of vertebrates and invertebrates. Some species utilize physiological polyspermy as the proper mechanism for developing their offspring. Some of these animals include birds, ctenophora, reptiles and amphibians. Some vertebrates that are both amniote or anamniote, including urodele amphibians, cartilaginous fish, birds and reptiles, undergo physiological polyspermy because of the internal fertilization of their yolky eggs. Sperm triggers egg activation by the induction of free calcium ion concentration in the cytoplasm of the egg. This induction plays a very critical role in both physiological polyspermy and monomeric polyspermy species. The rise in calcium causes activation of the egg. The egg will then be altered on both a biochemical and morphological level. In mammals as well as sea urchins, the sudden rise in calcium concentration occurs because of the influx of calcium ions within the egg. These calcium ions are responsible for the cortical granule reaction, and are also stored in the egg's endoplasmic reticulum. Unlike physiological polyspermy, monospermic fertilization deals wit
https://en.wikipedia.org/wiki/Glan%E2%80%93Thompson%20prism
A Glan–Thompson prism is a type of polarizing prism similar to the Nicol prism and Glan–Foucault prism. Design A Glan–Thompson prism consists of two right-angled calcite prisms that are cemented together by their long faces. The optical axes of the calcite crystals are parallel and aligned perpendicular to the plane of reflection. Birefringence splits light entering the prism into two rays, experiencing different refractive indices; the p-polarized ordinary ray is totally internally reflected from the calcite–cement interface, leaving the s-polarized extraordinary ray to be transmitted. The prism can therefore be used as a polarizing beam splitter. Traditionally Canada balsam was used as the cement in assembling these prisms, but this has largely been replaced by synthetic polymers. Characteristics Compared to the similar Glan–Foucault prism, the Glan–Thompson has a wider acceptance angle, but a much lower limit of maximal irradiance (due to optical damage limitations of the cement layer). See also Glan–Taylor prism References Polarization (waves) Prisms (optics)
https://en.wikipedia.org/wiki/Glan%E2%80%93Foucault%20prism
A Glan–Foucault prism (also called a Glan–air prism) is a type of prism which is used as a polarizer. It is similar in construction to a Glan–Thompson prism, except that two right-angled calcite prisms are spaced with an air gap instead of being cemented together. Total internal reflection of p-polarized light at the air gap means that only s-polarized light is transmitted straight through the prism. Design Compared to the Glan–Thompson prism, the Glan–Foucault has a narrower acceptance angle over which it works, but because it uses an air gap rather than cement, much higher irradiances can be used without damage. The prism can thus be used with laser beams. The prism is also shorter (for a given usable aperture) than the Glan–Thompson design, and the deflection angle of the rejected beam can be made close to 90°, which is sometimes useful. Glan–Foucault prisms are not typically used as polarizing beamsplitters because while the transmitted beam is completely polarized, the reflected beam is not. Polarization The Glan–Taylor prism is similar, except that the crystal axes and transmitted polarization direction are orthogonal to the Glan–Foucault design. This yields higher transmission and better polarization of the reflected light. Calcite Glan–Foucault prisms are now rarely used, having been mostly replaced by Glan–Taylor polarizers and other more recent designs. Yttrium orthovanadate (YVO4) prisms based on the Glan–Foucault design have superior polarization of the reflected beam and higher damage threshold, compared with calcite Glan–Foucault and Glan–Taylor prisms. YVO4 prisms are more expensive, however, and can accept beams over a very limited range of angles of incidence. References Polarization (waves) Prisms (optics)
https://en.wikipedia.org/wiki/Bateman%20Manuscript%20Project
The Bateman Manuscript Project was a major effort at collation and encyclopedic compilation of the mathematical theory of special functions. It resulted in the eventual publication of five important reference volumes, under the editorship of Arthur Erdélyi. Overview The theory of special functions was a core activity of the field of applied mathematics, from the middle of the nineteenth century to the advent of high-speed electronic computing. The intricate properties of spherical harmonics, elliptic functions and other staples of problem-solving in mathematical physics, astronomy and right across the physical sciences, are not easy to document completely, absent a theory explaining the inter-relationships. Mathematical tables to perform actual calculations needed to mesh with an adequate theory of how functions could be transformed into those already tabulated. Harry Bateman, a distinguished applied mathematician, undertook the somewhat quixotic task of trying to collate the content of the very large literature. On his death in 1946, his papers on this project were still in a uniformly rough state. The publication of the edited version provided special functions texts more up-to-date than, for example, the classic Whittaker & Watson. The volumes were out of print for many years, and copyright in the works reverted to the California Institute of Technology, who renewed them in the early 1980s. Dover planned to reprint them for publication in 2007, but this never occurred . In 2011, the California Institute of Technology gave permission for scans of the volumes to be made publicly available. Other mathematicians involved in the project include Wilhelm Magnus, Fritz Oberhettinger and Francesco Tricomi. Askey–Bateman project In 2007, the Askey–Bateman project was announced by Mourad Ismail as a five- or six-volume encyclopedic book series on special functions, based on the works of Harry Bateman and Richard Askey. Starting in 2020, Cambridge University Press bega
https://en.wikipedia.org/wiki/Crystal%20growth
A crystal is a solid material whose constituent atoms, molecules, or ions are arranged in an orderly repeating pattern extending in all three spatial dimensions. Crystal growth is a major stage of a crystallization process, and consists of the addition of new atoms, ions, or polymer strings into the characteristic arrangement of the crystalline lattice. The growth typically follows an initial stage of either homogeneous or heterogeneous (surface catalyzed) nucleation, unless a "seed" crystal, purposely added to start the growth, was already present. The action of crystal growth yields a crystalline solid whose atoms or molecules are close packed, with fixed positions in space relative to each other. The crystalline state of matter is characterized by a distinct structural rigidity and very high resistance to deformation (i.e. changes of shape and/or volume). Most crystalline solids have high values both of Young's modulus and of the shear modulus of elasticity. This contrasts with most liquids or fluids, which have a low shear modulus, and typically exhibit the capacity for macroscopic viscous flow. Overview After successful formation of a stable nucleus, a growth stage ensues in which free particles (atoms or molecules) adsorb onto the nucleus and propagate its crystalline structure outwards from the nucleating site. This process is significantly faster than nucleation. The reason for such rapid growth is that real crystals contain dislocations and other defects, which act as a catalyst for the addition of particles to the existing crystalline structure. By contrast, perfect crystals (lacking defects) would grow exceedingly slowly. On the other hand, impurities can act as crystal growth inhibitors and can also modify crystal habit. Nucleation Nucleation can be either homogeneous, without the influence of foreign particles, or heterogeneous, with the influence of foreign particles. Generally, heterogeneous nucleation takes place more quickly since the foreign pa
https://en.wikipedia.org/wiki/Glass%20recycling
Glass recycling is the processing of waste glass into usable products. Glass that is crushed or imploded and ready to be remelted is called cullet. There are two types of cullet: internal and external. Internal cullet is composed of defective products detected and rejected by a quality control process during the industrial process of glass manufacturing, transition phases of product changes (such as thickness and color changes) and production offcuts. External cullet is waste glass that has been collected or reprocessed with the purpose of recycling. External cullet (which can be pre- or post-consumer) is classified as waste. The word "cullet", when used in the context of end-of-waste, will always refer to external cullet. To be recycled, glass waste needs to be purified and cleaned of contamination. Then, depending on the end use and local processing capabilities, it might also have to be separated into different sizes and colours. Many recyclers collect different colors of glass separately since glass retains its color after recycling. The most common colours used for consumer containers are clear (flint) glass, green glass, and brown (amber) glass. Glass is ideal for recycling since none of the material is degraded by normal use. Many collection points have separate bins for clear (flint), green and brown (amber). Glass re-processors intending to make new glass containers require separation by color, because glass tends to retain its color after recycling. If the recycled glass is not going to be made into more glass, or if the glass re-processor uses newer optical sorting equipment, separation by color at the collection point may not be required. Heat-resistant glass, such as Pyrex or borosilicate glass, must not be part of the glass recycling stream, because even a small piece of such material will alter the viscosity of the fluid in the furnace at remelt. Processing of external cullet To be able to use external cullet in production, any contaminants should
https://en.wikipedia.org/wiki/Apodization
In signal processing, apodization (from Greek "removing the foot") is the modification of the shape of a mathematical function. The function may represent an electrical signal, an optical transmission, or a mechanical structure. In optics, it is primarily used to remove Airy disks caused by diffraction around an intensity peak, improving the focus. Apodization in electronics Apodization in signal processing The term apodization is used frequently in publications on Fourier-transform infrared (FTIR) signal processing. An example of apodization is the use of the Hann window in the fast Fourier transform analyzer to smooth the discontinuities at the beginning and end of the sampled time record. Apodization in digital audio An apodizing filter can be used in digital audio processing instead of the more common brick-wall filters, in order to reduce the pre- and post-ringing that the latter introduces. Apodization in mass spectrometry During oscillation within an Orbitrap, ion transient signal may not be stable until the ions settle into their oscillations. Toward the end, subtle ion collisions have added up to cause noticeable dephasing. This presents a problem for the Fourier transformation, as it averages the oscillatory signal across the length of the time-domain measurement. The software allows “apodization”, the removal of the front and back section of the transient signal from consideration in the FT calculation. Thus, apodization improves the resolution of the resulting mass spectrum. Another way to improve the quality of the transient is to wait to collect data until ions have settled into stable oscillatory motion within the trap. Apodization in nuclear magnetic resonance spectroscopy Apodization is applied to NMR signals before discrete Fourier Transformation. Typically, NMR signals are truncated due to time constraints (indirect dimension) or to obtain a higher signal-to-noise ratio. In order to reduce truncation artifacts, the signals are subjected
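A minimal sketch (not from the article) of the Hann-window apodization mentioned above, using NumPy; the test signal and sample rate are invented for illustration.

```python
import numpy as np

# Hypothetical example signal: a 1 kHz tone sampled at 8 kHz for a non-integer
# number of periods, so the raw FFT suffers leakage from the record edges.
fs = 8000
t = np.arange(1000) / fs
signal = np.sin(2 * np.pi * 1000 * t)

window = np.hanning(len(signal))          # Hann window tapers both ends to zero
spectrum_raw = np.abs(np.fft.rfft(signal))
spectrum_apodized = np.abs(np.fft.rfft(signal * window))

# The apodized spectrum has a slightly wider main lobe but far lower sidelobes:
# the discontinuities at the ends of the sampled record no longer dominate.
print(spectrum_raw.argmax(), spectrum_apodized.argmax())
```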
https://en.wikipedia.org/wiki/K%C3%A9gresse%20track
A Kégresse track is a kind of rubber or canvas continuous track which uses a flexible belt rather than interlocking metal segments. It can be fitted to a conventional car or truck to turn it into a half-track, suitable for use over rough or soft ground. Conventional front wheels and steering are used, although skis may also be fitted. A snowmobile is a smaller ski-only type. Technology The Kégresse propulsion and suspension system incorporates an articulated bogie, fitted to the rear of the vehicle with a large drive wheel at one end, a large unpowered idler wheel at the other, and several small guide wheels in between, over which run a reinforced flexible belt. The belt is fitted with metal or rubber treads to grip the ground. It differs from conventional track systems by using a flexible belt rather than interlocking metal segments. Use in Russia The name comes from the system's inventor Adolphe Kégresse, who designed the original while working for Tsar Nicholas II of Russia between 1906 and 1916. He applied it to several cars in the royal garage including Rolls-Royce cars and Packard trucks. The Russian army also fitted the system to a number of their Austin Armoured Cars. Further development in France After the Russian Revolution Adolphe Kégresse returned to his native country, France, where the system was used on Citroën cars between 1921 and 1937 for off-road and military vehicles. A series of expeditions across the undeveloped parts of Asia, America, and Africa was undertaken by Citroën, demonstrating the all-terrain capabilities of these vehicles. In World War II, both sides used this system in the war effort. In the 1920s, the U.S. Army purchased several Citroën-Kégresse vehicles for evaluation and then purchased a licence to produce them. This resulted in the Army Ordnance Department building a prototype in 1939. In December 1942, it went into production with the M2 Half Track Car and M3 Half-track versions. The Nazis also captured many of these Citroën
https://en.wikipedia.org/wiki/Three-dimensional%20space
In geometry, a three-dimensional space (3D space, 3-space or, rarely, tri-dimensional space) is a mathematical space in which three values (coordinates) are required to determine the position of a point. Most commonly, it is the three-dimensional Euclidean space, the Euclidean n-space of dimension n=3 that models physical space. More general three-dimensional spaces are called 3-manifolds. The term may also refer colloquially to a subset of space, a three-dimensional region (or 3D domain), a solid figure. Technically, a tuple of n numbers can be understood as the Cartesian coordinates of a location in an n-dimensional Euclidean space. The set of these n-tuples is commonly denoted R^n and can be identified with the pair formed by an n-dimensional Euclidean space and a Cartesian coordinate system. When n = 3, this space is called the three-dimensional Euclidean space (or simply "Euclidean space" when the context is clear). It serves as a model of the physical universe (when relativity theory is not considered), in which all known matter exists. While this space remains the most compelling and useful way to model the world as it is experienced, it is only one example of a large variety of spaces in three dimensions called 3-manifolds. In this classical example, when the three values refer to measurements in different directions (coordinates), any three directions can be chosen, provided that vectors in these directions do not all lie in the same 2-space (plane). Furthermore, in this case, these three values can be labeled by any combination of three chosen from the terms width/breadth, height/depth, and length. History Books XI to XIII of Euclid's Elements dealt with three-dimensional geometry. Book XI develops notions of orthogonality and parallelism of lines and planes, and defines solids including parallelepipeds, pyramids, prisms, spheres, octahedra, icosahedra and dodecahedra. Book XII develops notions of similarity of solids. Book XIII describes the construction of the five re
https://en.wikipedia.org/wiki/Detonation%20velocity
Explosive velocity, also known as detonation velocity or velocity of detonation (VoD), is the velocity at which the shock wave front travels through a detonated explosive. Explosive velocities are always faster than the local speed of sound in the material. If the explosive is confined before detonation, such as in an artillery shell, the force produced is focused on a much smaller area, and the pressure is significantly intensified. This results in an explosive velocity that is higher than if the explosive had been detonated in open air. Unconfined velocities are often approximately 70 to 80 percent of confined velocities. Explosive velocity is increased with smaller particle size (i.e., increased spatial density), increased charge diameter, and increased confinement (i.e., higher pressure). Typical detonation velocities for organic dust mixtures range from 1400 to 1650m/s. Gas explosions can either deflagrate or detonate based on confinement; detonation velocities are generally around 1700 m/s but can be as high as 3000m/s. Solid explosives often have detonation velocities ranging beyond 4000 m/s to 10300 m/s. Detonation velocity can be measured by the Dautriche method. In essence, this method relies on the time lag between the initiation of two ends of a detonating fuse of a known detonation velocity, inserted radially at two points into the explosive charge at a known distance apart. When the explosive charge is detonated, it triggers one end of the fuse, then the second end. This causes two detonation fronts travelling in opposite direction along the length of the detonating fuse, which meet at a specific point away from the centre of the fuse. Knowing the distance along the detonation charge between the two ends of the fuse, the position of the collision of the detonation fronts, and the detonation velocity of the detonating fuse, the detonation velocity of the explosive is calculated and is expressed in km/s. In other words "VOD is the velocity or rate o
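A small sketch of the Dautriche calculation described above, assuming the standard relation V_explosive = d · V_fuse / (2a), where d is the spacing of the two fuse insertion points and a is the offset of the fronts' meeting point from the fuse midpoint; the numbers are invented for illustration.

```python
def dautriche_vod(fuse_vod_km_s, spacing_m, offset_m):
    """Detonation velocity of the charge from the Dautriche method.

    fuse_vod_km_s: known detonation velocity of the detonating fuse (km/s)
    spacing_m:     distance between the two fuse insertion points (m)
    offset_m:      distance of the meeting point from the fuse midpoint (m)
    """
    return fuse_vod_km_s * spacing_m / (2.0 * offset_m)

# Hypothetical measurement: 6.5 km/s fuse, insertion points 0.30 m apart,
# meeting point marked 0.12 m from the fuse midpoint.
print(dautriche_vod(6.5, 0.30, 0.12))  # ~8.1 km/s
```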
https://en.wikipedia.org/wiki/Data-driven%20programming
In computer programming, data-driven programming is a programming paradigm in which the program statements describe the data to be matched and the processing required rather than defining a sequence of steps to be taken. Standard examples of data-driven languages are the text-processing languages sed and AWK, and the document transformation language XSLT, where the data is a sequence of lines in an input stream – these are thus also known as line-oriented languages – and pattern matching is primarily done via regular expressions or line numbers. Related paradigms Data-driven programming is similar to event-driven programming, in that both are structured as pattern matching and resulting processing, and are usually implemented by a main loop, though they are typically applied to different domains. The condition/action model is also similar to aspect-oriented programming, where when a join point (condition) is reached, a pointcut (action) is executed. A similar paradigm is used in some tracing frameworks such as DTrace, where one lists probes (instrumentation points) and associated actions, which execute when the condition is satisfied. Adapting abstract data type design methods to object-oriented programming results in a data-driven design. This type of design is sometimes used in object-oriented programming to define classes during the conception of a piece of software. Applications Data-driven programming is typically applied to streams of structured data, for filtering, transforming, aggregating (such as computing statistics), or calling other programs. Typical streams include log files, delimiter-separated values, or email messages, notably for email filtering. For example, an AWK program may take as input a stream of log statements, and for example send all to the console, write ones starting with WARNING to a "WARNING" file, and send an email to a sysadmin in case any line starts with "ERROR". It could also record how many warnings are logged per day. Al
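A minimal sketch (in Python rather than AWK, with invented rules) of the data-driven style described above: the program is essentially a table of (pattern, action) pairs, and a main loop matches each input line against the table.

```python
import re
import sys

def to_console(line):
    print(line, end="")

def to_warning_file(line):
    with open("WARNING", "a") as f:
        f.write(line)

def alert_sysadmin(line):
    # Placeholder for "send an email to a sysadmin"; a real program would use
    # smtplib or an external notification service here.
    print("ALERT:", line, end="", file=sys.stderr)

# The "program" is data: an ordered list of (pattern, action) rules.
RULES = [
    (re.compile(r"."), to_console),            # every line goes to the console
    (re.compile(r"^WARNING"), to_warning_file),
    (re.compile(r"^ERROR"), alert_sysadmin),
]

for line in sys.stdin:                          # main loop over the input stream
    for pattern, action in RULES:
        if pattern.search(line):
            action(line)
```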
https://en.wikipedia.org/wiki/PBR322
pBR322 is a plasmid and was one of the first widely used E. coli cloning vectors. Created in 1977 in the laboratory of Herbert Boyer at the University of California, San Francisco, it was named after its creators, the postdoctoral researcher Francisco Bolivar Zapata and Raymond L. Rodriguez. The p stands for "plasmid," and BR for "Bolivar" and "Rodriguez." pBR322 is 4361 base pairs in length and has two antibiotic resistance genes – the gene bla encoding the ampicillin resistance (AmpR) protein, and the gene tetA encoding the tetracycline resistance (TetR) protein. It contains the origin of replication of pMB1, and the rop gene, which encodes a restrictor of plasmid copy number. The plasmid has unique restriction sites for more than forty restriction enzymes. Eleven of these forty sites lie within the TetR gene. There are two sites for restriction enzymes HindIII and ClaI within the promoter of the TetR gene. There are six key restriction sites inside the AmpR gene. The sources of these antibiotic resistance genes are pSC101 for tetracycline and RSF2124 for ampicillin. The circular sequence is numbered such that 0 is the middle of the unique EcoRI site and the count increases through the TetR gene. To inactivate the ampicillin resistance gene, for instance, the plasmid can be cut with the restriction endonuclease PstI, whose recognition site lies within the AmpR gene, and foreign DNA inserted at that position; the resulting plasmid no longer confers resistance to ampicillin. The same process of insertional inactivation can be applied to the tetracycline resistance gene. The AmpR gene is penicillin beta-lactamase. Promoters P1 and P3 are for the beta-lactamase gene. P3 is the natural promoter, and P1 is artificially created by the ligation of two different DNA fragments to create pBR322. P2 is in the same region as P1, but it is on the opposite strand and initiates transcription in the direction of the tetracycline resistance gene. Background Early cloning experiments were conducted using natural plasmids such as ColE1 and pSC101. Each of these plasmids may have its advantages and disadvantage
https://en.wikipedia.org/wiki/Mired
Contracted from the term micro reciprocal degree, the mired is a unit of measurement used to express color temperature. Values in mireds are calculated by the formula M = 1,000,000 / T, where T is the colour temperature in kelvins and M is the resulting value in mireds; the constant in the numerator is one million kelvins. The SI term for this unit is the reciprocal megakelvin (MK−1), shortened to mirek, but this term has not gained traction. For convenience, decamireds are sometimes used, with each decamired equaling ten mireds. The use of the term mired dates back to Irwin G. Priest's observation in 1932 that the just noticeable difference between two illuminants is based on the difference of the reciprocals of their temperatures, rather than the difference in the temperatures themselves. Examples A blue sky, which has a color temperature T of about 25,000 K, has a mired value of M = 40 mireds, while a standard electronic photography flash, having a color temperature T of 5000 K, has a mired value of M = 200 mireds. In photography, mireds are used to indicate the color temperature shift provided by a filter or gel for a given film and light source. For instance, to use daylight film (5700 K) to take a photograph under a tungsten light source (3200 K) without introducing a color cast, one would need a corrective filter or gel providing a mired shift of 1,000,000/5700 − 1,000,000/3200 ≈ 175.4 − 312.5 ≈ −137 mireds. This corresponds to a color temperature blue (CTB) filter. Color gels with negative mired values appear green or blue, while those with positive values appear amber or red. References Units of measurement Non-SI metric units Color
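A quick illustrative sketch of the conversion formula and of the filter-shift arithmetic in the example above:

```python
def mired(kelvin):
    """Convert a colour temperature in kelvins to mireds (1,000,000 / T)."""
    return 1_000_000 / kelvin

# Daylight film (5700 K) under tungsten light (3200 K):
# the required filter shift in mireds is negative, i.e. a blue (CTB) filter.
print(round(mired(5700) - mired(3200)))  # about -137
```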
https://en.wikipedia.org/wiki/P-form%20electrodynamics
In theoretical physics, p-form electrodynamics is a generalization of Maxwell's theory of electromagnetism. Ordinary (viz. one-form) Abelian electrodynamics We have a one-form A, a gauge symmetry A → A + dα, where α is any arbitrary fixed 0-form and d is the exterior derivative, and a gauge-invariant vector current J with density 1 satisfying the continuity equation d*J = 0, where * is the Hodge star operator. Alternatively, we may express J as a closed (n − 1)-form, but we do not consider that case here. F is a gauge-invariant 2-form defined as the exterior derivative F = dA. F satisfies the equation of motion d*F = *J (this equation obviously implies the continuity equation). This can be derived from the action S = ∫_M (1/2 F ∧ *F − A ∧ *J), where M is the spacetime manifold. p-form Abelian electrodynamics We have a p-form A, a gauge symmetry A → A + dα, where α is any arbitrary fixed (p − 1)-form and d is the exterior derivative, and a gauge-invariant p-vector current J with density 1 satisfying the continuity equation d*J = 0, where * is the Hodge star operator. Alternatively, we may express J as a closed (n − p)-form. F is a gauge-invariant (p + 1)-form defined as the exterior derivative F = dA. F satisfies the equation of motion d*F = *J (this equation obviously implies the continuity equation). This can be derived from the action S = ∫_M (1/2 F ∧ *F − A ∧ *J), where M is the spacetime manifold. Other sign conventions do exist. The Kalb–Ramond field is an example with p = 2 in string theory; the Ramond–Ramond fields whose charged sources are D-branes are examples for all values of p. In 11-dimensional supergravity or M-theory, we have a 3-form electrodynamics. Non-abelian generalization Just as we have non-abelian generalizations of electrodynamics, leading to Yang–Mills theories, we also have nonabelian generalizations of p-form electrodynamics. They typically require the use of gerbes. References Henneaux; Teitelboim (1986), "p-Form electrodynamics", Foundations of Physics 16 (7): 593–617. Navarro; Sancho (2012), "Energy and electromagnetism of a differential k-form", J. Math. Phys. 53, 102501 (2012). Electrodynamics String th
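For concreteness, a display form of these statements in the notation reconstructed above; sign conventions vary between references, as the article itself notes, so this is one common choice rather than a canonical one.

```latex
% Gauge symmetry, field strength and equation of motion for a p-form potential A
% on an n-dimensional spacetime manifold M (one common sign convention).
\[
A \mapsto A + d\alpha, \qquad F = dA, \qquad d{\star}F = {\star}J,
\]
\[
S \;=\; \int_M \left( \tfrac{1}{2}\, F \wedge {\star}F \;-\; A \wedge {\star}J \right).
\]
```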
https://en.wikipedia.org/wiki/Electronics%20manufacturing%20services
Electronics Manufacturing Services (EMS) is a term used for companies that design, manufacture, test, distribute, and provide return/repair services for electronic components and assemblies for original equipment manufacturers (OEMs). The concept is also referred to as Electronics Contract Manufacturing (ECM). Many consumer electronics are built in China, due to maintenance cost, availability of materials, and speed as opposed to other countries such as the United States. Cities such as Shenzhen and Penang have become important production centres for the industry, attracting many consumer electronics companies such as Apple Inc. Some companies such as Flex and Wistron are Original design manufacturers and providers of Electronics manufacturing services. History The EMS industry was initially established in 1961 by SCI Systems of Huntsville Alabama. The industry realized its most significant growth in the 1980s; at the time, most electronics manufacturing for large-scale product runs was handled by the OEMs in-house assembly. These new companies offered flexibility and eased human resources issues for smaller companies doing limited runs. The business model for the EMS industry is to specialize in large economies of scale in manufacturing, raw materials procurement and pooling together resources, industrial design expertise as well as create added value services such as warranty and repairs. This frees up the customer who does not need to manufacture and keep huge inventories of products. Therefore, they can respond to sudden spikes in demand more quickly and efficiently. The development of Surface Mount Technology (SMT) on printed circuit boards (PCB) allowed for the rapid assembly of electronics. By the mid-1990s the advantages of the EMS concept became compelling and OEMs began outsourcing PCB assembly (PCBA) in large scale. By the end of the 1990s and early 2000s, many OEMs sold their assembly plants to EMSs, aggressively vying for market share. A wave of con
https://en.wikipedia.org/wiki/Free%20Boolean%20algebra
In mathematics, a free Boolean algebra is a Boolean algebra with a distinguished set of elements, called generators, such that: Each element of the Boolean algebra can be expressed as a finite combination of generators, using the Boolean operations, and The generators are as independent as possible, in the sense that there are no relationships among them (again in terms of finite expressions using the Boolean operations) that do not hold in every Boolean algebra no matter which elements are chosen. A simple example The generators of a free Boolean algebra can represent independent propositions. Consider, for example, the propositions "John is tall" and "Mary is rich". These generate a Boolean algebra with four atoms, namely: John is tall, and Mary is rich; John is tall, and Mary is not rich; John is not tall, and Mary is rich; John is not tall, and Mary is not rich. Other elements of the Boolean algebra are then logical disjunctions of the atoms, such as "John is tall and Mary is not rich, or John is not tall and Mary is rich". In addition there is one more element, FALSE, which can be thought of as the empty disjunction; that is, the disjunction of no atoms. This example yields a Boolean algebra with 16 elements; in general, for finite n, the free Boolean algebra with n generators has 2^n atoms, and therefore 2^(2^n) elements. If there are infinitely many generators, a similar situation prevails except that now there are no atoms. Each element of the Boolean algebra is a combination of finitely many of the generating propositions, with two such elements deemed identical if they are logically equivalent. Another way to see why the free Boolean algebra on an n-element set has 2^(2^n) elements is to note that each element is a function from n bits to one. There are 2^n possible inputs to such a function and the function will choose 0 or 1 to output for each input, so there are 2^(2^n) possible functions. Category-theoretic definition In the language of category theory, free Boolean a
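A small sketch verifying the count for finite n by brute force: each element of the free Boolean algebra on n generators corresponds to a Boolean function of n arguments, of which there are 2^(2^n).

```python
from itertools import product

def count_boolean_functions(n):
    """Count distinct Boolean functions of n variables by enumerating truth tables."""
    inputs = list(product([0, 1], repeat=n))           # 2**n possible inputs
    tables = set(product([0, 1], repeat=len(inputs)))  # one output bit per input
    return len(tables)

print(count_boolean_functions(2))  # 16, matching the John/Mary example above
print(count_boolean_functions(3))  # 256
```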
https://en.wikipedia.org/wiki/Schubert%20calculus
In mathematics, Schubert calculus is a branch of algebraic geometry introduced in the nineteenth century by Hermann Schubert, in order to solve various counting problems of projective geometry (part of enumerative geometry). It was a precursor of several more modern theories, for example characteristic classes, and in particular its algorithmic aspects are still of current interest. The term Schubert calculus is sometimes used to mean the enumerative geometry of linear subspaces of a vector space, which is roughly equivalent to describing the cohomology ring of Grassmannians. Sometimes it is used to mean the more general enumerative geometry of algebraic varieties that are homogeneous spaces of simple Lie groups. Even more generally, Schubert calculus is often understood to encompass the study of analogous questions in generalized cohomology theories. The objects introduced by Schubert are the Schubert cells, which are locally closed sets in a Grassmannian defined by conditions of incidence of a linear subspace in projective space with a given flag. For further details see Schubert variety. The intersection theory of these cells, which can be seen as the product structure in the cohomology ring of the Grassmannian of associated cohomology classes, in principle allows the prediction of the cases where intersections of cells result in a finite set of points, which are potentially concrete answers to enumerative questions. A key result is that the Schubert cells (or rather, the classes of their Zariski closures, the Schubert cycles or Schubert varieties) span the whole cohomology ring. The combinatorial aspects mainly arise in relation to computing intersections of Schubert cycles. Lifted from the Grassmannian, which is a homogeneous space, to the general linear group that acts on it, similar questions are involved in the Bruhat decomposition and classification of parabolic subgroups (as block triangular matrices). Putting Schubert's system on a rigorous footing wa
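A standard worked example of the kind of computation meant here (not taken from the text above): in the Grassmannian of lines in projective 3-space, the Schubert class σ1 of lines meeting a fixed line satisfies

```latex
\[
\sigma_1^2 = \sigma_2 + \sigma_{1,1}, \qquad
\sigma_1^4 = \sigma_2^2 + 2\,\sigma_2\,\sigma_{1,1} + \sigma_{1,1}^2 = 2\,[\mathrm{pt}],
\]
```

recovering the classical count that exactly two lines meet four general lines in projective 3-space.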
https://en.wikipedia.org/wiki/Anonymous%20recursion
In computer science, anonymous recursion is recursion which does not explicitly call a function by name. This can be done either explicitly, by using a higher-order function – passing in a function as an argument and calling it – or implicitly, via reflection features which allow one to access certain functions depending on the current context, especially "the current function" or sometimes "the calling function of the current function". In programming practice, anonymous recursion is notably used in JavaScript, which provides reflection facilities to support it. In general programming practice, however, this is considered poor style, and recursion with named functions is suggested instead. Anonymous recursion via explicitly passing functions as arguments is possible in any language that supports functions as arguments, though this is rarely used in practice, as it is longer and less clear than explicitly recursing by name. In theoretical computer science, anonymous recursion is important, as it shows that one can implement recursion without requiring named functions. This is particularly important for the lambda calculus, which has anonymous unary functions, but is able to compute any recursive function. This anonymous recursion can be produced generically via fixed-point combinators. Use Anonymous recursion is primarily of use in allowing recursion for anonymous functions, particularly when they form closures or are used as callbacks, to avoid having to bind the name of the function. Anonymous recursion primarily consists of calling "the current function", which results in direct recursion. Anonymous indirect recursion is possible, such as by calling "the caller (the previous function)", or, more rarely, by going further up the call stack, and this can be chained to produce mutual recursion. The self-reference of "the current function" is a functional equivalent of the "this" keyword in object-oriented programming, allowing one to refer to the current context.
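A minimal sketch, in Python, of the explicit approach described above: the anonymous function never refers to itself by name, but instead receives "the current function" as an argument and calls it.

```python
# Anonymous recursion by self-application: the lambda takes itself as its
# first argument f and recurses through f rather than through a name.
factorial = (lambda f, n: 1 if n == 0 else n * f(f, n - 1))
print(factorial(factorial, 5))  # 120

# The self-application can be hidden behind a tiny fixed-point-style helper,
# so callers see an ordinary one-argument function.
def fix(f):
    return lambda n: f(f, n)

print(fix(lambda f, n: 1 if n == 0 else n * f(f, n - 1))(6))  # 720
```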
https://en.wikipedia.org/wiki/Mutual%20authentication
Mutual authentication or two-way authentication (not to be confused with two-factor authentication) refers to two parties authenticating each other at the same time in an authentication protocol. It is a default mode of authentication in some protocols (IKE, SSH) and optional in others (TLS). Mutual authentication is a desired characteristic in verification schemes that transmit sensitive data, in order to ensure data security. Mutual authentication can be accomplished with two types of credentials: usernames and passwords, and public key certificates. Mutual authentication is often employed in the Internet of Things (IoT). Writing effective security schemes in IoT systems can become challenging, especially when schemes are desired to be lightweight and have low computational costs. Mutual authentication is a crucial security step that can defend against many adversarial attacks, which otherwise can have large consequences if IoT systems (such as e-Healthcare servers) are hacked. In scheme analyses done of past works, a lack of mutual authentication had been considered a weakness in data transmission schemes. Process steps and verification Schemes that have a mutual authentication step may use different methods of encryption, communication, and verification, but they all share one thing in common: each entity involved in the communication is verified. If Alice wants to communicate with Bob, they will both authenticate the other and verify that it is who they are expecting to communicate with before any data or messages are transmitted. A mutual authentication process that exchanges user IDs may be implemented as follows: Alice sends an encrypted message to Bob to show that Alice is a valid user. Bob verifies message: Bob checks the format and timestamp. If either is incorrect or invalid, the session is aborted. The message is then decrypted with Bob's secret key, giving Alice's ID. Bob checks if the message matches a valid user. If not, the session is abor
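A minimal sketch of mutual (client-certificate) authentication with TLS, using Python's standard ssl module; the certificate and key file names are hypothetical, and a real deployment would add socket handling, hostname checks and error handling.

```python
import ssl

# Server side: present a certificate and *require* one from the client.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")  # hypothetical paths
server_ctx.load_verify_locations(cafile="clients_ca.crt")
server_ctx.verify_mode = ssl.CERT_REQUIRED   # reject clients without a valid certificate

# Client side: verify the server's certificate and present our own.
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
client_ctx.load_verify_locations(cafile="server_ca.crt")
client_ctx.load_cert_chain(certfile="client.crt", keyfile="client.key")

# Each context would then wrap a TCP socket (wrap_socket) so that both parties
# are authenticated before any application data is exchanged.
```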
https://en.wikipedia.org/wiki/Predictive%20maintenance
Predictive maintenance techniques are designed to help determine the condition of in-service equipment in order to estimate when maintenance should be performed. This approach promises cost savings over routine or time-based preventive maintenance, because tasks are performed only when warranted. Thus, it is regarded as condition-based maintenance carried out as suggested by estimations of the degradation state of an item. The main promise of predictive maintenance is to allow convenient scheduling of corrective maintenance, and to prevent unexpected equipment failures. The key is "the right information at the right time": knowing in advance which equipment needs maintenance allows the work to be planned, and brings benefits such as increased equipment lifetime, increased plant safety, fewer accidents with negative impact on the environment, and optimized spare parts handling. Predictive maintenance differs from preventive maintenance because it relies on the actual condition of equipment, rather than average or expected life statistics, to predict when maintenance will be required. Typically, machine learning approaches are adopted for the definition of the actual condition of the system and for forecasting its future states. Some of the main components that are necessary for implementing predictive maintenance are data collection and preprocessing, early fault detection, fault detection, time to failure prediction, maintenance scheduling and resource optimization. Predictive maintenance has also been considered to be one of the driving forces for improving productivity and one of the ways to achieve "just-in-time" in manufacturing. Overview Predictive maintenance evaluates the condition of equipment by performing periodic (offline) or continuous (online) equipment condition monitoring. The ultimate goal of the approach is to perform maintenance at a scheduled point in time when the maintenance activity is most cost-effective and before the equipment's performance drops below a threshold. This results in a reduction in unplanned downtime costs because of failure, where costs can be in the hundreds of thousands
https://en.wikipedia.org/wiki/Network%20Admission%20Control
Network Admission Control (NAC) refers to Cisco's version of Network Access Control, which restricts access to the network based on identity or security posture. When a network device (switch, router, wireless access point, DHCP server, etc.) is configured for NAC, it can force user or machine authentication prior to granting access to the network. In addition, guest access can be granted to a quarantine area for remediation of any problems that may have caused authentication failure. This is enforced through an inline custom network device, changes to an existing switch or router, or a restricted DHCP class. A typical (non-free) WiFi connection is a form of NAC. The user must present some sort of credentials (or a credit card) before being granted access to the network. In its initial phase, the Cisco Network Admission Control (NAC) functionality enables Cisco routers to enforce access privileges when an endpoint attempts to connect to a network. This access decision can be on the basis of information about the endpoint device, such as its current antivirus state. The antivirus state includes information such as version of antivirus software, virus definitions, and version of scan engine. Network admission control systems allow noncompliant devices to be denied access, placed in a quarantined area, or given restricted access to computing resources, thus keeping insecure nodes from infecting the network. The key component of the Cisco Network Admission Control program is the Cisco Trust Agent, which resides on an endpoint system and communicates with Cisco routers on the network. The Cisco Trust Agent collects security state information, such as what antivirus software is being used, and communicates this information to Cisco routers. The information is then relayed to a Cisco Secure Access Control Server (ACS) where access control decisions are made. The ACS directs the Cisco router to perform enforcement against the endpoint. This Cisco product has been m
https://en.wikipedia.org/wiki/Optical%20vortex
An optical vortex (also known as a photonic quantum vortex, screw dislocation or phase singularity) is a zero of an optical field; a point of zero intensity. The term is also used to describe a beam of light that has such a zero in it. The study of these phenomena is known as singular optics. Explanation In an optical vortex, light is twisted like a corkscrew around its axis of travel. Because of the twisting, the light waves at the axis itself cancel each other out. When projected onto a flat surface, an optical vortex looks like a ring of light, with a dark hole in the center. The vortex is given a number, called the topological charge, according to how many twists the light does in one wavelength. The number is always an integer, and can be positive or negative, depending on the direction of the twist. The higher the number of the twist, the faster the light is spinning around the axis. This spinning carries orbital angular momentum with the wave train, and will induce torque on an electric dipole. Orbital angular momentum is distinct from the more commonly encountered spin angular momentum, which produces circular polarization. Orbital angular momentum of light can be observed in the orbiting motion of trapped particles. Interfering an optical vortex with a plane wave of light reveals the spiral phase as concentric spirals. The number of arms in the spiral equals the topological charge. Optical vortices are studied by creating them in the lab in various ways. They can be generated directly in a laser, or a laser beam can be twisted into vortex using any of several methods, such as computer-generated holograms, spiral-phase delay structures, or birefringent vortices in materials. Properties An optical singularity is a zero of an optical field. The phase in the field circulates around these points of zero intensity (giving rise to the name vortex). Vortices are points in 2D fields and lines in 3D fields (as they have codimension two). Integrating the phase
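A small numerical sketch (not from the article) of the structure described above for a beam of topological charge l: the phase winds l times around the axis and the intensity vanishes at the centre.

```python
import numpy as np

l = 2                                    # topological charge (number of twists)
x = np.linspace(-3, 3, 201)
X, Y = np.meshgrid(x, x)
r, theta = np.hypot(X, Y), np.arctan2(Y, X)

# Illustrative vortex field: helical phase exp(i*l*theta) with an r**|l| core,
# similar in shape to a Laguerre-Gaussian mode.
field = (r ** abs(l)) * np.exp(-r**2) * np.exp(1j * l * theta)
intensity = np.abs(field) ** 2

print(intensity[100, 100])   # 0: the dark hole on the beam axis
print(intensity.max())       # the bright ring surrounding it
```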
https://en.wikipedia.org/wiki/Semantide
Semantides (or semantophoretic molecules) are biological macromolecules that carry genetic information or a transcript thereof. Three different categories of semantides are distinguished: primary, secondary and tertiary. Primary semantides are genes, which consist of DNA. Secondary semantides are chains of messenger RNA, which are transcribed from DNA. Tertiary semantides are polypeptides, which are translated from messenger RNA. In eukaryotic organisms, primary semantides may consist of nuclear, mitochondrial or plastid DNA. Not all primary semantides ultimately form tertiary semantides. Some primary semantides are not transcribed into mRNA (non-coding DNA) and some secondary semantides are not translated into polypeptides (non-coding RNA). The complexity of semantides varies greatly. For tertiary semantides, large globular polypeptide chains are most complex while structural proteins, consisting of repeating simple sequences, are least complex. The term semantide and related terms were coined by Linus Pauling and Emile Zuckerkandl. Although semantides are the major type of data used in modern phylogenetics, the term itself is not commonly used. Related terms Isosemantic DNA or RNA molecules that differ in base sequence but translate into identical polypeptide chains are referred to as isosemantic. Episemantic Molecules that are synthesized by enzymes (tertiary semantides) are referred to as episemantic molecules. Episemantic molecules have a larger variety in types than semantides, which only consist of three types (DNA, RNA or polypeptides). Not all polypeptides are tertiary semantides. Some, mainly small polypeptides, can also be episemantic molecules. Asemantic Molecules that are not produced by an organism are referred to as asemantic molecules, because they do not contain any genetic information. Asemantic molecules may be changed into episemantic molecules by anabolic processes. Asemantic molecules may also become semantic molecules when they integrate
https://en.wikipedia.org/wiki/Nano-ITX
Nano-ITX is a computer motherboard form factor first proposed by VIA Technologies at CeBIT in March 2003, and implemented in late 2005. Nano-ITX boards measure , and are fully integrated, very low power consumption motherboards with many uses, but targeted at smart digital entertainment devices such as DVRs, set-top boxes, media centers, car PCs, and thin devices. Nano-ITX motherboards have slots for SO-DIMM. There are four Nano-ITX motherboard product lines so far, VIA's EPIA N, EPIA NL, EPIA NX, and the VIA EPIA NR. These boards are available from a wide variety of manufacturers supporting numerous different CPU platforms. Udoo has now released at least 1 nano-ITX board: the Udoo Bolt. See also Mini-ITX Pico-ITX Mobile-ITX EPIA, mini-ITX and nano-ITX motherboards from VIA Ultra-Mobile PC Minimig, is an open source re-implementation of an Amiga 500 in Nano-ITX format References External links Jetway Computer Corp. J8F9 AMD Nano-ITX Mainboards Nano ITX Manufacturer, Mainboard OEMs, Daughterboards etc. VIA EPIA N-Series Nano-ITX Mainboard VIA EPIA NL-Series Nano-ITX Mainboard VIA EPIA NX-Series Nano-ITX Mainboard VIA EPIA NR-Series Nano-ITX Mainboard Digital video recorders Motherboard Motherboard form factors Set-top box
https://en.wikipedia.org/wiki/Quantitative%20psychology
Quantitative psychology is a field of scientific study that focuses on the mathematical modeling, research design and methodology, and statistical analysis of psychological processes. It includes tests and other devices for measuring cognitive abilities. Quantitative psychologists develop and analyze a wide variety of research methods, including those of psychometrics, a field concerned with the theory and technique of psychological measurement. Psychologists have long contributed to statistical and mathematical analysis, and quantitative psychology is now a specialty recognized by the American Psychological Association. Doctoral degrees are awarded in this field in a number of universities in Europe and North America, and quantitative psychologists have been in high demand in industry, government, and academia. Their training in both social science and quantitative methodology provides a unique skill set for solving both applied and theoretical problems in a variety of areas. History Quantitative psychology has its roots in early experimental psychology when, in the nineteenth century, the scientific method was first systematically applied to psychological phenomena. Notable contributions included E. H. Weber's studies of tactile sensitivity (1830s), Fechner's development and use of the psychophysical methods (1850-1860), and Helmholtz's research on vision and audition beginning after 1850. Wilhelm Wundt is often called the "founder of experimental psychology", because he called himself a psychologist and opened a psychological laboratory in 1879 where many researchers came to study. The work of these and many others helped put to rest the assertion, by theorists such as Immanuel Kant, that psychology could not become a science because precise experiments on the human mind were impossible. Intelligence testing Intelligence testing has long been an important branch of quantitative psychology. The nineteenth-century English statistician Francis Galton, a pione
https://en.wikipedia.org/wiki/Houndstooth
Houndstooth, hounds tooth check or hound's tooth (and similar spellings), also known as dogstooth, dogtooth, dog's tooth, (), (), is a duotone textile pattern characterized by broken checks or abstract four-pointed shapes, traditionally in black and white or such contrasting dark and light pattern. Design and history The classic houndstooth pattern is an example of a tessellation. It is a duotone textile pattern characterized by broken checks or abstract four-pointed shapes, traditionally in black and white or such contrasting dark and light pattern, although other colour combinations are also often applied. The oldest Bronze Age houndstooth textiles found so far are from the Hallstatt Celtic Salt Mine, Austria, 1500-1200 BC. One of the best known early occurrence of houndstooth is the Gerum Cloak, a garment uncovered in a Swedish peat bog, dated to between 360 and 100 BC. Contemporary houndstooth checks may have originated as a pattern in woven tweed cloth from the Scottish Lowlands, but are now used in many other woven fabric aside from wool. The traditional houndstooth check is made with alternating bands of four dark and four light threads in both warp and weft/filling woven in a simple 2:2 twill, two over/two under the warp, advancing one thread each pass. In an early reference to houndstooth, De Pinna, a New York City–based men's and women's high-end clothier founded in 1885, included houndstooth checks along with gun club checks and Scotch plaids as part of its 1933 spring men's suits collection. The actual term houndstooth for the pattern is not recorded before 1936. Oversized houndstooth patterns were also employed prominently at Alexander McQueen's Fall 2009 Collection, entitled Horn of Plenty. The patterns were a reference to Christian Dior's signature tweed suits. Houndstooth patterns, especially black-and-white houndstooth, have long been associated regionally with the University of Alabama (UA). This is because the longtime UA football coach Paul "
https://en.wikipedia.org/wiki/Open%20%28system%20call%29
For most file systems, a program initializes access to a file in a file system using the open system call. This allocates resources associated with the file (the file descriptor), and returns a handle that the process will use to refer to that file. In some cases the open is performed by the first access. The same file may be opened simultaneously by several processes, and even by the same process, resulting in several file descriptors for the same file, depending on the file organization and filesystem. Operations on the descriptors, such as moving the file pointer or closing it, are independent: they do not affect other descriptors for the same file. Operations on the file, such as a write, can be seen by operations on the other descriptors: a later read can read the newly written data. During the open, the filesystem may allocate memory for buffers, or it may wait until the first operation. The absolute file path is resolved. This may include connecting to a remote host and notifying an operator that a removable medium is required. It may include the initialization of a communication device. At this point an error may be returned if the host or medium is not available. The first access to at least the directory within the filesystem is performed. An error will usually be returned if the higher level components of the path (directories) cannot be located or accessed. An error will be returned if the file is expected to exist and it does not or if the file should not already exist and it does. If the file is expected to exist and it does, the file access, as restricted by permission flags within the file meta data or access control list, is validated against the requested type of operations. This usually requires an additional filesystem access although in some filesystems meta-flags may be part of the directory structure. If the file is being created, the filesystem may allocate the default initial amount of storage or a specified amount depending on the file
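A minimal sketch of the call from a POSIX-style program, here via Python's os module; the path and flags are illustrative.

```python
import os

# Ask the filesystem to create the file if needed and open it for writing.
# The returned integer is the file descriptor, the per-process handle.
fd = os.open("example.txt", os.O_WRONLY | os.O_CREAT, 0o644)
try:
    os.write(fd, b"hello, world\n")
finally:
    os.close(fd)   # release the descriptor and any resources tied to it
```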
https://en.wikipedia.org/wiki/Enumerative%20geometry
In mathematics, enumerative geometry is the branch of algebraic geometry concerned with counting numbers of solutions to geometric questions, mainly by means of intersection theory. History The problem of Apollonius is one of the earliest examples of enumerative geometry. This problem asks for the number and construction of circles that are tangent to three given circles, points or lines. In general, the problem for three given circles has eight solutions, which can be seen as 2^3, each tangency condition imposing a quadratic condition on the space of circles. However, for special arrangements of the given circles, the number of solutions may also be any integer from 0 (no solutions) to six; there is no arrangement for which there are seven solutions to Apollonius' problem. Key tools A number of tools, ranging from the elementary to the more advanced, include: Dimension counting Bézout's theorem Schubert calculus, and more generally characteristic classes in cohomology The connection of counting intersections with cohomology is Poincaré duality The study of moduli spaces of curves, maps and other geometric objects, sometimes via the theory of quantum cohomology. The study of quantum cohomology, Gromov–Witten invariants and mirror symmetry led to significant progress on the Clemens conjecture. Enumerative geometry is very closely tied to intersection theory. Schubert calculus Enumerative geometry saw spectacular development towards the end of the nineteenth century, at the hands of Hermann Schubert. For this purpose he introduced the Schubert calculus, which has proved of fundamental geometrical and topological value in broader areas. The specific needs of enumerative geometry were not addressed until some further attention was paid to them in the 1960s and 1970s (as pointed out for example by Steven Kleiman). Intersection numbers had been rigorously defined (by André Weil as part of his foundational programme 1942–6, and again subsequently), but this did
https://en.wikipedia.org/wiki/Opera%20Mini
Opera Mini is a mobile web browser made by Opera. It was primarily designed for the Java ME platform, as a low-end sibling for Opera Mobile, but only the Android build was still under active development. It had previously been developed for iOS, Windows 10 Mobile, Windows Phone 8.1, BlackBerry, Symbian, and Bada. Opera Mini was derived from the Opera web browser. Opera Mini requests web pages through Opera Software's compression proxy server. The compression server processes and compresses requested web pages before sending them to the mobile phone. The compression ratio is 90% and the transfer speed is increased by two to three times as a result. The pre-processing increases compatibility with web pages not designed for mobile phones. However, interactive sites which depend upon the device processing JavaScript do not work properly. In July 2012, Opera Software reported that Opera Mini had 168.8 million users as of March 2012. In February 2013, Opera reported 300 million unique Opera Mini active users and 150 billion page views served during that month. This represented an increase of 25 million users from September 2012. History Origin Opera Mini was derived from the Opera web browser for personal computers, which has been publicly available since 1996. Opera Mini was originally intended for use on mobile phones not capable of running a conventional Web browser. It was introduced on 10 August 2005, as a pilot project in cooperation with the Norwegian television station TV 2, and only available to TV 2 customers. The beta version was made available in Sweden, Denmark, Norway, and Finland on 20 October 2005. After the final version was launched in Germany on 10 November 2005, and quietly released to all countries through the Opera Mini website in December, the browser was officially launched worldwide on 24 January 2006. On 3 May 2006, Opera Mini 2.0 was released. It included new features such as the ability to download files, new custom skins, more search eng
https://en.wikipedia.org/wiki/Line%20spectral%20pairs
Line spectral pairs (LSP) or line spectral frequencies (LSF) are used to represent linear prediction coefficients (LPC) for transmission over a channel. LSPs have several properties (e.g. smaller sensitivity to quantization noise) that make them superior to direct quantization of LPCs. For this reason, LSPs are very useful in speech coding. LSP representation was developed by Fumitada Itakura, at Nippon Telegraph and Telephone (NTT) in 1975. From 1975 to 1981, he studied problems in speech analysis and synthesis based on the LSP method. In 1980, his team developed an LSP-based speech synthesizer chip. LSP is an important technology for speech synthesis and coding, and in the 1990s was adopted by almost all international speech coding standards as an essential component, contributing to the enhancement of digital speech communication over mobile channels and the internet worldwide. LSPs are used in the code-excited linear prediction (CELP) algorithm, developed by Bishnu S. Atal and Manfred R. Schroeder in 1985. Mathematical foundation The LP polynomial A(z) can be expressed as A(z) = 0.5[P(z) + Q(z)], where: P(z) = A(z) + z^-(p+1) A(z^-1) and Q(z) = A(z) - z^-(p+1) A(z^-1). By construction, P is a palindromic polynomial and Q an antipalindromic polynomial; physically P(z) corresponds to the vocal tract with the glottis closed and Q(z) with the glottis open. It can be shown that: The roots of P and Q lie on the unit circle in the complex plane. The roots of P alternate with those of Q as we travel around the circle. As the coefficients of P and Q are real, the roots occur in conjugate pairs. The Line Spectral Pair representation of the LP polynomial consists simply of the location of the roots of P and Q (i.e. the angles ω such that z = e^(jω)). As they occur in pairs, only half of the actual roots (conventionally those between 0 and π) need be transmitted. The total number of coefficients for both P and Q is therefore equal to p, the number of original LP coefficients (not counting a_0 = 1). A common algorithm for finding these is to evaluate the polynomial at a sequence of closely
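A minimal sketch, assuming the polynomial construction reconstructed above and the coefficient convention A(z) = 1 + a1 z^-1 + ... + ap z^-p, of recovering line spectral frequencies by root-finding with NumPy; production codecs instead evaluate the polynomials on a fine grid, as described above.

```python
import numpy as np

def lsf_from_lpc(a):
    """Line spectral frequencies in (0, pi) from LPC coefficients a = [1, a1, ..., ap]."""
    a = np.asarray(a, dtype=float)
    # P(z) = A(z) + z^-(p+1) A(1/z)   (palindromic)
    # Q(z) = A(z) - z^-(p+1) A(1/z)   (antipalindromic)
    p_poly = np.concatenate([a, [0.0]]) + np.concatenate([[0.0], a[::-1]])
    q_poly = np.concatenate([a, [0.0]]) - np.concatenate([[0.0], a[::-1]])
    freqs = []
    for poly in (p_poly, q_poly):
        roots = np.roots(poly)                             # lie on the unit circle
        freqs.extend(np.angle(roots[roots.imag > 1e-9]))   # one of each conjugate pair
    return np.sort(np.array(freqs))

# Toy 2nd-order predictor (invented coefficients) -> two interleaved frequencies.
print(lsf_from_lpc([1.0, -0.9, 0.4]))
```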
https://en.wikipedia.org/wiki/Structural%20health%20monitoring
Structural health monitoring (SHM) involves the observation and analysis of a system over time using periodically sampled response measurements to monitor changes to the material and geometric properties of engineering structures such as bridges and buildings. In an operational environment, structures degrade with age and use. Long-term SHM outputs periodically updated information regarding the ability of the structure to continue performing its intended function. After extreme events, such as earthquakes or blast loading, SHM is used for rapid condition screening. SHM is intended to provide reliable information regarding the integrity of the structure in near real time. The SHM process involves selecting the excitation methods, the sensor types, number and locations, and the data acquisition/storage/transmittal hardware, commonly called health and usage monitoring systems. Measurements may be taken either to directly detect any degradation or damage that may occur to a system, or to measure the size and frequency of loads experienced so that the state of the system can be predicted indirectly. To directly monitor the state of a system it is necessary to identify features in the acquired data that allow one to distinguish between the undamaged and damaged structure. One of the most common feature extraction methods is based on correlating measured system response quantities, such as vibration amplitude or frequency, with observations of the degraded system. Damage accumulation testing, during which significant structural components of the system under study are degraded by subjecting them to realistic loading conditions, can also be used to identify appropriate features. This process may involve induced-damage testing, fatigue testing, corrosion growth, or temperature cycling to accumulate certain types of damage in an accelerated fashion. Introduction Qualitative and non-continuous methods have long been used to evaluate structures for their capacity to serve t
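As a minimal illustration of the feature-extraction idea described above, the Python sketch below uses synthetic acceleration records and treats a drop in the dominant vibration frequency as a simple damage-sensitive feature; the 12 Hz mode, the 2% threshold, and the function names are assumptions made for the sketch, not part of any standard:

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) of the largest spectral peak."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]      # skip the DC bin

fs = 1000.0                         # sampling rate, Hz
t = np.arange(0, 10, 1.0 / fs)

# Synthetic records: a healthy structure vibrating at 12 Hz, and a "damaged"
# one whose stiffness loss has lowered the natural frequency to 11.4 Hz.
baseline = np.sin(2 * np.pi * 12.0 * t) + 0.1 * np.random.randn(len(t))
current  = np.sin(2 * np.pi * 11.4 * t) + 0.1 * np.random.randn(len(t))

f0 = dominant_frequency(baseline, fs)
f1 = dominant_frequency(current, fs)
shift = (f0 - f1) / f0
print(f"baseline {f0:.2f} Hz, current {f1:.2f} Hz, relative drop {shift:.1%}")
if shift > 0.02:                    # threshold chosen arbitrarily for the sketch
    print("possible damage: natural frequency has dropped noticeably")
```

Real deployments replace the synthetic signals with accelerometer data and compare many such features against a statistically characterized baseline rather than a single threshold.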
https://en.wikipedia.org/wiki/Non-functional%20requirement
In systems engineering and requirements engineering, a non-functional requirement (NFR) is a requirement that specifies criteria that can be used to judge the operation of a system, rather than specific behaviours. They are contrasted with functional requirements that define specific behavior or functions. The plan for implementing functional requirements is detailed in the system design. The plan for implementing non-functional requirements is detailed in the system architecture, because they are usually architecturally significant requirements. Definition Broadly, functional requirements define what a system is supposed to do and non-functional requirements define how a system is supposed to be. Functional requirements are usually in the form of "system shall do <requirement>", an individual action or part of the system, perhaps explicitly in the sense of a mathematical function, a black-box description of input, output, process and control (a functional model or IPO model). In contrast, non-functional requirements are in the form of "system shall be <requirement>", an overall property of the system as a whole or of a particular aspect and not a specific function. The system's overall properties commonly mark the difference between whether the development project has succeeded or failed. Non-functional requirements are often called the "quality attributes" of a system. Other terms for non-functional requirements are "qualities", "quality goals", "quality of service requirements", "constraints", "non-behavioral requirements", or "technical requirements". Informally these are sometimes called the "ilities", from attributes like stability and portability. Qualities—that is, non-functional requirements—can be divided into two main categories: Execution qualities, such as safety, security and usability, which are observable during operation (at run time). Evolution qualities, such as testability, maintainability, extensibility and scalability, which are embodied in the
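The contrast between the two requirement forms can be made concrete by how each would be verified. The sketch below is hypothetical (the function, figures, and performance budget are invented for illustration): the functional requirement is checked as a behavioural assertion, while a non-functional performance requirement is checked as a timing budget on the same operation:

```python
import time

# Hypothetical system under test; names and thresholds are illustrative only.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100.0), 2)

# Functional requirement ("the system shall do X"):
# applying a 15% discount to 100.00 shall yield 85.00.
assert apply_discount(100.00, 15) == 85.00

# Non-functional requirement ("the system shall be Y"):
# the operation shall be fast; here, 10,000 calls within 0.5 s is used
# as an arbitrary stand-in for a real performance budget.
start = time.perf_counter()
for _ in range(10_000):
    apply_discount(100.00, 15)
elapsed = time.perf_counter() - start
assert elapsed < 0.5, f"performance budget exceeded: {elapsed:.3f}s"
print("functional and non-functional checks passed")
```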
https://en.wikipedia.org/wiki/Electrolyte%E2%80%93insulator%E2%80%93semiconductor%20sensor
Within electronics, an electrolyte–insulator–semiconductor (EIS) sensor is a sensor made of three components: an electrolyte containing the chemical species to be measured; an insulator that allows field-effect interaction without leakage currents between the other two components; and a semiconductor to register the chemical changes. The EIS sensor can be used in combination with other structures, for example to construct a light-addressable potentiometric sensor (LAPS). References Sensors
https://en.wikipedia.org/wiki/RNSAP
RNSAP (Radio Network Subsystem Application Part) is a 3GPP signalling protocol responsible for communications between RNCs (Radio Network Controllers), defined in 3GPP specification TS 25.423. It is carried on the Iur interface and provides functionality needed for soft handovers and SRNS (Serving Radio Network Subsystem) relocation (handoff between RNCs). It defines signalling between RNCs, including the SRNC (Serving RNC) and DRNC (drift RNC). RNSAP Layer Architecture On the Iur interface, the SRNC and the DRNC each terminate the same protocol stack: RNSAP on top of a convergence protocol, carried over AAL5 and ATM, with physical links connecting the two nodes. Procedures RNSAP Basic Mobility Procedures - This set of procedures is used to handle mobility within the UTRAN. This is the most important of the RNSAP procedures. The procedures belonging to this set include SRNC relocation, inter-RNC cell update and UTRAN registration area update. RNSAP DCH Procedures - This set of procedures is used to handle dedicated channel traffic (it includes DCH, DSCH and TDD USCH) between two RNCs. Unlike the basic mobility procedures, which are used only for signalling, this set of procedures provides support for data transfer over the Iur interface. The data transfer takes place using a frame protocol. The procedures belonging to this set include establishment, modification and release of dedicated channels in the DRNC due to hard and soft handover, set-up/release of dedicated transport connections over the Iur interface, and data transfer for dedicated channels. RNSAP Common Transport Channel Procedures - This set of procedures is used to handle common and shared channel traffic (it excludes DCH, DSCH and TDD USCH) between two RNCs. In particular, this set of procedures facilitates the set-up and release of common channel transport
https://en.wikipedia.org/wiki/CHAIN%20%28industry%20standard%29
The CECED Convergence Working Group has defined a new platform, called CHAIN (Ceced Home Appliances Interoperating Network), which defines a protocol for interconnecting different home appliances in a single multibrand system. It allows for control and automation of all basic appliance-related services in a home: e.g., remote control of appliance operation, energy or load management, remote diagnostics and automatic maintenance support to appliances, downloading and updating of data, programs and services (possibly from the Internet). See also CECED KNX/EIB LonWorks OSGi Home automation Interoperability
https://en.wikipedia.org/wiki/Software%20visualization
Software visualization or software visualisation refers to the visualization of information of and related to software systems—either the architecture of their source code or metrics of their runtime behavior—and their development process by means of static, interactive or animated 2-D or 3-D visual representations of their structure, execution, behavior, and evolution. Software system information Software visualization uses a variety of information available about software systems. Key information categories include: implementation artifacts such as source code, software metric data from measurements or from reverse engineering, traces that record execution behavior, software testing data (e.g., test coverage), and software repository data that tracks changes. Objectives The objectives of software visualization are to support the understanding of software systems (i.e., their structure) and algorithms (e.g., by animating the behavior of sorting algorithms) as well as the analysis and exploration of software systems and their anomalies (e.g., by showing classes with high coupling) and their development and evolution. One of the strengths of software visualization is to combine and relate information about software systems that is not inherently linked, for example by projecting code changes onto software execution traces. Software visualization can be used as a tool and technique to explore and analyze software system information, e.g., to discover anomalies, similar to the process of visual data mining. For example, software visualization is used for monitoring activities, such as tracking code quality or team activity. Visualization is not inherently a method for software quality assurance. Software visualization contributes to software intelligence by helping engineers discover and master the inner components of software systems. Types Tools for software visualization might be used to visualize source code and quality defects during software development an
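As a small, generic example of visualizing a software metric (not tied to any particular tool mentioned above), the following Python sketch counts lines of code per source file in a project and plots the largest files; the directory, file extension, and cut-off of 20 files are arbitrary choices for the sketch:

```python
import os
import matplotlib.pyplot as plt

def loc_per_file(root, extension=".py"):
    """Count lines per source file under `root` (a crude size metric)."""
    counts = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(extension):
                path = os.path.join(dirpath, name)
                with open(path, errors="ignore") as fh:
                    counts[os.path.relpath(path, root)] = sum(1 for _ in fh)
    return counts

counts = loc_per_file(".")                        # analyse the current project
top = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:20]
plt.barh([name for name, _ in top], [n for _, n in top])
plt.xlabel("lines of code")
plt.title("Largest source files (a simple size metric)")
plt.tight_layout()
plt.show()
```

Richer visualizations follow the same pattern: extract one of the information categories listed above (metrics, traces, repository history) and map it onto a visual encoding such as size, color, or position.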
https://en.wikipedia.org/wiki/Ternary%20Golay%20code
In coding theory, the ternary Golay codes are two closely related error-correcting codes. The code generally known simply as the ternary Golay code is an [11, 6, 5]-code, that is, it is a linear code over a ternary alphabet; the relative distance of the code is as large as it possibly can be for a ternary code, and hence, the ternary Golay code is a perfect code. The extended ternary Golay code is a [12, 6, 6] linear code obtained by adding a zero-sum check digit to the [11, 6, 5] code. In finite group theory, the extended ternary Golay code is sometimes referred to as the ternary Golay code. Properties Ternary Golay code The ternary Golay code consists of 3^6 = 729 codewords. Its parity check matrix is Any two different codewords differ in at least 5 positions. Every ternary word of length 11 has a Hamming distance of at most 2 from exactly one codeword. The code can also be constructed as the quadratic residue code of length 11 over the finite field F3 (i.e., the Galois field GF(3)). Used in a football pool with 11 games, the ternary Golay code corresponds to 729 bets and guarantees exactly one bet with at most 2 wrong outcomes. The set of codewords with Hamming weight 5 is a 3-(11,5,4) design. The generator matrix given by Golay (1949, Table 1) is The automorphism group of the (original) ternary Golay code is the Mathieu group M11, which is the smallest of the sporadic simple groups. Extended ternary Golay code The complete weight enumerator of the extended ternary Golay code is The automorphism group of the extended ternary Golay code is 2.M12, where M12 is the Mathieu group M12. The extended ternary Golay code can be constructed as the span of the rows of a Hadamard matrix of order 12 over the field F3. Consider all codewords of the extended code which have just six nonzero digits. The sets of positions at which these nonzero digits occur form the Steiner system S(5, 6, 12). A generator matrix for the extended ternary Golay code is The corresponding par
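The stated parameters can be checked by brute force. The Python sketch below builds the [11, 6, 5] code as a cyclic (quadratic-residue) code over GF(3) from one commonly used generator polynomial (the particular polynomial chosen here is an assumption; several equivalent generators exist) and enumerates all 3^6 = 729 codewords to confirm the minimum distance:

```python
import itertools
import numpy as np

# One generator polynomial of the [11, 6, 5] ternary Golay code, written with
# ascending powers of x: g(x) = 2 + x^2 + 2x^3 + x^4 + x^5 over GF(3).
g = np.array([2, 0, 1, 2, 1, 1])

# Generator matrix: the six shifts x^i * g(x), i = 0..5, padded to length 11.
G = np.zeros((6, 11), dtype=int)
for i in range(6):
    G[i, i:i + 6] = g

# Enumerate all 3^6 = 729 codewords and collect their Hamming weights.
weights = []
for msg in itertools.product(range(3), repeat=6):
    codeword = np.dot(msg, G) % 3
    w = np.count_nonzero(codeword)
    if w:
        weights.append(w)

print("codewords:", len(weights) + 1)           # 729 including the zero word
print("minimum distance:", min(weights))        # 5
print("weight-5 codewords:", weights.count(5))  # 132
```

Run as-is, this reports 729 codewords, minimum distance 5, and 132 words of weight 5, whose supports are the blocks of the 3-(11,5,4) design mentioned above (two codewords, c and -c, per support).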
https://en.wikipedia.org/wiki/Giant%20cell
A giant cell (also known as a multinucleated giant cell, or multinucleate giant cell) is a mass formed by the union of several distinct cells (usually histiocytes), often forming a granuloma. Although there is typically a focus on the pathological aspects of multinucleate giant cells (MGCs), they also play many important physiological roles. Osteoclasts are a type of MGC that are critical for the maintenance, repair, and remodeling of bone and are present normally in a healthy human body. Osteoclasts are frequently classified and discussed separately from other MGCs which are more closely linked with disease. Non-osteoclast MGCs can arise in response to an infection, such as tuberculosis, herpes, or HIV, or as part of a foreign body reaction. These MGCs are cells of monocyte or macrophage lineage fused together. Similar to their monocyte precursors, they can phagocytose foreign materials. However, their large size and extensive membrane ruffling make them better equipped to clear up larger particles. They utilize activated CR3s to ingest complement-opsonized targets. Non-osteoclast MGCs are also responsible for the clearance of cell debris, which is necessary for tissue remodeling after injuries. Types include foreign-body giant cells, Langhans giant cells, Touton giant cells, Giant-cell arteritis, and Reed–Sternberg cells. History Osteoclasts were discovered in 1873. However, it wasn't until the development of the organ culture in the 1970s that their origin and function could be deduced. Although there was a consensus early on about the physiological function of osteoclasts, theories on their origins were heavily debated. Many believed osteoclasts and osteoblasts came from the same progenitor cell. Because of this, osteoclasts were thought to be derived from cells in connective tissue. Studies that observed that bone resorption could be restored by bone marrow and spleen transplants helped prove osteoclasts' hematopoietic origin. Other multinucleated giant ce
https://en.wikipedia.org/wiki/List%20of%20game%20theorists
This is a list of notable economists, mathematicians, political scientists, and computer scientists whose work has added substantially to the field of game theory. Derek Abbott – quantum game theory and Parrondo's games Susanne Albers – algorithmic game theory and algorithm analysis Kenneth Arrow – voting theory (Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel 1972) Robert Aumann – equilibrium theory (Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel 2005) Robert Axelrod – repeated Prisoner's Dilemma Tamer Başar – dynamic game theory and its application to robust control of systems with uncertainty Cristina Bicchieri – epistemology of game theory Olga Bondareva – Bondareva–Shapley theorem Steven Brams – cake cutting, fair division, theory of moves Jennifer Tour Chayes – algorithmic game theory and auction algorithms John Horton Conway – combinatorial game theory William Hamilton – evolutionary biology John Harsanyi – equilibrium theory (Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel 1994) Monika Henzinger – algorithmic game theory and information retrieval Naira Hovakimyan – differential games and adaptive control Peter L. Hurd – evolution of aggressive behavior Rufus Isaacs – differential games Ehud Kalai – Kalai–Smorodinsky bargaining solution, rational learning, strategic complexity Anna Karlin – algorithmic game theory and online algorithms Michael Kearns – algorithmic game theory and computational social science Sarit Kraus – non-monotonic reasoning John Maynard Smith – evolutionary biology Oskar Morgenstern – social organization John Forbes Nash – Nash equilibrium (Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel 1994) John von Neumann – Minimax theorem, expected utility, social organization, arms race Abraham Neyman – Stochastic games, Shapley value J. M. R. Parrondo – games with a reversal of fortune, such as Parrondo's games Charles E. M. Pearce – games appli