https://en.wikipedia.org/wiki/Database%20tuning
Database tuning describes a group of activities used to optimize and homogenize the performance of a database. It usually overlaps with query tuning, but refers to design of the database files, selection of the database management system (DBMS) application, and configuration of the database's environment (operating system, CPU, etc.). Database tuning aims to maximize use of system resources to perform work as efficiently and rapidly as possible. Most systems are designed to manage their use of system resources, but there is still much room to improve their efficiency by customizing their settings and configuration for the database and the DBMS. I/O tuning Hardware and software configuration of disk subsystems are examined: RAID levels and configuration, block and stripe size allocation, and the configuration of disks, controller cards, storage cabinets, and external storage systems such as SANs. Transaction logs and temporary spaces are heavy consumers of I/O, and affect performance for all users of the database. Placing them appropriately is crucial. Frequently joined tables and indexes are placed so that as they are requested from file storage, they can be retrieved in parallel from separate disks simultaneously. Frequently accessed tables and indexes are placed on separate disks to balance I/O and prevent read queuing. DBMS tuning DBMS users and DBA experts DBMS tuning refers to tuning of the DBMS and the configuration of the memory and processing resources of the computer running the DBMS. This is typically done through configuring the DBMS, but the resources involved are shared with the host system. Tuning the DBMS can involve setting the recovery interval (time needed to restore the state of data to a particular point in time), assigning parallelism (the breaking up of work from a single query into tasks assigned to different processing resources), and network protocols used to communicate with database consumers. Memory is allocated for data, executio
https://en.wikipedia.org/wiki/Jean%20Tirole
Jean Tirole (born 9 August 1953) is a French professor of economics at Toulouse 1 Capitole University. He focuses on industrial organization, game theory, banking and finance, and psychology. In particular, he has focused on the regulation of markets, which he believes should not hinder innovation while still keeping the rules fair: "Many feel that the world is the prey of private interests that know no compassion or mercy." In 2014 he was awarded the Nobel Memorial Prize in Economic Sciences for his analysis of market power and regulation. Education Tirole received engineering degrees from the École Polytechnique in Paris in 1976, and from the École nationale des ponts et chaussées in 1978. He graduated as a member of the elite Corps of Bridges, Waters and Forests. Tirole pursued graduate studies at Paris Dauphine University and was awarded a DEA degree in 1976 and a Doctorat de troisième cycle in decision mathematics in 1978. In 1981, he received a Ph.D. in economics from the Massachusetts Institute of Technology for his thesis titled Essays in economic theory, under the supervision of Eric Maskin. He did not begin thinking about studying economics until he was 21 years old: "When I was 21, I discovered this discipline, which is very rigorous, but at the same time it is still a social science. There's a lot of that human aspect, and that was important to me. So I started to realize that it could be interesting, and being a mathematician helped me a lot." Career Tirole is chairman of the board of the Jean-Jacques Laffont Foundation at the Toulouse School of Economics, and scientific director of the Industrial Economics Institute (IDEI) at Toulouse 1 University Capitole. After receiving his doctorate from MIT in 1981, he worked as a researcher at the École nationale des ponts et chaussées until 1984. From 1984 to 1991, he worked as Professor of Economics at MIT. His work by 1988 helped to define modern industrial organization theory by organising and
https://en.wikipedia.org/wiki/Incremental%20backup
An incremental backup is one in which successive copies of the data contain only the portion that has changed since the preceding backup copy was made. When a full recovery is needed, the restoration process would need the last full backup plus all the incremental backups until the point of restoration. Incremental backups are often desirable as they reduce storage space usage, and are quicker to perform than differential backups. Variants Incremental The most basic form of incremental backup consists of identifying, recording and thus preserving only those files that have changed since the last backup. Since the changes between backups are typically small, incremental backups are much smaller and quicker than full backups. For instance, following a full backup on Friday, a Monday backup will contain only those files that changed since Friday. A Tuesday backup contains only those files that changed since Monday, and so on. A full restoration of data will naturally be slower, since all increments must be restored. Should any one of the copies created fail, including the first (full), restoration will be incomplete. A Unix example would be: rsync -e ssh -va --link-dest=$dst/hourly.1 $remoteserver:$remotepath $dst/hourly.0 The use of rsync's --link-dest option is what makes this command an example of an incremental backup. Multilevel incremental A more sophisticated incremental backup scheme involves multiple numbered backup levels. A full backup is level 0. A level n backup will back up everything that has changed since the most recent level n-1 backup. Suppose for instance that a level 0 backup was taken on a Sunday. A level 1 backup taken on Monday would include only changes made since Sunday. A level 2 backup taken on Tuesday would include only changes made since Monday. A level 3 backup taken on Wednesday would include only changes made since Tuesday. If a level 2 backup was taken on Thursday, it would include all changes made since Monday because Monday was the most recent level n-1 b
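The level rule above can be made concrete with a small sketch. The following Python snippet is an illustration only, not part of any backup tool; the dates and the helper name reference_for are made up for the example. It picks the reference point for a new backup by finding the most recent backup taken at a lower level, which reproduces the Thursday example described above.

from datetime import date

# Backup history as (day, level) pairs, oldest first.
history = [
    (date(2024, 6, 2), 0),  # Sunday: full backup (level 0)
    (date(2024, 6, 3), 1),  # Monday: changes since Sunday
    (date(2024, 6, 4), 2),  # Tuesday: changes since Monday
    (date(2024, 6, 5), 3),  # Wednesday: changes since Tuesday
]

def reference_for(history, new_level):
    """Return the date of the backup that a new backup at `new_level`
    is taken relative to: the most recent backup at a lower level."""
    candidates = [(d, lvl) for d, lvl in history if lvl < new_level]
    if not candidates:
        return None  # no lower-level backup yet: a full (level 0) backup is needed
    return max(candidates)[0]

# A level 2 backup on Thursday is taken relative to Monday's level 1 backup,
# so it contains all changes made since Monday.
print(reference_for(history, 2))  # 2024-06-03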
https://en.wikipedia.org/wiki/Anthroposystem
The term anthroposystem is used to describe the anthropological analogue to the ecosystem. In other words, the anthroposystem model serves to compare the flow of materials through human systems to those in naturally occurring systems. As defined by Santos, an anthroposystem is "the orderly combination or arrangement of physical and biological environments for the purpose of maintaining human civilization...built by man to sustain his kind." The anthroposystem is intimately linked to economic and ecological systems as well. Description Both the anthroposystem and ecosystem can be divided into three groups: producers, consumers, and recyclers. In the ecosystem, the producers or autotrophs consist of plants and some bacteria capable of producing their own food via photosynthesis or chemical synthesis, the consumers consist of animals that obtain energy from grazing and/or by feeding on other animals and the recyclers consist of decomposers such as fungi and bacteria. In the anthroposystem, the producers consist of the energy production through fossil fuels, manufacturing with non-fuel minerals and growing food; the consumers consist of humans and domestic animals and the recyclers consist of the decomposing or recycling activities (i.e. waste water treatment, metal and solid waste recycling). The ecosystem is sustainable whereas the anthroposystem is not. The ecosystem is a closed loop in which nearly everything is recycled whereas the anthroposystem is an open loop where very little is recycled. In contrast to the ecosystem, the anthroposystem's producers and consumers are significantly more spatially displaced than those in the ecosystem and thus, more energy is required to transfer matter to a producer or recycler. Currently, a large majority of this energy comes from non-renewable fossil fuels. Additionally, recycling is a naturally occurring component of the ecosystem, and is responsible for much of the resources used by the system. Under the anthroposyste
https://en.wikipedia.org/wiki/De%20Furtivis%20Literarum%20Notis
De Furtivis Literarum Notis (On the Secret Symbols of Letters) is a 1563 book on cryptography written by Giambattista della Porta. The book includes three sets of cypher discs for coding and decoding messages and a substitution cipher improving on the work of Al-Qalqashandi. References 1563 books Cryptography books Non-fiction books
https://en.wikipedia.org/wiki/Ethernet%20Automatic%20Protection%20Switching
Ethernet Automatic Protection Switching (EAPS) is used to create a fault tolerant topology by configuring a primary and secondary path for each VLAN. It was invented by Extreme Networks and submitted to the IETF as RFC 3619. The idea is to provide highly available Ethernet switched rings (commonly used in Metro Ethernet) to replace legacy TDM-based transport protection fiber rings. Other implementations include Ethernet Protection Switching Ring (EPSR) by Allied Telesis, which enhanced EAPS to provide fully protected transport of IP Triple Play services (voice, video and internet traffic) for xDSL/FTTx deployments. EAPS/EPSR is the most widely deployed Ethernet protection switching solution, with major multi-vendor interoperability support. EAPS/EPSR is the basis of the ITU G.8032 Ethernet Protection recommendation. Operation A ring is formed by configuring a Domain. Each domain has a single "master node" and many "transit nodes". Each node will have a primary port and a secondary port, both known to be able to send control traffic to the master node. Under normal operation, the secondary port on the master is blocked for all protected VLANs. When there is a link-down situation, the devices that detect the failure send a control message to the master, and the master will then unblock the secondary port and instruct the transits to flush their forwarding databases. The next packets sent by the network can then be flooded and learned out of the (now enabled) secondary port without any network disruption. Fail-over times are demonstrably in the region of 50 ms. The same switch can belong to multiple domains and thus multiple rings. However, these act as independent entities and can be controlled individually. EAPS v2 EAPSv2 is configured and enabled to avoid the potential of super-loops in environments where multiple EAPS domains share a common link. EAPSv2 works using the concept of a controller and partner mechanism. Shared port status is verified using
https://en.wikipedia.org/wiki/Ecospirituality
Ecospirituality connects the science of ecology with spirituality. It brings together religion and environmental activism. Ecospirituality has been defined as "a manifestation of the spiritual connection between human beings and the environment." The new millennium and the modern ecological crisis has created a need for environmentally based religion and spirituality. Ecospirituality is understood by some practitioners and scholars as one result of people wanting to free themselves from a consumeristic and materialistic society. Ecospirituality has been critiqued for being an umbrella term for concepts such as deep ecology, ecofeminism, and nature religion. Proponents may come from a range of faiths including: Islam; Jainism; Christianity (Catholicism, Evangelicalism and Orthodox Christianity); Judaism; Hinduism; Buddhism and Indigenous traditions. Although many of their practices and beliefs may differ, a central claim is that there is "a spiritual dimension to our present ecological crisis." According to the environmentalist Sister Virginia Jones, "Eco-spirituality is about helping people experience 'the holy' in the natural world and to recognize their relationship as human beings to all creation. Ecospirituality has been influenced by the ideas of deep ecology, which is characterized by "recognition of the inherent value of all living beings and the use of this view in shaping environmental policies" Similarly to ecopsychology, it refers to the connections between the science of ecology and the study of psychology. 'Earth-based' spirituality is another term related to ecospirituality; it is associated with pagan religious traditions and the work of prominent ecofeminist, Starhawk. Ecospirituality refers to the intertwining of intuition and bodily awareness pertaining to a relational view between human beings and the planet. Origins Ecospirituality finds its history in the relationship between spirituality and the environment. Some scholars say it "flows fr
https://en.wikipedia.org/wiki/Photochemical%20logic%20gate
A photochemical logic gate is based on the photochemical intersystem crossing and molecular electronic transitions between photochemically active molecules, which allow logic gates to be produced. The OR gate The OR gate uses an electron–photon transfer chain running from the ground and excited states of molecule A (A, A*, where A* denotes the excited state of molecule A) through those of molecule B (B, B*) to those of molecule C (C, C*). The gate is based on the activation of molecule A, which then passes an electron/photon to molecule C's excited-state orbitals (C*). The electron from molecule A intersystem crosses to C* via the excited-state orbitals of B and is eventually utilised as a signal in the C* emission. The 'OR' gate uses two inputs of light (photons) to two separate electron transfer chains (for example A → B → C and D → E → C, both converging on C, which gives the output), both of which are capable of transferring to C* and thus producing the output of an OR gate. Therefore, if either electron transfer chain is activated, molecule C's excitation produces a valid output emission. The 'AND' gate The AND gate uses the ground state, first excited state and second excited state of molecule C (C, C*, C**) together with the states of molecules A and B. Excitation A→A* by a photon passes the promoted electron down to the C* molecular orbital. A second photon applied to the system causes the excitation of the electron in the C* molecular orbital to the C** molecular orbital, analogous to pump–probe spectroscopy – the excitation of an already excited state. The AND gate is produced by the necessity of both the A→A* and the C*→C** excitations occurring at the same time: both inputs are simultaneously required. To prevent erroneous emissions of light from a single input to the AND gate, it would be necessary to have an electron transfer series with the ability to accept any electrons (energy) from the C* energy level. The electron transfer series would ter
https://en.wikipedia.org/wiki/Period-doubling%20bifurcation
In dynamical systems theory, a period-doubling bifurcation occurs when a slight change in a system's parameters causes a new periodic trajectory to emerge from an existing periodic trajectory—the new one having double the period of the original. With the doubled period, it takes twice as long (or, in a discrete dynamical system, twice as many iterations) for the numerical values visited by the system to repeat themselves. A period-halving bifurcation occurs when a system switches to a new behavior with half the period of the original system. A period-doubling cascade is an infinite sequence of period-doubling bifurcations. Such cascades are a common route by which dynamical systems develop chaos. In hydrodynamics, they are one of the possible routes to turbulence. Examples Logistic map The logistic map is x_{n+1} = r·x_n(1 − x_n), where x_n is a function of the (discrete) time n. The parameter r is assumed to lie in the interval [0, 4], in which case x_n is bounded on [0, 1]. For r between 1 and 3, x_n converges to the stable fixed point x* = (r − 1)/r. Then, for r between 3 and 3.44949, x_n converges to a permanent oscillation between two values that depend on r. As r grows larger, oscillations between 4 values, then 8, 16, 32, etc. appear. These period doublings culminate at r ≈ 3.56995, beyond which more complex regimes appear. As r increases, there are some intervals where most starting values will converge to one or a small number of stable oscillations, such as near r = 3.83. In an interval where the period is 2^n for some positive integer n, not all the points actually have period 2^n. These are single points, rather than intervals. These points are said to be in unstable orbits, since nearby points do not approach the same orbit as them. Quadratic map The real version of the complex quadratic map is related to the real slice of the Mandelbrot set. Kuramoto–Sivashinsky equation The Kuramoto–Sivashinsky equation is an example of a spatiotemporally continuous dynamical system that exhibits period doubling. It is one of the most well-studied nonlin
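The period-doubling behaviour of the logistic map is easy to observe numerically. The sketch below is a minimal illustration; the chosen r values, the number of discarded transient iterations and the rounding precision are arbitrary choices for the example. It iterates the map, discards the transient, and counts the distinct values the orbit settles into.

def logistic_orbit(r, x0=0.5, transient=2000, sample=64):
    """Iterate x_{n+1} = r*x_n*(1-x_n), discard the transient, return the attractor values."""
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    seen = set()
    for _ in range(sample):
        x = r * x * (1 - x)
        seen.add(round(x, 4))
    return sorted(seen)

for r in (2.8, 3.2, 3.5, 3.55):
    values = logistic_orbit(r)
    print(f"r={r}: period {len(values)} -> {values}")

# r=2.8 settles on one fixed point, r=3.2 on a 2-cycle, r=3.5 on a 4-cycle,
# and r=3.55 on an 8-cycle, illustrating successive period doublings.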
https://en.wikipedia.org/wiki/Zenith%20Z-89
The Z-89 is a personal computer introduced in 1979 by Heathkit, but produced primarily by Zenith Data Systems (ZDS) in the early 1980s. It combined an updated version of the Heathkit H8 microcomputer and H19 terminal in a new case that also provided room for a built-in floppy disk on the right side of the display. Based on the Zilog Z80 microprocessor, it is capable of running CP/M as well as Heathkit's own HDOS. Description The Zenith Z-89 is based on the Zilog Z80 microprocessor running at 2.048 MHz, and supports the HDOS and CP/M operating systems. The US$2295 Z-89 is integrated in a terminal-like enclosure with a non-detachable keyboard, 12-inch monochrome CRT with an 80×25 character screen, 48 KB RAM, and a 5.25" floppy disk drive. The keyboard is of high build quality and has an unusual number of special purpose keys, including three with white, red, and blue squares. There are five function keys and a numeric keypad. The video display supports reverse video, and character graphics are available. The computer has two small card cages inside the cabinet on either side of the CRT, each of which accepts up to three proprietary circuit cards. Upgrade cards available for this included disk controller cards (see below), a 16 KB RAM card that upgrades the standard 48 KB RAM to 64 KB, a RAM memory card accessible as a ramdrive using a special driver (above the Z80's 64 KB memory limit) and a multi-serial card providing extra RS-232 ports. The 2 MHz Z80 could be upgraded to 4 MHz. In 1979, prior to Zenith's purchase of Heath Company, Heathkit designed and marketed this computer in kit form as the Heath H89, assembled as the WH89, and without the floppy but with a cassette interface card as the H88. (Prior to the Zenith purchase, the Heathkit model numbers did not include the dash). Heath/Zenith also made a serial terminal, the H19/Z-19, based on the same enclosure (with a blank cover over the diskette drive cut-out) and terminal controller. The company of
https://en.wikipedia.org/wiki/Elgato
Elgato is a brand of consumer technology products. The brand's products were designed and manufactured by Elgato Systems, founded in 2010 by Markus Fest and headquartered in Munich, Germany, until 2018, when the brand was sold to Corsair. History The brand, Elgato, was formerly a brand of Elgato Systems. The Elgato brand was used to refer to the company's gaming and Thunderbolt devices and was commonly called Elgato Gaming. On June 28, 2018, Corsair acquired the Elgato brand from Elgato Systems, while Elgato Systems kept their smart home division and renamed the company to Eve Systems. Products Thunderbolt dock Elgato introduced a Thunderbolt docking station in June 2014. A computer is plugged into the dock using a Thunderbolt port in order to gain access to the dock's three USB ports, audio jacks, HDMI and Ethernet. It is typically used to plug a MacBook into an office setting (printer, monitor, keyboard) or to provide additional ports not available in the MacBook Air. A review in The Register said it was compact and useful, but Windows users should consider a USB 3.0 dock. The Register and CNET disagreed on whether it was competitively priced. Reviews in TechRadar and Macworld gave it 4 out of 5 stars. Thunderbolt SSD Elgato introduced two external solid-state drives in September 2012 called Thunderbolt Drive. Benchmark tests by Macworld and Tom's Hardware said that the hard drive was slower than other products they tested, despite being connected through a faster Thunderbolt port, rather than FireWire. The following year, in 2013, Elgato replaced them with similar drives identified as "Thunderbolt Drive +", which added USB 3.0 support and was claimed to be faster than the previous iteration. A CNET review of a Thunderbolt Drive+ drive gave it a 4.5 out of 5 star rating. It said the drive was "blazing fast" and "the most portable drive to date" but was also expensive. An article in The Register explained that the original drives introduced in 2012 didn't perform well
https://en.wikipedia.org/wiki/Geometric%20topology%20%28object%29
In mathematics, the geometric topology is a topology one can put on the set H of hyperbolic 3-manifolds of finite volume. Use Convergence in this topology is a crucial ingredient of hyperbolic Dehn surgery, a fundamental tool in the theory of hyperbolic 3-manifolds. Definition The following is a definition due to Troels Jorgensen: A sequence M_i in H converges to M in H if there are a sequence of positive real numbers ε_i converging to 0, and a sequence of (1 + ε_i)-bi-Lipschitz diffeomorphisms where the domains and ranges of the maps are the ε_i-thick parts of either the M_i's or M. Alternate definition There is an alternate definition due to Mikhail Gromov. Gromov's topology utilizes the Gromov–Hausdorff metric and is defined on pointed hyperbolic 3-manifolds. One essentially considers better and better bi-Lipschitz homeomorphisms on larger and larger balls. This results in the same notion of convergence as above, as the thick part is always connected; thus, a large ball will eventually encompass all of the thick part. On framed manifolds As a further refinement, Gromov's metric can also be defined on framed hyperbolic 3-manifolds. This gives nothing new, but this space can be explicitly identified with torsion-free Kleinian groups with the Chabauty topology. See also Algebraic topology (object) References William Thurston, The geometry and topology of 3-manifolds, Princeton lecture notes (1978-1981). Canary, R. D.; Epstein, D. B. A.; Green, P., Notes on notes of Thurston. Analytical and geometric aspects of hyperbolic space (Coventry/Durham, 1984), 3–92, London Math. Soc. Lecture Note Ser., 111, Cambridge Univ. Press, Cambridge, 1987. 3-manifolds Hyperbolic geometry Topological spaces
https://en.wikipedia.org/wiki/Password%20strength
Password strength is a measure of the effectiveness of a password against guessing or brute-force attacks. In its usual form, it estimates how many trials an attacker who does not have direct access to the password would need, on average, to guess it correctly. The strength of a password is a function of length, complexity, and unpredictability. Using strong passwords lowers the overall risk of a security breach, but strong passwords do not replace the need for other effective security controls. The effectiveness of a password of a given strength is strongly determined by the design and implementation of the authentication factors (knowledge, ownership, inherence). The first factor is the main focus of this article. The rate at which an attacker can submit guessed passwords to the system is a key factor in determining system security. Some systems impose a time-out of several seconds after a small number (e.g. three) of failed password entry attempts. In the absence of other vulnerabilities, such systems can be effectively secured with relatively simple passwords. However, the system must store information about the user's passwords in some form and if that information is stolen, say by breaching system security, the user's passwords can be at risk. In 2019, the United Kingdom's NCSC analyzed public databases of breached accounts to see which words, phrases, and strings people used. The most popular password on the list was 123456, appearing in more than 23 million passwords. The second-most popular string, 123456789, was not much harder to crack, while the top five included "qwerty", "password", and 1111111. Password creation Passwords are created either automatically (using randomizing equipment) or by a human; the latter case is more common. While the strength of randomly chosen passwords against a brute-force attack can be calculated with precision, determining the strength of human-generated passwords is difficult. Typically, humans are asked to choose a
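The brute-force trial counts discussed here follow directly from the size of the search space. The short sketch below is illustrative arithmetic only; the character-set sizes and password lengths are typical assumptions, not values taken from this article. It computes the number of possible passwords and the corresponding entropy in bits for a password chosen uniformly at random.

import math

def search_space(length: int, alphabet_size: int) -> int:
    """Number of possible passwords of the given length over the alphabet."""
    return alphabet_size ** length

def entropy_bits(length: int, alphabet_size: int) -> float:
    """Entropy in bits of a password chosen uniformly at random."""
    return length * math.log2(alphabet_size)

# Assumed character sets: 26 lowercase letters vs. 95 printable ASCII characters.
for length, alphabet in [(8, 26), (8, 95), (12, 95)]:
    n = search_space(length, alphabet)
    # On average an attacker needs about half the search space in guesses.
    print(f"len={length} alphabet={alphabet}: "
          f"{entropy_bits(length, alphabet):.1f} bits, ~{n / 2:.2e} average guesses")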
https://en.wikipedia.org/wiki/Multiple%20single-level
Multiple single-level or multi-security level (MSL) is a means to separate different levels of data by using separate computers or virtual machines for each level. It aims to give some of the benefits of multilevel security without needing special changes to the OS or applications, but at the cost of needing extra hardware. The drive to develop MLS operating systems was severely hampered by the dramatic fall in data processing costs in the early 1990s. Before the advent of desktop computing, users with classified processing requirements had to either spend a lot of money for a dedicated computer or use one that hosted an MLS operating system. Throughout the 1990s, however, many offices in the defense and intelligence communities took advantage of falling computing costs to deploy desktop systems classified to operate only at the highest classification level used in their organization. These desktop computers operated in system high mode and were connected with LANs that carried traffic at the same level as the computers. MSL implementations such as these neatly avoided the complexities of MLS but traded off technical simplicity for inefficient use of space. Because most users in classified environments also needed unclassified systems, users often had at least two computers and sometimes more (one for unclassified processing and one for each classification level processed). In addition, each computer was connected to its own LAN at the appropriate classification level, meaning that multiple dedicated cabling plants were incorporated (at considerable cost in terms of both installation and maintenance). Limits of MSL versus MLS The obvious shortcoming of MSL (as compared to MLS) is that it does not support immixture of various classification levels in any manner. For example, the notion of concatenating a SECRET data stream (taken from a SECRET file) with a TOP SECRET data stream (read from a TOP SECRET file) and directing the resultant TOP SECRET data stream into
https://en.wikipedia.org/wiki/NetTop
NetTop is an NSA project to run Multiple Single-Level systems with a Security-Enhanced Linux host running VMware with Windows as a guest operating system. NetTop has . External links NSA web page on NetTop VMware PR page on NetTop HP NetTop web page TCS Trusted Workstation based on NetTop Linux security software National Security Agency operations
https://en.wikipedia.org/wiki/SBML
The Systems Biology Markup Language (SBML) is a representation format, based on XML, for communicating and storing computational models of biological processes. It is a free and open standard with widespread software support and a community of users and developers. SBML can represent many different classes of biological phenomena, including metabolic networks, cell signaling pathways, regulatory networks, infectious diseases, and many others. It has been proposed as a standard for representing computational models in systems biology today. History Late in the year 1999 through early 2000, with funding from the Japan Science and Technology Corporation (JST), Hiroaki Kitano and John C. Doyle assembled a small team of researchers to work on developing better software infrastructure for computational modeling in systems biology. Hamid Bolouri was the leader of the development team, which consisted of Andrew Finney, Herbert Sauro, and Michael Hucka. Bolouri identified the need for a framework to enable interoperability and sharing between the different simulation software systems for biology in existence during the late 1990s, and he organized an informal workshop in December 1999 at the California Institute of Technology to discuss the matter. In attendance at that workshop were the groups responsible for the development of DBSolve, E-Cell, Gepasi, Jarnac, StochSim, and The Virtual Cell. Separately, earlier in 1999, some members of these groups also had discussed the creation of a portable file format for metabolic network models in the BioThermoKinetics (BTK) group. The same groups who attended the first Caltech workshop met again on April 28–29, 2000, at the first of a newly created meeting series called Workshop on Software Platforms for Systems Biology. It became clear during the second workshop that a common model representation format was needed to enable the exchange of models between software tools as part of any functioning interoperability framework, and th
https://en.wikipedia.org/wiki/Double%20heterostructure
A double heterostructure, sometimes called double heterojunction, is formed when two semiconductor materials are grown into a "sandwich". One material (such as AlGaAs) is used for the outer layers (or cladding), and another of smaller band gap (such as GaAs) is used for the inner layer. In this example, there are two AlGaAs-GaAs junctions (or boundaries), one at each side of the inner layer. There must be two boundaries for the device to be a double heterostructure. If there were cladding material on only one side, the device would be a simple, or single, heterostructure. The double heterostructure is a very useful structure in optoelectronic devices and has interesting electronic properties. If one of the cladding layers is p-doped, the other cladding layer n-doped and the smaller energy gap semiconductor material undoped, a p-i-n structure is formed. When a current is applied to the ends of the p-i-n structure, electrons and holes are injected into the heterostructure. The smaller energy gap material forms energy discontinuities at the boundaries, confining the electrons and holes to the smaller energy gap semiconductor. The electrons and holes recombine in the intrinsic semiconductor, emitting photons. If the width of the intrinsic region is reduced to the order of the de Broglie wavelength, the energy levels in the intrinsic region are no longer effectively continuous but become discrete. (Strictly, the levels in the bulk material are not truly continuous either, but they are so closely spaced that they are treated as continuous.) In this situation the double heterostructure becomes a quantum well. References Semiconductor structures
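As a rough illustration of this confinement-induced discreteness (an idealized textbook estimate, not a result specific to any particular material system named above), the bound-state energies of an infinite square well of width L for a carrier of effective mass m* are

E_n = \frac{n^2 \pi^2 \hbar^2}{2 m^{*} L^2}, \qquad n = 1, 2, 3, \ldots

so as L shrinks toward the de Broglie wavelength the spacing between adjacent levels grows rapidly, which is the regime in which the double heterostructure behaves as a quantum well.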
https://en.wikipedia.org/wiki/Baranov%20Central%20Institute%20of%20Aviation%20Motor%20Development
The P. I. Baranov Central Institute of Aviation Motor Development (also known as the "Central Institute for Aviation Motor Development named after P. I. Baranov" or simply "Central Institute of Aviation Motors", CIAM or TsIAM, Tsentralniy Institut Aviatsionnogo Motorostroeniya, ) is the only specialized Russian research and engineering facility dealing with advanced aerospace propulsion research, aircraft engine certification and other gas dynamics-related issues. It was founded in 1930. CIAM operates the largest aerospace engine testing facility in Europe, surpassed only by the United States's Arnold Engineering Development Center and Glenn Research Center. It is based in Lefortovo (the southeast okrug of Moscow) with an address of 2 Aviamotornaya street, Moscow, Postcode 111116. CIAM also operates a scientific testing center in Lytkarino, Moscow Oblast. History The bases of the institute were formed by such academics as Keldysh, Klimov and Chelomey. Since its foundation in 1930, CIAM designed nearly all Russian aviation motors and gas turbines. In 1933 CIAM was named after the late Soviet Vice-Narkom of Heavy Industry Petr Ionovich Baranov, who was one of the leading theorists of the Soviet aviation industry. Before World War II, all engine-design work was transferred to mass-production motor-building plants and their own design bureaus. CIAM focused on theoretical and experimental research and modernization of prototypes up to the production stage. After the war, CIAM was engaged with reactive (jet) engines for airplanes, successors to the first-generation turbojets. In the early 1950s, the largest test base in Europe was built in Lytkarino. In the 1970s, the institute began work on a ramjet engine using the special hypersonic "flying laboratory" GLL Holod. This experiment used a liquid hydrogen, actively cooled dual-mode ramjet, which was based on a hydrogen-fueled axisymmetrical engine placed on a Russian SA5 missile during the flight. The first successful
https://en.wikipedia.org/wiki/GNU%20Binutils
The GNU Binary Utilities, or binutils, are a set of programming tools for creating and managing binary programs, object files, libraries, profile data, and assembly source code. Tools They were originally written by programmers at Cygnus Solutions. The GNU Binutils are typically used in conjunction with compilers such as the GNU Compiler Collection (GCC), build tools like make, and the GNU Debugger (GDB). Through the use of the Binary File Descriptor library (libbfd), most tools support the various object file formats supported by BFD. Commands The GNU Binutils include the following commands: as (assembler), ld (linker), ar, nm, objcopy, objdump, ranlib, readelf, size, strings, strip, addr2line, c++filt, and gprof, among others. elfutils Ulrich Drepper wrote elfutils to partially replace GNU Binutils, purely for Linux and with support only for ELF and DWARF. It distributes three libraries with it for programmatic access. See also GNU Core Utilities GNU Debugger ldd (Unix), lists symbols imported by the object file; similar to List of Unix commands llvm provides a similar set of tools strace, a tool for system call debugging (enabled by kernel functionality) available on many distributions References External links The ELF Tool Chain Project: a similar BSD-licensed project (mirror) Programming tools Free compilers and interpreters GNU Project software
https://en.wikipedia.org/wiki/Neurodegenerative%20disease
A neurodegenerative disease is caused by the progressive loss of structure or function of neurons, in the process known as neurodegeneration. Such neuronal damage may ultimately involve cell death. Neurodegenerative diseases include amyotrophic lateral sclerosis, multiple sclerosis, Parkinson's disease, Alzheimer's disease, Huntington's disease, multiple system atrophy, tauopathies, and prion diseases. Neurodegeneration can be found in the brain at many different levels of neuronal circuitry, ranging from molecular to systemic. Because there is no known way to reverse the progressive degeneration of neurons, these diseases are considered to be incurable; however research has shown that the two major contributing factors to neurodegeneration are oxidative stress and inflammation. Biomedical research has revealed many similarities between these diseases at the subcellular level, including atypical protein assemblies (like proteinopathy) and induced cell death. These similarities suggest that therapeutic advances against one neurodegenerative disease might ameliorate other diseases as well. Within neurodegenerative diseases, it is estimated that 55 million people worldwide had dementia in 2019, and that by 2050 this figure will increase to 139 million people. Specific disorders Alzheimer's disease Alzheimer's disease (AD) is a chronic neurodegenerative disease that results in the loss of neurons and synapses in the cerebral cortex and certain subcortical structures, resulting in gross atrophy of the temporal lobe, parietal lobe, and parts of the frontal cortex and cingulate gyrus. It is the most common neurodegenerative disease. Even with billions of dollars being used to find a treatment for Alzheimer's disease, no effective treatments have been found. However, clinical trials have developed certain compounds that could potentially change the future of Alzheimer's disease treatments. Within clinical trials stable and effective AD therapeutic strategies have a 99.5
https://en.wikipedia.org/wiki/Intego
Intego is a Mac and Windows security software company founded in 1997 by Jean-Paul Florencio and Laurent Marteau. The company creates Internet security software for macOS and Windows, including: antivirus, firewall, anti-spam, backup software and data protection software. Intego currently has offices in the U.S. in Seattle, Washington, and Austin, Texas, and international offices in Paris, France, and Nagano, Japan. All of Intego's products are universal binaries, and are supported in several languages: English, French, German, Japanese, and Spanish, and previously Italian. History Co-founded by former CEO Laurent Marteau and Jean-Paul Florencio and based in Paris, France, Intego released its first antivirus product in 1997: Rival, an antivirus for Mac OS 8. Two years later in July 1999, Intego released NetBarrier, the first personal security software suite for Mac OS 8. Then in October 2000, Intego released its legacy antivirus software, VirusBarrier 1.0, for Mac OS 8 and Mac OS 9. Intego launched The Mac Security Blog, a blog that covers Mac security news, Apple security updates, Mac malware alerts, as well as news and opinion pieces related to Apple products, in mid-2007. The company launched a podcast in October 2017, called the Intego Mac Podcast. Intego released its current X9 version of antivirus and security software in June 2016, which has since had several under-the-hood updates, including compatibility with new macOS releases and Apple silicon processors. Kape Technologies announced in July 2018 that it was acquiring Intego to "enhance [Kape's] arsenal of products in cyber protection." Intego released a Windows version of its antivirus software in July 2020. Intego's newest product is Intego Privacy Protection, a VPN solution for macOS and Windows that launched circa June 2021. Products VirusBarrier NetBarrier ContentBarrier Personal Backup Mac Washing Machine Intego Antivirus for Windows Intego Privacy Protection Remote Management Console Fil
https://en.wikipedia.org/wiki/Eckert%E2%80%93Mauchly%20Award
The Eckert–Mauchly Award recognizes contributions to digital systems and computer architecture. It is known as the computer architecture community’s most prestigious award. First awarded in 1979, it was named for John Presper Eckert and John William Mauchly, who between 1943 and 1946 collaborated on the design and construction of the first large scale electronic computing machine, known as ENIAC, the Electronic Numerical Integrator and Computer. A certificate and $5,000 are awarded jointly by the Association for Computing Machinery (ACM) and the IEEE Computer Society for outstanding contributions to the field of computer and digital systems architecture. Recipients 1979 Robert S. Barton 1980 Maurice V. Wilkes 1981 Wesley A. Clark 1982 Gordon C. Bell 1983 Tom Kilburn 1984 Jack B. Dennis 1985 John Cocke 1986 Harvey G. Cragon 1987 Gene M. Amdahl 1988 Daniel P. Siewiorek 1989 Seymour Cray 1990 Kenneth E. Batcher 1991 Burton J. Smith 1992 Michael J. Flynn 1993 David J. Kuck 1994 James E. Thornton 1995 John Crawford 1996 Yale Patt 1997 Robert Tomasulo 1998 T. Watanabe 1999 James E. Smith 2000 Edward Davidson 2001 John Hennessy 2002 Bantwal Ramakrishna "Bob" Rau 2003 Joseph A. (Josh) Fisher 2004 Frederick P. Brooks 2005 Robert P. Colwell 2006 James H. Pomerene 2007 Mateo Valero 2008 David Patterson 2009 Joel Emer 2010 Bill Dally 2011 Gurindar S. Sohi 2012 Algirdas Avizienis 2013 James R. Goodman 2014 Trevor Mudge 2015 Norman Jouppi 2016 Uri Weiser 2017 Charles P. Thacker 2018 Susan J. Eggers 2019 Mark D. Hill 2020 Luiz André Barroso 2021 Margaret Martonosi 2022 Mark Horowitz 2023 Kunle Olukotun See also ACM Special Interest Group on Computer Architecture Computer engineering Computer science Computing List of computer science awards References ACM-IEEE CS Eckert-Mauchly Award winners Eckert Mauchly Award Computer science awards IEEE society and council awards
https://en.wikipedia.org/wiki/Q%20%28emulator%29
Q is free emulator software that runs on Mac OS X, including OS X on PowerPC. Q is Mike Kronenberg's port of the open-source generic processor emulator QEMU. Q uses Cocoa and other Apple technologies, such as Core Image and Core Audio, to achieve its emulation. Q can be used to run Windows, or any other operating system based on the x86 architecture, on the Macintosh. Q is available as a Universal Binary and, as such, can run on Intel- or PowerPC-based Macintosh systems. However, some target guest architectures are unsupported on Lion (due to the removal of Rosetta), such as SPARC, MIPS, ARM and x86_64, since the softmmus are PowerPC-only binaries. Unlike QEMU, which is a command-line application, Q has a native graphical interface for managing and configuring virtual machines. As of June 2022, the project was "on hold". See also qcow Comparison of platform virtualization software SPIM Emulator QEMU References External links Q [kju:] - the new homepage of the Q project MacUpdate listing The QEMU forum for Mac OS X Boot Camp, Q/QEMU, Parallels: Pros/cons InfoWorld (April 17, 2006) MacOS emulation software MacOS-only free software Virtualization software
https://en.wikipedia.org/wiki/Multiplicity%20of%20infection
In microbiology, the multiplicity of infection or MOI is the ratio of agents (e.g. phage or more generally virus, bacteria) to infection targets (e.g. cell). For example, when referring to a group of cells inoculated with virus particles, the MOI is the ratio of the number of virus particles to the number of target cells present in a defined space. Interpretation The actual number of viruses or bacteria that will enter any given cell is a stochastic process: some cells may absorb more than one infectious agent, while others may not absorb any. Before determining the multiplicity of infection, it is essential to have a well-isolated agent, as crude agents may not produce reliable and reproducible results. The probability that a cell will absorb n virus particles or bacteria when inoculated with an MOI of m can be calculated for a given population using a Poisson distribution. This application of the Poisson distribution was described by Ellis and Delbrück: P(n) = m^n · e^(−m) / n!, where m is the multiplicity of infection or MOI, n is the number of infectious agents that enter the infection target, and P(n) is the probability that an infection target (a cell) will get infected by n infectious agents. In fact, the infectivity of the virus or bacteria in question will alter this relationship. One way around this is to use a functional definition of infectious particles rather than a strict count, such as a plaque forming unit for viruses. For example, when an MOI of 1 (1 infectious viral particle per cell) is used to infect a population of cells, the probability that a cell will not get infected is P(0) = e^(−1) ≈ 36.79%, the probability that it will be infected by a single particle is P(1) = e^(−1) ≈ 36.79%, by two particles P(2) ≈ 18.39%, by three particles P(3) ≈ 6.13%, and so on. The average percentage of cells that will become infected as a result of inoculation with a given MOI can be obtained by realizing that it is simply P(n > 0) = 1 − P(0) = 1 − e^(−m). Hence, the average fraction of cells that will become infected following an inoculation with an MOI of m is give
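The Poisson relationship above is straightforward to evaluate numerically. The following Python sketch is an illustration of that formula only (the function names are made up for the example); it computes the probability of a cell absorbing exactly n agents and the expected fraction of infected cells.

import math

def poisson_pmf(n, moi):
    """Probability that a cell absorbs exactly n infectious agents
    at the given multiplicity of infection (Poisson model)."""
    return (moi ** n) * math.exp(-moi) / math.factorial(n)

def fraction_infected(moi):
    """Expected fraction of cells infected by at least one agent: 1 - P(0)."""
    return 1.0 - math.exp(-moi)

# At an MOI of 1, about 36.8% of cells absorb no particle,
# 36.8% absorb exactly one, and ~63.2% are infected overall.
print(round(poisson_pmf(0, 1.0), 4))     # 0.3679
print(round(poisson_pmf(1, 1.0), 4))     # 0.3679
print(round(fraction_infected(1.0), 4))  # 0.6321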
https://en.wikipedia.org/wiki/Building%20regulations%20approval
To comply with the Building Act 1984 and the subsequent statutory instruments known as the Building Regulations, building regulations approval is required to construct certain structures in England and Wales. Construction projects falling into this category are sometimes referred to as "notifiable"; however, this is different from the "notification" which may also be required under the Construction (Design and Management) Regulations 2015, which seeks to monitor health and safety in construction projects. The rules vary for Scotland and Northern Ireland, but elsewhere Building Regulations approval can usually be obtained by application to a building control body (BCB), of which there are two types: local authority BCBs (usually a council's building control department) and private BCBs (known as Approved Inspectors). If an Approved Inspector is used, before any controlled building work can start on site they must inform the local authority about the work. This is called giving an 'initial notice'. This notice states that a particular Approved Inspector is the BCB for the specified works, at the specified location. If using a local authority, approval can be obtained in one of three ways: 1. Full Plans By the "full plans" method, drawings are deposited with the Local Authority and are subsequently checked for compliance with the Building Regulations. The various stages of the work are also inspected and checked for compliance with the relevant technical requirements of the Building Regulations by a Building Control Surveyor employed by the Local Authority. This is the most thorough option, and a response from the Local Authority will typically take 4–8 weeks. However, unlike planning permission, work may start before approval has been granted. It is also quite usual for the final building to differ in some respects from that which received full plans approval, in which case amended "as built" plans are often required to be submitted to the Local Authority. A "com
https://en.wikipedia.org/wiki/Concurrent%20constraint%20logic%20programming
Concurrent constraint logic programming is a version of constraint logic programming aimed primarily at programming concurrent processes rather than (or in addition to) solving constraint satisfaction problems. Goals in constraint logic programming are evaluated concurrently; a concurrent process is therefore programmed as the evaluation of a goal by the interpreter. Syntactically, concurrent constraint logic programs are similar to non-concurrent programs, the only exception being that clauses include guards, which are constraints that may block the applicability of the clause under some conditions. Semantically, concurrent constraint logic programming differs from its non-concurrent versions because a goal evaluation is intended to realize a concurrent process rather than to find a solution to a problem. Most notably, this difference affects how the interpreter behaves when more than one clause is applicable: non-concurrent constraint logic programming recursively tries all clauses; concurrent constraint logic programming chooses only one. This is the most evident effect of an intended directionality of the interpreter, which never revises a choice it has previously taken. Other effects of this are the semantic possibility of having a goal that cannot be proved while the whole evaluation does not fail, and a particular way of equating a goal and a clause head. Constraint handling rules can be seen as a form of concurrent constraint logic programming, but are used for programming a constraint simplifier or solver rather than concurrent processes. Description In constraint logic programming, the goals in the current goal are evaluated sequentially, usually proceeding in a LIFO order in which newer goals are evaluated first. The concurrent version of logic programming allows for evaluating goals in parallel: every goal is evaluated by a process, and processes run concurrently. These processes interact via the constraint store: a process can add a constraint to
https://en.wikipedia.org/wiki/Building%20regulations%20in%20the%20United%20Kingdom
Building regulations in the United Kingdom are statutory instruments or statutory regulations that seek to ensure that the policies set out in the relevant legislation are carried out. Building regulations approval is required for most building work in the UK. Building regulations that apply across England and Wales are set out in the Building Act 1984 while those that apply across Scotland are set out in the Building (Scotland) Act 2003. The Act in England and Wales permits detailed regulations to be made by the Secretary of State. The regulations made under the Act have been periodically updated, rewritten or consolidated, with the latest and current version being the Building Regulations 2010. The UK Government (at Westminster) is responsible for the relevant legislation and administration in England, the Welsh Government (at Cardiff) is the responsible body in Wales, the Scottish Government (at Edinburgh) is responsible for the issue in Scotland, and the Northern Ireland Executive (at Belfast) has responsibility within its jurisdiction. There are very similar (and technically very comparable) Building Regulations in the Republic of Ireland. The Building Regulations 2010 have recently been updated by the Building Safety Act 2022. Regulatory structure The detailed requirements of the Building Regulations in England and Wales are scheduled within 18 separate headings, each designated by a letter (Part A to Part R), and covering aspects such as workmanship, adequate materials, structure, waterproofing and weatherisation, fire safety and means of escape, sound isolation, ventilation, safe (potable) water, protection from falling, drainage, sanitary facilities, accessibility and facilities for the disabled, electrical safety, security of a building, and high-speed broadband infrastructure. For each Part, detailed specifications are available free online (in the English and Welsh governments' "approved documents") describing the matters to be taken into account. The
https://en.wikipedia.org/wiki/Rolling%20code
A rolling code (sometimes called a hopping code) is used in keyless entry systems to prevent replay attacks, where an eavesdropper records the transmission and replays it at a later time to cause the receiver to 'unlock'. Such systems are typical in garage door openers and keyless car entry systems. Techniques Common PRNG (pseudorandom number generator) — preferably cryptographically secure — in both transmitter and receiver Transmitter sends 'next' code in sequence Receiver compares 'next' to its calculated 'next' code. A typical implementation compares within the next 256 codes in case the receiver missed some transmitted keypresses (see the sketch below). HMAC-based one-time passwords, widely employed in multi-factor authentication, use a similar approach, but with a pre-shared secret key and HMAC instead of a PRNG and pre-shared random seed. Application in RF remote control A rolling code transmitter is useful in a security system for providing secure encrypted radio frequency (RF) transmission comprising an interleaved trinary bit fixed code and rolling code. A receiver demodulates the encrypted RF transmission and recovers the fixed code and rolling code. Upon comparison of the fixed and rolling codes with stored codes and determining that the signal has emanated from an authorized transmitter, a signal is generated to actuate an electric motor to open or close a movable component. Rolling code vs. fixed code RF remote control Remote controls send a digital code word to the receiver. If the receiver determines the codeword is acceptable, then the receiver will actuate the relay, unlock the door, or open the barrier. Simple remote control systems use a fixed code word; the code word that opens the gate today will also open the gate tomorrow. An attacker with an appropriate receiver could discover the code word and use it to gain access sometime later. More sophisticated remote control systems use a rolling code (or hopping code) that changes for every use. An attacker may be abl
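A minimal sketch of the resynchronization window described in the Techniques list above. This is illustrative only: real keyless-entry systems use manufacturer-specific keyed code generators, and the shared key, code length and 256-code window here are placeholder assumptions (the HMAC-based derivation mirrors the HOTP comparison made above rather than any particular product).

import hmac, hashlib

SECRET = b"shared-secret-programmed-at-pairing"  # hypothetical pre-shared key
WINDOW = 256  # receiver accepts any of the next 256 codes to tolerate missed keypresses

def code_at(counter: int) -> str:
    """Derive the rolling code for a given counter value (toy HMAC-based generator)."""
    digest = hmac.new(SECRET, counter.to_bytes(8, "big"), hashlib.sha256).digest()
    return digest[:4].hex()  # transmit a short code derived from the digest

class Receiver:
    def __init__(self):
        self.counter = 0  # last accepted position in the sequence

    def try_unlock(self, received_code: str) -> bool:
        # Look ahead within the window in case some transmissions were missed.
        for step in range(1, WINDOW + 1):
            if code_at(self.counter + step) == received_code:
                self.counter += step  # resynchronize
                return True
        return False  # replayed or unknown code: reject

rx = Receiver()
print(rx.try_unlock(code_at(3)))  # True: within the window even though codes 1-2 were missed
print(rx.try_unlock(code_at(3)))  # False: replaying the same code is rejected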
https://en.wikipedia.org/wiki/Cre%20recombinase
Cre recombinase is a tyrosine recombinase enzyme derived from the P1 bacteriophage. The enzyme uses a topoisomerase I-like mechanism to carry out site specific recombination events. The enzyme (38kDa) is a member of the integrase family of site specific recombinase and it is known to catalyse the site specific recombination event between two DNA recognition sites (LoxP sites). This 34 base pair (bp) loxP recognition site consists of two 13 bp palindromic sequences which flank an 8bp spacer region. The products of Cre-mediated recombination at loxP sites are dependent upon the location and relative orientation of the loxP sites. Two separate DNA species both containing loxP sites can undergo fusion as the result of Cre mediated recombination. DNA sequences found between two loxP sites are said to be "floxed". In this case the products of Cre mediated recombination depends upon the orientation of the loxP sites. DNA found between two loxP sites oriented in the same direction will be excised as a circular loop of DNA whilst intervening DNA between two loxP sites that are opposingly orientated will be inverted. The enzyme requires no additional cofactors (such as ATP) or accessory proteins for its function. The enzyme plays important roles in the life cycle of the P1 bacteriophage, such as cyclization of the linear genome and resolution of dimeric chromosomes that form after DNA replication. Cre recombinase is a widely used tool in the field of molecular biology. The enzyme's unique and specific recombination system is exploited to manipulate genes and chromosomes in a huge range of research, such as gene knock out or knock in studies. The enzyme's ability to operate efficiently in a wide range of cellular environments (including mammals, plants, bacteria, and yeast) enables the Cre-Lox recombination system to be used in a vast number of organisms, making it a particularly useful tool in scientific research. Discovery Studies carried out in 1981 by Sternberg and Ham
https://en.wikipedia.org/wiki/Geisenheim%20Grape%20Breeding%20Institute
The Geisenheim Grape Breeding Institute was founded in 1872 and is located in the town of Geisenheim, in Germany's Rheingau region. In 1876 Swiss-born professor Hermann Müller joined the institute, where he developed his namesake grape variety Müller-Thurgau, which became Germany's most-planted grape variety in the 1970s. Professor Helmut Becker worked at the institute from 1964 until his death in 1989. Academic Grade Geisenheim is the only German institution to award higher academic degrees in winemaking. Formally, undergraduate level viticulture and enology, ending with a bachelor's degree in engineering is awarded by the University of Applied Sciences in Wiesbaden, and the newly introduced master's degree is awarded by the Giessen University. Breeds White: Müller-Thurgau, Arnsburger, Ehrenfelser, Saphira, Reichensteiner, Ehrenbreitsteiner, Prinzipal, Osteiner, Witberger, Schönburger, Primera, Rabaner, Hibernal Red: Rotberger, Dakapo Improvements: Rondo, Orléans, Dunkelfelder See also Wine German wine Geilweilerhof Institute for Grape Breeding References External links DEPARTMENT OF GRAPEVINE BREEDING at Geisenheim University 1872 establishments in Germany Wine industry organizations Oenology Organizations established in 1872 Agricultural research institutes in Germany Rheingau Yeast banks
https://en.wikipedia.org/wiki/Transistor%20count
The transistor count is the number of transistors in an electronic device (typically on a single substrate or "chip"). It is the most common measure of integrated circuit complexity (although the majority of transistors in modern microprocessors are contained in the cache memories, which consist mostly of the same memory cell circuits replicated many times). The rate at which MOS transistor counts have increased generally follows Moore's law, which observed that the transistor count doubles approximately every two years. However, being directly proportional to the area of a chip, transistor count does not represent how advanced the corresponding manufacturing technology is: a better indication of this is the transistor density (the ratio of a chip's transistor count to its area). The highest transistor count in flash memory is Micron's 2 terabyte (3D-stacked) 16-die, 232-layer V-NAND flash memory chip, with 5.3 trillion floating-gate MOSFETs (3 bits per transistor). The highest transistor count in a single chip processor is that of the deep learning processor Wafer Scale Engine 2 by Cerebras. It has 2.6 trillion MOSFETs in 84 exposed fields (dies) on a wafer, manufactured using TSMC's 7 nm FinFET process. As of 2023, the GPU with the highest transistor count is AMD's MI300X, built on TSMC's N5 process and totalling 153 billion MOSFETs. The highest transistor count in a consumer microprocessor is 134 billion transistors, in Apple's ARM-based dual-die M2 Ultra system on a chip, which is fabricated using TSMC's 5 nm semiconductor manufacturing process. In terms of computer systems that consist of numerous integrated circuits, the supercomputer with the highest transistor count was the Chinese-designed Sunway TaihuLight, which has for all CPUs/nodes combined "about 400 trillion transistors in the processing part of the hardware" and "the DRAM includes about 12 quadrillion transistors, and that's about 97 percent of all the transistors." To compare, the smallest comp
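A small illustration of the doubling rule of thumb quoted above. This is approximation-level arithmetic only; the two-year doubling period comes from the text, while the starting count and time horizon are made-up inputs for the example.

def projected_transistor_count(count_now: float, years_ahead: float,
                               doubling_period_years: float = 2.0) -> float:
    """Moore's-law-style projection: the count doubles once per doubling period."""
    return count_now * 2 ** (years_ahead / doubling_period_years)

# Starting from a hypothetical 100-billion-transistor chip, a two-year doubling
# period implies roughly 32 times as many transistors a decade later.
print(f"{projected_transistor_count(100e9, 10):.3e}")  # ~3.2e12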
https://en.wikipedia.org/wiki/Wilson%20current%20mirror
A Wilson current mirror is a three-terminal circuit (Fig. 1) that accepts an input current at the input terminal and provides a "mirrored" current source or sink output at the output terminal. The mirrored current is a precise copy of the input current. It may be used as a Wilson current source by applying a constant bias current to the input branch as in Fig. 2. The circuit is named after George R. Wilson, an integrated circuit design engineer who worked for Tektronix. Wilson devised this configuration in 1967 when he and Barrie Gilbert challenged each other to find an improved current mirror overnight that would use only three transistors. Wilson won the challenge. Circuit operation There are three principal metrics of how well a current mirror will perform as part of a larger circuit. The first measure is the static error, the difference between the input and output currents expressed as a fraction of the input current. Minimizing this difference is critical in such applications of a current mirror as the differential to single-ended output signal conversion in a differential amplifier stage because this difference controls the common mode and power supply rejection ratios. The second measure is the output impedance of the current source or equivalently its inverse, the output conductance. This impedance affects stage gain when a current source is used as an active load and affects common mode gain when the source provides the tail current of a differential pair. The last metric is the pair of minimum voltages from the common terminal, usually a power rail connection, to the input and output terminals that are required for proper operation of the circuit. These voltages affect the headroom to the power supply rails that is available for the circuitry in which the current mirror is embedded. An approximate analysis due to Gilbert shows how the Wilson current mirror works and why its static error should be very low. Transistors Q1 and Q2 in Fig. 1 are a mat
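The first metric, static error, is where the Wilson configuration shows its main benefit over a simple two-transistor mirror. The sketch below compares the two using the standard first-order textbook approximations for bipolar mirrors with matched transistors of current gain β; these closed forms are assumptions of the sketch, not quantities taken from the text above.

```python
# Approximate static error (input-output current mismatch as a fraction of the
# input current) for a simple two-transistor mirror vs. a Wilson mirror.
def simple_mirror_error(beta: float) -> float:
    return 2.0 / (beta + 2.0)              # both base currents are stolen from the input branch

def wilson_mirror_error(beta: float) -> float:
    return 2.0 / (beta**2 + 2.0 * beta + 2.0)  # base-current errors largely cancel

for beta in (50, 100, 200):
    print(f"beta={beta}: simple ~{simple_mirror_error(beta):.3%}, "
          f"Wilson ~{wilson_mirror_error(beta):.5%}")
```

For β = 100 the simple mirror mismatches by roughly 2%, while the Wilson mirror's error falls to the 0.02% level, which is why it is attractive for differential-to-single-ended conversion where the static error sets the common-mode rejection.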
https://en.wikipedia.org/wiki/DenyHosts
DenyHosts is a log-based intrusion-prevention security tool for SSH servers written in Python. It is intended to prevent brute-force attacks on SSH servers by monitoring invalid login attempts in the authentication log and blocking the originating IP addresses. DenyHosts is developed by Phil Schwartz, who is also the developer of the Kodos Python Regular Expression Debugger. Operation DenyHosts checks the end of the authentication log for recent failed login attempts. It records information about their originating IP addresses and compares the number of invalid attempts to a user-specified threshold. If there have been too many invalid attempts, it assumes a dictionary attack is occurring and prevents the IP address from making any further attempts by adding it to /etc/hosts.deny on the server. DenyHosts 2.0 and above support centralized synchronization, so that repeat offenders are blocked from many computers. The site denyhosts.net gathers statistics from computers running the software. DenyHosts is restricted to connections using IPv4. It does not work with IPv6. DenyHosts may be run manually, as a daemon, or as a cron job. Discoveries In July 2007, The Register reported that from May until July that year, "compromised computers" at Oracle UK were listed among the ten worst offenders for launching brute-force SSH attacks on the Internet, according to public DenyHosts listings. After an investigation, Oracle denied suggestions that any of its computers had been compromised. Vulnerabilities Daniel B. Cid wrote a paper showing that DenyHosts, as well as the similar programs Fail2ban and BlockHosts, were vulnerable to remote log injection, an attack technique similar to SQL injection, in which a specially crafted user name is used to trigger a block against a site chosen by the attacker. This was fixed in version 2.6. Forks and descendants Since there had been no further development by the original author Phil Schwartz after the release of version 2.6 (December 2006) a
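The threshold-and-block behaviour described above is easy to illustrate. The sketch below is not the DenyHosts source; it assumes an OpenSSH-style "Failed password ... from <ip>" log line, a hypothetical log path, and an arbitrary threshold, and it only prints the hosts.deny-style entries rather than editing the file.

```python
import re
from collections import Counter

THRESHOLD = 5  # hypothetical user-specified limit on failed attempts per IP

# Count failed SSH logins per source IP in an auth-log excerpt.
failed = Counter()
with open("auth.log") as log:                      # hypothetical path
    for line in log:
        m = re.search(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)", line)
        if m:
            failed[m.group(1)] += 1

# Emit hosts.deny-style entries for offenders over the threshold.
for ip, count in failed.items():
    if count >= THRESHOLD:
        print(f"sshd: {ip}")   # a line like this would be appended to /etc/hosts.deny
```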
https://en.wikipedia.org/wiki/Gresham%20Professor%20of%20Geometry
The Professor of Geometry at Gresham College, London, gives free educational lectures to the general public. The college was founded for this purpose in 1597, when it appointed seven professors; this has since increased to ten and in addition the college now has visiting professors. The Professor of Geometry is always appointed by the City of London Corporation. List of Gresham Professors of Geometry Note, years given as, say, 1596/7 refer to Old Style and New Style dates. References Gresham College old website, Internet Archive List of professors Gresham College website Profile of current professor and list of past professors Notes External links '400 Years of Geometry at Gresham College', lecture by Robin Wilson at Gresham College, 14 May 2008 (available for download as PDF, audio and video files) Further reading Geometry 1596 establishments in England Professorships in mathematics Mathematics education in the United Kingdom
https://en.wikipedia.org/wiki/Inscribed%20figure
In geometry, an inscribed planar shape or solid is one that is enclosed by and "fits snugly" inside another geometric shape or solid. To say that "figure F is inscribed in figure G" means precisely the same thing as "figure G is circumscribed about figure F". A circle or ellipse inscribed in a convex polygon (or a sphere or ellipsoid inscribed in a convex polyhedron) is tangent to every side or face of the outer figure (but see Inscribed sphere for semantic variants). A polygon inscribed in a circle, ellipse, or polygon (or a polyhedron inscribed in a sphere, ellipsoid, or polyhedron) has each vertex on the outer figure; if the outer figure is a polygon or polyhedron, there must be a vertex of the inscribed polygon or polyhedron on each side of the outer figure. An inscribed figure is not necessarily unique in orientation; this can easily be seen, for example, when the given outer figure is a circle, in which case a rotation of an inscribed figure gives another inscribed figure that is congruent to the original one. Familiar examples of inscribed figures include circles inscribed in triangles or regular polygons, and triangles or regular polygons inscribed in circles. A circle inscribed in any polygon is called its incircle, in which case the polygon is said to be a tangential polygon. A polygon inscribed in a circle is said to be a cyclic polygon, and the circle is said to be its circumscribed circle or circumcircle. The inradius or filling radius of a given outer figure is the radius of the inscribed circle or sphere, if it exists. The definition given above assumes that the objects concerned are embedded in two- or three-dimensional Euclidean space, but can easily be generalized to higher dimensions and other metric spaces. For an alternative usage of the term "inscribed", see the inscribed square problem, in which a square is considered to be inscribed in another figure (even a non-convex one) if all four of its vertices are on that figure. Properties Ever
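As a small worked example of the incircle and inradius mentioned above, the radius of the circle inscribed in a triangle can be computed from the side lengths alone, since r = area / semiperimeter. A quick sketch (plain Euclidean triangle assumed):

```python
import math

def inradius(a: float, b: float, c: float) -> float:
    """Radius of the circle inscribed in a triangle with side lengths a, b, c."""
    s = (a + b + c) / 2                                 # semiperimeter
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
    return area / s

print(inradius(3, 4, 5))  # right triangle: area 6, semiperimeter 6, so r = 1.0
```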
https://en.wikipedia.org/wiki/Vehicular%20Technology%20Conference
The Vehicular Technology Conference (VTC) is a semiannual international academic conference on wireless communications organized by the Institute of Electrical and Electronics Engineers' Vehicular Technology Society. History The first conference was held in Detroit, Michigan, United States in 1950 and then annually until 1998. Since 1999, the conference has been held in spring and fall, when it is known as VTC Spring and VTC Fall respectively. The alignment with the seasons has meant that the conference has almost always been held in the Northern Hemisphere, but in May 2006, VTC Spring was held in Melbourne, Victoria, Australia, visiting the Southern Hemisphere for the first time. Recent VTCs have been attended by about 600-700 people. The conference focuses mainly on the physical layer and medium access control layer (PHY and MAC) of wireless systems. References External links The Vehicular Technology Society IEEE conferences Telecommunication conferences
https://en.wikipedia.org/wiki/International%20Conference%20on%20Communications
The International Conference on Communications (ICC) is an annual international academic conference organised by the Institute of Electrical and Electronics Engineers' Communications Society. The conference grew out of the Global Communications Conference (GLOBECOM) when, in 1965, the seventh GLOBECOM was sponsored by the Communications Society's predecessor as the "IEEE Communications Convention". The following year it adopted its current name and GLOBECOM was disbanded (it has since been revived). The conference was held in the United States until 1984, when it was held in Amsterdam; it has since been held in several other countries. Some major telecommunications discoveries have been announced at ICC, such as the invention of turbo codes. In fact, this groundbreaking paper had been submitted to ICC the previous year, but was rejected by the referees, who thought the results too good to be true. Recent ICCs have been attended by 2500–3000 people. Conferences References IEEE conferences Telecommunication conferences Computer networking conferences
https://en.wikipedia.org/wiki/Atomic%20formula
In mathematical logic, an atomic formula (also known as an atom or a prime formula) is a formula with no deeper propositional structure, that is, a formula that contains no logical connectives or, equivalently, a formula that has no strict subformulas. Atoms are thus the simplest well-formed formulas of the logic. Compound formulas are formed by combining the atomic formulas using the logical connectives. The precise form of atomic formulas depends on the logic under consideration; for propositional logic, for example, a propositional variable is often more briefly referred to as an "atomic formula", but, more precisely, a propositional variable is not an atomic formula but a formal expression that denotes an atomic formula. For predicate logic, the atoms are predicate symbols together with their arguments, each argument being a term. In model theory, atomic formulas are merely strings of symbols with a given signature, which may or may not be satisfiable with respect to a given model. Atomic formula in first-order logic The well-formed terms and propositions of ordinary first-order logic have the following syntax: Terms: t ::= c | x | f(t1, …, tn), that is, a term is recursively defined to be a constant c (a named object from the domain of discourse), or a variable x (ranging over the objects in the domain of discourse), or an n-ary function f whose arguments are terms tk. Functions map tuples of objects to objects. Propositions: A ::= P(t1, …, tn) | A ∧ A | A ∨ A | ∀x. A | ∃x. A, that is, a proposition is recursively defined to be an n-ary predicate P whose arguments are terms tk, or an expression composed of logical connectives (and, or) and quantifiers (for-all, there-exists) used with other propositions. An atomic formula or atom is simply a predicate applied to a tuple of terms; that is, an atomic formula is a formula of the form P(t1, …, tn) for P a predicate and t1, …, tn terms. All other well-formed formulae are obtained by composing atoms with logical connectives and quantifiers. For example, the formula ∀x. P(x) ∧ ∃y.
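The grammar above translates directly into a small data type. The sketch below uses illustrative names (not any particular logic library): constants and variables are plain strings, compound terms apply a function symbol to terms, and a formula is atomic exactly when it is a predicate applied to terms.

```python
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Func:
    symbol: str
    args: Tuple["Term", ...]

Term = Union[str, Func]          # a term is a constant/variable name or a function application

@dataclass(frozen=True)
class Atom:                      # an atomic formula: predicate symbol applied to terms
    predicate: str
    args: Tuple[Term, ...]

@dataclass(frozen=True)
class Compound:                  # built with a connective or quantifier, hence never atomic
    connective: str              # e.g. "and", "or", "not", "forall", "exists"
    parts: Tuple["Formula", ...]

Formula = Union[Atom, Compound]

def is_atomic(phi: Formula) -> bool:
    return isinstance(phi, Atom)

# P(f(x), c) is atomic; "P(f(x), c) and Q(y)" is not.
p = Atom("P", (Func("f", ("x",)), "c"))
q = Atom("Q", ("y",))
print(is_atomic(p), is_atomic(Compound("and", (p, q))))   # True False
```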
https://en.wikipedia.org/wiki/Chamfered%20dodecahedron
In geometry, the chamfered dodecahedron is a convex polyhedron with 80 vertices, 120 edges, and 42 faces: 30 hexagons and 12 pentagons. It is constructed as a chamfer (edge-truncation) of a regular dodecahedron. The pentagons are reduced in size and new hexagonal faces are added in place of all the original edges. Its dual is the pentakis icosidodecahedron. It is also called a truncated rhombic triacontahedron, constructed as a truncation of the rhombic triacontahedron. It can more accurately be called an order-12 truncated rhombic triacontahedron because only the order-12 vertices are truncated. Structure These 12 order-5 vertices can be truncated such that all edges are equal length. The original 30 rhombic faces become non-regular hexagons, and the truncated vertices become regular pentagons. The hexagon faces can be equilateral but not regular with D symmetry. The angles at the two vertices with vertex configuration are and at the remaining four vertices with , they are each. It is the Goldberg polyhedron , containing pentagonal and hexagonal faces. It also represents the exterior envelope of a cell-centered orthogonal projection of the 120-cell, one of six convex regular 4-polytopes. Chemistry This is the shape of the fullerene ; sometimes this shape is denoted to describe its icosahedral symmetry and distinguish it from other less-symmetric 80-vertex fullerenes. It is one of only four fullerenes found by to have a skeleton that can be isometrically embeddable into an L space. Related polyhedra This polyhedron looks very similar to the uniform truncated icosahedron which has 12 pentagons, but only 20 hexagons. The chamfered dodecahedron creates more polyhedra by basic Conway polyhedron notation. The zip chamfered dodecahedron makes a chamfered truncated icosahedron, and Goldberg (2,2). Chamfered truncated icosahedron In geometry, the chamfered truncated icosahedron is a convex polyhedron with 240 vertices, 360 edges, and 122 faces, 110 hexagon
https://en.wikipedia.org/wiki/Traverse%20%28surveying%29
Traverse is a method in the field of surveying to establish control networks. It is also used in geodesy. Traverse networks involve placing survey stations along a line or path of travel, and then using the previously surveyed points as a base for observing the next point. Traverse networks have many advantages, including: Less reconnaissance and organization needed; While in other systems, which may require the survey to be performed along a rigid polygon shape, the traverse can change to any shape and thus can accommodate a great deal of different terrains; Only a few observations need to be taken at each station, whereas in other survey networks a great deal of angular and linear observations need to be made and considered; Traverse networks are free of the strength of figure considerations that happen in triangular systems; Scale error does not add up as the traverse is performed. Azimuth swing errors can also be reduced by increasing the distance between stations. The traverse is more accurate than triangulateration (a combined function of the triangulation and trilateration practice). Types Frequently in surveying engineering and geodetic science, control points (CP) are setting/observing distance and direction (bearings, angles, azimuths, and elevation). The CP throughout the control network may consist of monuments, benchmarks, vertical control, etc. There are mainly two types of traverse: Closed traverse: either originates from a station and returns to the same station completing a circuit, or runs between two known stations Open traverse: neither returns to its starting station, nor closes on any other known station. Compound traverse: it is where an open traverse is linked at its ends to an existing traverse to form a closed traverse. The closing line may be defined by coordinates at the end points wh
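The coordinate bookkeeping behind a traverse is straightforward: each leg's bearing and distance give a northing/easting change (its "latitude" and "departure"), and for a closed traverse the legs should sum back to the starting point, the leftover being the linear misclosure. A sketch under those assumptions, with whole-circle bearings in degrees and purely illustrative figures:

```python
import math

# Each leg of the traverse: (whole-circle bearing in degrees, horizontal distance).
legs = [(30.0, 100.0), (150.0, 100.0), (270.0, 100.0)]   # illustrative figures only

north, east = 0.0, 0.0
for bearing_deg, dist in legs:
    b = math.radians(bearing_deg)
    north += dist * math.cos(b)   # "latitude" of the leg
    east  += dist * math.sin(b)   # "departure" of the leg

# For a closed traverse the end point should coincide with the start;
# whatever is left over is the linear misclosure.
misclosure = math.hypot(north, east)
print(f"misclosure = {misclosure:.3f}")   # 0.000 here, since the three legs form a closed triangle
```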
https://en.wikipedia.org/wiki/ECC%20memory
Error correction code memory (ECC memory) is a type of computer data storage that uses an error correction code (ECC) to detect and correct n-bit data corruption which occurs in memory. ECC memory is used in most computers where data corruption cannot be tolerated, like industrial control applications, critical databases, and infrastructural memory caches. Typically, ECC memory maintains a memory system immune to single-bit errors: the data that is read from each word is always the same as the data that had been written to it, even if one of the bits actually stored has been flipped to the wrong state. Most non-ECC memory cannot detect errors, although some non-ECC memory with parity support allows detection but not correction. Description Error correction codes protect against undetected data corruption and are used in computers where such corruption is unacceptable, examples being scientific and financial computing applications, or in database and file servers. ECC can also reduce the number of crashes in multi-user server applications and maximum-availability systems. Electrical or magnetic interference inside a computer system can cause a single bit of dynamic random-access memory (DRAM) to spontaneously flip to the opposite state. It was initially thought that this was mainly due to alpha particles emitted by contaminants in chip packaging material, but research has shown that the majority of one-off soft errors in DRAM chips occur as a result of background radiation, chiefly neutrons from cosmic ray secondaries, which may change the contents of one or more memory cells or interfere with the circuitry used to read or write to them. Hence, the error rates increase rapidly with rising altitude; for example, compared to sea level, the rate of neutron flux is 3.5 times higher at 1.5 km and 300 times higher at 10-12 km (the cruising altitude of commercial airplanes). As a result, systems operating at high altitudes require special provisions for reliability. A
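The single-bit-correction idea can be illustrated with the classic Hamming(7,4) code. Real ECC DIMMs use wider SECDED codes (for example 72 bits protecting 64), so treating Hamming(7,4) as representative is purely an assumption of this sketch; the detect-locate-flip mechanism is the same.

```python
# Toy Hamming(7,4) single-error-correcting code: 4 data bits -> 7-bit codeword.

def encode(d):                      # d: list of 4 data bits
    c = [0] * 8                     # positions 1..7; index 0 unused
    c[3], c[5], c[6], c[7] = d      # data bits; parity bits sit at positions 1, 2, 4
    c[1] = c[3] ^ c[5] ^ c[7]
    c[2] = c[3] ^ c[6] ^ c[7]
    c[4] = c[5] ^ c[6] ^ c[7]
    return c[1:]

def decode(word):                   # word: list of 7 bits, at most one of them flipped
    c = [0] + list(word)
    s1 = c[1] ^ c[3] ^ c[5] ^ c[7]
    s2 = c[2] ^ c[3] ^ c[6] ^ c[7]
    s4 = c[4] ^ c[5] ^ c[6] ^ c[7]
    syndrome = s1 + 2 * s2 + 4 * s4  # position of the erroneous bit, 0 if none
    if syndrome:
        c[syndrome] ^= 1             # correct the single-bit error
    return [c[3], c[5], c[6], c[7]]

data = [1, 0, 1, 1]
cw = encode(data)
cw[4] ^= 1                           # flip one bit (position 5) to simulate corruption
print(decode(cw) == data)            # True: the error was located and corrected
```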
https://en.wikipedia.org/wiki/Trusted%20Network%20Connect
Trusted Network Connect (TNC) is an open architecture for Network Access Control, promulgated by the Trusted Network Connect Work Group (TNC-WG) of the Trusted Computing Group (TCG). History The TNC architecture was first introduced at the RSA Conference in 2005. TNC was originally a network access control standard with a goal of multi-vendor endpoint policy enforcement. In 2009 TCG announced expanded specifications which extended the specifications to systems outside of the enterprise network. Additional uses for TNC which have been reported include Industrial Control System (ICS), SCADA security, and physical security. Specifications Specifications introduced by the TNC Work Group: TNC Architecture for Interoperability IF-IMC - Integrity Measurement Collector Interface IF-IMV - Integrity Measurement Verifier Interface IF-TNCCS - Trusted Network Connect Client-Server Interface IF-M - Vendor-Specific IMC/IMV Messages Interface IF-T - Network Authorization Transport Interface IF-PEP - Policy Enforcement Point Interface IF-MAP - Metadata Access Point Interface CESP - Clientless Endpoint Support Profile Federated TNC TNC Vendor Adoption A partial list of vendors who have adopted TNC Standards: ArcSight Aruba Networks Avenda Systems Enterasys Extreme Networks Fujitsu IBM Pulse Secure Juniper Networks Lumeta McAfee Microsoft Nortel ProCurve strongSwan Wave Systems Also, networking by Cisco HP Symantec Trapeze Networks Tofino TNC Customer Adoption The U.S. Army has planned to use this technology to enhance the security of its computer networks. The South Carolina Department of Probation, Parole, and Pardon Services has tested a TNC-SCAP integration combination in a pilot program. See also IF-MAP Trusted Computing Trusted Computing Group Trusted Internet Connection References Sources Dornan, Andy. “'Trusted Network Connect' Puts Hardware Security Agent In Every PC”, “Information Week Magazine”, UBM Techweb Publishing. Vijay
https://en.wikipedia.org/wiki/Actuarial%20reserves
In insurance, an actuarial reserve is a reserve set aside for future insurance liabilities. It is generally equal to the actuarial present value of the future cash flows of a contingent event. In the insurance context an actuarial reserve is the present value of the future cash flows of an insurance policy and the total liability of the insurer is the sum of the actuarial reserves for every individual policy. Regulated insurers are required to keep offsetting assets to pay off this future liability. The loss random variable The loss random variable is the starting point in the determination of any type of actuarial reserve calculation. Define to be the future state lifetime random variable of a person aged x. Then, for a death benefit of one dollar and premium , the loss random variable, , can be written in actuarial notation as a function of From this we can see that the present value of the loss to the insurance company now if the person dies in t years, is equal to the present value of the death benefit minus the present value of the premiums. The loss random variable described above only defines the loss at issue. For K(x) > t, the loss random variable at time t can be defined as: Net level premium reserves Net level premium reserves, also called benefit reserves, only involve two cash flows and are used for some US GAAP reporting purposes. The valuation premium in an NLP reserve is a premium such that the value of the reserve at time zero is equal to zero. The net level premium reserve is found by taking the expected value of the loss random variable defined above. They can be formulated prospectively or retrospectively. The amount of prospective reserves at a point in time is derived by subtracting the actuarial present value of future valuation premiums from the actuarial present value of the future insurance benefits. Retrospective reserving subtracts accumulated value of benefits from accumulated value of valuation premiums as of a point in time. T
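For reference, the quantities described above can be written out in the usual fully discrete textbook notation (the notation is an assumption here, reconstructed rather than quoted from the excerpt): for a whole life insurance of 1 on a life aged x, with curtate future lifetime K = K(x), discount factor v = 1/(1+i) and net annual premium P,

```latex
% Loss-at-issue random variable and the net-premium condition:
L = v^{K+1} - P\,\ddot{a}_{\overline{K+1}|},
\qquad P \text{ chosen so that } \operatorname{E}[L] = 0,
\\[4pt]
% Prospective net level premium reserve at integer duration t:
{}_{t}V = A_{x+t} - P\,\ddot{a}_{x+t}.
```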
https://en.wikipedia.org/wiki/Intraspecific%20antagonism
Intraspecific antagonism means a disharmonious or antagonistic interaction between two individuals of the same species. As such, it could be a sociological term, but was actually coined by Alan Rayner and Norman Todd working at Exeter University in the late 1970s, to characterise a particular kind of zone line formed between wood-rotting fungal mycelia. Intraspecific antagonism is one of the expressions of a phenomenon known as vegetative or somatic incompatibility. Fungal individualism Zone lines form in wood for many reasons, including host reactions against parasitic encroachment, and inter-specific interactions, but the lines observed by Rayner and Todd when transversely-cut sections of brown-rotted birch tree trunk or branch were incubated in plastic bags appeared to be due to a reaction between different individuals of the same species of fungus. This was a startling inference at a time when the prevailing orthodoxy within the mycological community was that of the "unit mycelium". This was the theory that when two different individuals of the same species of basidiomycete wood rotting fungi grew and met within the substratum, they fused, cooperated, and shared nuclei freely. Rayner and Todd's insight was that basidiomycete fungi individuals do, in most "adult" or dikaryotic cases anyway, retain their individuality. A small stable of postgraduate and postdoctoral students helped elucidate the mechanisms underlying these intermycelial interactions, at Exeter University (Todd) and the University of Bath (Rayner), over the next few years. Applications of intraspecific antagonism Although the attribution of individual status to the mycelia confined by intraspecific zone lines is a comparatively new idea, zone lines themselves have been known since time immemorial. The term spalting is applied by woodworkers to wood showing strongly-figured zone lines, particularly those cases where the area of "no-man's land" between two antagonistic conspecific mycelia is c
https://en.wikipedia.org/wiki/Duhamel%27s%20principle
In mathematics, and more specifically in partial differential equations, Duhamel's principle is a general method for obtaining solutions to inhomogeneous linear evolution equations like the heat equation, wave equation, and vibrating plate equation. It is named after Jean-Marie Duhamel who first applied the principle to the inhomogeneous heat equation that models, for instance, the distribution of heat in a thin plate which is heated from beneath. For linear evolution equations without spatial dependency, such as a harmonic oscillator, Duhamel's principle reduces to the method of variation of parameters technique for solving linear inhomogeneous ordinary differential equations. It is also an indispensable tool in the study of nonlinear partial differential equations such as the Navier–Stokes equations and nonlinear Schrödinger equation where one treats the nonlinearity as an inhomogeneity. The philosophy underlying Duhamel's principle is that it is possible to go from solutions of the Cauchy problem (or initial value problem) to solutions of the inhomogeneous problem. Consider, for instance, the example of the heat equation modeling the distribution of heat energy in . Indicating by the time derivative of , the initial value problem is where g is the initial heat distribution. By contrast, the inhomogeneous problem for the heat equation, corresponds to adding an external heat energy at each point. Intuitively, one can think of the inhomogeneous problem as a set of homogeneous problems each starting afresh at a different time slice . By linearity, one can add up (integrate) the resulting solutions through time and obtain the solution for the inhomogeneous problem. This is the essence of Duhamel's principle. General considerations Formally, consider a linear inhomogeneous evolution equation for a function with spatial domain in , of the form where L is a linear differential operator that involves no time derivatives. Duhamel's principle is, formally, that
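To make the heat-equation example concrete, the standard statement can be written out explicitly (standard notation assumed here rather than quoted from the excerpt):

```latex
% Homogeneous Cauchy problem for the heat equation, with solution operator S(t):
u_t(x,t) - \Delta u(x,t) = 0, \qquad u(x,0) = g(x), \qquad u(\cdot,t) = S(t)g.
\\[4pt]
% Duhamel's principle: the inhomogeneous problem
u_t(x,t) - \Delta u(x,t) = f(x,t), \qquad u(x,0) = 0,
\\[4pt]
% is solved by superposing homogeneous solutions launched at each time slice s:
u(\cdot,t) = \int_0^t S(t-s)\, f(\cdot,s)\, ds .
```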
https://en.wikipedia.org/wiki/Mail-sink
Smtp-sink is a utility program in the Postfix Mail software package that implements a "black hole" function. It listens on the named host (or address) and port. It accepts Simple Mail Transfer Protocol (SMTP) messages from the network and discards them. The purpose is to support measurement of client performance. It is not SMTP protocol compliant. Connections can be accepted on IPv4 or IPv6 endpoints, or on UNIX-domain sockets. IPv4 and IPv6 are the default. This program is the complement of the smtp-source(1) program. See also Tarpit (networking) SMTP References External links Postfix documentation for smtp-source How to create a domain mail sink for Exchange Server. System.Web.Mail and SMTP Explained Computer networking
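The accept-and-discard behaviour is simple enough to sketch from scratch. The toy server below is not smtp-sink itself and, like the real tool, is deliberately not a compliant SMTP implementation; the endpoint, greeting text and single-connection structure are illustrative assumptions. It answers just enough of the dialogue for a client to hand over a message, which it then throws away.

```python
import socket

HOST, PORT = "127.0.0.1", 2525          # illustrative endpoint, not smtp-sink's defaults

with socket.create_server((HOST, PORT)) as srv:
    conn, _ = srv.accept()
    with conn, conn.makefile("rb") as rfile, conn.makefile("wb", buffering=0) as wfile:
        wfile.write(b"220 sink ready\r\n")
        in_data = False
        for raw in rfile:
            line = raw.rstrip(b"\r\n")
            if in_data:
                if line == b".":                  # end of message body: discard it, acknowledge
                    in_data = False
                    wfile.write(b"250 Ok (message discarded)\r\n")
                continue                          # drop message content on the floor
            if line.upper().startswith(b"DATA"):
                in_data = True
                wfile.write(b"354 End data with <CR><LF>.<CR><LF>\r\n")
            elif line.upper().startswith(b"QUIT"):
                wfile.write(b"221 Bye\r\n")
                break
            else:                                 # HELO / MAIL FROM / RCPT TO / ...: accept everything
                wfile.write(b"250 Ok\r\n")
```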
https://en.wikipedia.org/wiki/Child%20development%20stages
Child development stages are the theoretical milestones of child development, some of which are asserted in nativist theories. This article discusses the most widely accepted developmental stages in children. There exists a wide variation in terms of what is considered "normal", caused by variations in genetic, cognitive, physical, family, cultural, nutritional, educational, and environmental factors. Many children reach some or most of these milestones at different times from the norm. Holistic development sees the child in the round, as a whole person – physically, emotionally, intellectually, socially, morally, culturally and spiritually. Learning about child development involves studying patterns of growth and development, from which guidelines for 'normal' development are construed. Developmental norms are sometimes called milestones – they define the recognized development pattern that children are expected to follow. Each child develops in a unique way; however, using norms helps in understanding these general patterns of development while recognizing the wide variation between individuals. One way to identify pervasive developmental disorders is if infants fail to meet the development milestones in time or at all. Table of milestones Infancy Newborn Physical development Infants are usually born weighing between and , but infants born prematurely often weigh less. Newborns typically lose 7–10% of their birth weight in the first few days, but they usually regain it within two weeks. During the first month, infants grow about and gain weight at a rate of about per day. Resting heart rate is generally between 70 and 190 beats per minute. Motor development Moves in response to stimuli. Displays several infantile reflexes, including: The rooting reflex, which causes the infant to suck when the nipple of a breast or bottle is placed in their mouth. The Moro reflex, which causes the infant to throw out their arms and legs when startled. The asymmet
https://en.wikipedia.org/wiki/MikroMikko
MikroMikko was a Finnish line of microcomputers released by Nokia Corporation's computer division Nokia Data from 1981 through 1987. MikroMikko was Nokia Data's attempt to enter the business computer market. They were especially designed for good ergonomy. History The first model in the line, MikroMikko 1, was released on 29 September 1981, 48 days after IBM introduced its Personal Computer. The launch date of MikroMikko 1 is the name day of Mikko in the Finnish almanac. The MikroMikko line was manufactured in a factory in the Kilo district of Espoo, Finland, where computers had been produced since the 1960s. Nokia later bought the computer division of the Swedish telecommunications company Ericsson. During Finland's economic depression in the early 1990s, Nokia streamlined many of its operations and sold many of its less profitable divisions to concentrate on its key competence of telecommunications. Nokia's personal computer division was sold to the British computer company ICL (International Computers Limited) in 1991, which later became part of Fujitsu. However, ICL and later Fujitsu retained the MikroMikko trademark in Finland. Internationally the MikroMikko line was marketed by Fujitsu under the trademark ErgoPro. Fujitsu later transferred its personal computer operations to Fujitsu Siemens Computers, which shut down its only factory in Espoo at the end of March 2000, thus ending large-scale PC manufacturing in the country. Models MikroMikko 1 M6 Processor: Intel 8085, 2 MHz 64 KB RAM, 4 KB ROM Display: 80×24 character text mode, the 25th row was used as a status row. Graphics resolutions 160×75 and 800×375 pixels, refresh rate 50 Hz Two 640 KB 5.25" floppy drives (other models might only have one drive) Optional 5 MB hard disk (stock in model M7) Connectors: two RS-232s, display, printer, keyboard Software: Nokia CP/M 2.2 operating system, Microsoft BASIC, editor, assembler and debugger Cost: 30,000 mk in 1984 MikroMikko 2 Released in 1983 Pro
https://en.wikipedia.org/wiki/Microcom%20Networking%20Protocol
The Microcom Networking Protocols, almost always shortened to MNP, are a family of error-correcting protocols commonly used on early high-speed (2400 bit/s and higher) modems. Originally developed for use on Microcom's own family of modems, the protocol was later openly licensed and used by most of the modem industry, notably the "big three", Telebit, USRobotics and Hayes. MNP was later supplanted by V.42bis, which was used almost universally starting with the first V.32bis modems in the early 1990s. Overview Although Xmodem was introduced in 1977, as late as 1985 The New York Times described XMODEM first, then discussed MNP as a leading contender, and noted that 9600 baud modems "are beginning to make their appearance." By 1988, the Times was talking about 9600 and 19.2K speeds, and reported that "At least 100 other brands of modems follow" MNP (compared to Hayes' use of LAP-B). Error correction basics Modems are, by their nature, error-prone devices. Noise on the telephone line, a common occurrence, can easily mimic the sounds used by the modems to transmit data, thereby introducing errors that are difficult to notice. For some tasks, like reading or writing simple text, a small number of errors can be accepted without causing too many problems. For other tasks, like transferring computer programs in machine format, even one error can render the received data useless. As modems increase in speed by using up more of the available bandwidth, the chance that random noise would introduce errors also increases; above 2400 bit/s these errors are quite common. To deal with this problem, a number of file transfer protocols were introduced and implemented in various programs. In general, these protocols break down a file into a series of frames or packets containing a number of bytes from the original file. Some sort of additional data, normally a checksum or CRC, is added to each packet to indicate whether the packet encountered an error while being received. The packet is then sent to the re
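The frame-plus-check idea common to these file transfer protocols (and to MNP's error correction) can be sketched in a few lines. The packet layout below is invented for illustration and does not follow the actual MNP or Xmodem frame formats.

```python
import struct
import zlib

def make_packets(data: bytes, payload_size: int = 64):
    """Split `data` into frames: [seq (2 bytes)][length (2 bytes)][payload][CRC-32 (4 bytes)]."""
    packets = []
    for seq, i in enumerate(range(0, len(data), payload_size)):
        payload = data[i:i + payload_size]
        header = struct.pack(">HH", seq, len(payload))
        crc = zlib.crc32(header + payload)
        packets.append(header + payload + struct.pack(">I", crc))
    return packets

def check_packet(packet: bytes) -> bool:
    """Receiver side: recompute the CRC and compare, so corrupted frames can be re-requested."""
    body, received_crc = packet[:-4], struct.unpack(">I", packet[-4:])[0]
    return zlib.crc32(body) == received_crc

pkts = make_packets(b"example file contents" * 20)
print(all(check_packet(p) for p in pkts))          # True
corrupted = bytearray(pkts[0]); corrupted[5] ^= 0xFF
print(check_packet(bytes(corrupted)))              # False: error detected, frame would be resent
```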
https://en.wikipedia.org/wiki/Propositional%20directed%20acyclic%20graph
A propositional directed acyclic graph (PDAG) is a data structure that is used to represent a Boolean function. A Boolean function can be represented as a rooted, directed acyclic graph of the following form: Leaves are labeled with ⊤ (true), ⊥ (false), or a Boolean variable. Non-leaves are ∧ (logical and), ∨ (logical or) and ¬ (logical not). ∧- and ∨-nodes have at least one child. ¬-nodes have exactly one child. Leaves labeled with ⊤ (⊥) represent the constant Boolean function which always evaluates to 1 (0). A leaf labeled with a Boolean variable x is interpreted as the assignment x = 1, i.e. it represents the Boolean function which evaluates to 1 if and only if x = 1. The Boolean function represented by a ∧-node is the one that evaluates to 1 if and only if the Boolean functions of all its children evaluate to 1. Similarly, a ∨-node represents the Boolean function that evaluates to 1 if and only if the Boolean function of at least one child evaluates to 1. Finally, a ¬-node represents the complementary Boolean function of its child, i.e. the one that evaluates to 1 if and only if the Boolean function of its child evaluates to 0. PDAG, BDD, and NNF Every binary decision diagram (BDD) and every negation normal form (NNF) are also a PDAG with some particular properties. See also Data structure Boolean satisfiability problem Proposition References M. Wachter & R. Haenni, "Propositional DAGs: a New Graph-Based Language for Representing Boolean Functions", KR'06, 10th International Conference on Principles of Knowledge Representation and Reasoning, Lake District, UK, 2006. M. Wachter & R. Haenni, "Probabilistic Equivalence Checking with Propositional DAGs", Technical Report iam-2006-001, Institute of Computer Science and Applied Mathematics, University of Bern, Switzerland, 2006. M. Wachter, R. Haenni & J. Jonczy, "Reliability and Diagnostics of Modular Systems: a New Probabilistic Approach", DX'06, 18th Internation
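The node semantics above transcribe almost directly into code. The structure below is purely illustrative (real PDAG/NNF packages use more compact, shared representations; here nodes can be shared simply by reusing the same object in several parent tuples):

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass(frozen=True)
class Node:
    kind: str                       # "and", "or", "not", "var", "true", "false"
    children: Tuple["Node", ...] = ()
    var: str = ""

def evaluate(node: Node, assignment: Dict[str, bool]) -> bool:
    if node.kind == "true":
        return True
    if node.kind == "false":
        return False
    if node.kind == "var":
        return assignment[node.var]
    if node.kind == "not":            # exactly one child
        return not evaluate(node.children[0], assignment)
    values = [evaluate(c, assignment) for c in node.children]
    return all(values) if node.kind == "and" else any(values)

# (x AND NOT y) OR z
x, y, z = (Node("var", var=v) for v in "xyz")
root = Node("or", (Node("and", (x, Node("not", (y,)))), z))
print(evaluate(root, {"x": True, "y": False, "z": False}))   # True
```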
https://en.wikipedia.org/wiki/PWE3
In 2001, the IETF set up the Pseudowire Emulation Edge to Edge working group, and this group was given the initialism PWE3 (the 3 standing for the third power of E, i.e. EEE). The working group was chartered to develop an architecture for service provider edge-to-edge pseudowires and service-specific documents detailing the encapsulation techniques. In computer networking and telecommunications, a pseudowire (PW) is an emulation of a native service over a packet-switched network (PSN). The native service may be ATM, Frame Relay, Ethernet, low-rate TDM, or SONET/SDH, while the PSN may be MPLS, IP (either IPv4 or IPv6), or L2TPv3. The working group chairs were originally Danny McPherson and Luca Martini, but following Martini's resignation Stewart Bryant became co-chair. External links Charter for PWE3 Working Group : http://datatracker.ietf.org/wg/pwe3/charter/ Computer network organizations
https://en.wikipedia.org/wiki/Accelerated%20Math
Accelerated Math is a daily, progress-monitoring software tool that monitors and manages mathematics skills practice, from preschool math through calculus. It is primarily used by primary and secondary schools, and it is published by Renaissance Learning, Inc. Currently, there are two versions: a desktop version and a web-based version in Renaissance Place, the company's web software for Accelerated Math and a number of other software products (e.g. Accelerated Reader). In Australia and the United Kingdom, the software is referred to as "Accelerated Maths". Research Below is a sample of some of the most current research on Accelerated Math. Sadusky and Brem (2002) studied the impact of first-year implementation of Accelerated Math in a K-6 urban elementary school during the 2001–2002 school year. The researchers found that teachers were able to immediately use data to make decisions about instruction in the classroom. The students in classrooms using Accelerated Math had twice the percentile gains when tested as compared to the control classrooms that did not use Accelerated Math. Ysseldyke and Tardrew (2003) studied 2,202 students in 125 classrooms encompassing 24 states. The results showed that when students using Accelerated Math were compared to a control group, those students using the software made significant gains on the STAR Math test. Students in grades 3 through 10 who were using Accelerated Math had more than twice the percentile gains on these tests as students in the control group. Ysseldyke, Betts, Thill, and Hannigan (2004) conducted a quasi-experimental study with third- through sixth-grade Title I students. They found that Title I students who used Accelerated Math outperformed students who did not. Springer, Pugalee, and Algozzine (2005) also discovered a similar pattern. They studied students who had failed to pass the AIMS test required in order to graduate. Over half of the students passed the test after taking a course in which Accelerated Math
https://en.wikipedia.org/wiki/HashClash
HashClash was a volunteer computing project running on the Berkeley Open Infrastructure for Network Computing (BOINC) software platform to find collisions in the MD5 hash algorithm. It was based at Department of Mathematics and Computer Science at the Eindhoven University of Technology, and Marc Stevens initiated the project as part of his master's degree thesis. The project ended after Stevens defended his M.Sc. thesis in June 2007. However SHA1 was added later, and the code repository was ported to git in 2017. The project was used to create a rogue certificate authority certificate in 2009. See also Berkeley Open Infrastructure for Network Computing (BOINC) List of volunteer computing projects References External links HashClash HashClash at Stevens' home page Create your own MD5 collisions on AWS, Nat McHugh's blog Science in society Free science software Volunteer computing projects Cryptography Cryptanalytic software
https://en.wikipedia.org/wiki/Mobile%20PCI%20Express%20Module
A Mobile PCI Express Module (MXM) is an interconnect standard, created by the MXM-SIG, for GPUs (MXM Graphics Modules) in laptops using PCI Express. The goal was to create a non-proprietary, industry-standard socket, so one could easily upgrade the graphics processor in a laptop without having to buy a whole new system or rely on proprietary vendor upgrades. Generations Smaller graphics modules can be inserted into larger slots, but type I and II heatsinks will not fit type III and above, or vice versa. The Alienware m5700 platform uses a heatsink that will fit Type I, II, & III cards without modification. MXM 3.1 was released in March 2012 and added PCIe 3.0 support. First-generation modules are not compatible with second-generation (MXM 3) modules and vice versa. First-generation modules I to IV are fully backwards compatible. Specification The MXM specification is no longer freely supplied; it is controlled by the MXM-SIG, which is in turn controlled by Nvidia. Only corporate clients are granted access to the standard. The MXM 2.1 specification is widely available. List of MXM cards MXM 3.x cards Other uses The Qseven computer-on-module form factor uses an MXM-II connector, while the SMARC computer-on-module form factor uses an MXM 3 connector. Neither implementation is in any way compatible with the MXM standard. Notes References External links MXM-SIG Nearly complete list of Acer laptops implementing MXM Initial MXM 3.0 technical brief 3.1 Electromechanical specification Peripheral Component Interconnect
https://en.wikipedia.org/wiki/Lake%20ecosystem
A lake ecosystem or lacustrine ecosystem includes biotic (living) plants, animals and micro-organisms, as well as abiotic (non-living) physical and chemical interactions. Lake ecosystems are a prime example of lentic ecosystems (lentic refers to stationary or relatively still freshwater, from the Latin lentus, which means "sluggish"), which include ponds, lakes and wetlands, and much of this article applies to lentic ecosystems in general. Lentic ecosystems can be compared with lotic ecosystems, which involve flowing terrestrial waters such as rivers and streams. Together, these two ecosystems are examples of freshwater ecosystems. Lentic systems are diverse, ranging from a small, temporary rainwater pool a few inches deep to Lake Baikal, which has a maximum depth of 1642 m. The general distinction between pools/ponds and lakes is vague, but Brown states that ponds and pools have their entire bottom surfaces exposed to light, while lakes do not. In addition, some lakes become seasonally stratified. Ponds and pools have two regions: the pelagic open water zone, and the benthic zone, which comprises the bottom and shore regions. Since lakes have deep bottom regions not exposed to light, these systems have an additional zone, the profundal. These three areas can have very different abiotic conditions and, hence, host species that are specifically adapted to live there. Two important subclasses of lakes are ponds, which typically are small lakes that intergrade with wetlands, and water reservoirs. Over long periods of time, lakes, or bays within them, may gradually become enriched by nutrients and slowly fill in with organic sediments, a process called succession. When humans use the watershed, the volumes of sediment entering the lake can accelerate this process. The addition of sediments and nutrients to a lake is known as eutrophication. Zones Lake ecosystems can be divided into zones. One common system divides lakes into three zones. The first, the littoral zo
https://en.wikipedia.org/wiki/Route%20Views
RouteViews is a project founded by the Advanced Network Technology Center at the University of Oregon to allow Internet users to view global Border Gateway Protocol routing information from the perspective of other locations around the internet. Originally created to help Internet Service Providers determine how their network prefixes were viewed by others in order to debug and optimize access to their network, RouteViews is now used for a range of other purposes including academic research. RouteViews collectors obtain BGP data by directly peering with network operators at Internet exchange points or by multi-hop peering with network operators that aren't colocated at an exchange where RouteViews has a collector. The collectors can be queried using telnet. Historical data is stored in MRT format and can be downloaded using HTTP or FTP from archive.routeviews.org. As of 2023, the RouteViews project has collectors at 39 exchange points and more than 1,000 peers. References External links RouteViews homepage Internet architecture Network management
https://en.wikipedia.org/wiki/Al-Khansaa%20%28magazine%29
Al-Khansaa was an online women's magazine launched in 2004 by a Saudi branch of al-Qaeda. The magazine claimed to have been founded by Saudi leader Abd-al-Aziz al-Muqrin shortly before his death. It offered advice on first aid for wounded family members, how to raise children to believe in Jihad and physical training for women to prepare for combat. The magazine was named after Al-Khansaa, an Arab poet and a contemporary of Muhammad. References 2004 establishments in Saudi Arabia Magazines established in 2004 Online magazines Magazines published in Saudi Arabia Works by al-Qaeda Women's magazines Propaganda newspapers and magazines Magazines with year of disestablishment missing
https://en.wikipedia.org/wiki/Primordial%20Soup%20%28board%20game%29
Primordial Soup is a board game designed by Doris Matthäus & Frank Nestel and published by Z-Man Games. It was first published in 1997 in Germany by Doris & Frank under the name Ursuppe and this original version won 2nd prize in the 1998 Deutscher Spiele Preis. Theme Each player guides a species of primitive amoeba drifting through the primordial soup. The player controls whether and how their amoebas move, eat and procreate using the 10 biological points which s/he receives each turn. A player may evolve their species by buying gene cards, which give the amoebas abilities such as faster movement. The abilities are pictured on the gene cards, showing amoebas growing fins, tentacles, spines, etc. A key feature of the game is its self-balancing ecosystem. The food required by each amoeba is a mixture of the excrement of the other players' species. Food may become scarce and cause amoebas to starve, die and decompose into food. If one species becomes scarce, this will then cause problems for the other players, since their amoebas depend on all the other species to supply their food. Genes may mitigate this, for example by turning a species into a predator. However, this still requires some healthy prey to be available. Furthermore, the other players may react by turning their amoebas into predators themselves, growing spines for defense, or simply increase their procreation rate to offset the losses. The success of each strategy highly depends on the other players' actions as each species evolves to fill ecological niches. Objective Each turn each species scores points based upon its population and genes. The game ends when a player reaches 42 points or when the last environment card is drawn. This usually happens after 5-10 rounds so the game lasts 1–2 hours. Equipment A game board with spaces representing the primordial soup, a scoring track, and a compass diagram. Two dice. 28 pegged discs representing amoebas, in four sets of different colors and shapes.
https://en.wikipedia.org/wiki/Ceramic%20engineering
Ceramic engineering is the science and technology of creating objects from inorganic, non-metallic materials. This is done either by the action of heat, or at lower temperatures using precipitation reactions from high-purity chemical solutions. The term includes the purification of raw materials, the study and production of the chemical compounds concerned, their formation into components and the study of their structure, composition and properties. Ceramic materials may have a crystalline or partly crystalline structure, with long-range order on atomic scale. Glass ceramics may have an amorphous or glassy structure, with limited or short-range atomic order. They are either formed from a molten mass that solidifies on cooling, formed and matured by the action of heat, or chemically synthesized at low temperatures using, for example, hydrothermal or sol-gel synthesis. The special character of ceramic materials gives rise to many applications in materials engineering, electrical engineering, chemical engineering and mechanical engineering. As ceramics are heat resistant, they can be used for many tasks for which materials like metal and polymers are unsuitable. Ceramic materials are used in a wide range of industries, including mining, aerospace, medicine, refinery, food and chemical industries, packaging science, electronics, industrial and transmission electricity, and guided lightwave transmission. History The word "ceramic" is derived from the Greek word () meaning pottery. It is related to the older Indo-European language root "to burn". "Ceramic" may be used as a noun in the singular to refer to a ceramic material or the product of ceramic manufacture, or as an adjective. Ceramics is the making of things out of ceramic materials. Ceramic engineering, like many sciences, evolved from a different discipline by today's standards. Materials science engineering is grouped with ceramics engineering to this day. Abraham Darby first used coke in 1709 in Shropshir
https://en.wikipedia.org/wiki/AOSS
AOSS (AirStation One-Touch Secure System) is a system by Buffalo Technology which allows a secure wireless connection to be set up with the push of a button. AirStation residential gateways incorporated a button on the unit to let the user initiate this procedure. AOSS was designed to use the maximum level of security available to both connecting devices, including both Wired Equivalent Privacy (WEP) and Wi-Fi Protected Access (WPA). Connection Process Association Phase: Once AOSS has been initiated on both devices via the AOSS button, the access point will change its SSID to "ESSID-AOSS" and the client will attempt to connect to it. Both devices will attempt connection for two minutes. Connection will be made using a secret 64-bit WEP key known to both devices. Key Generation Phase: With both devices connected, the AP generates and transfers a unique key to the client, where an RC4 tunnel is created. The AP creates four SSIDs and encryption keys for AES, TKIP, WEP128, and WEP64 generated from a random key script. These keys are available in the user interface of the AOSS AP to be used with non-AOSS clients. Information Exchange Phase: The client notifies the AP of its encryption support. Key Transfer Phase: All four encryption keys are transmitted to the client regardless of encryption support, allowing the client to change the SSID if needed. The user does not have access to the keys through the client device. Reboot Stack: The AP applies the SSID and key for the highest level of encryption supported by the client and reboots. The previously used WEP64 and RC4 tunnel are no longer used. The client adapter will automatically reboot or re-initialize and connect to the SSID using the proper encryption key. If a subsequent AOSS process connects with a lesser wireless encryption standard, the AP will apply the lesser standard and the Reboot Stack phase will be repeated for all connected devices. Compatible products The Nintendo Wi-Fi Connection used b
https://en.wikipedia.org/wiki/Response%20surface%20methodology
In statistics, response surface methodology (RSM) explores the relationships between several explanatory variables and one or more response variables. The method was introduced by George E. P. Box and K. B. Wilson in 1951. The main idea of RSM is to use a sequence of designed experiments to obtain an optimal response. Box and Wilson suggest using a second-degree polynomial model to do this. They acknowledge that this model is only an approximation, but they use it because such a model is easy to estimate and apply, even when little is known about the process. Statistical approaches such as RSM can be employed to maximize the production of a substance of interest by optimizing the operational factors. Recently, RSM combined with a proper design of experiments (DoE) has become extensively used for formulation optimization. In contrast to conventional methods, the interaction among process variables can be determined by statistical techniques. Basic approach of response surface methodology An easy way to estimate a first-degree polynomial model is to use a factorial experiment or a fractional factorial design. This is sufficient to determine which explanatory variables affect the response variable(s) of interest. Once it is suspected that only significant explanatory variables are left, then a more complicated design, such as a central composite design, can be implemented to estimate a second-degree polynomial model, which is still only an approximation at best. However, the second-degree model can be used to optimize (maximize, minimize, or attain a specific target for) the response variable(s) of interest. Important RSM properties and features Orthogonality: the property that allows individual effects of the k factors to be estimated independently without (or with minimal) confounding. Orthogonality also provides minimum-variance estimates of the model coefficients so that they are uncorrelated. Rotatability: the property of rotating points of the design about th
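The second-degree polynomial model at the heart of RSM is just a linear least-squares problem in the expanded feature set (1, x1, x2, x1², x2², x1·x2). A sketch with synthetic data for two coded factors (the data and coefficients are made up for illustration, not from a real experiment):

```python
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(-1, 1, 40), rng.uniform(-1, 1, 40)          # coded factor levels
y = 5 + 2*x1 - 3*x2 + 1.5*x1*x2 - 4*x1**2 - 1*x2**2 + rng.normal(0, 0.1, 40)

# Design matrix for the full second-degree model in two factors.
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 2))   # approximately [5, 2, -3, -4, -1, 1.5]

# With negative-definite quadratic terms the fitted surface has a maximum,
# which is the kind of operating point RSM is used to locate.
```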
https://en.wikipedia.org/wiki/Ultimate%20Guitar
Ultimate Guitar (Ultimate Guitar USA LLC), also known as Ultimate-Guitar.com or simply UG, is an online platform for guitarists and musicians. Its website and mobile application provides guitar tablature catalogues and chord sheets. UG's platform also includes video courses, reviews of music and equipment, interviews with notable musicians and forums. It was started on October 9, 1998, by Eugeny Naidenov. Since 2008, Ultimate Guitar operates from San Francisco, US, with its platform available in most countries. As of December 2021, the site and mobile app contain 1,600,000 tabs and chords for over 900,000 songs from over 115,000 artists. The UG app (also known as 'Tabs') has been downloaded more than 53,000,000 times. Community UG has over 43 million registered users. The website is regulated by an administrator and moderators. Moderators are users who are rewarded for being particularly helpful and knowledgeable in a specific subject and are responsible for moderating forums that focus on the subject they specialize in. Inappropriate words were formerly censored by a computer that searched for and replaced undesired words posted within the community, until September 1, 2015, when censorship of curse and swear words was discontinued. Community members may also create guitar lessons, and have their approved works published on the website and accessed by its users. Reviews of albums, multimedia, gear, and news articles can also be submitted by members. Like the tabs, the lessons and columns are also rated by users, which contribute to the creator's "UG Points". A user's UG score increases or decreases as other members rate their contributions. Although UG encourages participation, they also have a strict guideline and set of rules that all UG users must follow. Members must be over the age of 13 to use the services offered by the site and only one account is allowed to be made per person. Strong media is also prohibited from use on the site. Some members of the
https://en.wikipedia.org/wiki/Coverity
Coverity is a proprietary static code analysis tool from Synopsys. This product enables engineers and security teams to find and fix software defects. Coverity started as an independent software company in 2002 at the Computer Systems Laboratory at Stanford University in Palo Alto, California. It was founded by Benjamin Chelf, Andy Chou, and Seth Hallem with Stanford professor Dawson Engler as a technical adviser. The headquarters was moved to San Francisco. In June 2008, Coverity acquired Solidware Technologies. In February 2014, Coverity announced an agreement to be acquired by Synopsys, an electronic design automation company, for $350 million net of cash on hand. Products Coverity is a static code analysis tool for C, C++, C#, Java, JavaScript, PHP, Python, .NET, ASP.NET, Objective-C, Go, JSP, Ruby, Swift, Fortran, Scala, VB.NET, and TypeScript. It also supports more than 70 different frameworks for Java, JavaScript, C# and other languages. Coverity Scan is a free static-analysis cloud-based service for the open source community. Applications Under a United States Department of Homeland Security contract in 2006, the tool was used to examine over 150 open source applications for bugs; 6000 bugs found by the scan were fixed across 53 projects. National Highway Traffic Safety Administration used the tool in its 2010-2011 investigation into reports of sudden unintended acceleration in Toyota vehicles. The tool was used by CERN on the software employed in the Large Hadron Collider and in the NASA Jet Propulsion Laboratory during the flight software development of the Mars rover Curiosity. References Static program analysis tools Software testing tools Software companies based in California Companies based in San Francisco Defunct software companies of the United States 2014 mergers and acquisitions
https://en.wikipedia.org/wiki/Metabolic%20waste
Metabolic wastes or excrements are substances left over from metabolic processes (such as cellular respiration) which cannot be used by the organism (they are surplus or toxic), and must therefore be excreted. This includes nitrogen compounds, water, CO2, phosphates, sulphates, etc. Animals treat these compounds as excreta. Plants have metabolic pathways which transform some of them (primarily the oxygen compounds) into useful substances. All metabolic wastes are excreted in the form of water solutes through the excretory organs (nephridia, Malpighian tubules, kidneys), with the exception of CO2, which is excreted together with the water vapor through the lungs. The elimination of these compounds enables the chemical homeostasis of the organism. Nitrogen wastes The nitrogen compounds through which excess nitrogen is eliminated from organisms are called nitrogenous wastes or nitrogen wastes. They are ammonia, urea, uric acid, and creatinine. All of these substances are produced from protein metabolism. In many animals, the urine is the main route of excretion for such wastes; in some, it is the feces. Ammonotelism Ammonotelism is the excretion of ammonia and ammonium ions. Ammonia (NH3) forms with the oxidation of amino groups (-NH2), which are removed from proteins when they are converted into carbohydrates. It is very toxic to tissues and extremely soluble in water. Only one nitrogen atom is removed with it. A lot of water is needed for the excretion of ammonia: about 0.5 L of water is needed per 1 g of nitrogen to maintain ammonia levels in the excretory fluid below the level in body fluids to prevent toxicity. Thus, marine organisms excrete ammonia directly into the water and are called ammonotelic. Ammonotelic animals include crustaceans, platyhelminths, cnidarians, poriferans, echinoderms, and other aquatic invertebrates. Ureotelism The excretion of urea is called ureotelism. Land animals, mainly amphibians and mammals, convert
https://en.wikipedia.org/wiki/Hilbert%20spectrum
The Hilbert spectrum (sometimes referred to as the Hilbert amplitude spectrum), named after David Hilbert, is a statistical tool that can help in distinguishing among a mixture of moving signals. The spectrum itself is decomposed into its component sources using independent component analysis. The separation of the combined effects of unidentified sources (blind signal separation) has applications in climatology, seismology, and biomedical imaging. Conceptual summary The Hilbert spectrum is computed by way of a 2-step process consisting of: Preprocessing a signal to separate it into intrinsic mode functions using a mathematical decomposition such as singular value decomposition (SVD) or empirical mode decomposition (EMD); Applying the Hilbert transform to the results of the above step to obtain the instantaneous frequency spectrum of each of the components. The Hilbert transform defines the imaginary part of the function to make it an analytic function (sometimes referred to as a progressive function), i.e. a function whose signal strength is zero for all frequency components less than zero. With the Hilbert transform, the singular vectors give instantaneous frequencies that are functions of time, so that the result is an energy distribution over time and frequency. The result is an ability to capture time-frequency localization to make the concept of instantaneous frequency and time relevant (the concept of instantaneous frequency is otherwise abstract or difficult to define for all but monocomponent signals). Definition For a given signal x(t) decomposed (with for example Empirical Mode Decomposition) to x(t) = Σₖ cₖ(t), k = 1, …, N, where N is the number of intrinsic mode functions that x(t) consists of and each component cₖ(t) has the analytic representation aₖ(t)·exp(iθₖ(t)). The instantaneous angle frequency is then defined as ωₖ(t) = dθₖ(t)/dt. From this, we can define the Hilbert Spectrum for cₖ(t) as Hₖ(ω, t) = aₖ(t) along the curve ω = ωₖ(t), and zero elsewhere. The Hilbert Spectrum of x(t) is then given by H(ω, t) = Σₖ Hₖ(ω, t). Marginal Hilbert Spectrum A two dimensional representation of a Hilbert Spectrum, called Marginal Hilbert Spectrum, is defined as h(ω) = ∫₀ᵀ H(ω, t) dt, where
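As an illustration of the two-step procedure described above, the following Python sketch assumes the intrinsic mode functions have already been obtained from some EMD implementation (not shown) and uses scipy.signal.hilbert to form the analytic signal of each component. The function name hilbert_spectrum, the binning scheme, and the parameter n_freq_bins are illustrative choices, not part of any standard API.

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_spectrum(imfs, fs, n_freq_bins=64):
    """Amplitude distribution over time and instantaneous frequency.

    imfs: array of shape (n_imfs, n_samples), e.g. from an EMD routine
    fs:   sampling rate in Hz
    """
    n_imfs, n_samples = imfs.shape
    amp = np.zeros((n_imfs, n_samples))
    freq = np.zeros((n_imfs, n_samples))
    for k, c in enumerate(imfs):
        analytic = hilbert(c)                            # c + i * H{c}
        amp[k] = np.abs(analytic)                        # instantaneous amplitude a_k(t)
        phase = np.unwrap(np.angle(analytic))            # theta_k(t)
        freq[k] = np.gradient(phase) * fs / (2 * np.pi)  # instantaneous frequency in Hz
    # Accumulate amplitude into a (frequency, time) grid: H(omega, t)
    f_edges = np.linspace(0, fs / 2, n_freq_bins + 1)
    H = np.zeros((n_freq_bins, n_samples))
    for k in range(n_imfs):
        idx = np.clip(np.digitize(freq[k], f_edges) - 1, 0, n_freq_bins - 1)
        H[idx, np.arange(n_samples)] += amp[k]
    marginal = H.sum(axis=1) / fs                        # h(omega): integrate over time
    return H, marginal
```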
https://en.wikipedia.org/wiki/Lowry%20protein%20assay
The Lowry protein assay is a biochemical assay for determining the total level of protein in a solution. The total protein concentration is exhibited by a color change of the sample solution in proportion to protein concentration, which can then be measured using colorimetric techniques. It is named for the biochemist Oliver H. Lowry who developed the reagent in the 1940s. His 1951 paper describing the technique is the most-highly cited paper ever in the scientific literature, cited over 300,000 times. Mechanism The method combines the reactions of copper ions with the peptide bonds under alkaline conditions (the Biuret test) with the oxidation of aromatic protein residues. The Lowry method is based on the reaction of Cu+, produced by the oxidation of peptide bonds, with Folin–Ciocalteu reagent (a mixture of phosphotungstic acid and phosphomolybdic acid in the Folin–Ciocalteu reaction). The reaction mechanism is not well understood, but involves reduction of the Folin–Ciocalteu reagent and oxidation of aromatic residues (mainly tryptophan, also tyrosine). Experiments have shown that cysteine is also reactive to the reagent. Therefore, cysteine residues in protein probably also contribute to the absorbance seen in the Lowry assay. The result of this reaction is an intense blue molecule known as heteropolymolybdenum Blue. The concentration of the reduced Folin reagent (heteropolymolybdenum Blue) is measured by absorbance at 660 nm. As a result, the total concentration of protein in the sample can be deduced from the concentration of tryptophan and tyrosine residues that reduce the Folin–Ciocalteu reagent. The method was first proposed by Lowry in 1951. The bicinchoninic acid assay and the Hartree–Lowry assay are subsequent modifications of the original Lowry procedure. See also Biuret test Bradford protein assay References Walker, J. M. (2002). The protein protocols handbook. Totowa, N.J: Humana Press. External links A simplification of the protein assay met
https://en.wikipedia.org/wiki/Kernel%20Transaction%20Manager
Kernel Transaction Manager (KTM) is a component of the Windows operating system kernel in Windows Vista and Windows Server 2008 that enables applications to use atomic transactions on resources by making them available as kernel objects. Overview The transaction engine, which operates in kernel mode, allows for transactions on both kernel mode and user mode resources, as well as among distributed resources. The Kernel Transaction Manager is intended to make error recovery in applications simpler and largely transparent to developers, with KTM acting as a transaction manager that transaction clients can plug into. Those transaction clients can be third-party clients that want to initiate transactions on resources that are managed by a transaction resource manager. The resource managers can also be third-party or built into the system. KTM is used to implement Transactional NTFS (TxF) and Transactional Registry (TxR). KTM relies on the Common Log File System (CLFS) for its operation. CLFS is a general-purpose log-file subsystem designed for creating data and event logs. References Further reading External links Kernel Transaction Manager - Win32 apps | Microsoft Docs Transactional Vista: Kernel Transaction Manager and friends (TxF, TxR) | Going Deep | Channel 9 Transaction processing Windows Vista Windows Server 2008 Windows NT kernel
https://en.wikipedia.org/wiki/Fundamental%20matrix%20%28computer%20vision%29
In computer vision, the fundamental matrix is a 3×3 matrix which relates corresponding points in stereo images. In epipolar geometry, with homogeneous image coordinates, x and x′, of corresponding points in a stereo image pair, Fx describes a line (an epipolar line) on which the corresponding point x′ on the other image must lie. That means that the relation x′ᵀFx = 0 holds for all pairs of corresponding points. Being of rank two and determined only up to scale, the fundamental matrix can be estimated given at least seven point correspondences. Its seven parameters represent the only geometric information about cameras that can be obtained through point correspondences alone. The term "fundamental matrix" was coined by QT Luong in his influential PhD thesis. It is sometimes also referred to as the "bifocal tensor". As a tensor it is a two-point tensor in that it is a bilinear form relating points in distinct coordinate systems. The above relation which defines the fundamental matrix was published in 1992 by both Olivier Faugeras and Richard Hartley. Although H. Christopher Longuet-Higgins' essential matrix satisfies a similar relationship, the essential matrix is a metric object pertaining to calibrated cameras, while the fundamental matrix describes the correspondence in more general and fundamental terms of projective geometry. This is captured mathematically by the relationship between a fundamental matrix F and its corresponding essential matrix E, which is E = K′ᵀFK, with K and K′ being the intrinsic calibration matrices of the two images involved. Introduction The fundamental matrix is a relationship between any two images of the same scene that constrains where the projection of points from the scene can occur in both images. Given the projection of a scene point into one of the images the corresponding point in the other image is constrained to a line, helping the search, and allowing for the detection of wrong correspondences. The relation between corresponding points, which the fundamental
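The algebra above can be checked numerically. The short NumPy sketch below builds an essential matrix from a known rotation and translation, converts it to a fundamental matrix via F = K′⁻ᵀ E K⁻¹, and verifies that a projected point pair satisfies the epipolar constraint; all calibration matrices, poses and point coordinates are made-up illustrative values, not data from the article.

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]_x so that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

# Hypothetical calibration matrices and relative pose (illustrative values only)
K1 = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], float)
K2 = np.array([[700, 0, 320], [0, 700, 240], [0, 0, 1]], float)
R = np.eye(3)                           # second camera: same orientation,
t = np.array([1.0, 0.0, 0.0])           # translated along the x axis

E = skew(t) @ R                                    # essential matrix
F = np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)    # F = K2^-T E K1^-1

# Project one world point into both images (camera 1 at the origin)
Xw = np.array([0.2, -0.1, 5.0])
x1 = K1 @ Xw;            x1 /= x1[2]    # homogeneous pixel coordinates, image 1
x2 = K2 @ (R @ Xw + t);  x2 /= x2[2]    # homogeneous pixel coordinates, image 2

print(x2 @ F @ x1)                      # epipolar constraint, ~0 up to rounding
print(F @ x1)                           # epipolar line in image 2 on which x2 lies
```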
https://en.wikipedia.org/wiki/TCP%20sequence%20prediction%20attack
A TCP sequence prediction attack is an attempt to predict the sequence number used to identify the packets in a TCP connection, which can be used to counterfeit packets. The attacker hopes to correctly guess the sequence number to be used by the sending host. If they can do this, they will be able to send counterfeit packets to the receiving host which will seem to originate from the sending host, even though the counterfeit packets may in fact originate from some third host controlled by the attacker. One possible way for this to occur is for the attacker to listen to the conversation occurring between the trusted hosts, and then to issue packets using the same source IP address. By monitoring the traffic before an attack is mounted, the malicious host can figure out the correct sequence number. After the IP address and the correct sequence number are known, it is basically a race between the attacker and the trusted host to get the correct packet sent. One common way for the attacker to send it first is to launch another attack on the trusted host, such as a denial-of-service attack. Once the attacker has control over the connection, they are able to send counterfeit packets without getting a response. If an attacker can cause delivery of counterfeit packets of this sort, they may be able to cause various sorts of mischief, including the injection into an existing TCP connection of data of the attacker's choosing, and the premature closure of an existing TCP connection by the injection of counterfeit packets with the RST bit set, a TCP reset attack. Theoretically, other information such as timing differences or information from lower protocol layers could allow the receiving host to distinguish authentic TCP packets from the sending host and counterfeit TCP packets with the correct sequence number sent by the attacker. If such other information is available to the receiving host, if the attacker can also fake that other information, and if the receiving host ga
https://en.wikipedia.org/wiki/Operation%20Stella%20Polaris
Operation Stella Polaris was the cover name for an operation in which Finnish signals intelligence records, equipment and personnel were transported to Sweden in late September 1944 after the end of combat on the Finnish-Soviet front in the Second World War. The purpose was to enable the signals intelligence activities against the advancing Russians to continue in Sweden and to prevent the equipment falling into the hands of the Soviet Union. A Soviet invasion was considered likely and plans were made to support guerrilla warfare in Finland after a possible occupation. The operation had its base in the small fishing village of Nämpnäs in Närpes, Ostrobothnia region, from where the archives were shipped to Swedish ports. The leaders of the operation were Colonel Aladár Paasonen, chief of Finnish military intelligence, and Colonel Reino Hallamaa, head of the Finnish signals intelligence section. Transportation to Sweden On 20 September 1944 a large part of the Finnish signals intelligence unit was moved to Sweden. From the Swedish side Major Carl Petersén, head of the Defence Staff's intelligence section C-byrån, was responsible for the operation. Approximately 750 people were transported across the Gulf of Bothnia: by three ships from Närpes to Härnösand; and one ship from Uusikaupunki to Gävle. The ships also carried boxes of archives and signals intelligence equipment. After Finland ceded parts of Karelia and Salla to the Soviet Union on 19 September 1944, in accord with the Moscow Armistice, the majority of the Finnish personnel and their families returned home, except those hired by Sweden's National Defence Radio Establishment (FRA). They crossed the border at the Torne River in secret. Sweden offered to take over the equipment and some of the documents. The FRA thus had access to technical equipment and seven boxes of files, which became important in the newly established activities of the FRA. Operation Stella Polaris led to Sweden gaining access to a l
https://en.wikipedia.org/wiki/Standard%20model%20%28cryptography%29
In cryptography the standard model is the model of computation in which the adversary is only limited by the amount of time and computational power available. Other names used are bare model and plain model. Cryptographic schemes are usually based on complexity assumptions, which state that some problems, such as factorization, cannot be solved in polynomial time. Schemes that can be proven secure using only complexity assumptions are said to be secure in the standard model. Security proofs are notoriously difficult to achieve in the standard model, so in many proofs, cryptographic primitives are replaced by idealized versions. The most common example of this technique, known as the random oracle model, involves replacing a cryptographic hash function with a genuinely random function. Another example is the generic group model, where the adversary is given access to a randomly chosen encoding of a group, instead of the finite field or elliptic curve groups used in practice. Other models used invoke trusted third parties to perform some task without cheating; for example, the public key infrastructure (PKI) model requires a certificate authority, which if it were dishonest, could produce fake certificates and use them to forge signatures, or mount a man in the middle attack to read encrypted messages. Other examples of this type are the common random string model, where it is assumed that all parties have access to some string chosen uniformly at random, and its generalization, the common reference string model, where a string is chosen according to some other probability distribution. These models are often used for non-interactive zero-knowledge proofs (NIZK). In some applications, such as the Dolev–Dwork–Naor encryption scheme, it makes sense for a particular party to generate the common reference string, while in other applications, the common reference string must be generated by a trusted third party. Collectively, these models are referred to as models with
https://en.wikipedia.org/wiki/CDC%20display%20code
Display code is the six-bit character code used by many computer systems manufactured by Control Data Corporation, notably the CDC 6000 series in 1964, the 7600 in 1967 and the following Cyber series in 1971. The CDC 6000 series and their successors had 60 bit words. As such, typical usage packed 10 characters per word. It is a six-bit extension of the four-bit BCD encoding, and was referred to as BCDIC (BCD interchange code.) There were several variations of display code, notably the 63-character character set, and the 64-character character set. There were also 'CDC graphic' and 'ASCII graphic' variants of both the 63- and 64-character sets. The choice between 63- or 64-character character set, and between CDC or ASCII graphic was site-selectable. Generally, early CDC customers started out with the 63-character character set, and CDC graphic print trains on their line printers. As time-sharing became prevalent, almost all sites used the ASCII variant - so that line printer output would match interactive usage. Later CDC customers were also more likely to use the 64-character character set. A later variation, called 6/12 display code, was used in the Kronos and NOS timesharing systems in order to support full ASCII capabilities. In 6/12 mode, an escape character (the circumflex, octal 76) would indicate that the following letter was lower case. Thus, upper case and other characters were 6 bits in length, and lower case characters were 12 bits in length. The PLATO system used a further variant of 6/12 display code. Noting that lower case letters were most common in typical PLATO usage, the roles were reversed. Lower case letters were the norm, and the escape character preceded upper case letters. The typical text file format used a zero-byte terminator to signify the end of each record. The zero-byte terminator was indicated by, at least, the final twelve bits of a 60-bit word being set to zero. The terminator could actually be anywhere from 12- to
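The packing arithmetic is straightforward to sketch. The Python fragment below packs 6-bit codes ten to a 60-bit word and tests the end-of-record condition described above (at least the final twelve bits of the word set to zero). The sample character values follow the conventional A = 01, B = 02, ... octal assignment, but they should be treated as illustrative rather than a faithful reproduction of any particular CDC character set variant.

```python
def pack_display_words(codes):
    """Pack 6-bit display codes into 60-bit words, 10 characters per word."""
    words = []
    for i in range(0, len(codes), 10):
        group = codes[i:i + 10]
        group += [0] * (10 - len(group))          # zero-fill the last word
        word = 0
        for c in group:
            word = (word << 6) | (c & 0o77)       # append one 6-bit character
        words.append(word)
    return words

def ends_record(word):
    """Zero-byte terminator: at least the final 12 bits of the word are zero."""
    return (word & 0o7777) == 0

# 'H','E','L','L','O' under the assumed A=01, B=02, ... octal mapping
words = pack_display_words([0o10, 0o05, 0o14, 0o14, 0o17])
print([f"{w:020o}" for w in words], ends_record(words[-1]))   # one word, True
```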
https://en.wikipedia.org/wiki/CER-10
CER model 10 was a vacuum tube, transistor and electronic relay based computer developed at IBK-Vinča and the Mihajlo Pupin Institute (Belgrade) in 1960. It was the first digital computer developed in SFR Yugoslavia, and in Southern Europe. CER-10 was designed by Tihomir Aleksić and his associates (Rajko Tomović, Vukašin Masnikosa, Ahmed Mandžić, Dušan Hristović, Petar Vrbavac and Milojko Marić) and was developed over four years. The team included 10 engineers and 10 technicians, as well as many others. After initial prototype testing at Vinča and a redesign at the M. Pupin Institute, it was fully deployed at the Tanjug Agency building and worked there for the SKNE from 1961 and the Yugoslav government's SIV, from 1963 to 1967. The first CER-10 system was located at the SKNE (Federal secretary of internal affairs) building in 1961, which would later belong to Tanjug. The M. Pupin Institute donated the computer's case and some parts of the CER-10 along with its documentation to the Museum of Science and Technology in Belgrade in March 2006, where the computer's CPU is now displayed. Specifications 1750 vacuum tubes 1500 transistors 14000 Germanium diodes Magnetic core primary memory: 4096 of 30-bit words Secondary memory: punched tape Capable of performing min. 1600 additions per second Gallery See also CER Computers Mihajlo Pupin Institute History of computer hardware in the SFRY List of vacuum tube computers Rajko Tomović References External links http://www.pupin.rs/Profile Mihajlo Pupin Institute One-of-a-kind computers Vacuum tube computers CER computers
https://en.wikipedia.org/wiki/Load%20profile
In electrical engineering, a load profile is a graph of the variation in the electrical load versus time. A load profile will vary according to customer type (typical examples include residential, commercial and industrial), temperature and holiday seasons. Power producers use this information to plan how much electricity they will need to make available at any given time. Teletraffic engineering uses a similar load curve. Power generation In a power system, a load curve or load profile is a chart illustrating the variation in demand/electrical load over a specific time. Generation companies use this information to plan how much power they will need to generate at any given time. A load duration curve is similar to a load curve. The information is the same but is presented in a different form. These curves are useful in the selection of generator units for supplying electricity. Electricity distribution In an electricity distribution grid, the load profile of electricity usage is important to the efficiency and reliability of power transmission. The power transformer or battery-to-grid are critical aspects of power distribution and sizing and modelling of batteries or transformers depends on the load profile. The factory specification of transformers for the optimization of load losses versus no-load losses is dependent directly on the characteristics of the load profile that the transformer is expected to be subjected to. This includes such characteristics as average load factor, diversity factor, utilization factor, and demand factor, which can all be calculated based on a given load profile. On the power market so-called EFA blocks are used to specify the traded forward contract on the delivery of a certain amount of electrical energy at a certain time. Retail energy markets In retail energy markets, supplier obligations are settled on an hourly or subhourly basis. For most customers, consumption is measured on a monthly basis, based on meter reading s
https://en.wikipedia.org/wiki/Mihajlo%20Pupin%20Institute
Mihajlo Pupin Institute is an institute based in Belgrade, Serbia. It is named after Mihajlo Idvorski Pupin and is part of the University of Belgrade. It is notable for manufacturing numerous computer systems used in SFR Yugoslavia - especially the early CER and the later TIM lines of computers. Departments The institute is well known in a wide range of fields. In the science community, it is known for early work in humanoid robotics. The institute and companies owned by it compete in fields such as: System integration and networking, Information systems for government and industry, Internet/Intranet IS E-commerce, e-government applications Decision support systems, expert systems, intelligent Internet applications, Power systems control, supervision and optimization Process control and supervision, Traffic control, GPS Telecommunications Digital signal processing Simulators, training aids, specialised H/S systems Image processing Real-time systems (large scale and embedded) Turn-key engineering solutions Robotics Subsidiaries IMP-Automatika d.o.o. Belgrade IMP-Računarski sistemi d.o.o. Belgrade IMP-Telekomunikacije d.o.o. Belgrade Idvorski laboratorije d.o.o. Belgrade IMP-Piezotehnologija d.o.o. Belgrade IMP-Poslovne usluge d.o.o. Belgrade IMP-Naučnotehnološki park d.o.o. Belgrade See also CER Computers HRS-100 computer TIM-100 and TIM-011 Michael I. Pupin - Serbian scientist after whom this institute is named. History of computer hardware in the SFRY Rajko Tomović Miomir Vukobratović References External links 1946 establishments in Serbia Defense companies of Serbia Economy of Belgrade Science and technology in Serbia University of Belgrade Zvezdara
https://en.wikipedia.org/wiki/Logarithmic%20mean
In mathematics, the logarithmic mean is a function of two non-negative numbers which is equal to their difference divided by the logarithm of their quotient. This calculation is applicable in engineering problems involving heat and mass transfer. Definition The logarithmic mean is defined as: for the positive numbers . Inequalities The logarithmic mean of two numbers is smaller than the arithmetic mean and the generalized mean with exponent one-third but larger than the geometric mean, unless the numbers are the same, in which case all three means are equal to the numbers. Toyesh Prakash Sharma generalizes the arithmetic logarithmic geometric mean inequality for any belongs to the whole number as Now, for : This is the arithmetic logarithmic geometric mean inequality. similarly, one can also obtain results by putting different values of as below For : for the proof go through the bibliography. Derivation Mean value theorem of differential calculus From the mean value theorem, there exists a value in the interval between and where the derivative equals the slope of the secant line: The logarithmic mean is obtained as the value of by substituting for and similarly for its corresponding derivative: and solving for : Integration The logarithmic mean can also be interpreted as the area under an exponential curve. The area interpretation allows the easy derivation of some basic properties of the logarithmic mean. Since the exponential function is monotonic, the integral over an interval of length 1 is bounded by and . The homogeneity of the integral operator is transferred to the mean operator, that is . Two other useful integral representations areand Generalization Mean value theorem of differential calculus One can generalize the mean to variables by considering the mean value theorem for divided differences for the -th derivative of the logarithm. We obtain where denotes a divided difference of the logarithm. For this leads
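For distinct positive numbers x and y the logarithmic mean works out to L(x, y) = (y − x) / (ln y − ln x), with L(x, x) = x by continuity. A small Python check of the definition and of the geometric-logarithmic-arithmetic ordering (the function and variable names are illustrative):

```python
import math

def logarithmic_mean(x, y):
    """L(x, y) = (y - x) / (ln y - ln x) for distinct positive x, y; L(x, x) = x."""
    if x <= 0 or y <= 0:
        raise ValueError("arguments must be positive")
    if x == y:
        return x
    return (y - x) / (math.log(y) - math.log(x))

x, y = 2.0, 8.0
G = math.sqrt(x * y)              # geometric mean
L = logarithmic_mean(x, y)        # logarithmic mean
A = (x + y) / 2                   # arithmetic mean
print(G <= L <= A, (G, L, A))     # True: the means are ordered G <= L <= A
```

The same quantity appears in heat-exchanger design, where the log mean temperature difference is the logarithmic mean of the terminal temperature differences.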
https://en.wikipedia.org/wiki/Dangerous%20Goods%20Safety%20Advisor
A Dangerous Goods Safety Advisor (DGSA) is a consultant, or an owner or employee of an organization, appointed by an organization that transports, loads, or unloads dangerous goods in the European Union and other countries. This includes 48 countries: Albania, Andorra, Austria, Azerbaijan, Belarus, Belgium, Bosnia and Herzegovina, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy, Kazakhstan, Latvia, Liechtenstein, Lithuania, Luxembourg, Macedonia, Malta, Montenegro, Morocco, Netherlands, Norway, Poland, Portugal, The Republic of Moldova, Romania, Russia, Serbia, Slovakia, Slovenia, Spain, Sweden, Switzerland, Tajikistan, Tunisia, Turkey, Ukraine, United Kingdom. Rules The rules involving the transport of dangerous goods are complex and each mode of transport, i.e. road, rail or inland waterway, has its own set of regulations. There are also separate sets of regulations for sea and air transportation. For many elements of transportation the regulations from each mode are similar or identical. All the various sets of regulations are based upon "Recommendations on the transport of Dangerous Goods - Model Regulations", known as "The Orange Book," issued by the United Nations Committee of Experts on the Transportation of Dangerous Goods and the Globally Harmonized System of Classification and Labeling. Duties The duties of the DGSA include providing advice to the appointing organization, preparing accident reports, monitoring the activities of the organisation which involve dangerous goods and preparing an annual report. To become a DGSA, it is usual for a candidate to be trained by a specialist training organization, then to sit various examinations. The qualification lasts five years. The examining body in the UK is the Scottish Qualifications Authority. Notes External links “Dangerous Goods-HazMat Group”, a Yahoo-hosted global network for discussion of dangerous goods and hazardou
https://en.wikipedia.org/wiki/Electrolytic%20detector
The electrolytic detector, or liquid barretter, was a type of detector (demodulator) used in early radio receivers. First used by Canadian radio researcher Reginald Fessenden in 1903, it was used until about 1913, after which it was superseded by crystal detectors and vacuum tube detectors such as the Fleming valve and Audion (triode). It was considered very sensitive and reliable compared to other detectors available at the time such as the magnetic detector and the coherer. It was one of the first rectifying detectors, able to receive AM (sound) transmissions. On December 24, 1906, US Naval ships with radio receivers equipped with Fessenden's electrolytic detectors received the first AM radio broadcast from Fessenden's Brant Rock, Massachusetts transmitter, consisting of a program of Christmas music. History Fessenden, more than any other person, is responsible for developing amplitude modulation (AM) radio transmission around 1900. While working to develop AM transmitters, he realized that the radio wave detectors used in existing radio receivers were not suitable to receive AM signals. The radio transmitters of the time transmitted information by radiotelegraphy; the transmitter was turned on and off by the operator using a switch called a telegraph key producing pulses of radio waves, to transmit text data using Morse code. Thus receivers didn't have to extract an audio signal from the radio signal, but only detected the presence or absence of the radio frequency to produce "clicks" in the earphone representing the pulses of Morse code. The device that did this was called a "detector". The detector used in receivers of that day, called a coherer, simply acted as a switch, that conducted current in the presence of radio waves, and thus did not have the capability to demodulate, or extract the audio signal from, an amplitude modulated radio wave. The simplest way to extract the sound waveform from an AM signal is to rectify it; remove the oscillations
https://en.wikipedia.org/wiki/Tagged%20Command%20Queuing
Tagged Command Queuing (TCQ) is a technology built into certain ATA and SCSI hard drives. It allows the operating system to send multiple read and write requests to a hard drive. ATA TCQ is not identical in function to the more efficient Native Command Queuing (NCQ) used by SATA drives. SCSI TCQ does not suffer from the same limitations as ATA TCQ. Without TCQ, an operating system was limited to sending one request at a time. To boost performance, the OS had to determine the order of the requests based on its own possibly incorrect perspective of the hard drive activity (otherwise known as I/O scheduling). With TCQ, the drive can make its own decisions about how to order the requests (and in turn relieve the operating system from having to do so). Thus TCQ can improve the overall performance of a hard drive if it is implemented correctly. Overview For increased efficiency the sectors should be serviced in order of proximity to the current head position, not the order received. The queue is constantly receiving new requests, fulfilling and removing existing requests, and re-ordering the queue according to the current pending read/write requests and the changing position of the head. The exact reordering algorithm may depend upon the controller and the drive itself, but the host computer simply makes requests as needed, leaving the controller to handle the details. This queuing mechanism is sometimes referred to as "elevator seeking", as the image of a modern elevator in a building servicing multiple calls and processing them to minimise travel illustrates the idea well. If the buttons for floors 5, 2, and 4 are pressed in that order with the elevator starting on floor 1, an old elevator would go to the floors in the order requested. A modern elevator processes the requests to stop at floors in the logical order 2, 4, and 5, without unnecessary travel. Non-queueing disk drives service the requests in the order received, like an old elevator; queueing drives serv
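The elevator analogy translates directly into a small scheduling routine. The sketch below is a generic LOOK-style reordering written in Python, not the algorithm of any particular drive firmware: requests ahead of the head position are served in the current sweep direction first, then the remainder are served on the way back.

```python
def elevator_order(requests, head, ascending=True):
    """LOOK-style reordering: sweep from the head position in one direction,
    then serve the remaining requests on the return sweep."""
    if ascending:
        ahead = sorted(r for r in requests if r >= head)                    # sweep up
        behind = sorted((r for r in requests if r < head), reverse=True)    # then back down
    else:
        ahead = sorted((r for r in requests if r <= head), reverse=True)    # sweep down
        behind = sorted(r for r in requests if r > head)                    # then back up
    return ahead + behind

pending = [5, 2, 4]                      # arrival order of requested blocks/"floors"
print(elevator_order(pending, head=1))   # [2, 4, 5] rather than FIFO order [5, 2, 4]
```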
https://en.wikipedia.org/wiki/VeryCD
VeryCD is a Chinese website that shares files via eD2k links. The website was begun in September 2003 by Huang Yimeng. In June 2005, Shanghai Source Networking Technology Co., Ltd (or VeryCD company) was established. It is a for-profit organization headquartered in Shanghai, China. Today, VeryCD is one of the most popular file-sharing (via ed2k links) websites in China. Aims According to VeryCD company, VeryCD.com "aims to be the biggest and the most user-friendly P2P seed database website in the world. […] Its declaration against corruption from capitalized operation kept the website organized and free of advertisement abuse." But some people thought that this was contradictory, since VeryCD was already a commercial company which carried a lot of advertising on the website. The creator of the website and leader of the VeryCD company, Huang Yimeng, was also listed in a "list of Chinese multimillionaires born in the 1980s" by some Chinese media. Software Two eDonkey network clients, eMule VeryCD Mod and easyMule, are developed by VeryCD company. eMule VeryCD Mod eMule VeryCD Mod, developed since 2003, is based on eMule and is open source. It has a built-in browser to access the Web. Due to censorship in China, eMule VeryCD Mod has a search word filter to prevent users from searching for certain political or pornographic words. easyMule easyMule, developed since 2007, is now the company's primary client. It removes the category, message, IRC, custom skin and some other features from eMule, and adds a BHO (Browser Helper Object) plug-in to users' IE browser. The browser built into easyMule can only access the VeryCD.com site. easyMule users cannot search via eDonkey servers or the Kad network; they can only search the links indexed by VeryCD.com. easyMule version 1 is eMule-based and open source. Since v2.0, easyMule has closed its source. VeryCD company's developers claimed that easyMule 2.0 was written from scratch by them. On 1 July 2009, an aMule developer wrote a topic on
https://en.wikipedia.org/wiki/NEC%20SX-8
The SX-8 is a supercomputer built by NEC Corporation. The SX-8 Series implements an eight-way SMP system in a compact node module and uses an enhanced version of the single chip vector processor that was introduced with the SX-6. The NEC SX-8 processors run at 2 GHz for vectors and 1 GHz for scalar operations. The SX-8 CPU operates at 16 GFLOPS and can address up to 128 GB of memory. Up to 8 CPUs may be used in a single node, and a complete system may have up to 512 nodes. The SX-8 series ranges from the single-CPU SX-8b system to the SX-8/4096M512, with 512 nodes, 4,096 CPUs, and a peak performance of 65 TFLOPS. There is up to 512 GB/s bandwidth per node (64 GB/s per processor). The SX-8 runs SUPER-UX, a Unix-like operating system developed by NEC. The first production SX-8 was installed at the UK Met Office in early 2005. In October 2006, an upgraded SX-8 was announced, the SX-8R. The NEC SX-8R processors run at 2.2 GHz for vectors and 1.1 GHz for scalar operations. The SX-8R can process double the number of vector operations per clock compared to the SX-8. The SX-8R CPU has a peak vector performance 35.2 GFLOPS (10% frequency increase and double the number of vector operations) and can address up to 256 GB of memory in a single node (up from 128 GB). The French national meteorological service, Météo-France, rents a SX-8R for 3.7 million euros a year. NEC published product highlights 16 GFLOPS peak vector performance, with eight operations per clock running at 2 GHz or 0.5 ns (1 GHz for scalar) 88 million transistors per CPU, 1.0 V, 8,210 pins (1,923 signal pins) Up to 8 CPUs per node, manufactured in 90 nm Cu technology, 9 copper layers, bare chip packaging Up to 16 GB of memory per CPU, 128 GB in a single node Up to 512 GB/s bandwidth per node, 64 GB/s per CPU IXS Super-Switch between nodes, up to 512 nodes supported, 32 GB/s per node (16 GB/s for each direction) Air cooled Runs SUPER-UX, System V port, 4.3 BSD with enhancements for multinode systems;
https://en.wikipedia.org/wiki/Hot-wire%20barretter
The hot-wire barretter was a demodulating detector, invented in 1902 by Reginald Fessenden, that found limited use in early radio receivers. In effect, it was a highly sensitive thermoresistor, which could demodulate amplitude-modulated signals, something that the coherer (the standard detector of the time) could not do. The first device used to demodulate amplitude modulated signals, it was later superseded by the electrolytic detector, also generally attributed to Fessenden. The barretter principle is still used as a detector for microwave radiation, similar to a bolometer. Description and construction Fessenden's 1902 patent describes the construction of the device. A fine platinum wire, about in diameter, is embedded in the middle of a silver tube having a diameter of about . This compound wire is then drawn until the silver wire has a diameter of about ; as the platinum wire within it is reduced in the same ratio, it is drawn down to a final diameter of . The result is called Wollaston wire. The silver cladding is etched off a short piece of the composite wire, leaving an extremely fine platinum wire; this is supported, on two heavier silver wires, in a loop inside a glass bulb. The leads are taken out through the glass envelope, and the whole device is put under vacuum and then sealed. Operation The hot-wire barretter depends upon the increase of a metal resistivity with increasing temperature. The device is biased by a direct current adjusted to heat the wire to its most sensitive temperature. When there is an oscillating current from the antenna through the extremely fine platinum wire loop, the wire is further heated as the current increases and cools as the current decreases again. As the wire heats and cools, it varies its resistance in response to the signals passing through it. Because of the low thermal mass of the wire, it is capable of responding quickly enough to vary its resistance in response to audio signals. However, it cannot vary its
https://en.wikipedia.org/wiki/Tautology%20%28logic%29
In mathematical logic, a tautology (from Greek) is a formula or assertion that is true in every possible interpretation. An example is "x=y or x≠y". Similarly, "either the ball is green, or the ball is not green" is always true, regardless of the colour of the ball. The philosopher Ludwig Wittgenstein first applied the term to redundancies of propositional logic in 1921, borrowing from rhetoric, where a tautology is a repetitive statement. In logic, a formula is satisfiable if it is true under at least one interpretation, and thus a tautology is a formula whose negation is unsatisfiable. In other words, it cannot be false; it cannot be untrue. Unsatisfiable statements, both through negation and affirmation, are known formally as contradictions. A formula that is neither a tautology nor a contradiction is said to be logically contingent. Such a formula can be made either true or false based on the values assigned to its propositional variables. The double turnstile notation ⊨ S is used to indicate that S is a tautology. Tautology is sometimes symbolized by "Vpq", and contradiction by "Opq". The tee symbol ⊤ is sometimes used to denote an arbitrary tautology, with the dual symbol ⊥ (falsum) representing an arbitrary contradiction; in any symbolism, a tautology may be substituted for the truth value "true", as symbolized, for instance, by "1". Tautologies are a key concept in propositional logic, where a tautology is defined as a propositional formula that is true under any possible Boolean valuation of its propositional variables. A key property of tautologies in propositional logic is that an effective method exists for testing whether a given formula is always satisfied (equiv., whether its negation is unsatisfiable). The definition of tautology can be extended to sentences in predicate logic, which may contain quantifiers—a feature absent from sentences of propositional logic. Indeed, in propositional logic, there is no distinction between a tautology and a logically v
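The effective method mentioned above is, in the propositional case, nothing more than exhaustively checking every Boolean valuation. A small Python sketch (exponential in the number of variables, as expected; formulas are passed as Boolean-valued functions purely for illustration):

```python
from itertools import product

def is_tautology(formula):
    """Check a propositional formula, given as a Boolean-valued function,
    against every valuation of its variables (2**n truth-table rows)."""
    n = formula.__code__.co_argcount
    return all(formula(*row) for row in product([False, True], repeat=n))

print(is_tautology(lambda p: p or not p))              # True: "the ball is green or not green"
print(is_tautology(lambda p, q: (p and q) or not p or not q))  # True: another tautology
print(is_tautology(lambda p, q: p and q))              # False: contingent, not a tautology
print(is_tautology(lambda p: not (p or not p)))        # False: negating a tautology gives a contradiction
```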
https://en.wikipedia.org/wiki/LR-attributed%20grammar
LR-attributed grammars are a special type of attribute grammars. They allow the attributes to be evaluated on LR parsing. As a result, attribute evaluation in LR-attributed grammars can be incorporated conveniently in bottom-up parsing. zyacc is based on LR-attributed grammars. They are a subset of the L-attributed grammars, where the attributes can be evaluated in one left-to-right traversal of the abstract syntax tree. They are a superset of the S-attributed grammars, which allow only synthesized attributes. In yacc, a common hack is to use global variables to simulate some kind of inherited attributes and thus LR-attribution. External links http://www.cs.binghamton.edu/~zdu/zyacc/doc/zyacc_4.html Reinhard Wilhelm: LL- and LR-Attributed Grammars. Programmiersprachen und Programmentwicklung, 7. Fachtagung, veranstaltet vom Fachausschuß 2 der GI (1982), 151–164, Informatik-Fachberichte volume 53. J. van Katwijk: A preprocessor for YACC or A poor man's approach to parsing attributed grammars. Sigplan Notices 18:10 (1983), 12–15. Formal languages Compiler construction
https://en.wikipedia.org/wiki/ECLR-attributed%20grammar
ECLR-attributed grammars are a special type of attribute grammars. They are a variant of LR-attributed grammars where an equivalence relation on inherited attributes is used to optimize attribute evaluation. EC stands for equivalence class. Rie is based on ECLR-attributed grammars. External links http://www.is.titech.ac.jp/~sassa/lab/rie-e.html M. Sassa, H. Ishizuka and I. Nakata: ECLR-attributed grammars: a practical class of LR-attributed grammars. Inf. Process. Lett. 24 (1987), 31–41. M. Sassa, H. Ishizuka and I. Nakata: Rie, a Compiler Generator Based on a One-pass-type Attribute Grammar. Software—practice and experience 25:3 (March 1995), 229–250. Formal languages Compiler construction
https://en.wikipedia.org/wiki/Topological%20quantum%20computer
A topological quantum computer is a theoretical quantum computer proposed by Russian-American physicist Alexei Kitaev in 1997. It employs quasiparticles in two-dimensional systems, called anyons, whose world lines pass around one another to form braids in a three-dimensional spacetime (i.e., one temporal plus two spatial dimensions). These braids form the logic gates that make up the computer. The advantage of a quantum computer based on quantum braids over using trapped quantum particles is that the former is much more stable. Small, cumulative perturbations can cause quantum states to decohere and introduce errors in the computation, but such small perturbations do not change the braids' topological properties. This is like the effort required to cut a string and reattach the ends to form a different braid, as opposed to a ball (representing an ordinary quantum particle in four-dimensional spacetime) bumping into a wall. While the elements of a topological quantum computer originate in a purely mathematical realm, experiments in fractional quantum Hall systems indicate these elements may be created in the real world using semiconductors made of gallium arsenide at a temperature of near absolute zero and subjected to strong magnetic fields. Introduction Anyons are quasiparticles in a two-dimensional space. Anyons are neither fermions nor bosons, but like fermions, they cannot occupy the same state. Thus, the world lines of two anyons cannot intersect or merge, which allows their paths to form stable braids in space-time. Anyons can form from excitations in a cold, two-dimensional electron gas in a very strong magnetic field, and carry fractional units of magnetic flux. This phenomenon is called the fractional quantum Hall effect. In typical laboratory systems, the electron gas occupies a thin semiconducting layer sandwiched between layers of aluminium gallium arsenide. When anyons are braided, the transformation of the quantum state of the system depends only o
https://en.wikipedia.org/wiki/Reproductive%20value%20%28population%20genetics%29
Reproductive value is a concept in demography and population genetics that represents the discounted number of future female children that will be born to a female of a specific age. Ronald Fisher first defined reproductive value in his 1930 book The Genetical Theory of Natural Selection where he proposed that future offspring be discounted at the rate of growth of the population; this implies that reproductive value measures the contribution of an individual of a given age to the future growth of the population. Definition Consider a species with a life history table with survival and reproductive parameters given by l_x and m_x, where l_x = probability of surviving from age 0 to age x and m_x = average number of offspring produced by an individual of age x. In a population with a discrete set of age classes, Fisher's reproductive value is calculated as the expected future reproduction of an individual of age x, with each future age class discounted by the corresponding power of λ, where λ is the long-term population growth rate given by the dominant eigenvalue of the Leslie matrix. When age classes are continuous, the sum becomes an integral with discount factor e^(−rt), where r is the intrinsic rate of increase or Malthusian growth rate. See also Population dynamics Euler–Lotka equation Leslie matrix Senescence Notes Fisher, R. A. 1930. The Genetical Theory of Natural Selection. Oxford University Press, Oxford. Keyfitz, N. and Caswell, H. 2005. Applied Mathematical Demography. Springer, New York. 3rd edition. doi:10.1007/b139042 References Population genetics Senescence
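In matrix terms, reproductive values can be read off as the dominant left eigenvector of the Leslie matrix, while the corresponding right eigenvector gives the stable age distribution. A brief NumPy sketch with a made-up three-age-class life history; the matrix entries and the normalisation to v₀ = 1 are illustrative choices only.

```python
import numpy as np

# Hypothetical 3-age-class Leslie matrix: fecundities in the first row,
# survival probabilities on the subdiagonal (all values illustrative).
L = np.array([[0.0, 2.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.4, 0.0]])

vals, right_vecs = np.linalg.eig(L)
i = np.argmax(vals.real)
lam = vals.real[i]                        # dominant eigenvalue = long-term growth rate
stable_age = np.abs(right_vecs[:, i].real)
stable_age /= stable_age.sum()            # dominant right eigenvector: stable age distribution

vals_T, left_vecs = np.linalg.eig(L.T)    # left eigenvectors of L = right eigenvectors of L^T
j = np.argmax(vals_T.real)
v = left_vecs[:, j].real
v /= v[0]                                 # reproductive values, normalised so that v_0 = 1

print(round(lam, 3))                      # ~1.09: a slowly growing population
print(np.round(stable_age, 3))            # proportion of individuals in each age class
print(np.round(v, 3))                     # age-specific reproductive values
```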
https://en.wikipedia.org/wiki/Non-Euclidean%20crystallographic%20group
In mathematics, a non-Euclidean crystallographic group, NEC group or N.E.C. group is a discrete group of isometries of the hyperbolic plane. These symmetry groups correspond to the wallpaper groups in euclidean geometry. A NEC group which contains only orientation-preserving elements is called a Fuchsian group, and any non-Fuchsian NEC group has an index 2 Fuchsian subgroup of orientation-preserving elements. The hyperbolic triangle groups are notable NEC groups. Others are listed in Orbifold notation. See also Non-Euclidean geometry Isometry group Fuchsian group Uniform tilings in hyperbolic plane References . . . Non-Euclidean geometry Hyperbolic geometry Symmetry Discrete groups
https://en.wikipedia.org/wiki/Order%20type
In mathematics, especially in set theory, two ordered sets and are said to have the same order type if they are order isomorphic, that is, if there exists a bijection (each element pairs with exactly one in the other set) such that both and its inverse are monotonic (preserving orders of elements). In the special case when is totally ordered, monotonicity of already implies monotonicity of its inverse. One and the same set may be equipped with different orders. Since order-equivalence is an equivalence relation, it partitions the class of all ordered sets into equivalence classes. Notation If a set has order type denoted , the order type of the reversed order, the dual of , is denoted . The order type of a well-ordered set is sometimes expressed as . Examples The order type of the integers and rationals is usually denoted and , respectively. The set of integers and the set of even integers have the same order type, because the mapping is a bijection that preserves the order. But the set of integers and the set of rational numbers (with the standard ordering) do not have the same order type, because even though the sets are of the same size (they are both countably infinite), there is no order-preserving bijective mapping between them. The open interval of rationals is order isomorphic to the rationals, since, for example, is a strictly increasing bijection from the former to the latter. Relevant theorems of this sort are expanded upon below. More examples can be given now: The set of positive integers (which has a least element), and that of negative integers (which has a greatest element). The natural numbers have order type denoted by ω, as explained below. The rationals contained in the half-closed intervals [0,1) and (0,1], and the closed interval [0,1], are three additional order type examples. Order type of well-orderings Every well-ordered set is order-equivalent to exactly one ordinal number, by definition. The ordinal numbers are taken
https://en.wikipedia.org/wiki/Stanis%C5%82aw%20Go%C5%82%C4%85b
Stanisław Gołąb (July 26, 1902 – April 30, 1980) was a Polish mathematician from Kraków, working in particular in the field of affine geometry. In 1932, he proved that the perimeter of the unit disc with respect to a given metric can take any value between 6 and 8, and that these extremal values are obtained if and only if the unit disc is an affine regular hexagon or a parallelogram, respectively. Selected works S. Gołąb: Quelques problèmes métriques de la géometrie de Minkowski, Trav. de l'Acad. Mines Cracovie 6 (1932), 1–79 Golab, S., Über einen algebraischen Satz, welcher in der Theorie der geometrischen Objekte auftritt, Beiträge zur Algebra und Geometrie 2 (1974) 7–10. Golab, S.; Swiatak, H.: Note on Inner Products in Vector Spaces. Aequationes Mathematicae (1972) 74. Golab, S.: Über das Carnotsche Skalarprodukt in schwach normierten Vektorräumen. Aequationes Mathematicae 13 (1975) 9–13. Golab, S., Sur un problème de la métrique angulaire dans la géometrie de Minkowski, Aequationes Mathematicae (1971) 121. Golab, S., Über die Grundlagen der affinen Geometrie., Jahresbericht DMV 71 (1969) 138–155. Notes External links List of Golab's articles at U. of Göttingen, Germany 1902 births 1980 deaths 20th-century Polish mathematicians Geometers Scientists from Kraków
https://en.wikipedia.org/wiki/Proofs%20from%20THE%20BOOK
Proofs from THE BOOK is a book of mathematical proofs by Martin Aigner and Günter M. Ziegler. The book is dedicated to the mathematician Paul Erdős, who often referred to "The Book" in which God keeps the most elegant proof of each mathematical theorem. During a lecture in 1985, Erdős said, "You don't have to believe in God, but you should believe in The Book." Content Proofs from THE BOOK contains 32 sections (45 in the sixth edition), each devoted to one theorem but often containing multiple proofs and related results. It spans a broad range of mathematical fields: number theory, geometry, analysis, combinatorics and graph theory. Erdős himself made many suggestions for the book, but died before its publication. The book is illustrated by . It has gone through six editions in English, and has been translated into Persian, French, German, Hungarian, Italian, Japanese, Chinese, Polish, Portuguese, Korean, Turkish, Russian and Spanish. In November 2017 the American Mathematical Society announced the 2018 Leroy P. Steele Prize for Mathematical Exposition to be awarded to Aigner and Ziegler for this book. The proofs include: Six proofs of the infinitude of the primes, including Euclid's and Furstenberg's Proof of Bertrand's postulate Fermat's theorem on sums of two squares Two proofs of the Law of quadratic reciprocity Proof of Wedderburn's little theorem asserting that every finite division ring is a field Four proofs of the Basel problem Proof that e is irrational (also showing the irrationality of certain related numbers) Hilbert's third problem Sylvester–Gallai theorem and De Bruijn–Erdős theorem Cauchy's theorem Borsuk's conjecture Schröder–Bernstein theorem Wetzel's problem on families of analytic functions with few distinct values The fundamental theorem of algebra Monsky's theorem (4th edition) Van der Waerden's conjecture Littlewood–Offord lemma Buffon's needle problem Sperner's theorem, Erdős–Ko–Rado theorem and Hall's theorem Lindström
https://en.wikipedia.org/wiki/Minkowski%20plane
In mathematics, a Minkowski plane (named after Hermann Minkowski) is one of the Benz planes (the others being Möbius plane and Laguerre plane). Classical real Minkowski plane Applying the pseudo-euclidean distance on two points (instead of the euclidean distance) we get the geometry of hyperbolas, because a pseudo-euclidean circle is a hyperbola with midpoint . By a transformation of coordinates , , the pseudo-euclidean distance can be rewritten as . The hyperbolas then have asymptotes parallel to the non-primed coordinate axes. The following completion (see Möbius and Laguerre planes) homogenizes the geometry of hyperbolas: the set of points: the set of cycles The incidence structure is called the classical real Minkowski plane. The set of points consists of , two copies of and the point . Any line is completed by point , any hyperbola by the two points (see figure). Two points can not be connected by a cycle if and only if or . We define: Two points are (+)-parallel () if and (−)-parallel () if . Both these relations are equivalence relations on the set of points. Two points are called parallel () if or . From the definition above we find: Lemma: For any pair of non parallel points there is exactly one point with . For any point and any cycle there are exactly two points with . For any three points , , , pairwise non parallel, there is exactly one cycle that contains . For any cycle , any point and any point and there exists exactly one cycle such that , i.e. touches at point P. Like the classical Möbius and Laguerre planes Minkowski planes can be described as the geometry of plane sections of a suitable quadric. But in this case the quadric lives in projective 3-space: The classical real Minkowski plane is isomorphic to the geometry of plane sections of a hyperboloid of one sheet (not degenerated quadric of index 2). The axioms of a Minkowski plane Let be an incidence structure with the set of points, the set of
https://en.wikipedia.org/wiki/ISO/IEC%2015288
The ISO/IEC 15288 is a technical standard in systems engineering which covers processes and lifecycle stages, developed by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). Planning for the ISO/IEC 15288:2002(E) standard started in 1994 when the need for a common systems engineering process framework was recognized. The previously accepted standard MIL STD 499A (1974) was cancelled after a memo from the United States Secretary of Defense (SECDEF) prohibited the use of most U.S. Military Standards without a waiver (this memo was rescinded in 2005). The first edition was issued on 1 November 2002. Stuart Arnold was the editor and Harold Lawson was the architect of the standard. In 2004 this standard was adopted by the Institute of Electrical and Electronics Engineers as IEEE 15288. ISO/IEC 15288 has been updated 1 February 2008 as well as on 15 May 2015. ISO/IEC 15288 is managed by ISO/IEC JTC1/SC7, which is the committee responsible for developing standards in the area of Software and Systems Engineering. ISO/IEC 15288 is part of the SC 7 Integrated set of Standards, and other standards in this domain include: ISO/IEC TR 15504 which addresses capability ISO/IEC 12207 and ISO/IEC 15288 which address lifecycle and ISO 9001 & ISO 90003 which address quality History ISO/IEC 15288:2023 ISO/IEC 15288:2015 Revises: ISO/IEC 15288:2008 (harmonized with ISO/IEC 12207:2008) Revises: ISO/IEC 15288:2002 (first edition) Processes The standard defines thirty processes grouped into four categories: Agreement processes Organizational project-enabling processes Technical management processes Technical processes The standard defines two agreement processes: Acquisition process (clause 6.1.1) Supply process (clause 6.1.2) The standard defines six organizational project-enabling processes: Life cycle model management process (clause 6.2.1) Infrastructure management process (clause 6.2.2) Portfolio manage
https://en.wikipedia.org/wiki/Integer%20matrix
In mathematics, an integer matrix is a matrix whose entries are all integers. Examples include binary matrices, the zero matrix, the matrix of ones, the identity matrix, and the adjacency matrices used in graph theory, amongst many others. Integer matrices find frequent application in combinatorics. Examples     and     are both examples of integer matrices. Properties Invertibility of integer matrices is in general more numerically stable than that of non-integer matrices. The determinant of an integer matrix is itself an integer, thus the numerically smallest possible magnitude of the determinant of an invertible integer matrix is one, hence where inverses exist they do not become excessively large (see condition number). Theorems from matrix theory that infer properties from determinants thus avoid the traps induced by ill conditioned (nearly zero determinant) real or floating point valued matrices. The inverse of an integer matrix is again an integer matrix if and only if the determinant of equals or . Integer matrices of determinant form the group , which has far-reaching applications in arithmetic and geometry. For , it is closely related to the modular group. The intersection of the integer matrices with the orthogonal group is the group of signed permutation matrices. The characteristic polynomial of an integer matrix has integer coefficients. Since the eigenvalues of a matrix are the roots of this polynomial, the eigenvalues of an integer matrix are algebraic integers. In dimension less than 5, they can thus be expressed by radicals involving integers. Integer matrices are sometimes called integral matrices, although this use is discouraged. See also GCD matrix Unimodular matrix Wilson matrix External links Integer Matrix at MathWorld Matrices
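The determinant criterion for integral inverses is easy to check numerically. A short NumPy illustration (the matrices are chosen arbitrarily): a unimodular integer matrix, with determinant 1, has an inverse whose entries are again integers, while an integer matrix with a larger determinant generally does not.

```python
import numpy as np

A = np.array([[2, 1],
              [1, 1]])                      # integer matrix with determinant 1 (unimodular)

print(round(np.linalg.det(A)))              # 1
print(np.linalg.inv(A))                     # [[ 1. -1.] [-1.  2.]] -- again an integer matrix

B = np.array([[2, 0],
              [0, 2]])                      # determinant 4: the inverse is not integral
print(round(np.linalg.det(B)))              # 4
print(np.linalg.inv(B))                     # [[0.5 0. ] [0.  0.5]]
```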
https://en.wikipedia.org/wiki/Stress%20granule
In cellular biology, stress granules are biomolecular condensates in the cytosol composed of proteins and RNAs that assemble into 0.1–2 μm membraneless organelles when the cell is under stress. The mRNA molecules found in stress granules are stalled translation pre-initiation complexes associated with 40S ribosomal subunits, translation initiation factors, poly(A)+ mRNAs and RNA-binding proteins (RBPs). While they are membraneless organelles, stress granules have been proposed to be associated with the endoplasmatic reticulum. There are also nuclear stress granules. This article is about the cytosolic variety. Proposed functions The function of stress granules remains largely unknown. Stress granules have long been proposed to have a function to protect RNAs from harmful conditions, thus their appearance under stress. The accumulation of RNAs into dense globules could keep them from reacting with harmful chemicals and safeguard the information coded in their RNA sequence. Stress granules might also function as a decision point for untranslated mRNAs. Molecules can go down one of three paths: further storage, degradation, or re-initiation of translation. Conversely, it has also been argued that stress granules are not important sites for mRNA storage nor do they serve as an intermediate location for mRNAs in transit between a state of storage and a state of degradation. Efforts to identify all RNAs within stress granules (the stress granule transcriptome) in an unbiased way by sequencing RNA from biochemically purified stress granule "cores" have shown that RNAs are not recruited to stress granules in a sequence-specific manner, but rather generically, with longer and/or less-optimally translated transcripts being enriched. These data imply that the stress granule transcriptome is influenced by the valency of RNA (for proteins or other RNAs) and by the rates of RNA run-off from polysomes. The latter is further supported by recent single molecule imaging studies.
https://en.wikipedia.org/wiki/Unique%20user
Website popularity is commonly determined using the number of unique users, and the metric is often quoted to potential advertisers or investors. A website's number of unique users is usually measured over a standard period of time, typically a month. "Unique" is a term of art in this context that means distinct: repeat visits or uses by the same person are not counted. Unique visitor Unique visitors refers to the number of distinct individuals requesting pages from a website during a given period, regardless of how often they visit. Because a visitor can make multiple visits in a specified period, the number of visits may be greater than the number of visitors. A visitor is sometimes referred to as a unique visitor or a unique user to clearly convey the idea that each visitor is only counted once. The purpose of tracking unique visitors is to help marketers understand website user behavior. The measurement of users or visitors requires a standard time period and can be distorted by automatic activity (such as bots) that classifies web content. Estimates of visitors, visits, and other traffic statistics are usually filtered to remove this type of activity by eliminating known IP addresses for bots, by requiring registration or cookies, or by using panel data. Understanding unique user numbers Similar to the TURF (total unduplicated reach and frequency) metric often used in television, radio and newspaper analyses, unique users is a measure of the distribution of content to a number of distinct consumers. A common mistake in using unique user numbers is adding up such numbers across dimensions. A unique user metric is valid only for its given set of dimensions, e.g. time and browsers. For example, a website may have 100 unique users on each day (day being the dimension) of a particular week. With only this data, one cannot extrapolate the number of weekly unique users; one can conclude only that the unique user count for the week is between 100 and 700. However, website adm
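The warning about adding unique user counts across dimensions can be illustrated with a small sketch, assuming Python and invented visitor identifiers (none of this data comes from the article): daily unique counts are set cardinalities, so the weekly figure must be computed from the union of the daily visitor sets rather than by summing the daily counts.

# Hypothetical visitor IDs per day, invented purely for illustration.
daily_visitors = {
    "mon": {"u1", "u2", "u3"},
    "tue": {"u2", "u3", "u4"},
    "wed": {"u1", "u5", "u6"},
}

daily_uniques = {day: len(ids) for day, ids in daily_visitors.items()}
print(daily_uniques)                          # {'mon': 3, 'tue': 3, 'wed': 3}

# Wrong: summing daily uniques double-counts people who visit on several days.
print(sum(daily_uniques.values()))            # 9

# Right: count the union of the daily visitor sets.
weekly_uniques = len(set().union(*daily_visitors.values()))
print(weekly_uniques)                         # 6, between max(daily) and sum(daily)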
https://en.wikipedia.org/wiki/FOUP
FOUP (an acronym for Front Opening Unified Pod or Front Opening Universal Pod) is a specialized plastic carrier designed to hold silicon wafers securely and safely in a controlled environment, and to allow the wafers to be transferred between machines for processing or measurement. FOUPs began to appear along with the first 300 mm wafer processing tools in the mid-1990s. The size of the wafers and their comparative lack of rigidity meant that SMIF pods were not a viable form factor. FOUP standards were developed by SEMI and SEMI members to ensure that FOUPs and all equipment that interacts with FOUPs work together seamlessly. In the transition from the SMIF pod to the FOUP design, the removable cassette used to hold wafers was replaced by fixed wafer columns, and the door was relocated from the bottom to the front of the carrier, where automated handling equipment can access the wafers. The slot pitch for a 300 mm FOUP is 10 mm, while 13-slot FOUPs can have a pitch of up to 20 mm. The weight of a fully loaded 25-wafer FOUP is between 7 and 9 kilograms, which means that automated material handling systems are essential for all but the smallest of fabrication plants. To allow this, each FOUP has coupling plates and interface holes that allow the FOUP to be positioned on a load port and to be picked up and transferred by the AMHS (Automated Material Handling System) to other process tools or to storage locations such as a stocker or undertrack storage. FOUPs may carry RF tags that allow them to be identified by RF readers on tools or the AMHS. FOUPs are available in several colors, depending on customer preference. FOUPs have also begun to offer the capability of having a purge gas applied by process, measurement and storage tools in an effort to increase device yield. FOSB FOSB is an acronym for Front Opening Shipping Box. FOSBs are used for transporting wafers between manufacturing facilities. Manufacturers 3S Korea CKplas Danichi Shoji Entegris E-SUN System Technology Gudeng Precisi
https://en.wikipedia.org/wiki/Channel%20state%20information
In wireless communications, channel state information (CSI) refers to the known channel properties of a communication link. This information describes how a signal propagates from the transmitter to the receiver and represents the combined effect of, for example, scattering, fading, and power decay with distance. The process of acquiring this information is called channel estimation. CSI makes it possible to adapt transmissions to current channel conditions, which is crucial for achieving reliable communication with high data rates in multi-antenna systems. CSI needs to be estimated at the receiver and is usually quantized and fed back to the transmitter (although reverse-link estimation is possible in time-division duplex (TDD) systems). The transmitter and receiver can therefore have different CSI. The CSI at the transmitter and the CSI at the receiver are sometimes referred to as CSIT and CSIR, respectively. Different kinds of channel state information There are basically two levels of CSI, namely instantaneous CSI and statistical CSI. Instantaneous CSI (or short-term CSI) means that the current channel conditions are known, which can be viewed as knowing the impulse response of a digital filter. This gives an opportunity to adapt the transmitted signal to the impulse response and thereby optimize the received signal for spatial multiplexing or to achieve low bit error rates. Statistical CSI (or long-term CSI) means that a statistical characterization of the channel is known. This description can include, for example, the type of fading distribution, the average channel gain, the line-of-sight component, and the spatial correlation. As with instantaneous CSI, this information can be used for transmission optimization. CSI acquisition is limited in practice by how fast the channel conditions change. In fast-fading systems, where channel conditions vary rapidly during the transmission of a single information symbol, only statistical CSI is reasonable. On the other hand, in slow fading sy
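As a rough illustration of how instantaneous CSI can be acquired, the following sketch assumes Python with NumPy, a single-antenna flat-fading channel, and invented pilot symbols; it estimates the complex channel coefficient by least squares from known pilots. Real systems use more elaborate estimators and multi-tap or multi-antenna channel models, so this is only a toy example of the general idea of channel estimation.

import numpy as np

rng = np.random.default_rng(0)

# Invented flat-fading channel: a single complex coefficient h.
h = 0.8 * np.exp(1j * 0.3)

# Known pilot symbols (QPSK) and the noisy received samples y = h * x + n.
pilots = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7, 1, 3]))
noise = 0.05 * (rng.standard_normal(6) + 1j * rng.standard_normal(6))
y = h * pilots + noise

# Least-squares estimate of the instantaneous CSI (the channel coefficient).
h_hat = np.vdot(pilots, y) / np.vdot(pilots, pilots)
print(h, h_hat)   # h_hat should be close to the true h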
https://en.wikipedia.org/wiki/Frame%20%28linear%20algebra%29
In linear algebra, a frame of an inner product space is a generalization of a basis of a vector space to sets that may be linearly dependent. In the terminology of signal processing, a frame provides a redundant, stable way of representing a signal. Frames are used in error detection and correction, in the design and analysis of filter banks, and more generally in applied mathematics, computer science, and engineering. Definition and motivation Motivating example: computing a basis from a linearly dependent set Suppose we have a set of vectors {e_k} in the vector space V and we want to express an arbitrary element v of V as a linear combination of the vectors e_k, that is, we want to find coefficients c_k such that v = Σ_k c_k e_k. If the set {e_k} does not span V, then such coefficients do not exist for every such v. If {e_k} spans V and is also linearly independent, this set forms a basis of V, and the coefficients c_k are uniquely determined by v. If, however, {e_k} spans V but is not linearly independent, the question of how to determine the coefficients becomes less apparent, in particular if V is of infinite dimension. Given that {e_k} spans V and is linearly dependent, one strategy is to remove vectors from the set until it becomes linearly independent and forms a basis. There are some problems with this plan: Removing arbitrary vectors from the set may cause it to be unable to span V before it becomes linearly independent. Even if it is possible to devise a specific way to remove vectors from the set until it becomes a basis, this approach may become unfeasible in practice if the set is large or infinite. In some applications, it may be an advantage to use more vectors than necessary to represent v. This means that we want to find the coefficients c_k without removing elements of {e_k}. The coefficients c_k will no longer be uniquely determined by v. Therefore, the vector v can be represented as a linear combination of the frame vectors in more than one way. Formal definition Let V be an inner product space and {e_k} be a set of vectors in V. Th
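The non-uniqueness of the coefficients can be made concrete with a small numerical sketch, assuming Python with NumPy and an invented redundant spanning set of R^2 (the vectors and notation here are illustrative, not taken from the article). The Moore–Penrose pseudoinverse gives one valid coefficient vector (the minimum-norm choice) among infinitely many.

import numpy as np

# Three vectors spanning R^2: linearly dependent, so not a basis,
# but still a (finite) frame of R^2. Chosen purely for illustration.
E = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]]).T        # columns are the frame vectors e_k

v = np.array([2.0, 3.0])

# One valid coefficient vector c with E @ c == v: the minimum-norm solution.
c = np.linalg.pinv(E) @ v
print(c)                            # approx. [0.333, 1.333, 1.667]
print(E @ c)                        # recovers v

# A different valid choice, showing that the coefficients are not unique.
c2 = np.array([2.0, 3.0, 0.0])
print(np.allclose(E @ c2, v))       # True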
https://en.wikipedia.org/wiki/Cache%20pollution
Cache pollution describes situations where an executing computer program loads data into the CPU cache unnecessarily, causing other useful data to be evicted from the cache into lower levels of the memory hierarchy and degrading performance. For example, in a multi-core processor, one core may replace blocks fetched by other cores into a shared cache, or prefetched blocks may replace demand-fetched blocks in the cache. Example Consider the following illustration: T[0] = T[0] + 1; for i in 0..sizeof(CACHE) C[i] = C[i] + 1; T[0] = T[0] + C[sizeof(CACHE)-1]; (The assumptions here are that the cache is composed of only one level, it is unlocked, the replacement policy is pseudo-LRU, all data is cacheable, the set associativity of the cache is N (where N > 1), and at most one processor register is available to contain program values.) Right before the loop starts, T[0] will be fetched from memory into the cache and its value updated. However, as the loop executes, because the loop references enough data elements to fill the cache to its capacity, the cache block containing T[0] has to be evicted. Thus, the next time the program requests that T[0] be updated, the cache misses, and the cache controller has to request the data bus to bring the corresponding cache block from main memory again. In this case the cache is said to be "polluted". Changing the pattern of data accesses by positioning the first update of T[0] between the loop and the second update can eliminate the inefficiency: for i in 0..sizeof(CACHE) C[i] = C[i] + 1; T[0] = T[0] + 1; T[0] = T[0] + C[sizeof(CACHE)-1]; Solutions Other than the code restructuring mentioned above, the solution to cache pollution is to ensure that only high-reuse data are stored in the cache. This can be achieved by using special cache control instructions, operating system support, or hardware support. Examples of specialized hardware instructions include "lvxl" provided by PowerPC AltiVec. T