https://en.wikipedia.org/wiki/International%20Association%20of%20Privacy%20Professionals
The International Association of Privacy Professionals (IAPP) is a nonprofit, non-advocacy membership association founded in 2000. It provides a forum for privacy professionals to share best practices, track trends, advance privacy management issues, standardize the designations for privacy professionals, and provide education and guidance on career opportunities in the field of information privacy. The IAPP offers a full suite of educational and professional development services, including privacy training, certification programs, publications and annual conferences. It is headquartered in Portsmouth, New Hampshire. History Founded in 2000, IAPP was originally constituted as the Privacy Officers Association (POA). In 2002, it became the International Association of Privacy Officers (IAPO) when the POA merged with a competing group, the Association of Corporate Privacy Officers (ACPO). The group was renamed the International Association of Privacy Professionals in 2003 to reflect a broadened mission that includes the ranks of corporate personnel, beyond the position of Chief Privacy Officer, engaged in privacy-related tasks. Membership reached 10,000 in 2012, and in 2019 the organization reported it had surpassed the 50,000 member mark. The rapid growth was the result of increased demand for privacy expertise in the face of emerging laws such as the EU's General Data Protection Regulation (GDPR). Half of the association's members are women. Professional certifications The IAPP is responsible for developing and launching global credentialing programs in information privacy. The CIPM, CIPP/E, CIPP/US and CIPT credentials are accredited by the American National Standards Institute (ANSI) under the International Organization for Standardization (ISO) standard for Personnel Certification Bodies 17024:2012. These certifications have been described as "the gold standard" for validating privacy expertise. Certified Information Privacy Professional (CIPP) The CIPP cur
https://en.wikipedia.org/wiki/FAAM%20Airborne%20Laboratory
The FAAM Airborne Laboratory is an atmospheric science research facility. It is based on the Cranfield University campus alongside Cranfield Airport in Bedfordshire, England. It was formed by a collaboration between the Met Office and the Natural Environment Research Council (NERC) in 2001. The facility FAAM was established jointly by the Natural Environment Research Council and the Met Office. Initial funding was provided to prepare an aircraft for instrumentation. The main aircraft used is a modified BAe 146-301 aircraft, registration G-LUXE, owned by NERC and operated by Airtask. Work carried out by FAAM includes radiative transfer studies in clear and cloudy air; tropospheric chemistry measurements; cloud physics and dynamic studies; dynamics of mesoscale weather systems; boundary layer and turbulence studies; remote sensing: verification of ground-based instruments; satellite ground truth: radiometric measurements and winds; and use as a satellite instrument test-bed. FAAM is staffed by a mixture of NERC, University of Leeds and Met Office personnel, and acts as a service provider to numerous UK and occasionally overseas science organisations, primarily the Met Office itself or UK universities funded by NERC. It flies around 400 hours annually, most commonly on large campaigns where a team of typically 30 will spend around a month at a base location, potentially anywhere in the world, delivering a specific science campaign, although some flying from Cranfield also takes place. An emergency response role exists, which has been used three times: at the 2005 Buncefield fire, the 2010 Eyjafjallajökull volcanic eruption and the 2012 Total Elgin gas platform leak. After Eyjafjallajökull a new aircraft, MOCCA (the Met Office Civil Contingency Aircraft), was commissioned as the "first responder" to British volcanic ash emergencies. The facility was originally established in 2001, with an intended operating base of the BAe site at Woodford, in Cheshire. However, b
https://en.wikipedia.org/wiki/Polygraphic%20substitution
Polygraphic substitution is a cipher in which a uniform substitution is performed on blocks of letters. When the length of the block is specifically known, more precise terms are used: for instance, a cipher in which pairs of letters are substituted is bigraphic. As a concept, polygraphic substitution contrasts with monoalphabetic (or simple) substitutions in which individual letters are uniformly substituted, or polyalphabetic substitutions in which individual letters are substituted in different ways depending on their position in the text. In theory, there is some overlap in these definitions; one could conceivably consider a Vigenère cipher with an eight-letter key to be an octographic substitution. In practice, this is not a useful observation since it is far more fruitful to consider it to be a polyalphabetic substitution cipher. Specific ciphers In 1563, Giambattista della Porta devised the first bigraphic substitution. However, it was nothing more than a matrix of symbols. In practice, it would have been all but impossible to memorize, and carrying around the table would lead to risks of falling into enemy hands. In 1854, Charles Wheatstone came up with the Playfair cipher, a keyword-based system that could be performed on paper in the field. This was followed up over the next fifty years with the closely related four-square and two-square ciphers, which are slightly more cumbersome but offer slightly better security. In 1929, Lester S. Hill developed the Hill cipher, which uses matrix algebra to encrypt blocks of any desired length. However, encryption is very difficult to perform by hand for any sufficiently large block size, although it has been implemented by machine or computer. This is therefore on the frontier between classical and modern cryptography. Cryptanalysis of general polygraphic substitutions Polygraphic systems do provide a significant improvement in security over monoalphabetic substitutions. Given an individual letter 'E' in
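The matrix-algebra block substitution used by the Hill cipher is easy to see in a tiny worked example. The sketch below is an illustration added here, not part of the article; the 2×2 key matrix is a common textbook choice, assumed only because its determinant is coprime to 26, which keeps the cipher invertible.

```python
import numpy as np

# Toy 2x2 Hill cipher over the 26-letter alphabet.
KEY = np.array([[3, 3],
                [2, 5]])          # det = 9, gcd(9, 26) = 1, so the key is invertible mod 26

def encrypt(plaintext: str, key=KEY) -> str:
    nums = [ord(c) - ord('A') for c in plaintext.upper() if c.isalpha()]
    if len(nums) % 2:
        nums.append(ord('X') - ord('A'))          # pad the final block
    out = []
    for i in range(0, len(nums), 2):
        block = np.array(nums[i:i + 2])
        out.extend((key @ block) % 26)            # substitute the whole digraph at once
    return ''.join(chr(int(n) + ord('A')) for n in out)

print(encrypt("HELP"))   # each two-letter block is enciphered as a unit -> "HIAT"
```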
https://en.wikipedia.org/wiki/Antiunitary%20operator
In mathematics, an antiunitary transformation is a bijective antilinear map U : H₁ → H₂ between two complex Hilbert spaces such that ⟨Ux, Uy⟩ = conj(⟨x, y⟩) for all x and y in H₁, where conj denotes the complex conjugate. If additionally one has H₁ = H₂, then U is called an antiunitary operator. Antiunitary operators are important in quantum theory because they are used to represent certain symmetries, such as time reversal. Their fundamental importance in quantum physics is further demonstrated by Wigner's theorem. Invariance transformations In quantum mechanics, the invariance transformations of complex Hilbert space leave the absolute value of the scalar product invariant: |⟨Tx, Ty⟩| = |⟨x, y⟩| for all x and y in H. Due to Wigner's theorem these transformations can either be unitary or antiunitary. Geometric Interpretation Congruences of the plane form two distinct classes. The first conserves the orientation and is generated by translations and rotations. The second does not conserve the orientation and is obtained from the first class by applying a reflection. On the complex plane these two classes correspond (up to translation) to unitaries and antiunitaries, respectively. Properties ⟨Ux, Uy⟩ = conj(⟨x, y⟩) holds for all elements x, y of the Hilbert space and an antiunitary U. When U is antiunitary, U² is unitary. This follows from ⟨U²x, U²y⟩ = conj(⟨Ux, Uy⟩) = ⟨x, y⟩. For a unitary operator V the operator VK, where K is the complex conjugation operator, is antiunitary. The reverse is also true: for an antiunitary U the operator UK is unitary. For an antiunitary U the definition of the adjoint operator U* is changed to compensate for the complex conjugation, becoming ⟨U*x, y⟩ = conj(⟨x, Uy⟩). The adjoint of an antiunitary U is also antiunitary, and UU* = U*U = 1 (the identity operator). (This is not to be confused with the definition of unitary operators, as the antiunitary operator U is not complex linear.) Examples The complex conjugate operator K, Kz = conj(z), is an antiunitary operator on the complex plane. The operator σ_y K, where σ_y is the second Pauli matrix and K is the complex conjugate operator, is antiunitary. It satisfies (σ_y K)² = −1. Decomposition of an antiunitary oper
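As a quick informal check of the properties above, the following sketch (added here for illustration, not part of the article) verifies numerically with NumPy that U = σ_y K is antiunitary and that applying it twice negates the state.

```python
import numpy as np

# Second Pauli matrix
sigma_y = np.array([[0, -1j], [1j, 0]])

def U(psi):
    """Candidate antiunitary operator U = sigma_y K, where K is complex conjugation."""
    return sigma_y @ np.conj(psi)

rng = np.random.default_rng(0)
x = rng.normal(size=2) + 1j * rng.normal(size=2)
y = rng.normal(size=2) + 1j * rng.normal(size=2)

# Antiunitarity: <Ux, Uy> should equal the complex conjugate of <x, y>.
lhs = np.vdot(U(x), U(y))        # np.vdot conjugates its first argument
rhs = np.conj(np.vdot(x, y))
print(np.allclose(lhs, rhs))     # True

# (sigma_y K)^2 = -1: applying U twice negates the state.
print(np.allclose(U(U(x)), -x))  # True
```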
https://en.wikipedia.org/wiki/Dosing
Dosing generally applies to feeding chemicals or medicines in small quantities. For medicines the term dose is generally used; in the case of inanimate objects the word dosing is typical. The term dose titration, referring to stepwise adjustment of doses until a desired level of effect is reached, is common in medicine. Engineering The word dosing is very commonly used by engineers in thermal power stations, in water treatment, in any industry where steam is being generated, and in building services for heating and cooling water treatment. Dosing procedures are also common in textile and similar industries where chemical treatment is involved. Commercial swimming pools also require chemical dosing in order to control pH balance, chlorine level, and other such water quality criteria. A modern swimming pool plant will have bulk storage of chemicals held in separate dosing tanks, and will have automated controls and dosing pumps to top up the various chemicals as required to control the water quality. In a power station, treatment chemicals are injected or fed under pressure to the boiler, and also to the feed water and make-up water, but at small dosages or rates of injection. The feeding at all points is done by means of small-capacity dosing pumps specially designed for the duty demanded. In building services the water quality of various pumped fluid systems, including for heating, cooling, and condensate water, will be regularly checked and topped up with chemicals manually as required to suit the required water quality. Most commonly inhibitors will be added to protect the pipework and components against corrosion, or a biocide will be added to stop the growth of bacteria in lower temperature systems. The required chemicals will be added to the fluid system by use of a dosing pot: a multi-valved chamber into which the chemical can be added, and then introduced to the fluid system in a controlled manner. In food industries, the dosing of ingredients is particularly im
https://en.wikipedia.org/wiki/Comparison%20of%20assemblers
This is an incomplete list of assemblers: computer programs that translate assembly language source code into binary programs. Some assemblers are components of a compiler system for a high level language and may have limited or no usable functionality outside of the compiler system. Some assemblers are hosted on the target processor and operating system, while other assemblers (cross-assemblers) may run under an unrelated operating system or processor. For example, assemblers for embedded systems are not usually hosted on the target system since it would not have the storage and terminal I/O to permit entry of a program from a keyboard. An assembler may have a single target processor or may have options to support multiple processor types. Very simple assemblers may lack features, such as macros, present in more powerful versions. As part of a compiler suite GNU Assembler (GAS): GPL: many target instruction sets, including ARM architecture, Atmel AVR, x86, x86-64, Freescale 68HC11, Freescale v4e, Motorola 680x0, MIPS, PowerPC, IBM System z, TI MSP430, Zilog Z80. SDAS (fork of ASxxxx Cross Assemblers and part of the Small Device C Compiler project): GPL: several target instruction sets including Intel 8051, Zilog Z80, Freescale 68HC08, PIC microcontroller. The Amsterdam Compiler Kit (ACK) targets many architectures of the 1980s, including 6502, 6800, 680x0, ARM, x86, Zilog Z80 and Z8000. LLVM targets many platforms, however its main focus is not machine-dependent code generation; instead a more high-level typed assembly-like intermediate representation is used. Nevertheless for the most common targets the LLVM MC (machine code) project provides an assembler both as an integrated component of the compilers and as an external tool. Some other self-hosted native-targeted language implementations (like Go, Free Pascal, SBCL) have their own assemblers with multiple targets. They may be used for inline assembly inside the language, or even included as a library, bu
https://en.wikipedia.org/wiki/St%C3%B8rmer%27s%20theorem
In number theory, Størmer's theorem, named after Carl Størmer, gives a finite bound on the number of consecutive pairs of smooth numbers that exist, for a given degree of smoothness, and provides a method for finding all such pairs using Pell equations. It follows from the Thue–Siegel–Roth theorem that there are only a finite number of pairs of this type, but Størmer gave a procedure for finding them all. Statement If one chooses a finite set P = {p₁, ..., p_k} of prime numbers then the P-smooth numbers are defined as the set of integers that can be generated by products of numbers in P. Then Størmer's theorem states that, for every choice of P, there are only finitely many pairs of consecutive P-smooth numbers. Further, it gives a method of finding them all using Pell equations. The procedure Størmer's original procedure involves solving a set of roughly 3^k Pell equations, in each one finding only the smallest solution. A simplified version of the procedure, due to D. H. Lehmer, is described below; it solves fewer equations but finds more solutions in each equation. Let P be the given set of primes, and define a number to be P-smooth if all its prime factors belong to P. Assume p₁ = 2; otherwise there could be no consecutive P-smooth numbers, because all P-smooth numbers would be odd. Lehmer's method involves solving the Pell equation x² − 2qy² = 1 for each P-smooth square-free number q other than 2. Each such number q is generated as a product of a subset of P, so there are 2^k − 1 Pell equations to solve. For each such equation, let x_i, y_i be the generated solutions, for i in the range from 1 to max(3, (p_k + 1)/2) (inclusive), where p_k is the largest of the primes in P. Then, as Lehmer shows, all consecutive pairs of P-smooth numbers are of the form ((x_i − 1)/2, (x_i + 1)/2). Thus one can find all such pairs by testing the numbers of this form for P-smoothness. Example To find the ten consecutive pairs of {2,3,5}-smooth numbers (in music theory, giving the superparticular ratios for just tunin
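A small sketch of Lehmer's simplified procedure for P = {2, 3, 5}, added here for illustration: it brute-forces the fundamental solution of each Pell equation (adequate only for tiny examples), generates the first few solutions, and tests the candidate pairs for smoothness. The function names and the brute-force search limit are choices made for this sketch, not part of the theorem.

```python
from math import isqrt
from itertools import combinations

P = [2, 3, 5]  # set of allowed primes (p1 must be 2)

def is_smooth(n, primes=P):
    """True if every prime factor of n lies in `primes`."""
    for p in primes:
        while n % p == 0:
            n //= p
    return n == 1

def fundamental_solution(D, limit=10**6):
    """Smallest positive (x, y) with x^2 - D*y^2 = 1, by brute force on y."""
    for y in range(1, limit):
        x2 = 1 + D * y * y
        x = isqrt(x2)
        if x * x == x2:
            return x, y
    raise ValueError("no solution found below limit")

def stormer_pairs(primes=P):
    """Lehmer's procedure: all pairs (m, m+1) of consecutive P-smooth numbers."""
    pk = max(primes)
    bound = max(3, (pk + 1) // 2)          # how many solutions to take per equation
    # q runs over square-free P-smooth numbers other than 2 (products of subsets of P).
    qs = set()
    for r in range(len(primes) + 1):
        for subset in combinations(primes, r):
            q = 1
            for p in subset:
                q *= p
            if q != 2:
                qs.add(q)
    pairs = set()
    for q in sorted(qs):
        x1, y1 = fundamental_solution(2 * q)
        x, y = x1, y1
        for _ in range(bound):
            m = (x - 1) // 2
            if m > 0 and is_smooth(m) and is_smooth(m + 1):
                pairs.add((m, m + 1))
            # next solution of x^2 - 2q*y^2 = 1
            x, y = x1 * x + 2 * q * y1 * y, x1 * y + y1 * x
    return sorted(pairs)

# Expected: (1,2), (2,3), (3,4), (4,5), (5,6), (8,9), (9,10), (15,16), (24,25), (80,81)
print(stormer_pairs())
```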
https://en.wikipedia.org/wiki/PCLSRing
PCLSRing (also known as Program Counter Lusering) is the term used in the ITS operating system for a consistency principle in the way one process accesses the state of another process. Problem scenario This scenario presents particular complications: Process A makes a time-consuming system call. By "time-consuming", it is meant that the system needs to put Process A into a wait queue and can schedule another process for execution if one is ready-to-run. A common example is an I/O operation. While Process A is in this wait state, Process B tries to interact with or access Process A, for example, send it a signal. What should be the visible state of the context of Process A at the time of the access by Process B? In fact, Process A is in the middle of a system call, but ITS enforces the appearance that system calls are not visible to other processes (or even to the same process). ITS-solution: transparent restart If the system call cannot complete before the access, then it must be restartable. This means that the context is backed up to the point of entry to the system call, while the call arguments are updated to reflect whatever portion of the operation has already been completed. For an I/O operation, this means that the buffer start address must be advanced over the data already transferred, while the length of data to be transferred must be decremented accordingly. After the Process B interaction is complete, Process A can resume execution, and the system call resumes from where it left off. This technique mirrors in software what the PDP-10 does in hardware. Some PDP-10 instructions like BLT may not run to completion, either due to an interrupt or a page fault. In the course of processing the instruction, the PDP-10 would modify the registers containing arguments to the instruction, so that later the instruction could be run again with new arguments that would complete any remaining work to be done. PCLSRing applies the same technique to system calls. This
https://en.wikipedia.org/wiki/Palm%20OS%20viruses
While some viruses did exist for Palm OS based devices, very few were ever designed. Typically, mobile devices are difficult for virus writers to target, since their simplicity provides fewer security holes than a desktop.
https://en.wikipedia.org/wiki/Protection%20ring
In computer science, hierarchical protection domains, often called protection rings, are mechanisms to protect data and functionality from faults (by improving fault tolerance) and malicious behavior (by providing computer security). Computer operating systems provide different levels of access to resources. A protection ring is one of two or more hierarchical levels or layers of privilege within the architecture of a computer system. This is generally hardware-enforced by some CPU architectures that provide different CPU modes at the hardware or microcode level. Rings are arranged in a hierarchy from most privileged (most trusted, usually numbered zero) to least privileged (least trusted, usually with the highest ring number). On most operating systems, Ring 0 is the level with the most privileges and interacts most directly with the physical hardware such as certain CPU functionality (e.g. the control registers) and I/O controllers. Special call gates between rings are provided to allow an outer ring to access an inner ring's resources in a predefined manner, as opposed to allowing arbitrary usage. Correctly gating access between rings can improve security by preventing programs from one ring or privilege level from misusing resources intended for programs in another. For example, spyware running as a user program in Ring 3 should be prevented from turning on a web camera without informing the user, since hardware access should be a Ring 1 function reserved for device drivers. Programs such as web browsers running in higher numbered rings must request access to the network, a resource restricted to a lower numbered ring. Implementations Multiple rings of protection were among the most revolutionary concepts introduced by the Multics operating system, a highly secure predecessor of today's Unix family of operating systems. The GE 645 mainframe computer did have some hardware access control, but that was not sufficient to provide full support for rings in hardwar
https://en.wikipedia.org/wiki/Piezoelectric%20sensor
A piezoelectric sensor is a device that uses the piezoelectric effect to measure changes in pressure, acceleration, temperature, strain, or force by converting them to an electrical charge. The prefix piezo- is Greek for 'press' or 'squeeze'. Applications Piezoelectric sensors are versatile tools for the measurement of various processes. They are used for quality assurance, process control, and for research and development in many industries. Pierre Curie discovered the piezoelectric effect in 1880, but only in the 1950s did manufacturers begin to use the piezoelectric effect in industrial sensing applications. Since then, this measuring principle has been increasingly used, and has become a mature technology with excellent inherent reliability. They have been successfully used in various applications, such as in medical, aerospace, nuclear instrumentation, and as a tilt sensor in consumer electronics or a pressure sensor in the touch pads of mobile phones. In the automotive industry, piezoelectric elements are used to monitor combustion when developing internal combustion engines. The sensors are either directly mounted into additional holes in the cylinder head, or the spark/glow plug is equipped with a built-in miniature piezoelectric sensor. The rise of piezoelectric technology is directly related to a set of inherent advantages. The high modulus of elasticity of many piezoelectric materials is comparable to that of many metals and goes up to 10⁶ N/m². Even though piezoelectric sensors are electromechanical systems that react to compression, the sensing elements show almost zero deflection. This gives piezoelectric sensors ruggedness, an extremely high natural frequency and an excellent linearity over a wide amplitude range. Additionally, piezoelectric technology is insensitive to electromagnetic fields and radiation, enabling measurements under harsh conditions. Some materials used (especially gallium phosphate or tourmaline) are extremely stable at high t
https://en.wikipedia.org/wiki/Remote%20scripting
Remote scripting is a technology which allows scripts and programs that are running inside a browser to exchange information with a server. The local scripts can invoke scripts on the remote side and process the returned information. The earliest form of asynchronous remote scripting was developed before XMLHttpRequest existed, and made use of a very simple process: a static web page opens a dynamic web page (e.g. in another target frame) that is reloaded with new JavaScript content, generated remotely on the server side. The XMLHttpRequest and similar "client-side script remote procedure call" functions open the possibility of using and triggering web services from the web page interface. The web development community subsequently developed a range of techniques for remote scripting in order to enable consistent results across different browsers. Early examples include the JSRS library and the introduction of the Image/Cookie technique, both from 2000. JavaScript Remote Scripting JavaScript Remote Scripting (JSRS) is a web development technique for creating interactive web applications using a combination of: HTML (or XHTML); the Document Object Model, manipulated through JavaScript to dynamically display and interact with the information presented; a transport layer (different technologies may be used, though a script tag or an iframe is used most often because they have better browser support than XMLHttpRequest); and a data format (XML with WDDX can be used, as well as JSON or any other text format). A similar approach is Ajax, though it depends on XMLHttpRequest in newer web browsers. Libraries Brent Ashley's original JSRS library, released in 2000; BlueShoes JSRS, with added encoding and OO RPC abstractions. See also Ajax; Rich Internet application.
https://en.wikipedia.org/wiki/Carbon-based%20life
Carbon is a primary component of all known life on Earth, representing approximately 45–50% of all dry biomass. Carbon compounds occur naturally in great abundance on Earth. Complex biological molecules consist of carbon atoms bonded with other elements, especially oxygen and hydrogen and frequently also nitrogen, phosphorus, and sulfur (collectively known as CHNOPS). Because the carbon atom is lightweight and relatively small, carbon-based molecules are easy for enzymes to manipulate. It is frequently assumed in astrobiology that if life exists elsewhere in the Universe, it will also be carbon-based. Critics refer to this assumption as carbon chauvinism. Characteristics Carbon is capable of forming a vast number of compounds, more than any other element, with almost ten million compounds described to date, and yet that number is but a fraction of the number of theoretically possible compounds under standard conditions. The enormous diversity of carbon-containing compounds, known as organic compounds, has led to a distinction between them and compounds that do not contain carbon, known as inorganic compounds. The branch of chemistry that studies organic compounds is known as organic chemistry. Carbon is the 15th most abundant element in the Earth's crust, and the fourth most abundant element in the universe by mass, after hydrogen, helium, and oxygen. Carbon's widespread abundance, its ability to form stable bonds with numerous other elements, and its unusual ability to form polymers at the temperatures commonly encountered on Earth enable it to serve as a common element of all known living organisms. A 2018 study estimated that all life on Earth comprises approximately 550 billion tons of carbon. It is the second most abundant element in the human body by mass (about 18.5%) after oxygen. The most important characteristics of carbon as a basis for the chemistry of life are that each carbon atom is capable of forming up to four valence bonds with other atoms simultaneously
https://en.wikipedia.org/wiki/SiteKey
SiteKey is a web-based security system that provides one type of mutual authentication between end-users and websites. Its primary purpose is to deter phishing. SiteKey was deployed by several large financial institutions in 2006, including Bank of America and The Vanguard Group. Both Bank of America and The Vanguard Group discontinued use in 2015. The product is owned by RSA Data Security which in 2006 acquired its original maker, Passmark Security. How it works SiteKey uses the following challenge–response technique: The user identifies (not authenticates) themself to the site by entering their username (but not their password). If the username is a valid one, the site proceeds. If the user's browser does not contain a client-side state token (such as a Web cookie or a Flash cookie) from a previous visit, the user is prompted for answers to one or more of the "security questions" the user-specified at site sign-up time, such as "Which school did you last attend?" The site authenticates itself to the user by displaying an image and/or accompanying phrase that they have earlier configured. If the user does not recognize these as their own, they are to assume the site is a phishing site and immediately abandon it. If the user does recognize them, they may consider the site authentic and proceed. The user authenticates themself to the site by entering their password. If the password is not valid for that username, the whole process begins again. If it is valid, the user is considered authenticated and logged in. If the user is at a phishing site with a different Web site domain than the legitimate domain, the user's browser will refuse to send the state token in step (2); the phishing site owner will either need to skip displaying the correct security image, or prompt the user for the security question(s) obtained from the legitimate domain and pass on the answers. In theory, this could cause the user to become suspicious, since the user might be surprised
https://en.wikipedia.org/wiki/Orbital%20pole
An orbital pole is either point at the ends of the orbital normal, an imaginary line segment that runs through a focus of an orbit (of a revolving body like a planet, moon or satellite) and is perpendicular (or normal) to the orbital plane. Projected onto the celestial sphere, orbital poles are similar in concept to celestial poles, but are based on the body's orbit instead of its equator. The north orbital pole of a revolving body is defined by the right-hand rule. If the fingers of the right hand are curved along the direction of orbital motion, with the thumb extended and oriented to be parallel to the orbital axis, then the direction the thumb points is defined to be the orbital north. The poles of Earth's orbit are referred to as the ecliptic poles. For the remaining planets, the orbital pole in ecliptic coordinates is given by the longitude of the ascending node (Ω) and the inclination (i): the pole lies at ecliptic longitude Ω − 90° and ecliptic latitude 90° − i. In the following table, the planetary orbit poles are given in both celestial coordinates and the ecliptic coordinates for the Earth. When a satellite orbits close to another large body, it can only maintain continuous observations in areas near its orbital poles. The continuous viewing zone (CVZ) of the Hubble Space Telescope lies inside roughly 24° of Hubble's orbital poles, which precess around the Earth's axis every 56 days. Ecliptic Pole The ecliptic is the plane on which Earth orbits the Sun. The ecliptic poles are the two points where the ecliptic axis, the imaginary line perpendicular to the ecliptic, intersects the celestial sphere. The two ecliptic poles are mapped below. Due to axial precession, either celestial pole completes a circuit around the nearer ecliptic pole every 25,800 years. The positions of the ecliptic poles expressed in equatorial coordinates, as a consequence of Earth's axial tilt, are the following: North: right ascension 18h 00m 00s (exact), declination +66° 33.6′; South: right ascension 6h 00m 00s (exact), declination −66° 33.6′. The North Ecliptic Pole is located near the Cat's E
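A short illustration of the relation just given between (Ω, i) and the pole's ecliptic coordinates. The helper names and the roughly Jupiter-like sample values are assumptions made for this sketch, not data from the article.

```python
import numpy as np

def orbital_pole_ecliptic(node_deg, incl_deg):
    """Ecliptic longitude/latitude of the north orbital pole, given the
    longitude of the ascending node (Omega) and inclination (i), in degrees."""
    lon = (node_deg - 90.0) % 360.0   # pole longitude = Omega - 90 deg
    lat = 90.0 - incl_deg             # pole latitude  = 90 deg - i
    return lon, lat

def to_unit_vector(lon_deg, lat_deg):
    """Cartesian unit vector for a given ecliptic longitude/latitude."""
    lon, lat = np.radians([lon_deg, lat_deg])
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

# Example: an orbit with Omega = 100.5 deg and i = 1.3 deg (roughly Jupiter-like values)
lon, lat = orbital_pole_ecliptic(100.5, 1.3)
print(lon, lat)                 # ~10.5 deg longitude, ~88.7 deg latitude (close to the ecliptic pole)
print(to_unit_vector(lon, lat))
```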
https://en.wikipedia.org/wiki/ASM%20International%20%28society%29
ASM International, formerly known as the American Society for Metals, is an association of materials-centric engineers and scientists. As the charitable arm of ASM, the ASM Materials Education Foundation also operates ASM Materials Camp in the summers for high school students and teachers. These camps are intended to educate the public about the materials field, and encourage young people to pursue careers in materials science and engineering. History ASM has been in existence, under various names, since 1913, when it began as a local club in Detroit called the Steel Treaters Club. During World War I, the Steel Treaters Club became the Steel Treating Research Society, with groups in Detroit, Chicago, and Cleveland. After World War I, the Chicago group seceded and formed the American Steel Treaters Society. In 1920 the local chapters were reunified into the new American Society for Steel Treating (ASST). The society expanded its technical scope beyond steel during the 1920s. In 1933 it became the American Society for Metals (ASM). Gradually the society expanded its geographic scope beyond the U.S. and its technical scope beyond metals to include other materials. It became known as ASM International in 1986. , ASM claims 20,000 members worldwide. ASM provides several information resources, including technical journals, books, and databases. ASM also hosts numerous international conferences each year, including ASM's Annual Meeting: International Materials, Applications, and Technologies Conference and Exposition (IMAT). Affiliate Societies Six affiliate societies focused on specific areas of materials science also fall under the ASM umbrella: The Heat Treating Society (HTS), The Thermal Spray Society (TSS), The International Metallographic Society (IMS), The Electronic Device Failure Analysis Society (EDFAS), The Failure Analysis Society (FAS), and The International Organization on Shape Memory and Superelastic Technology (SMST). Each society
https://en.wikipedia.org/wiki/Apple%20Loops%20Utility
The Apple Loops Utility software was a small companion utility for Soundtrack Pro, GarageBand, Logic Express, and Logic Pro, all made by Apple Inc., that allowed users to create loops of audio that could be time-stretched. Audio files converted to Apple Loops via Apple Loops Utility could also be tagged with their publishing information (Author, Comments and similar fields); multiple files could be tagged at the same time, a process known as batch tagging. The utility could read AIFF as well as WAV files, but it would convert the latter to AIFF when saved with tagging information. The most recent version available without purchasing the aforementioned software was 3.0.1, which was available from Apple's Developer Web site. Version 1.4, which was the first version of the software, was available with Logic Pro or Express 7.2. It allowed multiple files to have multiple tags added to them, and it also allowed content merging to occur with Logic Audio Express. Only version 1.4 and beyond worked natively with Intel Macs. Version 1.3.1 appeared to allow edits to be made and file information to be saved, but none of the essential tagging information could be retained on an Intel Mac. Version 3.0.1, the last one released, fully supported Intel Macs up to macOS Sierra 10.12.6. External links and references Apple Loops Utility (DMG) Apple Loops Utility SDK 3.0.1 (DMG)
https://en.wikipedia.org/wiki/Routing%20Policy%20Specification%20Language
The Routing Policy Specification Language (RPSL) is a language commonly used by Internet Service Providers to describe their routing policies. The routing policies are stored at various whois databases including RIPE, RADB and APNIC. ISPs (using automated tools) then generate router configuration files that match their business and technical policies. RFC2622 describes RPSL, and replaced RIPE-181. RFC2650 provides a reference tutorial to using RPSL in practice to support IPv6 routing policies. RPSL Tools and Programs RtConfig - automatically generate router configuration files from RPSL registry entries (This software is part of the IRRToolSet) irrPT - Tools for ISPs to collect and use information from Internet Routing Registry (IRR) databases External links RIPE RPSL page
https://en.wikipedia.org/wiki/Separase
Separase, also known as separin, is a cysteine protease responsible for triggering anaphase by hydrolysing cohesin, which is the protein responsible for binding sister chromatids during the early stage of anaphase. In humans, separin is encoded by the ESPL1 gene. History In S. cerevisiae, separase is encoded by the esp1 gene. Esp1 was discovered by Kim Nasmyth and coworkers in 1998. In 2021, structures of human separase were determined in complex with either securin or CDK1-cyclin B1-CKS1 using cryo-EM by scientists of the University of Geneva. Function Stable cohesion between sister chromatids before anaphase and their timely separation during anaphase are critical for cell division and chromosome inheritance. In vertebrates, sister chromatid cohesion is released in 2 steps via distinct mechanisms. The first step involves phosphorylation of STAG1 or STAG2 in the cohesin complex. The second step involves cleavage of the cohesin subunit SCC1 (RAD21) by separase, which initiates the final separation of sister chromatids. In S. cerevisiae, Esp1 is coded by ESP1 and is regulated by the securin Pds1. The two sister chromatids are initially bound together by the cohesin complex until the beginning of anaphase, at which point the mitotic spindle pulls the two sister chromatids apart, leaving each of the two daughter cells with an equivalent number of sister chromatids. The proteins that bind the two sister chromatids, disallowing any premature sister chromatid separation, are a part of the cohesin protein family. One of these cohesin proteins crucial for sister chromatid cohesion is Scc1. Esp1 is a separase protein that cleaves the cohesin subunit Scc1 (RAD21), allowing sister chromatids to separate at the onset of anaphase during mitosis. Regulation When the cell is not dividing, separase is prevented from cleaving cohesin through its association with either securin or upon phosphorylation of a specific serine residue in separase by the cyclin-CDK complex. Sepa
https://en.wikipedia.org/wiki/Securin
Securin is a protein involved in control of the metaphase-anaphase transition and anaphase onset. Following bi-orientation of chromosome pairs and inactivation of the spindle checkpoint system, the underlying regulatory system, which includes securin, produces an abrupt stimulus that induces highly synchronous chromosome separation in anaphase. Securin and Separase Securin is initially present in the cytoplasm and binds to separase, a protease that degrades the cohesin rings that link the two sister chromatids. Separase is vital for onset of anaphase. This securin-separase complex is maintained when securin is phosphorylated by Cdk1, inhibiting ubiquitination. When bound to securin, separase is not functional. In addition, both securin and separase are well-conserved proteins (Figure 1). Note that separase cannot function without initially forming the securin-separase complex. This is because securin helps properly fold separase into the functional conformation. However, yeast does not appear to require securin to form functional separase as anaphase occurs in yeast with a securin deletion mutation. Role of Securin in the onset of Anaphase Basic mechanism Securin has 5 known phosphorylation sites that are targets of Cdk1; 2 sites at the N-terminal in the Ken-Box and D-box region are known to affect APC recognition and ubiquitination (Figure 2). To initiate the onset of anaphase, securin is dephosphorylated by Cdc14 and other phosphatases. Dephosphorylated securin is recognized by the Anaphase-Promoting Complex (APC) bound primarily to Cdc20 (Cdh1 is also an activating substrate of APC). The APCCdc20 complex ubiquitinates securin and targets it for degradation by 26S proteasome. This results in free separase that is able to destroy cohesin and initiate chromosome separation. Network characteristics It is thought that securin integrates multiple regulatory inputs to make separase activation switch-like, resulting in sudden, coordinated anaphase. This
https://en.wikipedia.org/wiki/Acorn%20MOS
The Machine Operating System (MOS) or OS is a discontinued computer operating system (OS) used in Acorn Computers' BBC computer range. It included support for four-channel sound, graphics, file system abstraction, and digital and analogue input/output (I/O) including a daisy-chained expansion bus. The system was single-tasking, monolithic and non-reentrant. Versions 0.10 to 1.20 were used on the BBC Micro, version 1.00 on the Electron, version 2 was used on the B+, and versions 3 to 5 were used in the BBC Master series. The final BBC computer, the BBC A3000, was 32-bit and ran RISC OS, which kept on portions of the Acorn MOS architecture and shared a number of characteristics (e.g. "star commands" CLI, "VDU" video control codes and screen modes) with the earlier 8-bit MOS. Versions 0 to 2 of the MOS were 16 KiB in size, written in 6502 machine code, and held in read-only memory (ROM) on the motherboard. The upper quarter of the 16-bit address space (0xC000 to 0xFFFF) is reserved for its ROM code and I/O space. Versions 3 to 5 were still restricted to a 16 KiB address space, but managed to hold more code and hence more complex routines, partly because of the alternative 65C102 central processing unit (CPU) with its denser instruction set plus the careful use of paging. User interface The original MOS versions, from 0 to 2, did not have a user interface per se: applications were expected to forward operating system command lines to the OS on its behalf, and the programming language BBC BASIC ROM, with 6502 assembler built in, supplied with the BBC Micro is the default application used for this purpose. The BBC Micro would halt with a Language? error if no ROM is present that advertises to the OS an ability to provide a user interface (called language ROMs). MOS version 3 onwards did feature a simple command-line interface, normally only seen when the CMOS memory did not contain a setting for the default language ROM. Application programs on ROM, and some casset
https://en.wikipedia.org/wiki/Prenol
Prenol, or 3-methyl-2-buten-1-ol, is a natural alcohol. It is one of the simplest terpenoids. It is a clear colorless oil that is reasonably soluble in water and miscible with most common organic solvents. It has a fruity odor and is used occasionally in perfumery. Prenol occurs naturally in citrus fruits, cranberry, bilberry, currants, grapes, raspberry, blackberry, tomato, white bread, hop oil, coffee, arctic bramble, cloudberry and passion fruit. It is also manufactured industrially by BASF (in Ludwigshafen, Germany) and by Kuraray (in Asia) as an intermediate to pharmaceuticals and aroma compounds. Global production in 2001 was between 6,000 and 13,000 tons. Industrial production Prenol is produced industrially by the reaction of formaldehyde with isobutene, followed by the isomerization of the resulting isoprenol (3-methyl-3-buten-1-ol). Polyprenols Prenol is a building block of isoprenoid alcohols, which have the general formula: H–[CH₂C(CH₃)=CHCH₂]ₙ–OH The repeating C₅H₈ moiety in the brackets is called isoprene, and these compounds are sometimes called 'isoprenols'. They should not be confused with isoprenol, which is an isomer of prenol with a terminal double bond. The simplest isoprenoid alcohol is geraniol (n = 2); higher oligomers include farnesol (n = 3) and geranylgeraniol (n = 4). When the isoprene unit attached to the alcohol is saturated, the compound is referred to as a dolichol. Dolichols are important as glycosyl carriers in the synthesis of polysaccharides. They also play a major role in protecting cellular membranes, stabilising cell proteins and supporting the body's immune system. Prenol is polymerized by dehydration reactions; when there are at least five isoprene units (n in the above formula is greater than or equal to five), the polymer is called a polyprenol. Polyprenols can contain up to 100 isoprene units (n = 100) linked end to end with the hydroxyl group (–OH) remaining at the end. These long-chain isoprenoid alcohols are
https://en.wikipedia.org/wiki/Topological%20string%20theory
In theoretical physics, topological string theory is a version of string theory. Topological string theory appeared in papers by theoretical physicists such as Edward Witten and Cumrun Vafa, by analogy with Witten's earlier idea of topological quantum field theory. Overview There are two main versions of topological string theory: the topological A-model and the topological B-model. The results of the calculations in topological string theory generically encode all holomorphic quantities within the full string theory whose values are protected by spacetime supersymmetry. Various calculations in topological string theory are closely related to Chern–Simons theory, Gromov–Witten invariants, mirror symmetry, the geometric Langlands program, and many other topics. The operators in topological string theory represent the algebra of operators in the full string theory that preserve a certain amount of supersymmetry. Topological string theory is obtained by a topological twist of the worldsheet description of ordinary string theory: the operators are given different spins. The operation is fully analogous to the construction of topological field theory, which is a related concept. Consequently, there are no local degrees of freedom in topological string theory. Admissible spacetimes The fundamental strings of string theory are two-dimensional surfaces. A quantum field theory known as the N = (1,1) sigma model is defined on each surface. This theory consists of maps from the surface to a supermanifold. Physically the supermanifold is interpreted as spacetime and each map is interpreted as the embedding of the string in spacetime. Only special spacetimes admit topological strings. Classically, one must choose a spacetime such that the theory respects an additional pair of supersymmetries, making the worldsheet theory an N = (2,2) sigma model. A particular case of this is if the spacetime is a Kähler manifold and the H-flux is identically equal to zero. Generalized Kähler manifolds
https://en.wikipedia.org/wiki/E-Bullion
e-Bullion was an Internet-based digital gold currency founded by Jim and Pamela Fayed of Moorpark, California, as part of their Goldfinger Coin & Bullion group of companies. The company was incorporated in 2000 and launched on July 4, 2001. Similar to competing systems such as e-gold, e-Bullion allowed for the instant transfer of gold and silver between user accounts. e-Bullion was a registered legal corporate entity of Panama. From 2001 to 2008 e-Bullion grew to have over one million users, substantial account transaction volume, and reserves of approximately 50,000 ounces of gold bullion. The company was a competitor to e-gold.com and goldmoney.com. In 2008, co-founder Pamela Fayed was murdered, leading to the indictment, trial and conviction of her husband Jim Fayed for arranging her murder. Fayed was sentenced to death, and is currently on death row in California. As a result of the murder, the U.S. Government seized all of the assets of e-Bullion, resulting in the closure of the company in August 2008. Features e-Bullion simply provided a way for users to hold and transfer balances in gold and silver. The company also offered a debit card to U.S. customers, which enabled them to convert their bullion balances to USD and withdraw at an automated teller machine (ATM) or use it for debit purchases. e-Bullion provided its own in-house currency exchange service through Goldfinger Coin & Bullion, Inc. An e-Bullion account could be funded directly via wire transfer from a bank account. e-Bullion was the first DGC to issue a debit card linked to an account. e-Bullion was the first DGC to use CRYPTOCard security tokens to protect user accounts from unauthorized access. Goldfinger Bullion Reserve Corporation, a sister company of e-Bullion, held the precious metals in bullion storage vaults located in Los Angeles, and at the Perth Mint in Australia. 2008 Murder of e-Bullion Principal The Fayeds had a troubled marriage which eventually led to divorce proceedings
https://en.wikipedia.org/wiki/Ghosting%20%28television%29
In television, a ghost is a replica of the transmitted image, offset in position, that is superimposed on top of the main image. It is often caused when a TV signal travels by two different paths to a receiving antenna, with a slight difference in timing. Analog ghosting Common causes of ghosts (in the more specific sense) are: Mismatched impedance along the communication channel, which causes unwanted reflections. The technical term for this phenomenon is ringing. Multipath distortion, because radio frequency waves may take paths of different length (by reflecting from buildings, transmission lines, aircraft, clouds, etc.) to reach the receiver. In addition, RF leaks may allow a signal to enter the set by a different path; this is most common in a large building such as a tower block or hotel where one TV antenna feeds many different rooms, each fitted with a TV aerial socket (known as pre-echo). By getting a better antenna or cable system it can be eliminated or mitigated. Note that ghosts are a problem specific to the video portion of television, largely because it uses AM for transmission. The audio portion uses FM, which has the desirable property that a stronger signal tends to overpower interference from weaker signals due to the capture effect. Even when ghosts are particularly bad in the picture, there may be little audio interference. SECAM TV uses FM for the chrominance signal, hence ghosting only affects the luma portion of its signal. TV is broadcast on VHF and UHF, which have line-of-sight propagation, and easily reflect off of buildings, mountains, and other objects. Pre-echo If the ghost is seen on the left of the main picture, then it is likely that the problem is pre-echo, which is seen in buildings with very long TV downleads where an RF leakage has allowed the TV signal to enter the tuner by a second route. For instance, plugging in an additional aerial to a TV which already has a communal TV aerial connection (or cable TV) can cause thi
https://en.wikipedia.org/wiki/Rusty%20bolt%20effect
The rusty bolt effect is a form of radio interference due to interactions of the radio waves with dirty connections or corroded parts. It is more properly known as passive intermodulation, and can result from a variety of different causes such as ferromagnetic conduction metals, or nonlinear microwave absorbers and loads. Corroded materials on antennas, waveguides, or even structural elements, can act as one or more diodes. (Crystal sets, early radio receivers, used the semiconductor properties of natural galena to demodulate the radio signal, and copper oxide was used in power rectifiers.) Galvanised fasteners and sheet roofing develop a coating of zinc oxide, a semiconductor commonly used for transient voltage suppression. This gives rise to undesired interference, including the generation of harmonics or intermodulation. Rusty objects that should not be in the signal-path, including antenna structures, can also reradiate radio signals with harmonics and other unwanted signals. As with all out-of-band noise, these spurious emissions can interfere with receivers. This effect can cause radiated signals out of the desired band, even if the signal into a passive antenna is carefully band-limited. Mathematics associated with the rusty bolt The transfer characteristic of an object can be represented as a power series: Eout = Σn Kn·Einⁿ. Or, taking only the first few terms (which are most relevant), Eout = K1·Ein + K2·Ein² + K3·Ein³ + K4·Ein⁴ + K5·Ein⁵. For an ideal perfect linear object K2, K3, K4, K5, etc. are all zero. A good connection approximates this ideal case with sufficiently small values. For a 'rusty bolt' (or an intentionally designed frequency mixer stage), K2, K3, K4, K5, etc. are not all zero. These higher-order terms result in generation of harmonics. The following analysis applies the power series representation to an input sine-wave. Harmonic generation If the incoming signal is a sine wave Ein·sin(ωt), and the first few terms of the power series are retained, then the output can be written: Eout = K1·Ein·sin(ωt) + K2·Ein²·sin²(ωt) + K3·Ein³·sin³(ωt) + ..., where the squared and cubed terms expand (via sin²(ωt) = (1 − cos(2ωt))/2 and sin³(ωt) = (3·sin(ωt) − sin(3ωt))/4) into components at the harmonic frequencies 2ω and 3ω. Clearly, the harmonic terms will be worse at
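A numerical illustration of the power-series model (added here, with arbitrary coefficients): passing two clean tones through a polynomial transfer characteristic and inspecting the spectrum shows the harmonics and intermodulation products the text describes.

```python
import numpy as np

# Memoryless "rusty bolt" model: polynomial transfer characteristic
# e_out = K1*e + K2*e**2 + K3*e**3   (nonzero K2, K3 create harmonics and intermodulation)
K1, K2, K3 = 1.0, 0.05, 0.02

fs = 10_000                              # sample rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)          # 1 second of signal
f1, f2 = 440.0, 600.0                    # two clean input tones
e_in = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
e_out = K1 * e_in + K2 * e_in**2 + K3 * e_in**3

spectrum = np.abs(np.fft.rfft(e_out)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)

# Strongest spectral lines: expect f1 and f2 plus products such as
# 2*f1, 2*f2, f1+f2, f2-f1 (from K2) and 2*f1-f2, 2*f2-f1, 3*f1, ... (from K3).
peaks = freqs[np.argsort(spectrum)[-12:]]
print(sorted(peaks))
```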
https://en.wikipedia.org/wiki/Ribbon%20controller
A ribbon controller is a tactile sensor used to control synthesizers. It generally consists of a resistive strip that acts as a potentiometer. Because of their continuous control, ribbon controllers are often used to produce glissando effects. Early examples of the use of ribbon controllers in a musical instrument are in the Ondes Martenot and Trautonium. In some early instruments, the slider of the potentiometer was worn as a ring by the player. In later ribbon controllers, the ring was replaced by a conductive layer that covered the resistive element. Ribbon controllers are found in early Moog synthesizers, but were omitted from most later synthesizers. The Yamaha CS-80 synthesizer is well known for its inclusion of a ribbon controller, used by Vangelis to create many of the characteristic sounds in the Blade Runner soundtrack. Although ribbon controllers are less common in later synthesizers, they were used in the Moog Liberation and Micromoog. Roland incorporated a ribbon controller in their JP-8000 synthesizer. Ribbon controllers are available as control voltage and MIDI peripherals. An example of a modern synthesizer that uses a ribbon controller is the Swarmatron. In 2010/2011, Korg released a series of minisynths called Monotron that use a ribbon controller; the line proved so popular that it was still in production in 2023.
https://en.wikipedia.org/wiki/Cable%20Internet%20access
In telecommunications, cable Internet access, shortened to cable Internet, is a form of broadband internet access which uses the same infrastructure as cable television. Like digital subscriber line and fiber to the premises services, cable Internet access provides network edge connectivity (last mile access) from the Internet service provider to an end user. It is integrated into the cable television infrastructure analogously to DSL which uses the existing telephone network. Cable TV networks and telecommunications networks are the two predominant forms of residential Internet access. Recently, both have seen increased competition from fiber deployments, wireless, mobile networks and satellite internet access. Hardware and bit rates Broadband cable Internet access requires a cable modem at the customer's premises and a cable modem termination system (CMTS) at a cable operator facility, typically a cable television headend. The two are connected via coaxial cable to a hybrid fibre-coaxial (HFC) network. While access networks are referred to as last-mile technologies, cable Internet systems can typically operate where the distance between the modem and the termination system is up to . If the HFC network is large, the cable modem termination system can be grouped into hubs for efficient management. Several standards have been used for cable internet, but the most common is DOCSIS. A cable modem at the customer is connected via coaxial cable to an optical node, and thus into an HFC network. An optical node serves many modems as the modems are connected with coaxial cable to a coaxial cable "trunk" via distribution "taps" on the trunk, which then connects to the node, possibly using amplifiers along the trunk. The optical node converts the Radiofrequency (RF) signal in the coaxial cable trunk into light pulses to be sent through optical fibers in the HFC network. At the other end of the network, an optics platform or headend platform converts the light pulses into
https://en.wikipedia.org/wiki/Pseudorandom%20function%20family
In cryptography, a pseudorandom function family, abbreviated PRF, is a collection of efficiently-computable functions which emulate a random oracle in the following way: no efficient algorithm can distinguish (with significant advantage) between a function chosen randomly from the PRF family and a random oracle (a function whose outputs are fixed completely at random). Pseudorandom functions are vital tools in the construction of cryptographic primitives, especially secure encryption schemes. Pseudorandom functions are not to be confused with pseudorandom generators (PRGs). The guarantee of a PRG is that a single output appears random if the input was chosen at random. On the other hand, the guarantee of a PRF is that all its outputs appear random, regardless of how the corresponding inputs were chosen, as long as the function was drawn at random from the PRF family. A pseudorandom function family can be constructed from any pseudorandom generator, using, for example, the "GGM" construction given by Goldreich, Goldwasser, and Micali. While in practice, block ciphers are used in most instances where a pseudorandom function is needed, they do not, in general, constitute a pseudorandom function family, as block ciphers such as AES are defined for only limited numbers of input and key sizes. Motivations from random functions A PRF is an efficient (i.e. computable in polynomial time), deterministic function that maps two distinct sets (domain and range) and looks like a truly random function. Essentially, a truly random function would just be composed of a lookup table filled with uniformly distributed random entries. However, in practice, a PRF is given an input string in the domain and a hidden random seed and runs multiple times with the same input string and seed, always returning the same value. Nonetheless, given an arbitrary input string, the output looks random if the seed is taken from a uniform distribution. A PRF is considered to be good if its behavior
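A minimal sketch of the GGM tree construction mentioned above: the key seeds the root, and each input bit selects the left or right half of the PRG's output. The length-doubling PRG here is faked with SHA-256 purely for illustration (the security argument requires a genuine PRG), and the function names are this sketch's own.

```python
import hashlib

def prg(seed: bytes) -> bytes:
    """Length-doubling PRG stand-in: 32-byte seed -> 64 bytes of output.
    (Illustrative only; the GGM proof assumes a real pseudorandom generator.)"""
    left = hashlib.sha256(seed + b"\x00").digest()
    right = hashlib.sha256(seed + b"\x01").digest()
    return left + right

def ggm_prf(key: bytes, x: str) -> bytes:
    """GGM construction: walk a binary tree keyed by the input bits.
    key: 32-byte seed; x: input given as a bit string, e.g. '0110'."""
    assert len(key) == 32
    node = key
    for bit in x:
        out = prg(node)
        node = out[:32] if bit == "0" else out[32:]
    return node

key = bytes(32)                      # all-zero key, for demonstration only
print(ggm_prf(key, "0101").hex())
print(ggm_prf(key, "0110").hex())    # a different input gives an unrelated-looking output
```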
https://en.wikipedia.org/wiki/Superinsulation
Superinsulation is an approach to building design, construction, and retrofitting that dramatically reduces heat loss (and gain) by using much higher insulation levels and airtightness than average. Superinsulation is one of the ancestors of the passive house approach. Definition There is no universally agreed definition of superinsulation, but superinsulated buildings typically include: Very high levels of insulation, typically R-40 (RSI-7) walls and R-60 (RSI-10.6) roof, corresponding to SI U-values of 0.15 and 0.1 W/(m²·K) respectively) Details to ensure insulation continuity where walls meet roofs, foundations, and other walls Airtight construction, especially around doors and windows, to prevent air infiltration pushing heat in or out a heat recovery ventilation system to provide fresh air No large windows facing any particular direction Much smaller than a conventional heating system, sometimes just a small backup heater Nisson & Dutt (1985) suggest that a house might be described as "superinsulated" if the cost of space heating is lower than that of water heating. Besides the meaning mentioned above of high level of insulation, the terms superinsulation and superinsulating materials are in use for high R/inch insulation materials like vacuum insulation panels (VIPs) and aerogel. Theory A superinsulated house is intended to reduce heating needs significantly and may even be heated predominantly by intrinsic heat sources (waste heat generated by appliances and the body heat of the occupants) with small amounts of backup heat. This has been demonstrated to work even in frigid climates but requires close attention to construction details in addition to the insulation (see IEA Solar Heating & Cooling Implementing Agreement Task 13). History The term "superinsulation" was coined by Wayne Schick at the University of Illinois Urbana–Champaign. In 1976 he was part of a team that developed a design called the "Lo-Cal" house, using computer simulations based
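To make the quoted insulation levels concrete, the sketch below converts RSI values to U-values (U = 1/RSI) and compares steady-state conductive losses Q = U·A·ΔT for a superinsulated envelope against looser "conventional" values. The areas, temperature difference and the conventional RSI figures are illustrative assumptions, not figures from the text.

```python
# Rough steady-state heat-loss comparison (illustrative numbers only)

def u_value(rsi: float) -> float:
    """U-value in W/(m^2*K) from an RSI (metric R) value in m^2*K/W."""
    return 1.0 / rsi

def heat_loss_w(u: float, area_m2: float, delta_t_k: float) -> float:
    """Conductive heat loss Q = U * A * dT, in watts."""
    return u * area_m2 * delta_t_k

wall_area, roof_area, dT = 150.0, 100.0, 30.0   # m^2, m^2, K (cold-climate design day)

# Assumed conventional envelope vs. the superinsulated levels quoted above (RSI-7 walls, RSI-10.6 roof)
for label, wall_rsi, roof_rsi in [("conventional", 2.1, 3.5), ("superinsulated", 7.0, 10.6)]:
    q = heat_loss_w(u_value(wall_rsi), wall_area, dT) + heat_loss_w(u_value(roof_rsi), roof_area, dT)
    print(f"{label}: about {q:.0f} W through walls and roof")
```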
https://en.wikipedia.org/wiki/Interrupt%20priority%20level
The interrupt priority level (IPL) is a part of the current system interrupt state, which indicates the interrupt requests that will currently be accepted. The IPL may be indicated in hardware by the registers in a programmable interrupt controller, or in software by a bitmask or integer value and source code of threads. Overview An integer-based IPL may be as small as a single bit, with just two values: 0 (all interrupts enabled) or 1 (all interrupts disabled), as in the MOS Technology 6502. However, some architectures permit a greater range of values, where each value enables interrupt requests that specify a higher level, while blocking ones from the same or lower level. Assigning different priorities to interrupt requests can be useful in trying to balance system throughput against interrupt latency. Some kinds of interrupts need to be responded to more quickly than others, but the amount of processing might not be large, so it makes sense to assign a higher priority to that kind of interrupt. Control of the interrupt level was also used to synchronize access to kernel data structures. For example, the level-3 scheduler interrupt handler would temporarily raise the IPL to 7 before accessing any scheduler data structures, then lower it back to 3 before switching process contexts. However, an interrupt handler was not allowed to lower the IPL below the level at which it was entered, since doing so could destroy the integrity of the synchronization system. Multiprocessor systems add their own complications, which are not addressed here. Regardless of what the hardware might support, typical UNIX-type systems only use two levels: the minimum (all interrupts enabled) and the maximum (all interrupts disabled). OpenVMS IPLs As an example of one of the more elaborate IPL-handling systems ever deployed, the VAX computer and the associated VMS operating system support 32 priority levels, from 0 to 31. Priorities 16 and above are for requests from external hardware, while values belo
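A minimal, hypothetical Python model of an integer IPL (not any particular hardware or kernel API; the class and method names are invented) makes the gating and raise/lower discipline described above concrete.

class InterruptState:
    # Toy model of an integer interrupt priority level (IPL).
    def __init__(self, levels=8):
        self.levels = levels
        self.ipl = 0  # 0 = all interrupts enabled

    def accepts(self, request_level):
        # A request is accepted only if it specifies a higher level than the
        # current IPL; requests at the same or a lower level are blocked.
        return request_level > self.ipl

    def raise_ipl(self, new_level):
        # Raising the IPL blocks more interrupts (e.g. a level-3 handler
        # raising to 7 before touching shared scheduler data).
        old = self.ipl
        self.ipl = max(self.ipl, new_level)
        return old

    def lower_ipl(self, old_level, entry_level):
        # A handler must never lower the IPL below the level at which it was entered.
        if old_level < entry_level:
            raise ValueError("cannot lower IPL below entry level")
        self.ipl = old_level

state = InterruptState()
state.raise_ipl(3)                     # running the level-3 scheduler handler
saved = state.raise_ipl(7)             # protect scheduler data structures
state.lower_ipl(saved, entry_level=3)  # back to level 3 before switching contexts
print(state.accepts(5))                # True: level 5 exceeds the current IPL of 3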
https://en.wikipedia.org/wiki/Gravity%20model%20of%20trade
The gravity model of international trade in international economics is a model that, in its traditional form, predicts bilateral trade flows based on the economic sizes of, and distance between, two units. Research shows that there is "overwhelming evidence that trade tends to fall with distance." The model was first introduced by Walter Isard in 1954. The basic model for trade between two countries (i and j) takes the form of F_ij = G · (M_i · M_j) / D_ij. In this formula G is a constant, F stands for trade flow, D stands for the distance and M stands for the economic dimensions of the countries that are being measured. The equation can be changed into a linear form for the purpose of econometric analyses by employing logarithms. The model has been used by economists to analyse the determinants of bilateral trade flows such as common borders, common languages, common legal systems, common currencies, common colonial legacies, and it has been used to test the effectiveness of trade agreements and organizations such as the North American Free Trade Agreement (NAFTA) and the World Trade Organization (WTO) (Head and Mayer 2014). The model has also been used in international relations to evaluate the impact of treaties and alliances on trade (Head and Mayer). The model has also been applied to other bilateral flow data (also known as "dyadic" data) such as migration, traffic, remittances and foreign direct investment. Theoretical justifications and research The model has been an empirical success in that it accurately predicts trade flows between countries for many goods and services, but for a long time some scholars believed that there was no theoretical justification for the gravity equation. However, a gravity relationship can arise in almost any trade model that includes trade costs that increase with distance. The gravity model estimates the pattern of international trade. While the model's basic form consists of factors that have more to do with geography and spatiality, the gravity model h
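The log-linear estimating form alluded to above ("employing logarithms") can be written out explicitly; the elasticity symbols α, β and θ below are illustrative placeholders rather than values taken from the article:

\[
F_{ij} = G\,\frac{M_i^{\alpha} M_j^{\beta}}{D_{ij}^{\theta}}
\quad\Longrightarrow\quad
\ln F_{ij} = \ln G + \alpha \ln M_i + \beta \ln M_j - \theta \ln D_{ij} + \varepsilon_{ij},
\]

which can then be estimated by ordinary least squares on observed bilateral trade data.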
https://en.wikipedia.org/wiki/6264
The 6264 is a JEDEC-standard static RAM integrated circuit. It has a capacity of 64 Kbit (8 KB). It is produced by a wide variety of different vendors, including Hitachi, Hynix, and Cypress Semiconductor. It is available in a variety of different configurations, such as DIP, SPDIP, and SOIC. Some versions of the 6264 can run in ultra-low-power mode and retain memory when not in use, thus making them suitable for battery backup applications. External links 6264 Datasheet (Cypress, PDF format) Computer memory
https://en.wikipedia.org/wiki/Pole%E2%80%93zero%20plot
In mathematics, signal processing and control theory, a pole–zero plot is a graphical representation of a rational transfer function in the complex plane which helps to convey certain properties of the system such as: Stability Causal system / anticausal system Region of convergence (ROC) Minimum phase / non minimum phase A pole-zero plot shows the location in the complex plane of the poles and zeros of the transfer function of a dynamic system, such as a controller, compensator, sensor, equalizer, filter, or communications channel. By convention, the poles of the system are indicated in the plot by an X while the zeros are indicated by a circle or O. A pole-zero plot is plotted in the plane of a complex frequency domain, which can represent either a continuous-time or a discrete-time system: Continuous-time systems use the Laplace transform and are plotted in the s-plane: Real frequency components are along its vertical axis (the imaginary line where σ = 0) Discrete-time systems use the Z-transform and are plotted in the z-plane: Real frequency components are along its unit circle Continuous-time systems In general, a rational transfer function for a continuous-time LTI system has the form: H(s) = B(s)/A(s) = (b_M s^M + ... + b_1 s + b_0) / (a_N s^N + ... + a_1 s + a_0), where B(s) and A(s) are polynomials in s, M is the order of the numerator polynomial, b_m is the m-th coefficient of the numerator polynomial, N is the order of the denominator polynomial, and a_n is the n-th coefficient of the denominator polynomial. Either M or N or both may be zero, but in real systems, it should be the case that N ≥ M; otherwise the gain would be unbounded at high frequencies. Poles and zeros The zeros of the system are roots of the numerator polynomial: the values of s such that B(s) = 0. The poles of the system are roots of the denominator polynomial: the values of s such that A(s) = 0. Region of convergence The region of convergence (ROC) for a given continuous-time transfer function is a half-plane or vertical strip, either of which contains no poles. In general, the ROC is not unique, and the particular ROC
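As a sketch of how poles and zeros can be computed numerically, the snippet below uses SciPy's tf2zpk on a simple example transfer function H(s) = (s + 1)/(s² + 3s + 2); the numbers are illustrative only.

import numpy as np
from scipy import signal

# H(s) = (s + 1) / (s^2 + 3s + 2): coefficients in descending powers of s
b = [1, 1]        # numerator  B(s) = s + 1
a = [1, 3, 2]     # denominator A(s) = s^2 + 3s + 2

zeros, poles, gain = signal.tf2zpk(b, a)
print(zeros)   # one zero at s = -1
print(poles)   # poles at s = -1 and s = -2 (order may vary)
print(gain)    # 1.0

# The same roots can be found directly from the denominator polynomial:
print(np.roots(a))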
https://en.wikipedia.org/wiki/Sony%20Watchman
The Sony Watchman is a line of portable pocket televisions trademarked and produced by Sony. The line was introduced in 1982 and discontinued in 2000. Its name came from a portmanteau formed of "Watch" (watching television) and "man" from Sony's Walkman personal cassette audio players. There were more than 65 models of the Watchman made before its discontinuation. As the models progressed, display size increased and new features were added. Due to the switch to digital broadcasting, most models of the Sony Watchman can no longer be used to receive live television broadcasts, without the use of a digital converter box. FD-210 The initial model was introduced in 1982 as the FD-210, which had a black & white five-centimeter (2") Cathode-ray tube display. The device weighed around 650 grams (23 oz), with a measurement of 87 x 198 x 33 millimeters (3½" x 7¾" x 1¼"). The device was sold in Japan with a price of 54,800 yen. Roughly two years later, in 1984, the device was introduced to Europe and North America. Later releases Sony manufactured more than 65 models of the Watchman before its discontinuation in 2000. Upon the release of further models after the FD-210, the display size increased, and new features were introduced. The FD-3, introduced in 1987, had a built-in digital clock. The FD-30, introduced in 1984 had a built-in AM/FM Stereo radio. The FD-40/42/44/45 were among the largest Watchmen, utilizing a 4" CRT display. The FD-40 introduced a single composite A/V input. The FD-45, introduced in 1986, was water-resistant. In 1988/1989, the FDL 330S color Watchman TV/Monitor with LCD display was introduced. In 1990, the FDL-310, a Watchman with a color LCD display was introduced. The FD-280/285, made from 1990 to 1994, was the last Watchman to use a black and white CRT display. One of the last Watchmen was the FDL-22 introduced in 1998, which featured an ergonomic body which made it easier to hold, and introduced Sony's Straptenna, where the wrist strap served as
https://en.wikipedia.org/wiki/Hyperplane%20section
In mathematics, a hyperplane section of a subset X of projective space Pn is the intersection of X with some hyperplane H. In other words, we look at the subset XH of those elements x of X that satisfy the single linear condition L = 0 defining H as a linear subspace. Here L or H can range over the dual projective space of non-zero linear forms in the homogeneous coordinates, up to scalar multiplication. From a geometrical point of view, the most interesting case is when X is an algebraic subvariety; for more general cases, in mathematical analysis, some analogue of the Radon transform applies. In algebraic geometry, assuming therefore that X is V, a subvariety not lying completely in any H, the hyperplane sections are algebraic sets with irreducible components all of dimension dim(V) − 1. What more can be said is addressed by a collection of results known collectively as Bertini's theorem. The topology of hyperplane sections is studied in the topic of the Lefschetz hyperplane theorem and its refinements. Because the dimension drops by one in taking hyperplane sections, the process is potentially an inductive method for understanding varieties of higher dimension. A basic tool for that is the Lefschetz pencil. References Algebraic geometry
https://en.wikipedia.org/wiki/Time%20base%20correction
Time base correction (TBC) is a technique to reduce or eliminate errors caused by mechanical instability present in analog recordings on mechanical media. Without time base correction, a signal from a videotape recorder (VTR) or videocassette recorder (VCR) cannot be mixed with other, more time-stable devices found in television studios and post-production facilities. Most broadcast quality VCRs have simple time base correctors built in though external TBCs are often used. Some high-end domestic analog video recorders and camcorders also include a TBC circuit, which typically can be switched off if required. Time base correction counteracts errors by buffering the video signal as it comes off the videotape at an unsteady rate, and releasing it at a steady rate. TBCs also allow a variable delay in the video stream. By adjusting the rate and delay using a waveform monitor and a vectorscope, the corrected signal can now match the timing of the other devices in the system. If all of the devices in a system are adjusted so their signals meet the video switcher at the same time and at the same rate, the signals can be mixed. A single master clock or sync generator provides the reference for all of the devices' clocks. Video correction As far back as 1956, professional reel-to-reel audio tape recorders relying on mechanical stability alone were stable enough that pitch distortion could be below audible level without time base correction. However, the higher sensitivity of video recordings meant that even the best mechanical solutions still resulted in detectable distortion of the video signals and difficulty locking to downstream devices. A video signal consists of picture information but also sync and subcarrier signals which allow the image to be framed up square on the monitor, reproduce colors accurately and, importantly, allow the combination and switching of two or more video signals. Types Physically there are only 4 types, dedicated IC, add-in cards for p
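The buffering idea described above (read at an unsteady rate, release at a steady rate) can be illustrated with a small, purely conceptual Python sketch; the jitter model and sample names are invented and do not correspond to any real TBC hardware.

from collections import deque
import random

# Samples come off the "tape" at an unsteady rate, a FIFO buffer absorbs
# the timing error, and the output side re-clocks them at a fixed rate.
buffer = deque()

def read_from_tape(n_lines):
    for i in range(n_lines):
        arrival_time = i + random.uniform(-0.3, 0.3)   # unsteady time base
        buffer.append((arrival_time, f"scan line {i}"))

def release_at_steady_rate():
    for output_tick, (arrival_time, line) in enumerate(sorted(buffer)):
        print(f"output clock {output_tick}: {line} (arrived at {arrival_time:+.2f})")

read_from_tape(5)
release_at_steady_rate()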
https://en.wikipedia.org/wiki/Ray-tracing%20hardware
Ray-tracing hardware is special-purpose computer hardware designed for accelerating ray tracing calculations. Introduction: Ray tracing and rasterization The problem of rendering 3D graphics can be conceptually presented as finding all intersections between a set of "primitives" (typically triangles or polygons) and a set of "rays" (typically one or more per pixel). Up to 2010, all typical graphic acceleration boards, called graphics processing units (GPUs), used rasterization algorithms. The ray tracing algorithm solves the rendering problem in a different way. In each step, it finds all intersections of a ray with a set of relevant primitives of the scene. Both approaches have their own benefits and drawbacks. Rasterization can be performed using devices based on a stream computing model, one triangle at a time, and access to the complete scene is needed only once. The drawback of rasterization is that non-local effects, required for an accurate simulation of a scene, such as reflections and shadows, are difficult to compute, and refractions nearly impossible. The ray tracing algorithm is inherently suitable for scaling by parallelization of individual ray renders. However, anything other than ray casting requires recursion of the ray tracing algorithm (and random access to the scene graph) to complete its analysis, since reflected, refracted, and scattered rays require that various parts of the scene be re-accessed in a way not easily predicted. But it can easily compute various kinds of physically correct effects, providing a much more realistic impression than rasterization. The complexity of a well-implemented ray tracing algorithm scales logarithmically; this is due to objects (triangles and collections of triangles) being placed into BSP trees or similar structures, and only being analyzed if a ray intersects with the bounding volume of the binary space partition. Implementations Various implementations of ray tracing hardware have been created, both
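The bounding-volume test just described (a ray is checked against a box before any triangles inside it are examined) is what makes the logarithmic behaviour possible. Below is a small, illustrative Python version of the standard ray/axis-aligned-box "slab" test; the scene data is made up.

def ray_hits_aabb(origin, direction, box_min, box_max):
    # Slab test: does the ray origin + t*direction hit the axis-aligned box?
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:                 # ray parallel to this pair of slabs
            if o < lo or o > hi:
                return False
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            t_near = max(t_near, min(t1, t2))
            t_far = min(t_far, max(t1, t2))
    return t_near <= t_far and t_far >= 0.0

# Only if the box is hit would the triangles inside it be tested.
print(ray_hits_aabb((0, 0, -5), (0, 0, 1), (-1, -1, -1), (1, 1, 1)))  # True
print(ray_hits_aabb((5, 5, -5), (0, 0, 1), (-1, -1, -1), (1, 1, 1)))  # False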
https://en.wikipedia.org/wiki/Charles%20Buxton%20Going
Charles Buxton Going (April 4, 1863 - 1952 in France) was an American engineer, writer, and editor. Biography Born in Westchester N.Y., Going attended Columbia College School of Mines, where he graduated in 1882. Columbia University awarded him the honorary degree of M.Sc. in 1910. Mr. Going immediately began work in the Middle West in industrial and corporate management. He joined the staff of the Engineering Magazine in 1896, becoming managing editor in 1898 and editor in 1912. He did much to discern, define, and establish the profession of "industrial engineering." He became special lecturer on the subject of "industrial engineering" at Columbia, Harvard University, New York University, and the University of Chicago. Publications His writings include: 1909. Methods of the Santa Fé 1911. Principles of Industrial Engineering 1915. Preface to Ford methods and the Ford shops. Horace Lucian Arnold and Fay Leone Faurote. The Engineering magazine company, 1915. On less scholarly notes, he wrote: Summer-Fallow (1892) Star-Glow and Song (1909) Folklore and Fairy Plays (1927) In collaboration with Marie Overton Corbin (later Mrs. Going, d. May 1925), he wrote: Urchins of the Sea (1900) Urchins of the Pole (1901) References External links 1863 births 1952 deaths American business theorists American engineering writers American engineers Columbia School of Engineering and Applied Science alumni People from Westchester County, New York
https://en.wikipedia.org/wiki/IEEE%201901
The IEEE 1901 Standard, established in 2010, set the first worldwide benchmark for powerline communication tailored for uses like multimedia home networks, audio-video, and the smart grid. This standard underwent an amendment in IEEE 1901a-2019, introducing improvements to the HD-PLC physical layer (wavelet) for Internet of Things (IoT) applications. It was further updated in 2020, known as IEEE 1901-2020. The IEEE Std 1901 is a standard for high speed (up to 500 Mbit/s at the physical layer) communication devices via electric power lines, often called broadband over power lines (BPL). The standard uses transmission frequencies below 100 MHz. This standard is usable by all classes of BPL devices, including BPL devices used for the connection (<1500m to the premises) to Internet access services as well as BPL devices used within buildings for local area networks, smart energy applications, transportation platforms (vehicle), and other data distribution applications (<100m between devices). The IEEE Std 1901 standard replaced a dozen previous powerline specifications. It includes a mandatory coexistence Inter-System Protocol (ISP). The IEEE 1901 ISP prevents interference when the different BPL implementations are operated within close proximity of one another. To handle multiple devices attempting to use the line at the same time, IEEE Std 1901 supports TDMA, but CSMA/CA (also used in WiFi) is most commonly implemented by devices sold. The 1901 standard is mandatory to initiate SAE J1772 electric vehicle DC charging (AC uses PWM) and the sole powerline protocol for IEEE 1905.1 heterogeneous networking. It was highly recommended in the IEEE P1909.1 smart grid standards because those are primarily for control of AC devices, which by definition always have AC power connections - thus no additional connections are required. Updates overview The IEEE 1901 Standard was a significant step in the development of powerline communication (PLC) technologies. PLC allows for
https://en.wikipedia.org/wiki/Configuration%20Menu%20Language
Configuration Menu Language (CML) was used, in Linux kernel versions prior to 2.5.45, to configure the values that determine the composition and exact functionality of the kernel. Many possible variations in kernel functionality can exist; and customization is possible, for instance for the specifications of the exact hardware it will run on. It can also be tuned for administrator preferences. CML was written by Raymond Chen in 1993. Its question-and-answer interface allowed systematic selection particular behaviors without editing multiple system files. Eric S. Raymond wrote a menu-driven module named CML2 to replace it, but it was officially rejected. Linus Torvalds attributed the rejection in a 2007 lkml.org post to a preference for small incremental changes, and concern that the maintainer had not been involved in the rewrite. "You can't just...go do your own thing and expect it to be merged," he said, noting that Raymond "left with a splash" over the rejection. LinuxKernelConf replaced CML in kernel version 2.5.45, and remains in use for the 4.0 kernel. References External links The Linux Kernel HOWTO 2003 More recent documentation may exist but the TLDP kernel page is currently offline and under revision. The CML2 Language - Constraint based configuration for the Linux kernel and elsewhere CML2 Resources Page Linux kernel
https://en.wikipedia.org/wiki/Local%20system
In mathematics, a local system (or a system of local coefficients) on a topological space X is a tool from algebraic topology which interpolates between cohomology with coefficients in a fixed abelian group A, and general sheaf cohomology in which coefficients vary from point to point. Local coefficient systems were introduced by Norman Steenrod in 1943. Local systems are the building blocks of more general tools, such as constructible and perverse sheaves. Definition Let X be a topological space. A local system (of abelian groups/modules/...) on X is a locally constant sheaf (of abelian groups/modules...) on X. In other words, a sheaf L is a local system if every point has an open neighborhood U such that the restricted sheaf L|U is isomorphic to the sheafification of some constant presheaf. Equivalent definitions Path-connected spaces If X is path-connected, a local system of abelian groups has the same stalk L at every point. There is a bijective correspondence between local systems on X and group homomorphisms ρ: π1(X, x) → Aut(L), and similarly for local systems of modules. The map ρ giving the local system L is called the monodromy representation of L. This shows that (for X path-connected) a local system is precisely a sheaf whose pullback to the universal cover of X is a constant sheaf. This correspondence can be upgraded to an equivalence of categories between the category of local systems of abelian groups on X and the category of abelian groups endowed with an action of π1(X, x) (equivalently, Z[π1(X, x)]-modules). Stronger definition on non-connected spaces A stronger, nonequivalent definition that works for non-connected X is the following: a local system is a covariant functor from the fundamental groupoid of X to the category of modules over a commutative ring R (typically the integers or a field). This is equivalently the data of an assignment to every point x a module M_x along with a group representation ρ_x: π1(X, x) → Aut(M_x) such that the various ρ_x are compatible with change of basepoint and the induced map on fundamental grou
https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3%20Babai
László "Laci" Babai (born July 20, 1950, in Budapest) is a Hungarian professor of computer science and mathematics at the University of Chicago. His research focuses on computational complexity theory, algorithms, combinatorics, and finite groups, with an emphasis on the interactions between these fields. Life In 1968, Babai won a gold medal at the International Mathematical Olympiad. Babai studied mathematics at Faculty of Science of the Eötvös Loránd University from 1968 to 1973, received a PhD from the Hungarian Academy of Sciences in 1975, and received a DSc from the Hungarian Academy of Sciences in 1984. He held a teaching position at Eötvös Loránd University since 1971; in 1987 he took joint positions as a professor in algebra at Eötvös Loránd and in computer science at the University of Chicago. In 1995, he began a joint appointment in the mathematics department at Chicago and gave up his position at Eötvös Loránd. Work He is the author of over 180 academic papers. His notable accomplishments include the introduction of interactive proof systems, the introduction of the term Las Vegas algorithm, and the introduction of group theoretic methods in graph isomorphism testing. In November 2015, he announced a quasipolynomial time algorithm for the graph isomorphism problem. He is editor-in-chief of the refereed online journal Theory of Computing. Babai was also involved in the creation of the Budapest Semesters in Mathematics program and first coined the name. Graph isomorphism in quasipolynomial time After announcing the result in 2015, Babai presented a paper proving that the graph isomorphism problem can be solved in quasi-polynomial time in 2016, at the ACM Symposium on Theory of Computing. In response to an error discovered by Harald Helfgott, he posted an update in 2017. Honors In 1988, Babai won the Hungarian State Prize, in 1990 he was elected as a corresponding member of the Hungarian Academy of Sciences, and in 1994 he became a full member. In 1
https://en.wikipedia.org/wiki/Radeon%20X1000%20series
The R520 (codenamed Fudo) is a graphics processing unit (GPU) developed by ATI Technologies and produced by TSMC. It was the first GPU produced using a 90 nm photolithography process. The R520 is the foundation for a line of DirectX 9.0c and OpenGL 2.0 3D accelerator X1000 video cards. It is ATI's first major architectural overhaul since the R300 and is highly optimized for Shader Model 3.0. The Radeon X1000 series using the core was introduced on October 5, 2005, and competed primarily against Nvidia's GeForce 7000 series. ATI released the successor to the R500 series with the R600 series on May 14, 2007. ATI does not provide official support for any X1000 series cards for Windows 8 or Windows 10; the last AMD Catalyst for this generation is the 10.2 from 2010 up to Windows 7. AMD stopped providing drivers for Windows 7 for this series in 2015. A series of open source Radeon drivers are available when using a Linux distribution. The same GPUs are also found in some AMD FireMV products targeting multi-monitor set-ups. Delay during the development The Radeon X1800 video cards that included an R520 were released with a delay of several months because ATI engineers discovered a bug within the GPU in a very late stage of development. This bug, caused by a faulty 3rd party 90 nm chip design library, greatly hampered clock speed ramping, so they had to "respin" the chip for another revision (a new GDSII had to be sent to TSMC). The problem had been almost random in how it affected the prototype chips, making it difficult to identify. Architecture The R520 architecture is referred to by ATI as an "Ultra Threaded Dispatch Processor", which refers to ATI's plan to boost the efficiency of their GPU, instead of going with a brute force increase in the number of processing units. A central pixel shader "dispatch unit" breaks shaders down into threads (batches) of 16 pixels (4×4) and can track and distribute up to 128 threads per pixel "quad" (4 pipelines each). When a sh
https://en.wikipedia.org/wiki/Electrical%20load
An electrical load is an electrical component or portion of a circuit that consumes (active) electric power, such as electrical appliances and lights inside the home. The term may also refer to the power consumed by a circuit. This is opposed to a power supply source, such as a battery or generator, which provides power. The term is used more broadly in electronics for a device connected to a signal source, whether or not it consumes power. If an electric circuit has an output port, a pair of terminals that produces an electrical signal, the circuit connected to this terminal (or its input impedance) is the load. For example, if a CD player is connected to an amplifier, the CD player is the source, and the amplifier is the load. Load affects the performance of circuits with respect to output voltages or currents, such as in sensors, voltage sources, and amplifiers. Mains power outlets provide an easy example: they supply power at constant voltage, with electrical appliances connected to the power circuit collectively making up the load. When a high-power appliance switches on, it dramatically reduces the load impedance. The voltages will drop if the load impedance is not much higher than the power supply impedance. Therefore, switching on a heating appliance in a domestic environment may cause incandescent lights to dim noticeably. A more technical approach When discussing the effect of load on a circuit, it is helpful to disregard the circuit's actual design and consider only the Thévenin equivalent. (The Norton equivalent could be used instead, with the same results.) The Thévenin equivalent of a circuit looks like this: With no load (open-circuited terminals), all of the Thévenin source voltage V_S falls across the output; the output voltage is V_S. However, the circuit will behave differently if a load is added. Therefore, we would like to ignore the details of the load circuit, as we did for the power supply, and represent it as simply as possible. For example, if we use an input resist
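A short, self-contained Python calculation (all component values invented for illustration) shows the voltage-divider effect of adding a resistive load to a Thévenin source.

def loaded_output_voltage(v_s, r_source, r_load):
    # Output voltage of a Thevenin source v_s with internal resistance
    # r_source when a resistive load r_load is connected across the output.
    return v_s * r_load / (r_source + r_load)

v_s = 12.0          # Thevenin (open-circuit) voltage, volts
r_source = 1.0      # source/internal resistance, ohms

print(loaded_output_voltage(v_s, r_source, r_load=1000.0))  # ~11.99 V: light load
print(loaded_output_voltage(v_s, r_source, r_load=2.0))     #   8.0 V: heavy load pulls the voltage down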
https://en.wikipedia.org/wiki/Features%20new%20to%20Windows%20XP
As the next version of Windows NT after Windows 2000, as well as the successor to Windows Me, Windows XP introduced many new features but it also removed some others. User interface and appearance Graphics With the introduction of Windows XP, the C++ based software-only GDI+ subsystem was introduced to replace certain GDI functions. GDI+ adds anti-aliased 2D graphics, textures, floating point coordinates, gradient shading, more complex path management, bicubic filtering, intrinsic support for modern graphics-file formats like JPEG and PNG, and support for composition of affine transformations in the 2D view pipeline. GDI+ uses RGBA values to represent color. Use of these features is apparent in Windows XP's user interface (transparent desktop icon labels, drop shadows for icon labels on the desktop, shadows under menus, translucent blue selection rectangle in Windows Explorer, sliding task panes and taskbar buttons), and several of its applications such as Microsoft Paint, Windows Picture and Fax Viewer, Photo Printing Wizard, My Pictures Slideshow screensaver, and their presence in the basic graphics layer greatly simplifies implementations of vector-graphics systems such as Flash or SVG. The GDI+ dynamic library can be shipped with an application and used under older versions of Windows. The total number of GDI handles per session is also raised in Windows XP from 16,384 to 65,536 (configurable through the registry). Windows XP shipped with DirectX 8.1, which brings major new features to DirectX Graphics besides DirectX Audio (both DirectSound and DirectMusic), DirectPlay, DirectInput and DirectShow. Direct3D introduced programmability in the form of vertex and pixel shaders, enabling developers to write code without worrying about superfluous hardware state, and fog, bump mapping and texture mapping. DirectX 9 was released in 2003, which also sees major revisions to Direct3D, DirectSound, DirectMusic and DirectShow. Direct3D 9 added a new version of the High-
https://en.wikipedia.org/wiki/International%20Colloquium%20on%20Group%20Theoretical%20Methods%20in%20Physics
The International Colloquium on Group Theoretical Methods in Physics (ICGTMP) is an academic conference devoted to applications of group theory to physics. It was founded in 1972 by Henri Bacry and Aloysio Janner. It hosts a colloquium every two years. The ICGTMP is led by a Standing Committee, which helps select winners for the three major awards presented at the conference: the Wigner Medal (1978–2018), the Hermann Weyl Prize (since 2002) and the Weyl-Wigner Award (since 2022). Wigner Medal The Wigner Medal was an award designed "to recognize outstanding contributions to the understanding of physics through Group Theory". It was administered by The Group Theory and Fundamental Physics Foundation, a publicly supported organization. The first award was given in 1978 to Eugene Wigner at the Integrative Conference on Group Theory and Mathematical Physics. The collaboration between the Standing Committee of the ICGTMP and the Foundation ended in 2020. In 2023 a new process for awarding the Wigner Medal was created by the Foundation. The new Wigner Medal can be granted in any field of theoretical physics. The new Wigner Medals for 2020 and 2022 were granted retrospectively in 2023. The first winners of the new prize were Yvette Kosmann-Schwarzbach and Daniel Greenberger. The Standing Committee does not recognize the post-2018 Wigner Medals awarded by the Foundation as the continuation of the prize from 1978 through 2018. Weyl-Wigner Award In 2020–21, the ICGTMP Standing Committee created a new prize to replace the Wigner Medal, called the Weyl-Wigner Award. The purpose of the Weyl-Wigner Award is "to recognize outstanding contributions to the understanding of physics through group theory, continuing the tradition of The Wigner Medal that was awarded at the International Colloquium on Group Theoretical Methods in Physics from 1978 to 2018." The recipients of this prize are chosen by an international selection committee elected by the Standing Committee. The first
https://en.wikipedia.org/wiki/Foton%20%28satellite%29
Foton (or Photon) is the project name of two series of Russian science satellite and reentry vehicle programs. Although uncrewed, the design was adapted from the crewed Vostok spacecraft capsule. The primary focus of the Foton project is materials science research, but some missions have also carried experiments for other fields of research including biology. The original Foton series included 12 launches from the Plesetsk Cosmodrome from 1985 to 1999. The second series, under the name Foton-M, incorporates many design improvements over the original Foton, and is still in use. So far, there have been four launch attempts of the Foton-M. The first was in 2002 from the Plesetsk Cosmodrome, which ended in failure due to a problem in the launch vehicle. The last three were from the Baikonur Cosmodrome, in 2005, 2007, and 2014; all were successful. Both the Foton and Foton-M series used Soyuz-U (11A511U and 11A511U2) rockets as launch vehicles. Starting with the Foton-7 mission, the European Space Agency has been a partner in the Foton program. Foton-M Foton-M is a new generation of Russian robotic spacecraft for research conducted in the microgravity environment of Earth orbit. The Foton-M design is based on the design of the Foton, with several improvements including a new telemetry and telecommand unit for increased data flow rate, increased battery capacity, and a better thermal control system. It is produced by TsSKB-Progress in Samara. The launch of Foton-M1 failed because of a malfunction of the Soyuz-U launcher. The second launch (of Foton-M2) was a success. Foton-M3 was launched on 14 September 2007, carried by a Soyuz-U rocket lifting off from the Baikonur Cosmodrome in Kazakhstan with Nadezhda, a cockroach that became the first Earth creature to produce offspring that had been conceived in space. It returned successfully to Earth on 26 September 2007, landing in Kazakhstan at 7:58 GMT. Reentry The Foton capsule has limited thruster capability. As such, t
https://en.wikipedia.org/wiki/WSTM-TV
WSTM-TV (channel 3) is a television station in Syracuse, New York, United States, affiliated with NBC and The CW. It is owned by Sinclair Broadcast Group, which provides certain services to CBS affiliate WTVH (channel 5) through a local marketing agreement with Granite Broadcasting. Both stations share studios on James Street/NY 290 in the Near Northeast section of Syracuse, while WSTM-TV's transmitter is located in the town of Onondaga, New York. History The station began operations on February 15, 1950, on VHF channel 5 with the call sign WSYR-TV, moving to VHF channel 3 in 1953. It was owned by Advance Publications (the Newhouse family's company) along with the Syracuse Post-Standard, Syracuse Herald-Journal, and WSYR radio (AM 570 and FM 94.5, now WYYY). It was Syracuse's second television station, signing on a year and three months after WHEN-TV (now WTVH). It originally had facilities at the Kemper Building in Downtown Syracuse. In 1958, WSYR-AM-FM-TV moved to new studios on James Street. Unlike most NBC affiliates in two station markets, WSYR-TV did not take a secondary ABC or DuMont affiliation. WSYR-TV doubled as the NBC affiliate for Binghamton until WINR-TV (now WICZ-TV) signed-on in 1957. The station also operated a satellite station in Elmira until 1980; that station, first known as WSYE-TV and now WETM-TV, is now owned by Nexstar Broadcasting Group and fed via centralcasting facilities of a Syracuse cross-town rival, which ironically now holds the WSYR-TV call letters. It remains affiliated with NBC. The Newhouse family largely exited broadcasting in 1980. The WSYR cluster had been grandfathered after the Federal Communications Commission (FCC) banned common ownership of newspaper and broadcasting outlets, but lost this protection when Advance dismantled its broadcasting division. Channel 3 was sold to the Times Mirror Company, who—so as to comply with an FCC rule in effect at the time that prohibited TV and radio stations in the same market, but wi
https://en.wikipedia.org/wiki/GoldWave
GoldWave is a commercial digital audio editing software product developed by GoldWave Inc., first released to the public in April 1993. GoldWave product lines GoldWave: Audio editor for Microsoft Windows, iOS, Android. The iOS version runs on macOS 11 with an Apple M1-compatible processor. GoldWave Infinity: Web browser-based audio editor that also supports Linux, Mac OS X, Android, iOS. Features GoldWave has an array of features bundled which define the program. They include: Real-time graphic visuals, such as bar, waveform, spectrogram, spectrum, and VU meter. Basic and advanced effects and filters such as noise reduction, compressor/expander, volume shaping, volume matcher, pitch, reverb, resampling, and parametric EQ. Effect previewing Saving and restoring effect presets DirectX Audio plug-in support A variety of audio file formats are supported, including WAV, MP3, Windows Media Audio, Ogg, FLAC, AIFF, AU, Monkey's Audio, VOX, mat, snd, and voc Batch processing and conversion support for converting a set of files to a different format and applying effects Multiple undo levels Edit multiple files at once Support for editing large files Storage option available to use RAM GoldWave Supported versions and compatibility Windows A shareware version predating the version 5 series is still available for download at the official website. Versions up to 3.03 are 16-bit applications and cannot run in 64-bit versions of Windows. All versions up to 4.26 can run on any 32-bit Windows operating system. Starting with version 5, the minimum supported operating system was changed to Windows ME; however, the requirements listed in the software package's HTML documentation were not updated. Starting with version 5.03, the minimum hardware requirements were increased to a 700 MHz Pentium III (500 MHz in the FAQ) and DirectX 8, compared to the 300 MHz Pentium II and DirectX 5 required by previous versions. Windows ME is supported up
https://en.wikipedia.org/wiki/Schlemm%27s%20canal
Schlemm's canal is a circular lymphatic-like vessel in the eye. It collects aqueous humor from the anterior chamber and delivers it into the episcleral blood vessels. Canaloplasty may be used to widen it. Structure Schlemm's canal is an endothelium-lined tube, resembling that of a lymphatic vessel. On the inside of the canal, nearest to the aqueous humor, it is covered and held open by the trabecular meshwork. This creates outflow resistance against the aqueous humor. Development While Schlemm's canal has generally been considered as a vein or a scleral venous sinus, the canal is similar to the lymphatic vasculature. It is never filled with blood in physiological settings as it does not receive arterial blood circulation. Schlemm's canal displays several features of lymphatic endothelium, including the expression of PROX1, VEGFR3, CCL21, FOXC2, but lacked the expression of LYVE1 and PDPN. It develops via a unique mechanism involving the transdifferentiation of venous endothelial cells in the eye into lymphatic-like endothelial cells. This developmental morphogenesis of the canal is sensitive to the inhibition of lymphangiogenic growth factors. In adults, the administration of the lymphangiogenic growth factor VEGFC enlarged the Schlemm's canal, which was associated with a reduction in intraocular pressure. In the combined absence of angiopoietin 1 and angiopoietin 2, Schlemm's canal and episcleral lymphatic vasculature completely failed to develop. Function Schlemm's canal collects aqueous humor from the anterior chamber. It delivers it into the episcleral blood vessels via aqueous veins. Clinical significance Canaloplasty Canaloplasty is a procedure to restore the eye’s natural drainage system to provide sustained reduction of intraocular pressure. Microcatheters are used in a simple and minimally invasive procedure. A surgeon will create a tiny incision to gain access to Schlemm's canal. A microcatheter circumnavigates Schlemm's canal around the iris, enl
https://en.wikipedia.org/wiki/Samsung%20Display
Samsung Display (Hangul: 삼성디스플레이) ) is a company selling display devices with OLED and QD-OLED technology. Display markets include smartphones, TVs, laptops, computer monitors, smartwatches, VR, game consoles, and automotive applications. Headquartered in South Korea, Samsung Display has production plants in China, Vietnam, and India, and operates sales offices in six countries. Samsung Display enabled the first mass-production of OLED and quantum dot display and aims to develop next-generation technology such as slidable, rollable and stretchable panels. As the LCD business spun off from Samsung Electronics, Samsung Display Corporation was established on April 1, 2012. The company launched on July 1 by merging Samsung Electronics’ LCD business, S-LCD Corporation(manufacturer of amorphous TFT LCD panels) and Samsung Mobile Display(Samsung’s OLED arm). By combining the OLED and LCD businesses, Samsung Display became the world's largest display company. History January 1991: Samsung Electronics launched TFT-LCD business February 1995: Operated TFT-LCD line for the first time domestically November 2003: Invested for 4.5 generation AMOLED mass-production for the first time in the world July 2004: A joint venture S-LCD Corporation between Samsung Electronics and Sony Corporation was established. April 2005: S-LCD begins shipment of seventh-generation TFT LCD panels for LCD TVs. August 2007: S-LCD begins shipment of eighth-generation TFT LCD panels for LCD TVs. October 2007: Started to mass produce AMOLED for the first time in the world March 2009: Exceed production of AMOLED one million monthly December 2011: The company's partners announce that Samsung will acquire Sony's entire stake in the joint venture, making S-LCD Corporation a wholly owned subsidiary of Samsung Electronics. July 1, 2012: S-LCD and Samsung Mobile Display merge to create Samsung Display August 2014: Samsung Display mass-produced the world’s first curved edge display panel, featured in the Galax
https://en.wikipedia.org/wiki/Uranium%20dioxide
Uranium dioxide or uranium(IV) oxide (UO2), also known as urania or uranous oxide, is an oxide of uranium, and is a black, radioactive, crystalline powder that naturally occurs in the mineral uraninite. It is used in nuclear fuel rods in nuclear reactors. A mixture of uranium and plutonium dioxides is used as MOX fuel. Prior to 1960, it was used as a yellow and black color in ceramic glazes and glass. Production Uranium dioxide is produced by reducing uranium trioxide with hydrogen. UO3 + H2 → UO2 + H2O at 700 °C (973 K) This reaction plays an important part in the creation of nuclear fuel through nuclear reprocessing and uranium enrichment. Chemistry Structure The solid is isostructural with (has the same structure as) fluorite (calcium fluoride), where each U is surrounded by eight O nearest neighbors in a cubic arrangement. In addition, the dioxides of cerium, thorium, and the transuranic elements from neptunium through californium have the same structures. No other elemental dioxides have the fluorite structure. Upon melting, the measured average U-O coordination reduces from 8 in the crystalline solid (UO8 cubes), down to 6.7±0.5 (at 3270 K) in the melt. Models consistent with these measurements show the melt to consist mainly of UO6 and UO7 polyhedral units, where roughly of the connections between polyhedra are corner sharing and are edge sharing. Oxidation Uranium dioxide is oxidized in contact with oxygen to the triuranium octaoxide. 3 UO2 + O2 → U3O8 at 700 °C (970 K) The electrochemistry of uranium dioxide has been investigated in detail as the galvanic corrosion of uranium dioxide controls the rate at which used nuclear fuel dissolves. See spent nuclear fuel for further details. Water increases the oxidation rate of plutonium and uranium metals. Carbonization Uranium dioxide is carbonized in contact with carbon, forming uranium carbide and carbon monoxide. UO2 + 4 C → UC2 + 2 CO This process must be done under an inert gas as uranium car
https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Graham%20problem
In combinatorial number theory, the Erdős–Graham problem is the problem of proving that, if the set of integers greater than one is partitioned into finitely many subsets, then one of the subsets can be used to form an Egyptian fraction representation of unity. That is, for every r, and every r-coloring of the integers greater than one, there is a finite monochromatic subset S of these integers such that the reciprocals of its elements sum to one: ∑_{n ∈ S} 1/n = 1. In more detail, Paul Erdős and Ronald Graham conjectured that, for sufficiently large r, the largest member of S could be bounded by b^r for some constant b independent of r. It was known that, for this to be true, b must be at least Euler's number e. Ernie Croot proved the conjecture as part of his Ph.D. thesis, and later (while a post-doctoral researcher at UC Berkeley) published the proof in the Annals of Mathematics. The value Croot gives for b is very large: it is at most e^167000. Croot's result follows as a corollary of a more general theorem stating the existence of Egyptian fraction representations of unity for sets C of smooth numbers in intervals of the form [N, N^(1+δ)], where C contains sufficiently many numbers so that the sum of their reciprocals is at least six. The Erdős–Graham conjecture follows from this result by showing that one can find an interval of this form in which the sum of the reciprocals of all smooth numbers is at least 6r; therefore, if the integers are r-colored there must be a monochromatic subset satisfying the conditions of Croot's theorem. A stronger form of the result, that any set of integers with positive upper density includes the denominators of an Egyptian fraction representation of one, was announced in 2021 by Thomas Bloom, a postdoctoral researcher at the University of Oxford. See also Conjectures by Erdős References External links Ernie Croot's Webpage Combinatorics Conjectures that have been proved Theorems in number theory Egyptian fractions Graham problem
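A one-line worked example of an Egyptian fraction representation of unity, the object the problem asks for, is

\[
\frac{1}{2} + \frac{1}{3} + \frac{1}{6} = 1 ,
\]

so if the integers 2, 3 and 6 all received the same color, they would already form a monochromatic solution.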
https://en.wikipedia.org/wiki/EBONE
EBONE (standing for European Backbone) was a pan-European Internet backbone. It went online in 1992 and was deactivated in July 2002. Some portions of the Ebone were sold to other companies and continue to operate today. History Formation In 1991 a certain number of research network managers, including Frode Greisen, from Denmark, Kees Neggers, director of the Dutch network SURFNet, and Francois Fluckiger at CERN sought to create a European Internet backbone open to publicly-financed academic networks and private commercial networks. The Ebone consortium was established at the RIPE meeting in Geneva in September 1991, and the network went online in 1992 after the initial IP backbone with 256 kbps links was completed. Frode Greisen became the general manager while Peter Löthberg served as de facto architect. Operation In 1996 the consortium was transformed into the Ebone Association which again established a private limited company Ebone Inc. based in Denmark. In 1998 the Ebone Association sold 75% of the company to Hermes Europe Railtel, and in 1999 the remaining 25% was bought by Global Telesystems Group Inc. (GTS) which had then acquired Hermes Europe Railtel. The Ebone backbone increased by a factor of 40,000 in speed over nine years from 256 kbit/s to 10 Gbit/s and the traffic roughly followed, see table below: In year 2000 Ebone provided international transit for around 100 Internet Service Providers based in most of the European countries. In 2001 GTS re-branded all its data communications products as Ebone and Ebone was one of Europe's leading broadband optical and IP network service providers. Shutdown In October 2001, KPNQwest acquired Ebone and the Central Europe businesses of GTS and completed their EuroRings network. Following the Dot com crash and various investigations, KPNQwest declared bankruptcy. In June 2002, it was announced that the Ebone Network Operations Center would be shut down, and the Ebone would be deactivated. Employees in the
https://en.wikipedia.org/wiki/End%20of%20interrupt
An end of interrupt (EOI) is a computing signal sent to a programmable interrupt controller (PIC) to indicate the completion of interrupt processing for a given interrupt. Interrupts are used to facilitate hardware signals sent to the processor that temporarily stop a running program and allow a special program, an interrupt handler, to run instead. An EOI is used to cause a PIC to clear the corresponding bit in the in-service register (ISR), and thus allow more interrupt requests (IRQs) of equal or lower priority to be generated by the PIC. EOIs may indicate the interrupt vector implicitly or explicitly. An explicit EOI vector is indicated with the EOI, whereas an implicit EOI vector will typically use a vector as indicated by the PICs priority schema, for example the highest vector in the ISR. Also, EOIs may be sent at the end of interrupt processing by an interrupt handler, or the operation of a PIC may be set to auto-EOI at the start of the interrupt handler. See also Intel 8259 – notable PIC from Intel Advanced Programmable Interrupt Controller (APIC) OpenPIC and IBM MPIC Inter-processor interrupt (IPI) Interrupt latency Non-maskable interrupt (NMI) IRQL (Windows) References Interrupts
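The ISR-clearing behaviour described above can be modelled in a few lines of Python. This is a conceptual sketch of a generic PIC, not the register layout or command set of any particular controller such as the Intel 8259; the class and method names are invented.

class ToyPIC:
    # Minimal model of a PIC's in-service register (ISR).
    def __init__(self):
        self.isr = 0  # bit n set => IRQ n is currently being serviced

    def begin_service(self, irq):
        self.isr |= (1 << irq)

    def eoi_specific(self, irq):
        # Explicit EOI: the handler names the vector being retired.
        self.isr &= ~(1 << irq)

    def eoi_nonspecific(self):
        # Implicit EOI: retire the highest-priority in-service IRQ,
        # modelled here as the lowest-numbered set bit.
        if self.isr:
            self.isr &= ~(self.isr & -self.isr)

pic = ToyPIC()
pic.begin_service(3)
pic.begin_service(5)
pic.eoi_nonspecific()    # clears IRQ 3, the highest-priority one in service
print(bin(pic.isr))      # 0b100000 -> only IRQ 5 remains in service
pic.eoi_specific(5)
print(bin(pic.isr))      # 0b0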
https://en.wikipedia.org/wiki/Straight%20and%20Crooked%20Thinking
Straight and Crooked Thinking, first published in 1930 and revised in 1953, is a book by Robert H. Thouless which describes, assesses and critically analyses flaws in reasoning and argument. Thouless describes it as a practical manual, rather than a theoretical one. Synopsis Thirty-eight fallacies are discussed in the book. Among them are: No. 3. proof by example, biased sample, cherry picking No. 6. ignoratio elenchi: "red herring" No. 9. false compromise/middle ground No. 12. argument in a circle No. 13. begging the question No. 17. equivocation No. 18. false dilemma: black and white thinking No. 19. continuum fallacy (fallacy of the beard) No. 21. ad nauseam: "argumentum ad nauseam" or "argument from repetition" or "argumentum ad infinitum" No. 25. style over substance fallacy No. 28. appeal to authority No. 31. thought-terminating cliché No. 36. special pleading No. 37. appeal to consequences No. 38. appeal to motive See also List of cognitive biases List of common misconceptions List of fallacies List of memory biases List of topics related to public relations and propaganda References 1953 non-fiction books Cognitive biases Books about bias Logic books
https://en.wikipedia.org/wiki/SCSI%20Enclosure%20Services
SCSI Enclosure Services (SES) is a protocol for more modern SCSI enclosure products. An initiator can communicate with the enclosure using a specialized set of SCSI commands to access power, cooling, and other non-data characteristics. SES devices There are two major classes of SES devices: Attached enclosure services devices allow SES communication through a logical unit within one SCSI disk drive located in the enclosure. The disk-drive then communicates with the enclosure by some other method, the only commonly used one being Enclosure Services Interface (ESI). In fault-tolerant enclosures, more than one disk-drive slot has ESI enabled to allow SES communications to continue even after the failure of any of the disk-drives. The definition of the ESI protocols is owned by an ANSI committee and defined in their specifications ANSI SFF-8067 and SFF-8045. Standalone enclosure services enclosures have a separate SES processor which occupies its own address on the SCSI bus. The protocol for this uses direct SCSI commands. An enclosure can be fault-tolerant by containing two SES processors. SES commands The SCSI initiator communicates with an SES device using two SCSI commands: Send Diagnostic and Receive Diagnostic Results. Some universal SCSI commands such as Inquiry are also used with standalone enclosure services to perform basic functions such as initial discovery of the devices. SES elements The SCSI Send Diagnostic and Receive Diagnostic Results commands can be addressed to a specific SES element in the enclosure. There are many different element codes defined to cover a wide range of devices. The most common SES elements are power supply, cooling fan, temperature sensor, and UPS. The SCSI command protocols assume that there may be more than one of each device type so they must be each given an 8-bit address. When an SES controller is interrogated for the status of an SES element, the response includes a 4-bit element status code. The most commo
https://en.wikipedia.org/wiki/3D-Calc
3D-Calc is a 3-dimensional spreadsheet program for the Atari ST computer. The first version of the program was released in April 1989 and was distributed by ISTARI bvba, Ghent, Belgium. History Starting May 1991, the English version was distributed by MichTron/Microdeal, Cornwall, UK. In January 1992, version 2.3 of the program was licensed to Atari Corp., who released Dutch and French translations. In 1994, version 3 of 3D-Calc (renamed 3D-Calc+) was licensed to the UK magazine ST Applications. Today, 3D-Calc software is Freeware ("Public domain without source code") and can be downloaded freely. In 1992–1993, it was ported to MS-DOS to serve as the basis of a new statistics software package MedCalc. Features and reception The spreadsheet contains 13 pages of 2048 rows and 256 columns. Cells of different pages could be cross-referenced. 3D Calc offers GEM based user interface with icons, menus and function keys and users can work on three spreadsheets at the same time with up to three GEM windows for each. The application supports on-screen help via the "?" menu and can import data from Lotus 1-2-3 (with some limitations). The program includes an integrated scripting language, and an integrated text module with a data import feature from the spreadsheet, allowing formatted data output, mailmerge, label printing etc. Peter Crush writing for the ST Format and ST Applications magazines commended 3D Calc for rich features including easy to use graph generation, but criticized no support for colours. References 1989 software Spreadsheet software Atari ST software Freeware
https://en.wikipedia.org/wiki/World%20Geographic%20Reference%20System
The World Geographic Reference System (GEOREF) is a geocode, a grid-based method of specifying locations on the surface of the Earth. GEOREF is essentially based on the geographic system of latitude and longitude, but using a simpler and more flexible notation. GEOREF was used primarily in aeronautical charts for air navigation, particularly in military or inter-service applications, but it is rarely seen today. However, GEOREF can be used with any map or chart that has latitude and longitude printed on it. Quadrangles GEOREF is based on the standard system of latitude and longitude, but uses a simpler and more concise notation. GEOREF divides the Earth's surface into successively smaller quadrangles, with a notation system used to identify each quadrangle within its parent. Unlike latitude/longitude, GEOREF runs in one direction horizontally, east from the 180° meridian; and one direction vertically, north from the South Pole. GEOREF can easily be adapted to give co-ordinates with varying degrees of precision, using a 2–12 character geocode. GEOREF co-ordinates are defined by successive divisions of the Earth's surface, as follows: The first level of GEOREF divides the world into quadrangles each measuring 15 degrees of longitude by 15 degrees of latitude; this results in 24 zones of longitude and 12 bands of latitude. A longitude zone is identified by a letter from A to Z (omitting I and O) starting at 180 degrees and progressing eastward through the full 360 degrees of longitude; a latitude band is identified by a letter from A through M (omitting I) northward from the south pole. Hence, any 15 degree quadrangle can be identified by two letters; the easting (longitude) is given first, followed by the northing (latitude). These two letters are the first two characters of a full GEOREF coordinate. Each 15-degree quadrangle is further divided into smaller quadrangles, measuring 1 degree of longitude by 1 degree of latitude. These quadrangles are lettered A to
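A hedged Python sketch of the first, 15-degree level of this encoding follows; the helper name is invented, only the two leading letters described above are computed, and the finer 1-degree and minute levels are omitted.

def georef_15deg(lon, lat):
    # Return the two-letter 15-degree GEOREF quadrangle for a lon/lat pair.
    # Longitude zones run east from 180 W and use A-Z omitting I and O;
    # latitude bands run north from the South Pole and use A-M omitting I.
    lon_letters = "ABCDEFGHJKLMNPQRSTUVWXYZ"   # 24 letters, I and O omitted
    lat_letters = "ABCDEFGHJKLM"               # 12 letters, I omitted

    zone = int((lon + 180) // 15)   # 0..23, counted eastward from 180 W
    band = int((lat + 90) // 15)    # 0..11, counted northward from the pole
    zone = min(zone, 23)            # keep lon = +180 and lat = +90 in range
    band = min(band, 11)
    return lon_letters[zone] + lat_letters[band]

print(georef_15deg(0.0, 0.0))     # 'NG': quadrangle just east of Greenwich, just north of the equator
print(georef_15deg(-76.0, 39.0))  # 'GJ': quadrangle containing Washington, D.C.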
https://en.wikipedia.org/wiki/Molecular%20anatomy
Molecular anatomy is the subspecialty of microscopic anatomy concerned with the identification and description of molecular structures of cells, tissues, and organs in an organism. References Anatomy
https://en.wikipedia.org/wiki/DBC%201012
The DBC/1012 Data Base Computer was a database machine introduced by Teradata Corporation in 1984, as a back-end data base management system for mainframe computers. The DBC/1012 harnessed multiple Intel microprocessors, each with its own dedicated disk drive, by interconnecting them with the Ynet switching network in a massively parallel processing system. The DBC/1012 was designed to manage databases up to one terabyte (1,000,000,000,000 characters) in size; "1012" in the name refers to "10 to the power of 12". Major components included: Mainframe-resident software to manage users and transfer data Interface processor (IFP) - the hardware connection between the mainframe and the DBC/1012 Ynet - a custom-built system interconnect that supported broadcast and sorting Access module processor (AMP) - the unit of parallelism: includes microprocessor, disk drive, file system, and database software System console and printer TEQUEL (TEradata QUEry Language) - an extension of SQL The DBC/1012 was designed to scale up to 1024 Ynet interconnected processor-disk units. Rows of a relation (table) were distributed by hashing on the primary database index. The DBC/1012 used a 474 megabyte Winchester disk drive with an average seek time of 18 milliseconds. The disk drive was capable of transferring data at 1.9 MB/s although in practice the sustainable data rate was lower because the IO pattern tended towards random access and transfer lengths of 8 to 12 kilobytes. The processor cabinet was 60 inches high and 27 inches wide, weighed 450 pounds, and held up to 8 microprocessor units. The storage cabinet was 60 inches high and 27 inches wide, weighed 625 pounds, and held up to 4 disk storage units. The DBC/1012 preceded the advent of redundant array of independent disks (RAID) technology, so data protection was provided by the "fallback" feature, which kept a logical copy of rows of a relation on different AMPs. The collection of AMPs that provided this protection
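The row-distribution idea (hash the primary index value to pick an AMP, so rows spread roughly evenly across the processor/disk units) can be sketched in a few lines of Python; the table, hash function, and AMP count are illustrative and are not the actual Teradata hashing algorithm.

import zlib

NUM_AMPS = 8   # a real DBC/1012 could scale toward 1024 processor-disk units

def amp_for_row(primary_index_value):
    # Map a primary-index value to an AMP by hashing (illustrative only).
    key = str(primary_index_value).encode()
    return zlib.crc32(key) % NUM_AMPS

# Each AMP ends up owning roughly an equal share of the rows.
rows = [("cust-%04d" % i, "row payload") for i in range(1000)]
placement = {}
for pk, _ in rows:
    placement.setdefault(amp_for_row(pk), []).append(pk)

for amp in sorted(placement):
    print(amp, len(placement[amp]))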
https://en.wikipedia.org/wiki/List%20of%20metalworking%20occupations
Metalworking occupations include: The oldest of the metalworking occupations Smith (a.k.a. metalsmith), such as blacksmith or silversmith Jeweler Founder The machining trades Production machinist, which may involve various related machining occupations that often overlap: Manual machine tool operator CNC programmer, the person who takes the drawings made by engineers and draftspersons and creates a CNC program to cut the part CNC setup hand, the person who sets up the machine and its tooling before the operator takes over CNC operator, the person who: feeds stock to the machine, changes cutting inserts, checks quality, cleans and lubricates the machine, etc. Tool and die maker and related machining occupations: Moldmaker Patternmaker Modelmaker The fabricating and erecting trades Steel erector, also known as an ironworker Welder Boilermaker Pipefitter Millwright Blacksmith Gunsmith Marquetarian (though often dealing exclusively with wood, ivory or other non-metallic materials) Farrier Furniture maker Pewterer Damascener Other occupations within a metalworking plant Laborer, unskilled and semiskilled workers who support metalworking operations and who often develop their skills to move into the other occupations listed here. Sometimes laborers' positions may be called by more specific names, such as oiler. Technology in general, and automation especially, tends to exert pressure against laborer-type job creation, with the lowest-skilled positions being most at risk. For example, so-called labor gangs, groups of men assigned to shoveling or other manual tasks, are not employed nearly as much as they used to be, especially in developed economies. Some jobs, despite being classifiable as semiskilled work, actually require quite a bit of talent and experience to be done well, for example, band saw operators or buffing and polishing workers. Rigger, a person specializing in the skills needed to move large, heavy objects Heavy equipment o
https://en.wikipedia.org/wiki/Enhanced%20biological%20phosphorus%20removal
Enhanced biological phosphorus removal (EBPR) is a sewage treatment configuration applied to activated sludge systems for the removal of phosphate. The common element in EBPR implementations is the presence of an anaerobic tank (nitrate and oxygen are absent) prior to the aeration tank. Under these conditions a group of heterotrophic bacteria, called polyphosphate-accumulating organisms (PAO), are selectively enriched in the bacterial community within the activated sludge. In the subsequent aerobic phase, these bacteria can accumulate large quantities of polyphosphate within their cells and the removal of phosphorus is said to be enhanced. Generally speaking, all bacteria contain a fraction (1-2%) of phosphorus in their biomass due to its presence in cellular components, such as membrane phospholipids and DNA. Therefore, as bacteria in a wastewater treatment plant consume nutrients in the wastewater, they grow and phosphorus is incorporated into the bacterial biomass. When PAOs grow they not only consume phosphorus for cellular components but also accumulate large quantities of polyphosphate within their cells. Thus, the phosphorus fraction of phosphorus accumulating biomass is 5-7%. In mixed bacterial cultures the phosphorus content will be at most 3-4% of the total organic mass. If additional chemical precipitation takes place, for example to reach discharge limits, the P-content could be higher, but that is not affected by EBPR. This biomass is then separated from the treated (purified) water at the end of the process and the phosphorus is thus removed. Thus if PAOs are selectively enriched by the EBPR configuration, considerably more phosphorus is removed, compared to the relatively poor phosphorus removal in conventional activated sludge systems. See also List of waste-water treatment technologies References Further reading External links Handbook Biological Waste Water Treatment - Principles, Configuration and Model EPBR Metagenomics: The Solution to Pollu
https://en.wikipedia.org/wiki/Digital%20Universe
Digital Universe was a free online information service founded in 2006. The project aimed to create a "network of portals designed to provide high-quality information and services to the public". Subject matter experts were to have been responsible for reviewing and approving content; contributors were to have been both experts (researchers, scholars, educators) and the public. The project was founded in 2005 by Joe Firmage, CEO of ManyOne, with Bernard Haisch as the president. It launched in early 2006. Larry Sanger was a director, and helped with the launch of the project's Encyclopedia of Earth. Sanger left in late 2006 to launch Citizendium. As of 2019, the website was nonfunctional. Characteristics Goals In December 2005, when the project was announced, the founders' goal was to create a worldwide network of researchers, scholars, and educators, to become "the PBS of the Web." While the public will be invited to contribute to some articles in the Digital Universe encyclopedia, they will be supervised by "stewards" whose role is to guarantee quality and accuracy of the articles. In addition, parts of the Digital Universe will be editable only by credentialed experts. Multi-tiered system The expert wiki, is expected to be written and managed by experts. The public wiki, will be editable by members of the educated public. However, according to Sanger, only registered users who have provided their real names will be permitted to edit this wiki. According to Sanger, an article rating system will be used for articles in the public wiki. Some of the 3-D graphical interface features will require the use of a Mozilla-based browser developed by ManyOne Networks, which they say will be made available free of charge. Some content will be available only to ManyOne subscribers. Content Around 2000 content pages existed as of August 2007. The Digital Universe claims the following featured portals: Earth, Energy, The Arctic, Texas Environment, U.S. Government, and Sal
https://en.wikipedia.org/wiki/Zener%20pinning
Zener pinning is the influence of a dispersion of fine particles on the movement of low- and high-angle grain boundaries through a polycrystalline material. Small particles act to prevent the motion of such boundaries by exerting a pinning pressure which counteracts the driving force pushing the boundaries. Zener pinning is very important in materials processing as it has a strong influence on recovery, recrystallization and grain growth. Origin of the pinning force A boundary is an imperfection in the crystal structure and as such is associated with a certain quantity of energy. When a boundary passes through an incoherent particle, the portion of boundary that would be inside the particle essentially ceases to exist. In order to move past the particle some new boundary must be created, and this is energetically unfavourable. While the region of boundary near the particle is pinned, the rest of the boundary continues trying to move forward under its own driving force. This results in the boundary becoming bowed between those points where it is anchored to the particles. Mathematical description The figure illustrates a boundary intersecting with an incoherent particle of radius r. The pinning force acts along the line of contact between the boundary and the particle, i.e., a circle of diameter 2r cos θ, where θ is the angle between the boundary and the particle surface at the contact line. The force per unit length of boundary in contact is γ sin θ, where γ is the interfacial energy. Hence, the total force acting on the particle-boundary interface is F = (2πr cos θ)(γ sin θ) = πrγ sin 2θ. The maximum restraining force occurs when θ = 45°, so Fmax = πrγ. In order to determine the pinning force resulting from a given dispersion of particles, Clarence Zener made several important assumptions: The particles are spherical. The passage of the boundary does not alter the particle-boundary interaction. Each particle exerts the maximum pinning force on the boundary, regardless of contact position. The contacts between particles and boundaries are completely random. The number density of particles on the boundary is
https://en.wikipedia.org/wiki/Domain%20analysis
In software engineering, domain analysis, or product line analysis, is the process of analyzing related software systems in a domain to find their common and variable parts. It is a model of wider business context for the system. The term was coined in the early 1980s by James Neighbors. Domain analysis is the first phase of domain engineering. It is a key method for realizing systematic software reuse. Domain analysis produces domain models using methodologies such as domain specific languages, feature tables, facet tables, facet templates, and generic architectures, which describe all of the systems in a domain. Several methodologies for domain analysis have been proposed. The products, or "artifacts", of a domain analysis are sometimes object-oriented models (e.g. represented with the Unified Modeling Language (UML)) or data models represented with entity-relationship diagrams (ERD). Software developers can use these models as a basis for the implementation of software architectures and applications. This approach to domain analysis is sometimes called model-driven engineering. In information science, the term "domain analysis" was suggested in 1995 by Birger Hjørland and H. Albrechtsen. Domain analysis techniques Several domain analysis techniques have been identified, proposed and developed due to the diversity of goals, domains, and involved processes. DARE: Domain Analysis and Reuse Environment , Feature-Oriented Domain Analysis (FODA) IDEF0 for Domain Analysis Model Oriented Domain Analysis and Engineering References See also Domain engineering Feature Model Product Family Engineering Domain-specific language Model-driven engineering Software design
https://en.wikipedia.org/wiki/Out-of-band%20agreement
In the exchange of information over a communication channel, an out-of-band agreement is an agreement or understanding between the communicating parties that is not included in any message sent over the channel but which is relevant for the interpretation of such messages. By extension, in a client–server or provider-requester setting, an out-of-band agreement is an agreement or understanding that governs the semantics of the request/response interface but which is not part of the formal or contractual description of the interface specification itself. See also API Contract Out-of-band Off-balance-sheet External links SakaiProject definition Computer networking
https://en.wikipedia.org/wiki/The%20Sixth%20Finger
"The Sixth Finger" is an episode of the original The Outer Limits television show. It first aired on 14 October 1963, during the first season. Plot Working in a remote Welsh mining town, a rogue scientist, Professor Mathers, discovers a process that affects the speed of evolutionary mutation. Mathers suffers guilt for his role in developing a super-destructive atomic bomb, and hopes his new discovery will better the human race. A disgruntled miner, Gwyllm Griffiths, volunteers for an experiment that will enable the professor to create a being with enhanced mental capabilities. As a man sent forward equal to 20,000 years of evolution, Gwyllm soon begins growing an overdeveloped cortex and a sixth finger on each hand. When the mutation process begins to operate independently of the professor's influence, Gwyllm takes control of the experiment. Now equal to 1 million years of evolution, and equipped with superior intelligence and powers of mind, such as telekinesis, that are capable of great destruction, Gwyllm seeks vengeance on the mining town which he loathes. When he meets 2 motorcycle cops, he says, "Your ignorance makes me ill and angry ... your savageness ... must end!" However, before he acts on his thoughts, Gwyllm holds back, realizing he has by now "evolved beyond hatred or revenge, or even the desire for power," and instead longs for "when the mind will cast off the hamperings of the flesh and become all thought and no matter – a vortex of pure intelligence in space." Gwyllm enlists the help of his girlfriend, Cathy Evans, to operate the machine to push his evolution even further forward. Instead, out of love for him, Cathy reverses the process at the last second, bringing Gwyllm back to his former self. But, the out-of-control reversal is too much for Gwyllm, and he slowly succumbs to the adverse effects while Cathy comforts him. Production Regarding Ellis St. Joseph's original script, a number of scenes and characters were removed or condensed to sa
https://en.wikipedia.org/wiki/Finders%20Keepers%20%281985%20video%20game%29
Finders Keepers is a video game written by David Jones and the first game in the Magic Knight series. It was published on the Mastertronic label for the ZX Spectrum, Amstrad CPC, MSX, Commodore 64, and Commodore 16 in 1985. Published in the United Kingdom at the budget price of £1.99. Finders Keepers is a platform game with some maze sections. On the ZX Spectrum it sold more than 117,000 copies and across all 8-bit formats more than 330,000 copies, making it Mastertronic's second best-selling original game after BMX Racers. Plot Magic Knight has been sent to the Castle of Spriteland by the King of Ibsisima in order to find a special present for Princess Germintrude. If Magic Knight is successful in his quest, he may have proved himself worthy of joining the famous "Polygon Table", a reference to the mythical Round Table from the legends of King Arthur. Gameplay The hero starts in the King's throne room and is transported, via a teleporter, to the castle. The castle is made up of two types of playing area: flick-screen rooms in the manner of a platform game and two large scrolling mazes. On the ZX Spectrum, Amstrad CPC, and MSX these are "Cold Upper Maze" and the "Slimey Lower Maze"; on the Commodore 64 they consist of "The Castle Gardens" and "The Castle Dungeons". An additional aspect of the gameplay is the ability to collect objects (found in both the rooms and the mazes) scattered around the castle and sell them for money. Some of these objects can combine or react to create an object of higher value (for example, the bar of lead and the philosopher's stone react to create a bar of gold). Both the amount of money Magic Knight is carrying and the market value of his inventory are displayed on-screen. The buying and selling of objects is done with the various traders who live in the castle. The Castle of Spriteland is full of dangerous creatures who inhabit its many rooms as well as both of its mazes and collision with these saps Magic Knight's strength. If h
https://en.wikipedia.org/wiki/Email%20spoofing
Email spoofing is the creation of email messages with a forged sender address. The term applies to email purporting to be from an address which is not actually the sender's; mail sent in reply to that address may bounce or be delivered to an unrelated party whose identity has been faked. Disposable email address or "masked" email is a different topic, providing a masked email address that is not the user's normal address, which is not disclosed (for example, so that it cannot be harvested), but forwards mail sent to it to the user's real address. The original transmission protocols used for email do not have built-in authentication methods: this deficiency allows spam and phishing emails to use spoofing in order to mislead the recipient. More recent countermeasures have made such spoofing from internet sources more difficult but they have not eliminated it completely; few internal networks have defences against a spoof email from a colleague's compromised computer on that network. Individuals and businesses deceived by spoof emails may suffer significant financial losses; in particular, spoofed emails are often used to infect computers with ransomware. Technical details When a Simple Mail Transfer Protocol (SMTP) email is sent, the initial connection provides two pieces of address information: MAIL FROM: - generally presented to the recipient as the Return-path: header but not normally visible to the end user, and by default no checks are done that the sending system is authorized to send on behalf of that address. RCPT TO: - specifies which email address the email is delivered to, is not normally visible to the end user but may be present in the headers as part of the "Received:" header. Together, these are sometimes referred to as the "envelope" addressing – an analogy to a traditional paper envelope. Unless the receiving mail server signals that it has problems with either of these items, the sending system sends the "DATA" command, and typically sends severa
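The separation between the envelope sender and the visible From: header can be illustrated by inspecting a received message with Python's standard email library. This is a minimal sketch; the message and addresses are invented, and a mismatch alone is not proof of spoofing (forwarders and mailing lists cause legitimate mismatches).

```python
import email
from email import policy

# An invented message in which the envelope-derived Return-Path and the
# visible From: header name different addresses.
raw = b"""Return-Path: <bounce@example.net>
From: Alice <alice@example.org>
To: bob@example.com
Subject: hello

Hi Bob
"""

msg = email.message_from_bytes(raw, policy=policy.default)
print("envelope sender (MAIL FROM):", msg["Return-Path"])
print("header From:               ", msg["From"])
```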
https://en.wikipedia.org/wiki/Placement%20%28electronic%20design%20automation%29
Placement is an essential step in electronic design automation — the portion of the physical design flow that assigns exact locations for various circuit components within the chip's core area. An inferior placement assignment will not only affect the chip's performance but might also make it non-manufacturable by producing excessive wire-length, which is beyond available routing resources. Consequently, a placer must perform the assignment while optimizing a number of objectives to ensure that a circuit meets its performance demands. Together, the placement and routing steps of IC design are known as place and route. A placer takes a given synthesized circuit netlist together with a technology library and produces a valid placement layout. The layout is optimized according to the aforementioned objectives and ready for cell resizing and buffering — a step essential for timing and signal integrity satisfaction. Clock-tree synthesis and Routing follow, completing the physical design process. In many cases, parts of, or the entire, physical design flow are iterated a number of times until design closure is achieved. In the case of application-specific integrated circuits, or ASICs, the chip's core layout area comprises a number of fixed height rows, with either some or no space between them. Each row consists of a number of sites which can be occupied by the circuit components. A free site is a site that is not occupied by any component. Circuit components are either standard cells, macro blocks, or I/O pads. Standard cells have a fixed height equal to a row's height, but have variable widths. The width of a cell is an integral number of sites. On the other hand, blocks are typically larger than cells and have variable heights that can stretch a multiple number of rows. Some blocks can have preassigned locations — say from a previous floorplanning process — which limit the placer's task to assigning locations for just the cells. In this case, the blocks are typicall
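The row-and-site model described above can be made concrete with a small legality check. This is a simplified sketch, not any particular tool's data model: cells have a fixed row height, widths are an integral number of sites, and a legal placement puts every cell on free sites of a single row.

```python
from dataclasses import dataclass

@dataclass
class Cell:
    name: str
    width_sites: int   # cell width expressed as a number of sites
    row: int           # assigned row index
    site: int          # leftmost occupied site in that row

def is_legal(cells, num_rows, sites_per_row):
    occupied = [[False] * sites_per_row for _ in range(num_rows)]
    for c in cells:
        if c.row >= num_rows or c.site + c.width_sites > sites_per_row:
            return False                      # cell falls outside the core area
        for s in range(c.site, c.site + c.width_sites):
            if occupied[c.row][s]:
                return False                  # overlap with a previously placed cell
            occupied[c.row][s] = True
    return True

cells = [Cell("u1", 3, 0, 0), Cell("u2", 2, 0, 3), Cell("u3", 4, 1, 5)]
print(is_legal(cells, num_rows=2, sites_per_row=10))   # True
```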
https://en.wikipedia.org/wiki/ASUE%20%28Germany%29
ASUE (Arbeitsgemeinschaft für Sparsamen und Umweltfreundlichen Energieverbrauch e. V.) - The Association for the Efficient and Environmentally Friendly Use of Energy - is a German association founded in 1977. The association's aim is to assist in the development and production of energy-saving and eco-friendly technologies. Its members include over 40 companies and corporations in the German gas industry. External links ASUE website References Environmental organisations based in Germany Energy organizations Energy conservation in Germany
https://en.wikipedia.org/wiki/Repetition%20code
In coding theory, the repetition code is one of the most basic linear error-correcting codes. In order to transmit a message over a noisy channel that may corrupt the transmission in a few places, the idea of the repetition code is to just repeat the message several times. The hope is that the channel corrupts only a minority of these repetitions. This way the receiver will notice that a transmission error occurred, since the received data stream is not the repetition of a single message, and moreover, the receiver can recover the original message by looking at the received message in the data stream that occurs most often. Because of its poor error-correcting performance coupled with its low code rate (ratio between useful information symbols and actually transmitted symbols), other error correction codes are preferred in most cases. The chief attraction of the repetition code is the ease of implementation. Code parameters In the case of a binary repetition code, there exist two code words - all ones and all zeros - which have a length of n. Therefore, the minimum Hamming distance of the code equals its length n. This gives the repetition code an error correcting capacity of ⌊(n − 1)/2⌋ (i.e. it will correct up to ⌊(n − 1)/2⌋ errors in any code word). If the length of a binary repetition code is odd, then it is a perfect code. The binary repetition code of length n is equivalent to the (n,1)-Hamming code. Example Consider a binary repetition code of length 3. The user wants to transmit the information bits 101. Then the encoding maps each bit either to the all-ones or the all-zeros code word, so we get 111 000 111, which will be transmitted. Let's say three errors corrupt the transmitted bits and the received sequence is 111 010 100. Decoding is usually done by a simple majority decision for each code word. That leads us to 100 as the decoded information bits, because fewer than two errors occurred in each of the first and second code words, so the majority of their bits are correct. But in the third
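The length-3 example above can be reproduced directly in code. A minimal sketch: each information bit is repeated three times and decoding takes a majority vote per code word.

```python
def encode(bits, n=3):
    return [b for b in bits for _ in range(n)]

def decode(received, n=3):
    words = [received[i:i + n] for i in range(0, len(received), n)]
    return [1 if sum(w) > n // 2 else 0 for w in words]

info = [1, 0, 1]
sent = encode(info)                        # 111 000 111
received = [1, 1, 1, 0, 1, 0, 1, 0, 0]     # 111 010 100: three bit errors
print(decode(received))                    # [1, 0, 0] -> the third code word is decoded wrongly
```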
https://en.wikipedia.org/wiki/Noisy-channel%20coding%20theorem
In information theory, the noisy-channel coding theorem (sometimes Shannon's theorem or Shannon's limit) establishes that for any given degree of noise contamination of a communication channel, it is possible to communicate discrete data (digital information) nearly error-free up to a computable maximum rate through the channel. This result was presented by Claude Shannon in 1948 and was based in part on earlier work and ideas of Harry Nyquist and Ralph Hartley. The Shannon limit or Shannon capacity of a communication channel refers to the maximum rate of error-free data that can theoretically be transferred over the channel if the link is subject to random data transmission errors, for a particular noise level. It was first described by Shannon (1948), and shortly after published in a book by Shannon and Warren Weaver entitled The Mathematical Theory of Communication (1949). This founded the modern discipline of information theory. Overview Stated by Claude Shannon in 1948, the theorem describes the maximum possible efficiency of error-correcting methods versus levels of noise interference and data corruption. Shannon's theorem has wide-ranging applications in both communications and data storage. This theorem is of foundational importance to the modern field of information theory. Shannon only gave an outline of the proof. The first rigorous proof for the discrete case is given in . The Shannon theorem states that, given a noisy channel with channel capacity C and information transmitted at a rate R, then if R < C there exist codes that allow the probability of error at the receiver to be made arbitrarily small. This means that, theoretically, it is possible to transmit information nearly without error at any rate below a limiting rate, C. The converse is also important. If R > C, an arbitrarily small probability of error is not achievable. All codes will have a probability of error greater than a certain positive minimal level, and this level increases as the rat
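As a concrete illustration of a computable channel capacity (my own example, not taken from the article): for a binary symmetric channel with crossover probability p, the capacity is C = 1 − H(p), where H is the binary entropy function; rates R < C are achievable with arbitrarily small error probability, while rates R > C are not.

```python
import math

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    # Capacity of a binary symmetric channel with crossover probability p.
    return 1.0 - binary_entropy(p)

print(bsc_capacity(0.11))   # ~0.5 bits per channel use
```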
https://en.wikipedia.org/wiki/Neuronal%20noise
Neuronal noise or neural noise refers to the random intrinsic electrical fluctuations within neuronal networks. These fluctuations are not associated with encoding a response to internal or external stimuli and can be from one to two orders of magnitude. Most noise commonly occurs below a voltage-threshold that is needed for an action potential to occur, but sometimes it can be present in the form of an action potential; for example, stochastic oscillations in pacemaker neurons in suprachiasmatic nucleus are partially responsible for the organization of circadian rhythms. Background Neuronal activity at the microscopic level has a stochastic character, with atomic collisions and agitation, that may be termed "noise." While it isn't clear on what theoretical basis neuronal responses involved in perceptual processes can be segregated into a "neuronal noise" versus a "signal" component, and how such a proposed dichotomy could be corroborated empirically, a number of computational models incorporating a "noise" term have been constructed. Single neurons demonstrate different responses to specific neuronal input signals. This is commonly referred to as neural response variability. If a specific input signal is initiated in the dendrites of a neuron, then a hypervariability exists in the number of vesicles released from the axon terminal fiber into the synapse. This characteristic is true for fibers without neural input signals, such as pacemaker neurons, as mentioned previously, and cortical pyramidal neurons that have highly-irregular firing pattern. Noise generally hinders neural performance, but recent studies show, in dynamical non-linear neural networks, this statement does not always hold true. Non-linear neural networks are a network of complex neurons that have many connections with one another such as the neuronal systems found within our brains. Comparatively, linear networks are an experimental view of analyzing a neural system by placing neurons in series w
https://en.wikipedia.org/wiki/F-15%20Strike%20Eagle%20%28video%20game%29
F-15 Strike Eagle is an F-15 Strike Eagle combat flight simulator originally released for the Atari 8-bit family in 1984 by MicroProse then ported to other systems. It is the first in the F-15 Strike Eagle series followed by F-15 Strike Eagle II and F-15 Strike Eagle III. An arcade version of the game was released simply as F-15 Strike Eagle in 1991, which uses higher-end hardware than was available in home systems, including the TMS34010 graphics-oriented CPU. Gameplay The game begins with the player selecting Libya (much like Operation El Dorado Canyon), the Persian Gulf, or Vietnam as a mission theater. Play then begins from the cockpit of an F-15 already in flight and equipped with a variety of missiles, bombs, drop tanks, flares and chaff. The player flies the plane in combat to bomb various targets including a primary and secondary target while also engaging in air-to-air combat with enemy fighters. The game ends when either the player's plane crashes, is destroyed, or when the player returns to base. Ports The game was first released for the Atari 8-bit family, with ports appearing from 1985-87 for the Apple II, Commodore 64, ZX Spectrum, MSX, and Amstrad CPC. It was also ported to the IBM PC as a self-booting disk, being one of the first games that MicroProse company released for IBM compatibles. The initial IBM release came on a self-booting 5.25" floppy disk and supported only CGA graphics, but a revised version in 1986 was offered on 3.5" disks and added limited EGA support (which added the ability to change color palettes if an EGA card was present). Versions for the Game Boy, Game Gear, and NES were published in the early 1990s. Reception F-15 Strike Eagle was a commercial blockbuster. It sold 250,000 copies by March 1987, and surpassed 1 million units in 1989. It ultimately reached over 1.5 million sales overall, and was MicroProse's best-selling Commodore game as of late 1987. Computer Gaming World in 1984 called F-15 "an excellent simulation"
https://en.wikipedia.org/wiki/AllAdvantage
AllAdvantage was an Internet advertising company that positioned itself as the world’s first "infomediary" by paying its users/members a portion of the advertising revenue generated by their online viewing habits. It became most well known for its slogan "Get Paid to Surf the Web," a phrase that has since become synonymous with a wide array of online ad revenue sharing systems (see, e.g., paid to surf). History AllAdvantage was launched on March 31, 1999, by Jim Jorgensen, Johannes Pohle, Carl Anderson, and Oliver Brock. During its nearly 2 years of operation, it raised nearly $200 Million in venture capital and grew to more than 10 million members in its first 18 months of operation. The company's practice of compensating existing members for referring new members led it to become one of the most heavily promoted websites of its time. In 1999, the company had over 4 million members worldwide, in over 240 countries, having delivered more than 4 billion ads in the month of November of that year. That popularity was reflected in the ranking of AllAdvantage.com among the top 20 of many website traffic indices during most of the company's existence, including Nielsen/NetRatings. That method of promotion also led the company to be heavily criticized for its early inability to prevent its members from spamming for referrals in order to collect additional income. It eventually overcame many of those problems and company executives were deeply involved in anti-spam legislative proposals, including the first anti-spam bill to pass the US House of Representatives. AllAdvantage ultimately fell victim to the sharp decline in advertising spending as the dot-com bubble burst and the U.S. economy entered a recessionary period in mid-2000. AllAdvantage planned an initial public offering of stock in early 2000, underwritten by investment banker Frank Quattrone of the firm Credit Suisse First Boston. As the IPO market continued to sour through mid-2000, the offering plans were canc
https://en.wikipedia.org/wiki/Magnesium%20transporter
Magnesium transporters are proteins that transport magnesium across the cell membrane. All forms of life require magnesium, yet the molecular mechanisms of Mg2+ uptake from the environment and the distribution of this vital element within the organism are only slowly being elucidated. The ATPase function of MgtA is highly cardiolipin dependent and has been shown to detect free magnesium in the μM range. In bacteria, Mg2+ is probably mainly supplied by the CorA protein and, where the CorA protein is absent, by the MgtE protein. In yeast the initial uptake is via the Alr1p and Alr2p proteins, but at this stage the only internal Mg2+ distributing protein identified is Mrs2p. Within the protozoa only one Mg2+ transporter (XntAp) has been identified. In metazoa, Mrs2p and MgtE homologues have been identified, along with two novel Mg2+ transport systems TRPM6/TRPM7 and PCLN-1. Finally, in plants, a family of Mrs2p homologues has been identified along with another novel protein, AtMHX. Evolution The evolution of Mg2+ transport appears to have been rather complicated. Proteins apparently based on MgtE are present in bacteria and metazoa, but are missing in fungi and plants, whilst proteins apparently related to CorA are present in all of these groups. The two active transport transporters present in bacteria, MgtA and MgtB, do not appear to have any homologues in higher organisms. There are also Mg2+ transport systems that are found only in the higher organisms. Types There are a large number of proteins yet to be identified that transport Mg2+. Even in the best studied eukaryote, yeast, Borrelly has reported a Mg2+/H+ exchanger without an associated protein, which is probably localised to the Golgi. At least one other major Mg2+ transporter in yeast is still unaccounted for, the one affecting Mg2+ transport in and out of the yeast vacuole. In higher, multicellular organisms, it seems that many Mg2+ transporting proteins await discovery. The CorA-domain-containing
https://en.wikipedia.org/wiki/Shipping%20container%20architecture
Shipping container architecture is a form of architecture that uses steel intermodal containers (shipping containers) as the main structural element. It is also referred to as cargotecture or arkitainer, portmanteau words formed from "cargo" and "architecture". This form of architecture is often associated with the tiny-house movement as well as the sustainable living movement. The use of containers as building materials has been growing in popularity due to their strength, wide availability, low cost, and eco-friendliness. Advantages Due to their shape and material, shipping containers have the ability to be customized in many different ways and can be modified to fit various purposes. Standardized dimensions and various interlocking mechanisms make these containers modular, allowing them to be easily combined into larger structures that follow modular design. This also simplifies any extensions to the structure as new containers can easily be added on to create larger structures. When empty, shipping containers can be stacked up to 12 units high. Because shipping containers are designed to be stacked in high columns and to carry heavy loads, they are also strong and durable. They are designed to resist harsh environments, such as those on ocean-going vessels. Shipping containers conform to standard shipping sizes, which makes pre-fabricated modules easily transportable by ship, truck, or rail. Shipping container construction is still less expensive than conventional construction, despite metal fabrication and welding being considered specialized labor (which usually increases construction costs). Unlike wood-frame construction, attachments must be welded or drilled to the outer skin, which is more time-consuming, and requires different job site equipment. As a result of their widespread use, new and used shipping containers are available globally. This availability makes building tiny or container houses more affordable. Depending on the desired specification
https://en.wikipedia.org/wiki/Motronic
Motronic is the trade name given to a range of digital engine control units developed by Robert Bosch GmbH (commonly known as Bosch) which combined control of fuel injection and ignition in a single unit. By controlling both major systems in a single unit, many aspects of the engine's characteristics (such as power, fuel economy, drivability, and emissions) can be improved. Motronic 1.x Motronic M1.x is powered by various i8051 derivatives made by Siemens, usually SAB80C515 or SAB80C535. Code/data is stored in DIL or PLCC EPROM and ranges from 32k to 128k. 1.0 Often known as "Motronic basic", Motronic ML1.x was one of the first digital engine-management systems developed by Bosch. These early Motronic systems integrated the spark timing element with then-existing Jetronic fuel injection technology. It was originally developed and first used in the BMW 7 Series, before being implemented on several Volvo and Porsche engines throughout the 1980s. The components of the Motronic ML1.x systems for the most part remained unchanged during production, although there are some differences in certain situations. The engine control module (ECM) receives information regarding engine speed, crankshaft angle, coolant temperature and throttle position. An air flow meter also measures the volume of air entering the induction system. If the engine is naturally aspirated, an air temperature sensor is located in the air flow meter to work out the air mass. However, if the engine is turbocharged, an additional charge air temperature sensor is used to monitor the temperature of the inducted air after it has passed through the turbocharger and intercooler, in order to accurately and dynamically calculate the overall air mass. Main system characteristics Fuel delivery, ignition timing, and dwell angle incorporated into the same control unit. Crank position and engine speed is determined by a pair of sensors reading from the flywheel. Separate constant idle speed system monitors and re
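The air-mass calculation sketched above (volume flow plus intake air temperature) can be illustrated with the ideal gas law. This is my own simplified illustration of the principle, not Bosch's actual algorithm; the fixed ambient pressure and the sample numbers are assumptions.

```python
R_SPECIFIC_AIR = 287.05   # J/(kg*K), specific gas constant for dry air

def air_mass_flow(volume_flow_m3_s, air_temp_c, pressure_pa=101_325):
    # Air density from the ideal gas law, then mass flow = volume flow * density.
    density = pressure_pa / (R_SPECIFIC_AIR * (air_temp_c + 273.15))   # kg/m^3
    return volume_flow_m3_s * density                                  # kg/s

# 0.05 m^3/s of intake air at 40 degC (density ~1.13 kg/m^3) -> ~0.056 kg/s
print(air_mass_flow(0.05, 40.0))
```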
https://en.wikipedia.org/wiki/Timeline%20of%20information%20theory
A timeline of events related to  information theory,  quantum information theory and statistical physics,  data compression,  error correcting codes and related subjects. 1872 – Ludwig Boltzmann presents his H-theorem, and with it the formula Σpi log pi for the entropy of a single gas particle 1878 – J. Willard Gibbs defines the Gibbs entropy: the probabilities in the entropy formula are now taken as probabilities of the state of the whole system 1924 – Harry Nyquist discusses quantifying "intelligence" and the speed at which it can be transmitted by a communication system 1927 – John von Neumann defines the von Neumann entropy, extending the Gibbs entropy to quantum mechanics 1928 – Ralph Hartley introduces Hartley information as the logarithm of the number of possible messages, with information being communicated when the receiver can distinguish one sequence of symbols from any other (regardless of any associated meaning) 1929 – Leó Szilárd analyses Maxwell's Demon, showing how a Szilard engine can sometimes transform information into the extraction of useful work 1940 – Alan Turing introduces the deciban as a measure of information inferred about the German Enigma machine cypher settings by the Banburismus process 1944 – Claude Shannon's theory of information is substantially complete 1947 – Richard W. Hamming invents Hamming codes for error detection and correction (to protect patent rights, the result is not published until 1950) 1948 – Claude E. Shannon publishes A Mathematical Theory of Communication 1949 – Claude E. Shannon publishes Communication in the Presence of Noise – Nyquist–Shannon sampling theorem and Shannon–Hartley law 1949 – Claude E. Shannon's Communication Theory of Secrecy Systems is declassified 1949 – Robert M. Fano publishes Transmission of Information. M.I.T. Press, Cambridge, Massachusetts – Shannon–Fano coding 1949 – Leon G. Kraft discovers Kraft's inequality, which shows the limits of prefix codes 1949 –
https://en.wikipedia.org/wiki/Rencontres%20numbers
In combinatorial mathematics, the rencontres numbers are a triangular array of integers that enumerate permutations of the set { 1, ..., n } with specified numbers of fixed points: in other words, partial derangements. (Rencontre is French for encounter. By some accounts, the problem is named after a solitaire game.) For n ≥ 0 and 0 ≤ k ≤ n, the rencontres number Dn, k is the number of permutations of { 1, ..., n } that have exactly k fixed points. For example, if seven presents are given to seven different people, but only two are destined to get the right present, there are D7, 2 = 924 ways this could happen. Another often cited example is that of a dance school with 7 couples, where, after the tea-break, the participants are told to randomly find a partner to continue; then once more there are D7, 2 = 924 possibilities that 2 previous couples meet again by chance. Numerical values Here is the beginning of this array: Formulas The numbers in the k = 0 column enumerate derangements. Thus Dn, 0 = !n, the nth derangement number, for non-negative n. It turns out that Dn, 0 equals n!/e with the ratio rounded up for even n and rounded down for odd n. For n ≥ 1, this gives the nearest integer. More generally, for any 0 ≤ k ≤ n, we have Dn, k = C(n, k) · Dn−k, 0, where C(n, k) is the binomial coefficient. The proof is easy after one knows how to enumerate derangements: choose the k fixed points out of n; then choose the derangement of the other n − k points. The numbers Dn, 0/n! are generated by the power series e^(−x)/(1 − x); accordingly, an explicit formula for Dn, m can be derived as follows: Dn, m = (n!/m!) Σk=0..n−m (−1)^k/k!. This immediately implies that Dn, m ≈ n!/(e · m!) for n large, m fixed. Probability distribution The sum of the entries in each row for the table in "Numerical Values" is the total number of permutations of { 1, ..., n }, and is therefore n!. If one divides all the entries in the nth row by n!, one gets the probability distribution of the number of fixed points of a uniformly distributed random permutation of { 1, ..., n }. The probability that the number of fixed points is k is Dn, k/n!. For n ≥ 1, the expected number of fixed points is 1
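The formulas above are easy to check numerically; the short sketch below recomputes D7, 2 = 924 from the identity Dn, k = C(n, k) · Dn−k, 0 and the alternating-sum formula for derangements.

```python
from math import comb, factorial

def derangements(m):
    # Number of permutations of m items with no fixed point.
    return sum((-1) ** i * (factorial(m) // factorial(i)) for i in range(m + 1))

def rencontres(n, k):
    # Choose the k fixed points, then derange the remaining n - k items.
    return comb(n, k) * derangements(n - k)

print(rencontres(7, 2))   # 924, matching the presents example above
```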
https://en.wikipedia.org/wiki/B%C3%A1nh%20tr%C3%A1ng
Bánh tráng or bánh đa nem, a Vietnamese term (literally, coated bánh), sometimes called rice paper wrappers, rice crepes, rice wafers or nem wrappers, are edible Vietnamese wrappers used in Vietnamese cuisine, primarily in finger foods and appetizers such as Vietnamese nem dishes. The term rice paper wrappers can sometimes be a misnomer, as some banh trang wrappers are made from rice flour supplemented with tapioca flour, or with the rice flour sometimes replaced completely by tapioca starch. The roasted version is bánh tráng nướng. Description Vietnamese banh trang are rice paper wrappers that are edible. They are made from steamed rice batter, then sun-dried. A more modern method is to use machines that can steam and dry the wrapper for a thinner and more hygienic product, suitable for the export market. Types Vietnamese banh trang wrappers come in various textures, shapes and types. Textures may vary from thin and soft to thick (much like a rice cracker). Banh trang wrappers come in various shapes, though circular and squared shapes are most commonly used. A plethora of local Vietnamese ingredients and spices are added to Vietnamese banh trang wrappers for the purpose of creating different flavors and textures, such as sesame seeds, chili, coconut milk, bananas, and durian, to name a few. Bánh tráng Southern Vietnamese term for rice wrappers, which are also commonly used overseas. These banh trang wrappers are made from a mixture of rice flour with tapioca starch, water and salt. These wrappers are thin and light in texture. They are often used for chả giò and gỏi cuốn. There are also certain rice wrapper products that are made specifically for frying. Bánh đa nướng / Bánh tráng nướng (grilled rice cracker) Bánh đa nướng or bánh tráng nướng are roasted or grilled rice crackers. Some can be thicker than the standard rice wrapper and can also include sesame seeds. They are often used in dishes like Mì Quảng as a topping. Not to be confused with the street food dish from Đà Lạt (also known
https://en.wikipedia.org/wiki/VIX
VIX is the ticker symbol and the popular name for the Chicago Board Options Exchange's CBOE Volatility Index, a popular measure of the stock market's expectation of volatility based on S&P 500 index options. It is calculated and disseminated on a real-time basis by the CBOE, and is often referred to as the fear index or fear gauge. The VIX traces its origin to the financial economics research of Menachem Brenner and Dan Galai. In a series of papers beginning in 1989, Brenner and Galai proposed the creation of a series of volatility indices, beginning with an index on stock market volatility, and moving to interest rate and foreign exchange rate volatility. In their papers, Brenner and Galai proposed, "[the] volatility index, to be named 'Sigma Index', would be updated frequently and used as the underlying asset for futures and options. ... A volatility index would play the same role as the market index plays for options and futures on the index." In 1992, the CBOE hired consultant Bob Whaley to calculate values for stock market volatility based on this theoretical work. Whaley utilized data series in the index options market, and computed daily VIX levels from January 1986 to May 1992. The resulting VIX index formulation provides a measure of market volatility on which expectations of further stock market volatility in the near future might be based. The current VIX index value quotes the expected annualized change in the S&P 500 index over the following 30 days, as computed from options-based theory and current options-market data. To summarize, VIX is a volatility index derived from S&P 500 options for the 30 days following the measurement date, with the price of each option representing the market's expectation of 30-day forward-looking volatility. The resulting VIX index formulation provides a measure of expected market volatility on which expectations of further stock market volatility in the near future might be based. Like conventional indexes, the VI
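The 30-day reading of the index can be illustrated with a common rule of thumb (my own illustration, not the CBOE's calculation): since the quoted level is an annualized volatility, the implied expected move over the next 30 days is roughly the level divided by the square root of 12.

```python
import math

def implied_30_day_move(vix_level):
    # Annualized volatility scaled down to a one-month horizon.
    return vix_level / math.sqrt(12)

print(implied_30_day_move(20.0))   # ~5.8 (percent) expected move over 30 days
```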
https://en.wikipedia.org/wiki/Ethnocomputing
Ethnocomputing is the study of the interactions between computing and culture. It is carried out through theoretical analysis, empirical investigation, and design implementation. It includes research on the impact of computing on society, as well as the reverse: how cultural, historical, personal, and societal origins and surroundings cause and affect the innovation, development, diffusion, maintenance, and appropriation of computational artifacts or ideas. From the ethnocomputing perspective, no computational technology is culturally "neutral," and no cultural practice is a computational void. Instead of considering culture to be a hindrance for software engineering, culture should be seen as a resource for innovation and design. Subject matter Social categories for ethnocomputing include: Indigenous computing: In some cases, ethnocomputing "translates" from indigenous culture to high tech frameworks: for example, analyzing the African board game Owari as a one-dimensional cellular automaton. Social/historical studies of computing: In other cases ethnocomputing seeks to identify the social, cultural, historical, or personal dimensions of high tech computational ideas and artifacts: for example, the relationship between the Turing Test and Alan Turing's closeted gay identity. Appropriation in computing: lay persons who did not participate in the original design of a computing system can still affect it by modifying its interpretation, use, or structure. Such "modding" may be as subtle as the key board character "emoticons" created through lay use of email, or as blatant as the stylized customization of computer cases. Equity tools: a software "Applications Quest" has been developed for generating a "diversity index" that allows consideration of multiple identity characteristics in college admissions. Technical categories in ethnocomputing include: Organized structures and models used to represent information (data structures) Ways of manipulating the organiz
https://en.wikipedia.org/wiki/Immunoscreening
Immunoscreening is a method of biotechnology used to detect a polypeptide produced from a cloned gene. The term encompasses several different techniques designed for protein identification, such as Western blotting, using recombinant DNA, and analyzing antibody-peptide interactions. Clones are screened for the presence of the gene product: the resulting protein. This strategy requires first that a gene library is implemented in an expression vector, and that antiserum to the protein is available. Radioactivity or an enzyme is coupled generally with the secondary antibody. The radioactivity/enzyme linked secondary antibody can be purchased commercially and can detect different antigens. In commercial diagnostics labs, labelled primary antibodies are also used. The antigen-antibody interaction is used in the immunoscreening of several diseases. See also ELISA Blots References Biotechnology
https://en.wikipedia.org/wiki/Polyphosphate-accumulating%20organisms
Polyphosphate-accumulating organisms (PAOs) are a group of microorganisms that, under certain conditions, facilitate the removal of large amounts of phosphorus from their environments. The most studied example of this phenomenon is in polyphosphate-accumulating bacteria (PAB) found in a type of wastewater processing known as enhanced biological phosphorus removal (EBPR), however phosphate hyperaccumulation has been found to occur in other conditions such as soil and marine environments, as well as in non-bacterial organisms such as fungi and algae. PAOs accomplish this removal of phosphate by accumulating it within their cells as polyphosphate. PAOs are by no means the only microbes that can accumulate phosphate within their cells and in fact, the production of polyphosphate is a widespread ability among microbes. However, PAOs have many characteristics that other organisms that accumulate polyphosphate do not have that make them amenable to use in wastewater treatment. Specifically, in the case of classical PAOs, is the ability to consume simple carbon compounds (energy source) without the presence of an external electron acceptor (such as nitrate or oxygen) by generating energy from internally stored polyphosphate and glycogen. Most other bacteria cannot consume under these conditions and therefore PAOs gain a selective advantage within the mixed microbial community present in the activated sludge. Therefore, wastewater treatment plants that operate for enhanced biological phosphorus removal have an anaerobic tank (where there is no nitrate or oxygen present as external electron acceptor) prior to the other tanks to give PAOs preferential access to the simple carbon compounds in the wastewater that is influent to the plant. Metabolisms Classical (Canonical) PAO Metabolism The classical or "canonical" behavior of PAOs is considered to be the release of phosphate (as orthophosphate) to the environment and transformation of intracellular polyphosphate reserves int
https://en.wikipedia.org/wiki/Design%20flow%20%28EDA%29
Design flows are the explicit combination of electronic design automation tools to accomplish the design of an integrated circuit. Moore's law has driven the entire RTL-to-GDSII IC implementation design flow from one that used primarily stand-alone synthesis, placement, and routing algorithms to an integrated construction and analysis flow for design closure. The challenges of rising interconnect delay led to a new way of thinking about and integrating design closure tools. The RTL to GDSII flow underwent significant changes from 1980 through 2005. The continued scaling of CMOS technologies significantly changed the objectives of the various design steps. The lack of good predictors for delay has led to significant changes in recent design flows. New scaling challenges such as leakage power, variability, and reliability will continue to require significant changes to the design closure process in the future. Many factors describe what drove the design flow from a set of separate design steps to a fully integrated approach, and what further changes are coming to address the latest challenges. In his keynote at the 40th Design Automation Conference entitled The Tides of EDA, Alberto Sangiovanni-Vincentelli distinguished three periods of EDA: The Age of Invention: During the invention era, routing, placement, static timing analysis and logic synthesis were invented. The Age of Implementation: In the age of implementation, these steps were drastically improved by designing sophisticated data structures and advanced algorithms. This allowed the tools in each of these design steps to keep pace with the rapidly increasing design sizes. However, due to the lack of good predictive cost functions, it became impossible to execute a design flow by a set of discrete steps, no matter how efficiently each of the steps was implemented. The Age of Integration: This led to the age of integration where most of the design steps are performed in an integrated environment, d
https://en.wikipedia.org/wiki/Zarankiewicz%20problem
The Zarankiewicz problem, an unsolved problem in mathematics, asks for the largest possible number of edges in a bipartite graph that has a given number of vertices and has no complete bipartite subgraphs of a given size. It belongs to the field of extremal graph theory, a branch of combinatorics, and is named after the Polish mathematician Kazimierz Zarankiewicz, who proposed several special cases of the problem in 1951. Problem statement A bipartite graph consists of two disjoint sets of vertices U and V, and a set of edges, each of which connects a vertex in U to a vertex in V. No two edges can both connect the same pair of vertices. A complete bipartite graph is a bipartite graph in which every pair of a vertex from U and a vertex from V is connected to each other. A complete bipartite graph in which U has s vertices and V has t vertices is denoted Ks,t. If G is a bipartite graph, and there exists a set of s vertices of U and t vertices of V that are all connected to each other, then these vertices induce a subgraph of the form Ks,t. (In this formulation, the ordering of s and t is significant: the set of s vertices must be from U and the set of t vertices must be from V, not vice versa.) The Zarankiewicz function z(m, n; s, t) denotes the maximum possible number of edges in a bipartite graph for which |U| = m and |V| = n, but which does not contain a subgraph of the form Ks,t. As a shorthand for an important special case, z(n; t) is the same as z(n, n; t, t). The Zarankiewicz problem asks for a formula for the Zarankiewicz function, or (failing that) for tight asymptotic bounds on the growth rate of z(n; t) assuming that t is a fixed constant, in the limit as n goes to infinity. For t = 2 this problem is the same as determining cages with girth six. The Zarankiewicz problem, cages and finite geometry are strongly interrelated. The same problem can also be formulated in terms of digital geometry. The possible edges of a bipartite graph can be visualized as the points of a rectangle in the integer lattice, and a complete subgraph is a set of rows a
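For small instances the forbidden-subgraph condition is easy to check by brute force. A minimal sketch for the t = 2 case: two vertices of U with two common neighbours in V form a K2,2; the example graph below has 6 edges on 3 + 3 vertices and contains none, so z(3, 3; 2, 2) is at least 6.

```python
from itertools import combinations

def contains_k22(adj):
    # adj maps each vertex of U to its set of neighbours in V.
    for u1, u2 in combinations(adj, 2):
        if len(adj[u1] & adj[u2]) >= 2:
            return True
    return False

adj = {"u1": {"v1", "v2"}, "u2": {"v2", "v3"}, "u3": {"v1", "v3"}}
print(contains_k22(adj))   # False
```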
https://en.wikipedia.org/wiki/Polarization-division%20multiple%20access
Polarization-division multiple access (PDMA) is a channel access method used in some cellular networks and broadcast satellite services. Separate antennas are used in this type, each with different polarization and followed by separate receivers, allowing simultaneous regional access of satellites. Each corresponding ground station antenna needs to be polarized in the same way as its counterpart in the satellite. This is generally accomplished by providing each participating ground station with an antenna that has dual polarization. The frequency band allocated to each antenna beam can be identical because the uplink signals are orthogonal in polarization. This technique allows frequency reuse. See also Frequency-division multiple access Code-division multiple access Time-division multiple access Channel access methods Polarization (waves)
https://en.wikipedia.org/wiki/Hardware%20security%20module
A hardware security module (HSM) is a physical computing device that safeguards and manages secrets (most importantly digital keys), performs encryption and decryption functions for digital signatures, strong authentication and other cryptographic functions. These modules traditionally come in the form of a plug-in card or an external device that attaches directly to a computer or network server. A hardware security module contains one or more secure cryptoprocessor chips. Design HSMs may have features that provide tamper evidence such as visible signs of tampering or logging and alerting, or tamper resistance which makes tampering difficult without making the HSM inoperable, or tamper responsiveness such as deleting keys upon tamper detection. Each module contains one or more secure cryptoprocessor chips to prevent tampering and bus probing, or a combination of chips in a module that is protected by the tamper evident, tamper resistant, or tamper responsive packaging. A vast majority of existing HSMs are designed mainly to manage secret keys. Many HSM systems have means to securely back up the keys they handle outside of the HSM. Keys may be backed up in wrapped form and stored on a computer disk or other media, or externally using a secure portable device like a smartcard or some other security token. HSMs are used for real time authorization and authentication in critical infrastructure thus are typically engineered to support standard high availability models including clustering, automated failover, and redundant field-replaceable components. A few of the HSMs available in the market have the capability to execute specially developed modules within the HSM's secure enclosure. Such an ability is useful, for example, in cases where special algorithms or business logic has to be executed in a secured and controlled environment. The modules can be developed in native C language, .NET, Java, or other programming languages. Further, upcoming next-generation HSMs
https://en.wikipedia.org/wiki/Denitrifying%20bacteria
Denitrifying bacteria are a diverse group of bacteria that encompass many different phyla. This group of bacteria, together with denitrifying fungi and archaea, is capable of performing denitrification as part of the nitrogen cycle. Denitrification is performed by a variety of denitrifying bacteria that are widely distributed in soils and sediments and that use oxidized nitrogen compounds in the absence of oxygen as a terminal electron acceptor. They metabolise nitrogenous compounds using various enzymes, turning nitrogen oxides back to nitrogen gas (N2) or nitrous oxide (N2O). Diversity of denitrifying bacteria There is a great diversity in biological traits. Denitrifying bacteria have been identified in over 50 genera with over 125 different species and are estimated to represent 10-15% of the bacteria population in water, soil and sediment. Denitrifying bacteria include, for example, several species of Pseudomonas, Alcaligenes, Bacillus and others. The majority of denitrifying bacteria are facultative aerobic heterotrophs that switch from aerobic respiration to denitrification when oxygen as an available terminal electron acceptor (TEA) runs out. This forces the organism to use nitrate as a TEA. Because the diversity of denitrifying bacteria is so large, this group can thrive in a wide range of habitats including some extreme environments such as environments that are highly saline and high in temperature. Aerobic denitrifiers can conduct an aerobic respiratory process in which nitrate is converted gradually to N2 (NO3− → NO2− → NO → N2O → N2), using nitrate reductase (Nar or Nap), nitrite reductase (Nir), nitric oxide reductase (Nor), and nitrous oxide reductase (Nos). Phylogenetic analysis revealed that aerobic denitrifiers mainly belong to α-, β- and γ-Proteobacteria. Denitrification mechanism Denitrifying bacteria use denitrification to generate ATP. The most common denitrification process is outlined below, with the nitrogen oxides being converted back to g
https://en.wikipedia.org/wiki/Building%20science
Building science is the science and technology-driven collection of knowledge in order to provide better indoor environmental quality (IEQ), energy-efficient built environments, and occupant comfort and satisfaction. Building physics, architectural science, and applied physics are terms used for the knowledge domain that overlaps with building science. In building science, the methods used in natural and hard sciences are widely applied, which may include controlled and quasi-experiments, randomized control, physical measurements, remote sensing, and simulations. On the other hand, methods from social and soft sciences, such as case study, interviews & focus group, observational method, surveys, and experience sampling, are also widely used in building science to understand occupant satisfaction, comfort, and experiences by acquiring qualitative data. One of the recent trends in building science is a combination of the two different methods. For instance, it is widely known that occupants’ thermal sensation and comfort may vary depending on their sex, age, emotion, experiences, etc. even in the same indoor environment. Despite the advancement in data extraction and collection technology in building science, objective measurements alone can hardly represent occupants' state of mind such as comfort and preference. Therefore, researchers are trying to measure both physical contexts and understand human responses to figure out complex interrelationships. Building science traditionally includes the study of indoor thermal environment, indoor acoustic environment, indoor light environment, indoor air quality, and building resource use, including energy and building material use. These areas are studied in terms of physical principles, relationship to building occupant health, comfort, and productivity, and how they can be controlled by the building envelope and electrical and mechanical systems. The National Institute of Building Sciences (NIBS) additionally includes t
https://en.wikipedia.org/wiki/Bob%20Widlar
Robert John Widlar (pronounced wide-lar; November 30, 1937 – February 27, 1991) was an American electronics engineer and a designer of linear integrated circuits (ICs). Early years Widlar was born November 30, 1937 in Cleveland to parents of Czech, Irish and German ethnicity. His mother, Mary Vithous, was born in Cleveland to Czech immigrants Frank Vithous (František Vitouš) and Marie Zakova (Marie Žáková). His father, Walter J. Widlar, came from prominent German and Irish American families whose ancestors settled in Cleveland in the middle of the 19th century. A self-taught radio engineer, Walter Widlar worked for the radio station and designed pioneering ultra high frequency transmitters. The world of electronics surrounded him since birth: one of his brothers became the first baby monitored by wireless radio. Guided by his father, Bob developed a strong interest in electronics in early childhood. Widlar never talked about his early years and personal life. He graduated from Saint Ignatius High School in Cleveland and enrolled at the University of Colorado at Boulder. In February 1958 Widlar joined the United States Air Force. He instructed servicemen in electronic equipment and devices and authored his first book, Introduction to Semiconductor Devices (1960), a textbook that demonstrated his ability to simplify complex problems. His liberal mind was a poor match for the military environment, and in 1961 Widlar left the service. He joined the Ball Brothers Research Corporation in Boulder to develop analog and digital equipment for NASA. He simultaneously continued studies at the University of Colorado and graduated with high grades in the summer of 1963. Achievements Widlar invented the basic building blocks of linear ICs including the Widlar current source, the Widlar bandgap voltage reference and the Widlar output stage. From 1964 to 1970, Widlar, together with David Talbert, created the first mass-produced operational amplifier ICs (μA702, μA709), some of
https://en.wikipedia.org/wiki/Parallels%20%28company%29
Parallels is a software company based in Bellevue, Washington; it is primarily involved in the development of virtualization software for macOS. The company has offices in 14 countries, including the United States, Germany, United Kingdom, France, Japan, China, Spain, Malta, Australia and Mauritius, and has over 800 employees. Company history SWsoft, a privately held server automation and virtualization software company, developed software for running data centers, particularly for web-hosting services companies and application service providers. Their Virtuozzo product was an early system-level server virtualization solution, and in 2003 they bought Plesk, a commercial web hosting platform. In 2004, SWsoft acquired Parallels, Inc., and Parallels Workstation for Windows and Linux 2.0 was released, with Parallels Desktop for Mac following in mid-2006. SWsoft's acquisition of Parallels was kept confidential until January 2004, two years before Parallels became mainstream. Later the same year, the corporate headquarters moved from Herndon, Virginia, to Renton, Washington. Historically, their primary development labs were in Moscow and Novosibirsk, Russia. Parallels was founded by Serguei Beloussov, who was born in the former Soviet Union and later immigrated to Singapore. At Apple's Worldwide Developers Conference 2007 in San Francisco, California, Parallels announced and demonstrated its upcoming Parallels Server for Mac, which would reportedly allow IT managers to run multiple server operating systems on a single Mac Xserve. In 2007, the German company Netsys GmbH sued Parallels' German distributor Avanquest for copyright violation (see Parallels Desktop for Mac for details); Parallels Server for Mac was then announced at WWDC, followed later by the Parallels Technology Network. In 2008, SWsoft merged into Parallels to become one company under the Parallels branding, which then acquired ModernGigabyte, LLC. Parallels Server for Mac was launched in June then i
https://en.wikipedia.org/wiki/Alternating%20factorial
In mathematics, an alternating factorial is the absolute value of the alternating sum of the first n factorials of positive integers. This is the same as their sum, with the odd-indexed factorials multiplied by −1 if n is even, and the even-indexed factorials multiplied by −1 if n is odd, resulting in an alternation of signs of the summands (or alternation of addition and subtraction operators, if preferred). To put it algebraically, af(n) = n! − (n − 1)! + (n − 2)! − ⋯ ± 1!, that is, the sum of (−1)^(n−i)·i! for i = 1 to n; equivalently, it satisfies the recurrence relation af(n) = n! − af(n − 1), in which af(1) = 1. The first few alternating factorials are 1, 1, 5, 19, 101, 619, 4421, 35899, 326981, 3301819, 36614981, 442386619, 5784634181, 81393657019. For example, the third alternating factorial is 1! − 2! + 3! = 5. The fourth alternating factorial is −1! + 2! − 3! + 4! = 19. Regardless of the parity of n, the last (nth) summand, n!, is given a positive sign, the (n − 1)th summand is given a negative sign, and the signs of the lower-indexed summands are alternated accordingly. This pattern of alternation ensures the resulting sums are all positive integers. Changing the rule so that either the odd- or even-indexed summands are given negative signs (regardless of the parity of n) changes the signs of the resulting sums but not their absolute values. It has been proved that there are only a finite number of alternating factorials that are also prime numbers, since 3612703 divides af(3612702) and therefore divides af(n) for all n ≥ 3612702. The known primes and probable primes are af(n) for n = 3, 4, 5, 6, 7, 8, 10, 15, 19, 41, 59, 61, 105, 160, 661, 2653, 3069, 3943, 4053, 4998, 8275, 9158, 11164. As of 2006, only the values up to n = 661 had been proved prime; af(661) is approximately 7.818097272875 × 10^1578.
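A short sketch of the recurrence just stated, checked against the listed values (the function name is just an illustrative choice):

```python
# Compute alternating factorials via the recurrence af(n) = n! - af(n - 1), with af(1) = 1,
# and verify the result against the first values listed above.
from math import factorial

def alternating_factorial(n: int) -> int:
    """Return af(n) for n >= 1 using the recurrence af(n) = n! - af(n - 1)."""
    af = 1  # af(1)
    for k in range(2, n + 1):
        af = factorial(k) - af
    return af

expected = [1, 1, 5, 19, 101, 619, 4421, 35899]
assert [alternating_factorial(n) for n in range(1, 9)] == expected
print([alternating_factorial(n) for n in range(1, 9)])
```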
https://en.wikipedia.org/wiki/Phylogenomics
Phylogenomics is the intersection of the fields of evolution and genomics. The term has been used in multiple ways to refer to analysis that involves genome data and evolutionary reconstructions. It is a group of techniques within the larger fields of phylogenetics and genomics. Phylogenomics draws information by comparing entire genomes, or at least large portions of genomes. Phylogenetics, by contrast, compares and analyzes the sequences of single genes, or a small number of genes, as well as many other types of data. Four major areas fall under phylogenomics: prediction of gene function; establishment and clarification of evolutionary relationships; gene family evolution; and prediction and retracing of lateral gene transfer. The ultimate goal of phylogenomics is to reconstruct the evolutionary history of species through their genomes. This history is usually inferred from a series of genomes by using a genome evolution model and standard statistical inference methods (e.g. Bayesian inference or maximum likelihood estimation). Prediction of gene function When Jonathan Eisen originally coined the term phylogenomics, it applied to prediction of gene function. Before the use of phylogenomic techniques, predicting gene function was done primarily by comparing the gene sequence with the sequences of genes with known functions. When several genes with similar sequences but differing functions are involved, this method alone is ineffective in determining function. A specific example is presented in the paper "Gastronomic Delights: A movable feast". Gene predictions based on sequence similarity alone had been used to predict that Helicobacter pylori can repair mismatched DNA. This prediction was based on the fact that this organism has a gene for which the sequence is highly similar to genes from other species in the "MutS" gene family, which includes many genes known to be involved in mismatch repair. However, Eisen noted that H. pylori lacks other genes thought to be essential for this function (specif
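The gene-function idea above can be illustrated with a small sketch: instead of trusting raw sequence similarity, a phylogenomic approach considers where the query gene sits in the gene-family tree and borrows the annotation of its nearest annotated relatives. This is only an illustration of the general technique, not Eisen's actual procedure; the tree, gene names, and function labels are all made up.

```python
# Illustrative sketch: predict a query gene's function from its nearest annotated neighbour
# in a gene tree, rather than from sequence similarity alone. All data here is hypothetical.
parent = {  # hypothetical gene tree, encoded as child -> parent edges
    "geneA": "n1", "query": "n1", "geneB": "n2", "geneC": "n2",
    "n1": "n3", "n2": "n3", "geneD": "root", "n3": "root",
}
known_function = {"geneA": "function X", "geneB": "function X",
                  "geneC": "function Y", "geneD": "function Y"}

def path_to_root(node):
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def tree_distance(a, b):
    pa, pb = path_to_root(a), path_to_root(b)
    shared = set(pa) & set(pb)
    # distance = steps from each leaf up to their lowest common ancestor
    return min(pa.index(x) + pb.index(x) for x in shared)

# Predict the query's function from its nearest annotated neighbour in the tree.
nearest = min(known_function, key=lambda leaf: tree_distance("query", leaf))
print(nearest, "->", known_function[nearest])
```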
https://en.wikipedia.org/wiki/Implementation%20of%20mathematics%20in%20set%20theory
This article examines the implementation of mathematical concepts in set theory. The implementation of a number of basic mathematical concepts is carried out in parallel in ZFC (the dominant set theory) and in NFU, the version of Quine's New Foundations shown to be consistent by R. B. Jensen in 1969 (here understood to include at least axioms of Infinity and Choice). What is said here applies also to two families of set theories: on the one hand, a range of theories including Zermelo set theory near the lower end of the scale and going up to ZFC extended with large cardinal hypotheses such as "there is a measurable cardinal"; and on the other hand a hierarchy of extensions of NFU which is surveyed in the New Foundations article. These correspond to different general views of what the set-theoretical universe is like, and it is the approaches to implementation of mathematical concepts under these two general views that are being compared and contrasted. It is not the primary aim of this article to say anything about the relative merits of these theories as foundations for mathematics. The reason for the use of two different set theories is to illustrate that multiple approaches to the implementation of mathematics are feasible. Precisely because of this approach, this article is not a source of "official" definitions for any mathematical concept. Preliminaries The following sections carry out certain constructions in the two theories ZFC and NFU and compare the resulting implementations of certain mathematical structures (such as the natural numbers). Mathematical theories prove theorems (and nothing else). So saying that a theory allows the construction of a certain object means that it is a theorem of that theory that that object exists. This is a statement about a definition of the form "the x such that φ exists", where φ is a formula of our language: the theory proves the existence of "the x such that φ" just in case it is a theorem that "there is one and only one x such that φ".
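A worked rendering of the condition just stated, in standard notation (a sketch; the article itself does not fix one particular formalization of definite descriptions):

```latex
% "The x such that phi" is a legitimate definition exactly when the theory proves
% that such an x exists and is unique:
\[
  \exists x\,\bigl(\phi(x) \wedge \forall y\,(\phi(y) \rightarrow y = x)\bigr)
\]
% Only then may an object be introduced by a definition of the form a := (\iota x)\,\phi(x).
```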
https://en.wikipedia.org/wiki/Memory%20bandwidth
Memory bandwidth is the rate at which data can be read from or stored into a semiconductor memory by a processor. Memory bandwidth is usually expressed in units of bytes/second, though this can vary for systems with natural data sizes that are not a multiple of the commonly used 8-bit bytes. Memory bandwidth that is advertised for a given memory or system is usually the maximum theoretical bandwidth. In practice the observed memory bandwidth will be less than (and is guaranteed not to exceed) the advertised bandwidth. A variety of computer benchmarks exist to measure sustained memory bandwidth using a variety of access patterns. These are intended to provide insight into the memory bandwidth that a system should sustain on various classes of real applications. Measurement conventions There are three different conventions for defining the quantity of data transferred in the numerator of "bytes/second": The bcopy convention: counts the amount of data copied from one location in memory to another location per unit time. For example, copying 1 million bytes from one location in memory to another location in memory in one second would be counted as 1 million bytes per second. The bcopy convention is self-consistent, but is not easily extended to cover cases with more complex access patterns, for example three reads and one write. The Stream convention: sums the amount of data that the application code explicitly reads plus the amount of data that the application code explicitly writes. Using the previous 1 million byte copy example, the STREAM bandwidth would be counted as 1 million bytes read plus 1 million bytes written in one second, for a total of 2 million bytes per second. The STREAM convention is most directly tied to the user code, but may not count all the data traffic that the hardware is actually required to perform. The hardware convention: counts the actual amount of data read or written by the hardware, whether the data motion was explicitly reques
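The difference between the bcopy and STREAM conventions is just bookkeeping over the same transfer, which a rough sketch makes clear. This is only a casual microbenchmark, not the STREAM benchmark itself; a Python/NumPy copy will understate what carefully written C achieves, and the array size is an arbitrary assumption chosen to exceed typical cache sizes.

```python
# Rough sketch: time one array copy and report the bandwidth under the bcopy convention
# (bytes copied) and the STREAM convention (bytes read + bytes written), as defined above.
import time
import numpy as np

n_bytes = 256 * 1024 * 1024            # 256 MiB per array, large enough to defeat caches
src = np.ones(n_bytes, dtype=np.uint8)
dst = np.empty_like(src)

start = time.perf_counter()
np.copyto(dst, src)                    # copy src -> dst once
elapsed = time.perf_counter() - start

print(f"bcopy convention : {n_bytes / elapsed / 1e9:.2f} GB/s (bytes copied)")
print(f"STREAM convention: {2 * n_bytes / elapsed / 1e9:.2f} GB/s (bytes read + bytes written)")
# The hardware convention could report more traffic still (for example, write-allocate
# reads of dst performed by the cache), but that motion is not visible from user code.
```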