source | text
---|---
https://en.wikipedia.org/wiki/Fractional%20quantum%20Hall%20effect
|
The fractional quantum Hall effect (FQHE) is a physical phenomenon in which the Hall conductance of 2-dimensional (2D) electrons shows precisely quantized plateaus at fractional values of $e^2/h$. It is a property of a collective state in which electrons bind magnetic flux lines to make new quasiparticles, and excitations have a fractional elementary charge and possibly also fractional statistics. The 1998 Nobel Prize in Physics was awarded to Robert Laughlin, Horst Störmer, and Daniel Tsui "for their discovery of a new form of quantum fluid with fractionally charged excitations".
The microscopic origin of the FQHE is a major research topic in condensed matter physics.
Descriptions
The fractional quantum Hall effect (FQHE) is a collective behavior in a 2D system of electrons. At particular magnetic fields, the electron gas condenses into a remarkable liquid state, which is very delicate, requiring high-quality material with a low carrier concentration, and extremely low temperatures. As in the integer quantum Hall effect, the Hall resistance undergoes certain quantum Hall transitions to form a series of plateaus. Each particular value of the magnetic field corresponds to a filling factor (the ratio of electrons to magnetic flux quanta)
$$\nu = p/q,$$
where p and q are integers with no common factors. Here q turns out to be an odd number with the exception of two filling factors, 5/2 and 7/2. The principal series of such fractions are
$$\frac{1}{3}, \frac{2}{5}, \frac{3}{7}, \frac{4}{9}, \ldots$$
and
$$\frac{2}{3}, \frac{3}{5}, \frac{4}{7}, \frac{5}{9}, \ldots$$
Fractionally charged quasiparticles are neither bosons nor fermions and exhibit anyonic statistics. The fractional quantum Hall effect continues to be influential in theories about topological order. Certain fractional quantum Hall phases appear to have the right properties for building a topological quantum computer.
History and developments
The FQHE was experimentally discovered in 1982 by Daniel Tsui and Horst Störmer, in experiments performed on gallium arsenide heterostructures developed by Arthur Gossard.
There were several major steps in
|
https://en.wikipedia.org/wiki/Langley%20%28unit%29
|
The langley (Ly) is a unit of heat transmission, especially used to express the rate of solar radiation (or insolation) received by the earth. The unit was proposed by Franz Linke in 1942 and named after Samuel Langley (1834–1906) in 1947.
Definition
One langley is
1 thermochemical calorie per square centimetre,
41 840 J/m2 (joules per square metre)
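The two statements agree, as a quick check of the arithmetic shows (using 1 thermochemical calorie = 4.184 J and 1 cm² = 10⁻⁴ m²):
$$1\ \text{Ly} = \frac{1\ \text{cal}_{\text{th}}}{1\ \text{cm}^2} = \frac{4.184\ \text{J}}{10^{-4}\ \text{m}^2} = 41\,840\ \text{J/m}^2.$$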
See also
Solar constant
Radiant exposure
References
Units of measurement
Non-SI metric units
|
https://en.wikipedia.org/wiki/Cray%20T3D
|
The T3D (Torus, 3-Dimensional) was Cray Research's first attempt at a massively parallel supercomputer architecture. Launched in 1993, it also marked Cray's first use of another company's microprocessor. The T3D consisted of between 32 and 2048 Processing Elements (PEs), each comprising a 150 MHz DEC Alpha 21064 (EV4) microprocessor and either 16 or 64 MB of DRAM. PEs were grouped in pairs, or nodes, which incorporated a 6-way processor interconnect switch. These switches had a peak bandwidth of 300 MB/second in each direction and were connected to form a three-dimensional torus network topology.
The T3D was designed to be hosted by a Cray Y-MP Model E, M90 or C90-series "front-end" system and rely on it and its UNICOS operating system for all I/O and most system services. The T3D PEs ran a simple microkernel called UNICOS MAX.
Several different configurations of T3D were available. The SC (Single Cabinet) models shared a cabinet with a host Y-MP system and were available with either 128 or 256 PEs. The MC (Multi-Cabinet) models were housed in one or more liquid-cooled cabinet(s) separately from the host, while the MCA models were smaller (32 to 128 PEs) air-cooled multi-cabinet configurations. There was also a liquid-cooled MCN model which had an alternative interconnect wire mat allowing non-power-of-2 numbers of PEs.
The Cray T3D MC cabinet had an Apple Macintosh PowerBook laptop built into its front. Its only purpose was to display animated Cray Research and T3D logos on its color LCD screen.
The first T3D delivered was a prototype installed at the Pittsburgh Supercomputing Center in early September 1993. The supercomputer was formally introduced on 27 September 1993.
The T3D was superseded in 1995 by the faster and more sophisticated Cray T3E.
Gallery
References
External links
CRAY T3D System Architecture Overview Manual
Computer-related introductions in 1993
T3d
Supercomputers
|
https://en.wikipedia.org/wiki/SAP%20NetWeaver%20Business%20Warehouse
|
SAP Business Warehouse (SAP BW) is SAP’s Enterprise Data Warehouse product. It can transform and consolidate business information from virtually any source system. It ran on industry standard RDBMS until version 7.3 at which point it began to transition onto SAP's HANA in-memory DBMS, particularly with the release of version 7.4.
It later evolved into a product called BW/4HANA, aligning it with SAP's sister ERP product, S/4HANA. This strategy allowed SAP to engineer the product around the HANA in-memory database, so that complex OLAP functions can be pushed down to the database rather than executed in the NetWeaver ABAP application server, improving performance. The product is also more open and can incorporate SAP and non-SAP data more easily.
History
In 1998 SAP released the first version of SAP BW, providing a model-driven approach to EDW that made data warehousing easier and more efficient, particularly for SAP R/3 data. Since then, SAP BW has evolved to become a key component for thousands of companies. An article has traced the history of SAP BW from its inception to the newer releases powered by SAP HANA.
References
Online analytical processing
Business Intelligence
|
https://en.wikipedia.org/wiki/Spigot%20algorithm
|
A spigot algorithm is an algorithm for computing the value of a transcendental number (such as $\pi$ or e) that generates the digits of the number sequentially from left to right, providing increasing precision as the algorithm proceeds. Spigot algorithms also aim to minimize the amount of intermediate storage required. The name comes from the sense of the word "spigot" for a tap or valve controlling the flow of a liquid. Spigot algorithms can be contrasted with algorithms that store and process complete numbers to produce successively more accurate approximations to the desired transcendental.
Interest in spigot algorithms was spurred in the early days of computational mathematics by extreme constraints on memory, and such an algorithm for calculating the digits of e appeared in a paper by Sale in 1968. In 1970, Abdali presented a more general algorithm to compute the sums of series in which the ratios of successive terms can be expressed as quotients of integer functions of term positions. This algorithm is applicable to many familiar series for trigonometric functions, logarithms, and transcendental numbers because these series satisfy the above condition. The name "spigot algorithm" seems to have been coined by Stanley Rabinowitz and Stan Wagon, whose algorithm for calculating the digits of $\pi$ is sometimes referred to as "the spigot algorithm for $\pi$".
The spigot algorithm of Rabinowitz and Wagon is bounded, in the sense that the number of terms of the infinite series that will be processed must be specified in advance. The term "streaming algorithm" indicates an approach without this restriction. This allows the calculation to run indefinitely varying the amount of intermediate storage as the calculation progresses.
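To make the idea concrete, here is a minimal sketch in Python of a Sale-style bounded spigot for e (the function name and the digit-count bound are illustrative, not taken from Sale's paper). It exploits the mixed-radix expansion e − 2 = 1/2(1 + 1/3(1 + 1/4(1 + ...))): every pass multiplies that representation by 10, and the carry that spills out of the leftmost place is the next decimal digit.

def e_digits(n):
    # Use enough mixed-radix places that the truncated tail of the series
    # cannot disturb the first n digits: choose m with m! > 10^(n+2).
    m, factorial = 2, 2
    while factorial < 10 ** (n + 2):
        m += 1
        factorial *= m
    a = [1] * (m + 1)                  # a[2..m] are the mixed-radix digits of e - 2
    digits = []
    for _ in range(n):
        carry = 0
        for i in range(m, 1, -1):      # renormalise from right to left
            x = 10 * a[i] + carry
            a[i] = x % i               # i units in place i equal 1 unit in place i - 1
            carry = x // i
        digits.append(str(carry))      # the integer part that spills out is the next digit
    return "2." + "".join(digits)

print(e_digits(30))                    # 2.718281828459045235360287471352

The only intermediate storage is the array a, whose length depends solely on the requested precision, which is the storage economy described above.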
A variant of the spigot approach uses an algorithm which can be used to compute a single arbitrary digit of the transcendental without computing the preceding digits: an example is the Bailey–Borwein–Plouffe formula, a digit extraction algorithm for $\pi$ whi
|
https://en.wikipedia.org/wiki/Lily%20pad%20network
|
A lily pad network is a series of wireless access points spread over a large area, each connected to a different network and owned by different enterprises and people, providing hotspots where wireless clients can connect to the Internet without regard for the particular networks to which they link.
This is in contrast with wireless community networks, where the access points route traffic between them, as well as a corporate wireless LAN where several access points are connected to the corporate network, and the members of the organization are supposed to stick to their own network.
Unlike a traditional corporate wireless LAN, which allows access to other networks only via access points connected to the main corporate network, a lily pad does not restrict a network user to connecting only to their own network. In this way, a lily pad network enables the network users to roam over a large area while staying connected, without needing the overheads of the access points to route traffic between the individual networks.
Lily pad networks derive their name from their frog-like "hopping" facility, where mobile stations which roam over a large area are akin to frogs, hopping from lily pad to lily pad, and because of the technology, remaining continuously connected. Unlike typical wireless mesh networks, where each network client needs to manage their own network connection continuity, a lily pad network topology enables roaming by linking a number of wireless access points together.
A lily pad network is particularly suitable for mobile wireless network connectivity over a large geographic area, such as a combination of coffee houses, libraries, and other public spaces. In these locations wireless access infrastructure is available for configuring the lily pad network to provide "hot spots", allowing a mobile station to connect to the Internet for both surfing or VoIP.
Wi-Fi
Network access
|
https://en.wikipedia.org/wiki/X-linked%20recessive%20inheritance
|
X-linked recessive inheritance is a mode of inheritance in which a mutation in a gene on the X chromosome causes the phenotype to be always expressed in males (who are necessarily hemizygous for the gene because they have one X and one Y chromosome) and in females who are homozygous for the gene mutation (see zygosity). Females with one copy of the mutated gene are carriers.
X-linked inheritance means that the gene causing the trait or the disorder is located on the X chromosome. Females have two X chromosomes while males have one X and one Y chromosome. Carrier females who have only one copy of the mutation do not usually express the phenotype, although differences in X-chromosome inactivation (known as skewed X-inactivation) can lead to varying degrees of clinical expression in carrier females, since some cells will express one X allele and some will express the other. The current estimate of sequenced X-linked genes is 499, and the total, including vaguely defined traits, is 983.
Patterns of inheritance
In humans, inheritance of X-linked recessive traits follows a unique pattern made up of three points.
The first is that affected fathers cannot pass X-linked recessive traits to their sons because fathers give Y chromosomes to their sons. This means that males affected by an X-linked recessive disorder inherited the responsible X chromosome from their mothers.
Second, X-linked recessive traits are more commonly expressed in males than females. This is due to the fact that males possess only a single X chromosome, and therefore require only one mutated X in order to be affected. Women possess two X chromosomes, and thus must receive two of the mutated recessive X chromosomes (one from each parent). A popular example showing this pattern of inheritance is that of the descendants of Queen Victoria and the blood disease hemophilia.
The last pattern seen is that X-linked recessive traits tend to skip generations, meaning that an affected grandfather will
|
https://en.wikipedia.org/wiki/Dirichlet%20integral
|
In mathematics, there are several integrals known as the Dirichlet integral, after the German mathematician Peter Gustav Lejeune Dirichlet, one of which is the improper integral of the sinc function over the positive real line:
$$\int_0^\infty \frac{\sin x}{x}\,dx = \frac{\pi}{2}.$$
This integral is not absolutely convergent, meaning $\left|\frac{\sin x}{x}\right|$ is not Lebesgue-integrable, because the Dirichlet integral is infinite in the sense of Lebesgue integration. It is, however, finite in the sense of the improper Riemann integral or the generalized Riemann or Henstock–Kurzweil integral. This can be seen by using Dirichlet's test for improper integrals.
It is a good illustration of special techniques for evaluating definite integrals. The sine integral, an antiderivative of the sinc function, is not an elementary function. However the improper definite integral can be determined in several ways: the Laplace transform, double integration, differentiating under the integral sign, contour integration, and the Dirichlet kernel.
Evaluation
Laplace transform
Let $f(t)$ be a function defined for $t \geq 0$. Then its Laplace transform is given by
$$\mathcal{L}\{f\}(s) = F(s) = \int_0^\infty f(t)\,e^{-st}\,dt,$$
if the integral exists.
A property of the Laplace transform useful for evaluating improper integrals is
$$\int_0^\infty \frac{f(t)}{t}\,dt = \int_0^\infty F(s)\,ds,$$
provided the integral on the left exists.
In what follows, one needs the result $\mathcal{L}\{\sin t\}(s) = \frac{1}{s^2+1}$, which is the Laplace transform of the function $\sin t$ (see the section 'Differentiating under the integral sign' for a derivation), as well as a version of Abel's theorem (a consequence of the final value theorem for the Laplace transform).
Therefore,
$$\int_0^\infty \frac{\sin t}{t}\,dt = \int_0^\infty \mathcal{L}\{\sin t\}(s)\,ds = \int_0^\infty \frac{ds}{s^2+1} = \frac{\pi}{2}.$$
Double integration
Evaluating the Dirichlet integral using the Laplace transform is equivalent to calculating the same double definite integral by changing the order of integration, namely,
$$\int_0^\infty \left( \int_0^\infty e^{-st} \sin t \,dt \right) ds = \int_0^\infty \left( \int_0^\infty e^{-st} \sin t \,ds \right) dt,$$
the left-hand side being $\int_0^\infty \frac{ds}{s^2+1} = \frac{\pi}{2}$ and the right-hand side being $\int_0^\infty \frac{\sin t}{t}\,dt$.
Differentiation under the integral sign (Feynman's trick)
First rewrite the integral as a function of the additional variable $s$, namely, the Laplace transform of $\frac{\sin t}{t}$. So let
$$f(s) = \int_0^\infty e^{-st}\,\frac{\sin t}{t}\,dt.$$
In order to evaluate the Dirichlet integral, we need to determine $f(0)$. The continuity of $f$ can be justified by applying the dominated convergence theorem
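A compact version of the computation this section is setting up, written as a sketch in the notation $f(s)$ introduced above:
$$f'(s) = -\int_0^\infty e^{-st}\sin t \, dt = -\frac{1}{s^2+1}, \qquad f(s) = \frac{\pi}{2} - \arctan s,$$
where the constant of integration is fixed by $f(s) \to 0$ as $s \to \infty$; setting $s = 0$ then gives the Dirichlet integral $f(0) = \pi/2$.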
|
https://en.wikipedia.org/wiki/Pipeline%20%28Unix%29
|
In Unix-like computer operating systems, a pipeline is a mechanism for inter-process communication using message passing. A pipeline is a set of processes chained together by their standard streams, so that the output text of each process (stdout) is passed directly as input (stdin) to the next one. The second process is started as the first process is still executing, and they are executed concurrently.
The concept of pipelines was championed by Douglas McIlroy at Unix's ancestral home of Bell Labs, during the development of Unix, shaping its toolbox philosophy. It is named by analogy to a physical pipeline. A key feature of these pipelines is their "hiding of internals" (Ritchie & Thompson, 1974). This in turn allows for more clarity and simplicity in the system.
This article is about anonymous pipes, where data written by one process is buffered by the operating system until it is read by the next process, and this uni-directional channel disappears when the processes are completed. This differs from named pipes, where messages are passed to or from a pipe that is named by making it a file, and remains after the processes are completed. The standard shell syntax for anonymous pipes is to list multiple commands, separated by vertical bars ("pipes" in common Unix verbiage):
command1 | command2 | command3
For example, to list files in the current directory (ls -l), retain only the lines of output containing the string "key" (grep key), and view the result in a scrolling page (less), a user types the following into the command line of a terminal:
ls -l | grep key | less
The command ls -l is executed as a process, the output (stdout) of which is piped to the input (stdin) of the process for grep key; and likewise for the process for less. Each process takes input from the previous process and produces output for the next process via standard streams. Each | tells the shell to connect the standard output of the command on the left to the standard input of the command on the right by
|
https://en.wikipedia.org/wiki/Reassortment
|
Reassortment is the mixing of the genetic material of a species into new combinations in different individuals. Several different processes contribute to reassortment, including assortment of chromosomes, and chromosomal crossover. It is particularly used when two similar viruses that are infecting the same cell exchange genetic material. In particular, reassortment occurs among influenza viruses, whose genomes consist of eight distinct segments of RNA. These segments act like mini-chromosomes, and each time a flu virus is assembled, it requires one copy of each segment.
If a single host (a human, a chicken, or other animal) is infected by two different strains of the influenza virus, then it is possible that new assembled viral particles will be created from segments whose origin is mixed, some coming from one strain and some coming from another. The new reassortant strain will share properties of both of its parental lineages.
Reassortment is responsible for some of the major genetic shifts in the history of the influenza virus. In the 1957 "Asian flu" and 1968 "Hong Kong flu" pandemics, flu strains were caused by reassortment between an avian virus and a human virus. In addition, the H1N1 virus responsible for the 2009 swine flu pandemic has an unusual mix of swine, avian and human influenza genetic sequences.
The reptarenavirus family, responsible for inclusion body disease in snakes, shows a very high degree of genetic diversity due to reassortment of genetic material from multiple strains in the same infected animal.
Multiplicity reactivation
When influenza viruses are inactivated by UV irradiation or ionizing radiation, they remain capable of multiplicity reactivation in infected host cells. If any of a virus’s genome segments is damaged in such a way as to prevent replication or expression of an essential gene, the virus is inviable when it, alone, infects a host cell (single infection). However when two or more damaged viruses infect the same cell (
|
https://en.wikipedia.org/wiki/Apple%20Pascal
|
Apple Pascal is an implementation of Pascal for the Apple II and Apple III computer series. It is based on UCSD Pascal.
Just like other UCSD Pascal implementations, it ran on its own operating system (Apple Pascal Operating System, a derivative of UCSD p-System with graphical extensions).
Originally released for the Apple II in August 1979, just after Apple DOS 3.2, Apple Pascal pioneered a number of features that would later be incorporated into DOS 3.3, as well as others that would not be seen again until the introduction of ProDOS.
The Apple Pascal software package also included disk maintenance utilities, and an assembler meant to complement Apple's built-in "monitor" assembler. A FORTRAN compiler (written by Silicon Valley Software, Sunnyvale California) compiling to the same p-code as Pascal was also available.
Comparison of Pascal OS with DOS 3.2
Apple Pascal Operating System introduced a new disk format. Instead of dividing the disk into 256-byte sectors as in DOS 3.2, Apple Pascal divides it into "blocks" of 512 bytes each. The p-System also introduced a different method for saving and retrieving files. Under Apple DOS, files were saved to any available sector that the OS could find, regardless of location. Over time, this could lead to file system fragmentation, slowing access to the disk. Apple Pascal attempted to rectify this by saving only to consecutive blocks on the disk.
Other innovations introduced in the file system included the introduction of a timestamp feature. Previously only a file's name, basic type, and size would be shown. Disks could also be named for the first time.
Limitations of the p-System included new restrictions on the naming of files.
Writing files only on consecutive blocks also created problems, because over time free space tended to become too fragmented to store new files. A utility called Krunch was included in the package to consolidate free space.
The biggest problem with the Apple Pascal system was that it wa
|
https://en.wikipedia.org/wiki/Pipeline%20%28computing%29
|
In computing, a pipeline, also known as a data pipeline, is a set of data processing elements connected in series, where the output of one element is the input of the next one. The elements of a pipeline are often executed in parallel or in time-sliced fashion. Some amount of buffer storage is often inserted between elements.
Computer-related pipelines include:
Instruction pipelines, such as the classic RISC pipeline, which are used in central processing units (CPUs) and other microprocessors to allow overlapping execution of multiple instructions with the same circuitry. The circuitry is usually divided up into stages and each stage processes a specific part of one instruction at a time, passing the partial results to the next stage. Examples of stages are instruction decode, arithmetic/logic and register fetch. They are related to the technologies of superscalar execution, operand forwarding, speculative execution and out-of-order execution.
Graphics pipelines, found in most graphics processing units (GPUs), which consist of multiple arithmetic units, or complete CPUs, that implement the various stages of common rendering operations (perspective projection, window clipping, color and light calculation, rendering, etc.).
Software pipelines, which consist of a sequence of computing processes (commands, program runs, tasks, threads, procedures, etc.), conceptually executed in parallel, with the output stream of one process being automatically fed as the input stream of the next one. The Unix system call pipe is a classic example of this concept; a minimal sketch of the idea follows this list.
HTTP pipelining, the technique of issuing multiple HTTP requests through the same TCP connection, without waiting for the previous one to finish before issuing a new one.
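As a rough illustration of the software-pipeline item above (the stage names are invented for the example), the following Python generators form a three-stage pipeline: each stage lazily consumes the stream produced by the previous one, so the stages run interleaved rather than one after another.

def numbers(limit):
    # stage 1: produce a stream of integers
    for n in range(limit):
        yield n

def squares(stream):
    # stage 2: transform each element as it arrives
    for n in stream:
        yield n * n

def running_total(stream):
    # stage 3: fold the stream into partial sums
    total = 0
    for n in stream:
        total += n
        yield total

# Wire the stages together; values flow only when the final consumer pulls them.
pipeline = running_total(squares(numbers(5)))
print(list(pipeline))                  # [0, 1, 5, 14, 30]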
Some operating systems may provide UNIX-like syntax to string several program runs in a pipeline, but implement the latter as simple serial execution, rather than true pipelining—namely, by waiting for each program to finish before starting the next o
|
https://en.wikipedia.org/wiki/Bellman%20equation
|
A Bellman equation, named after Richard E. Bellman, is a necessary condition for optimality associated with the mathematical optimization method known as dynamic programming. It writes the "value" of a decision problem at a certain point in time in terms of the payoff from some initial choices and the "value" of the remaining decision problem that results from those initial choices. This breaks a dynamic optimization problem into a sequence of simpler subproblems, as Bellman's "principle of optimality" prescribes. The equation applies to algebraic structures with a total ordering; for algebraic structures with a partial ordering, the generic Bellman's equation can be used.
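In the standard discrete-time, infinite-horizon setting, the recursion takes the following generic form (the symbols are the conventional ones, not taken from this excerpt):
$$V(x) = \max_{a \in \Gamma(x)} \Big\{ F(x,a) + \beta\, V\big(T(x,a)\big) \Big\},$$
where $x$ is the current state, $\Gamma(x)$ the set of feasible actions, $F(x,a)$ the immediate payoff, $T(x,a)$ the next state, and $0 < \beta < 1$ the discount factor: the value of the problem today equals the best achievable sum of today's payoff and the discounted value of the problem that remains.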
The Bellman equation was first applied to engineering control theory and to other topics in applied mathematics, and subsequently became an important tool in economic theory; though the basic concepts of dynamic programming are prefigured in John von Neumann and Oskar Morgenstern's Theory of Games and Economic Behavior and Abraham Wald's sequential analysis. The term 'Bellman equation' usually refers to the dynamic programming equation associated with discrete-time optimization problems. In continuous-time optimization problems, the analogous equation is a partial differential equation that is called the Hamilton–Jacobi–Bellman equation.
In discrete time any multi-stage optimization problem can be solved by analyzing the appropriate Bellman equation. The appropriate Bellman equation can be found by introducing new state variables (state augmentation). However, the resulting augmented-state multi-stage optimization problem has a higher dimensional state space than the original multi-stage optimization problem - an issue that can potentially render the augmented problem intractable due to the “curse of dimensionality”. Alternatively, it has been shown that if the cost function of the multi-stage optimization problem satisfies a "backward separable" structure, then the appropriate Bellman equation
|
https://en.wikipedia.org/wiki/MP3%20Surround
|
MP3 Surround is an extension of MP3 for multi-channel audio support including 5.1 surround sound. It was developed by Fraunhofer IIS in collaboration with Thomson and Agere Systems, and released in December 2004.
MP3 Surround is backward compatible with standard MP3. The data overhead is 16 kbit/s, which allows for file sizes similar to standard stereo MP3 files. The file size is approximately 10% larger than that of a typical MP3 file. The current evaluation encoder is licensed for personal and non-commercial uses. An MP3 Surround file can be created from 5 or 6 channels of WAV audio.
Several companies, such as DivX, Inc. and Magix, have announced support for the new codec. DivX, Inc. released their first player with MP3 Surround support on September 6, 2006.
In January 2006, Thomson and Fraunhofer IIS also released two new companion technologies: Ensonido, which allows playback of MP3 Surround 5.1-channel sound through stereo headphones, and MP3 SX, which upgrades standard stereo MP3 files to MP3 Surround files.
With its 5.5 release (October 10, 2007), Nullsoft Winamp included the MP3 Surround format as part of its integrated MPEG audio decoder.
As of 2 July 2008, with system software v2.40, PlayStation 3 supports MP3 Surround playback.
References
External links
Thomson Releases MP3 Surround
Audio codecs
Digital audio
MP3
Surround sound
|
https://en.wikipedia.org/wiki/Projective%20hierarchy
|
In the mathematical field of descriptive set theory, a subset $A$ of a Polish space $X$ is projective if it is $\boldsymbol{\Sigma}^1_n$ for some positive integer $n$. Here $A$ is
$\boldsymbol{\Sigma}^1_1$ if $A$ is analytic
$\boldsymbol{\Pi}^1_n$ if the complement of $A$, $X \setminus A$, is $\boldsymbol{\Sigma}^1_n$
$\boldsymbol{\Sigma}^1_{n+1}$ if there is a Polish space $Y$ and a $\boldsymbol{\Pi}^1_n$ subset $C \subseteq X \times Y$ such that $A$ is the projection of $C$ onto $X$; that is, $A = \{x \in X \mid \exists y \in Y\, (x,y) \in C\}$
The choice of the Polish space in the third clause above is not very important; it could be replaced in the definition by a fixed uncountable Polish space, say Baire space or Cantor space or the real line.
Relationship to the analytical hierarchy
There is a close relationship between the relativized analytical hierarchy on subsets of Baire space (denoted by lightface letters $\Sigma$ and $\Pi$) and the projective hierarchy on subsets of Baire space (denoted by boldface letters $\boldsymbol{\Sigma}$ and $\boldsymbol{\Pi}$). Not every $\boldsymbol{\Sigma}^1_n$ subset of Baire space is $\Sigma^1_n$. It is true, however, that if a subset X of Baire space is $\boldsymbol{\Sigma}^1_n$ then there is a set of natural numbers A such that X is $\Sigma^{1,A}_n$. A similar statement holds for $\boldsymbol{\Pi}^1_n$ sets. Thus the sets classified by the projective hierarchy are exactly the sets classified by the relativized version of the analytical hierarchy. This relationship is important in effective descriptive set theory.
A similar relationship between the projective hierarchy and the relativized analytical hierarchy holds for subsets of Cantor space and, more generally, subsets of any effective Polish space.
Table
References
Descriptive set theory
Mathematical logic hierarchies
|
https://en.wikipedia.org/wiki/Pointed%20space
|
In mathematics, a pointed space or based space is a topological space with a distinguished point, the basepoint. The distinguished point is just one particular point, picked out from the space and given a name, such as $x_0$, that remains unchanged during subsequent discussion, and is kept track of during all operations.
Maps of pointed spaces (based maps) are continuous maps preserving basepoints, i.e., a map $f$ between a pointed space $X$ with basepoint $x_0$ and a pointed space $Y$ with basepoint $y_0$ is a based map if it is continuous with respect to the topologies of $X$ and $Y$ and if $f(x_0) = y_0$. This is usually denoted
$$f : (X, x_0) \to (Y, y_0).$$
Pointed spaces are important in algebraic topology, particularly in homotopy theory, where many constructions, such as the fundamental group, depend on a choice of basepoint.
The pointed set concept is less important; it is anyway the case of a pointed discrete space.
Pointed spaces are often taken as a special case of the relative topology, where the subset is a single point. Thus, much of homotopy theory is usually developed on pointed spaces, and then moved to relative topologies in algebraic topology.
Category of pointed spaces
The class of all pointed spaces forms a category $\mathbf{Top}_\bullet$ with basepoint-preserving continuous maps as morphisms. Another way to think about this category is as the comma category $(\{*\} \downarrow \mathbf{Top})$, where $\{*\}$ is any one-point space and $\mathbf{Top}$ is the category of topological spaces. (This is also called a coslice category, denoted $\{*\}/\mathbf{Top}$.) Objects in this category are continuous maps $\{*\} \to X$. Such maps can be thought of as picking out a basepoint in $X$. Morphisms in $(\{*\} \downarrow \mathbf{Top})$ are morphisms in $\mathbf{Top}$ for which the following diagram commutes:
It is easy to see that commutativity of the diagram is equivalent to the condition that $f$ preserves basepoints.
As a pointed space, $\{*\}$ is a zero object in $\mathbf{Top}_\bullet$, while it is only a terminal object in $\mathbf{Top}$.
There is a forgetful functor $\mathbf{Top}_\bullet \to \mathbf{Top}$ which "forgets" which point is the basepoint. This functor has a left adjoint which assigns to each topologic
|
https://en.wikipedia.org/wiki/Audio%20Engineering%20Society
|
The Audio Engineering Society (AES) is a professional body for engineers, scientists, and other individuals with an interest or involvement in the professional audio industry. The membership largely comprises engineers developing devices or products for audio, and persons working in audio content production. It also includes acousticians, audiologists, academics, and those in other disciplines related to audio. The AES is the only worldwide professional society devoted exclusively to audio technology.
Established in 1948, the Society develops, reviews and publishes engineering standards for the audio and related media industries, and produces the AES Conventions, which are held twice a year alternating between Europe and the US. The AES and individual regional or national sections also hold AES Conferences on different topics during the year.
History
The idea of a society dedicated solely to audio engineering had been discussed for some time before the first meeting, but was first proposed in print in a letter by Frank E. Sherry, of Victoria, Texas, in the December 1947 issue of the magazine Audio Engineering. A New York engineer and audio consultant, C.J. LeBel, then published a letter agreeing, and saying that a group of audio professionals had already been discussing such a thing, and that they were interested in holding an organizational meeting. He asked interested persons to contact him for details. The response was enthusiastic and encouraging. Fellow engineer Norman C. Pickering published the date for an organizational meeting, and announced the appointment of LeBel as acting chairman, and himself as acting secretary.
The organizational meeting was held at the RCA Victor Studios in New York City on February 17, 1948. Acting chairman Mr. LeBel spoke first, emphasizing the professional, non-commercial, independent nature of the proposed organization. Acting Secretary Norman Pickering then discussed the need for a professional organization that could foster an e
|
https://en.wikipedia.org/wiki/National%20University%20of%20Computer%20and%20Emerging%20Sciences
|
The National University of Computer and Emerging Sciences (NUCES) (), also known as Foundation for Advancement of Science and Technology (FAST), is a private research university with multiple campuses in different cities of Pakistan.
Overview
The university is the first multi-campus university in Pakistan, having five modern campuses based in different cities. These campuses are located in Chiniot-Faisalabad, Islamabad, Karachi, Lahore and Peshawar, providing a standard educational environment and recreational facilities to about 11,000 students; the university has some 500 faculty members, and roughly a quarter of the students are female. Founded as a federally chartered university, it was inaugurated by President Pervez Musharraf in July 2000. It is consistently ranked among the leading institutions of higher education in the country and was ranked top in computer sciences and information technology by the Higher Education Commission of Pakistan in 2020. Its engineering programs are accredited by the Pakistan Engineering Council. FAST is a not-for-profit educational institution charging subsidized fees from its students. Besides this, FAST offers different financial assistance programs for deserving students in the form of loans.
FAST is considered to be very strict in Pakistan.
History
The Foundation for Advancement of Science and Technology was established by Agha Hasan Abedi, the financier who founded the Bank of Credit and Commerce International (BCCI); he provided a large amount of capital for the foundation to promote research in computer sciences and emerging technologies during the 1980s. This foundation later established the National University of Computer and Emerging Sciences, which was inaugurated by former President and Chief of Army Staff General Pervez Musharraf in 2000. It was the first private-sector university with multiple campuses set up under the Federal Charter granted by Ordinance No. XXIII of 2000, dated July 1, 2000.
Established in 1980, the sponsoring bod
|
https://en.wikipedia.org/wiki/Automatic%20picture%20transmission
|
The Automatic Picture Transmission (APT) system is an analog image transmission system developed for use on weather satellites. It was introduced in the 1960s and over four decades has provided image data to relatively low-cost user stations at locations in most countries of the world. A user station anywhere in the world can receive local data at least twice a day from each satellite as it passes nearly overhead.
Transmission
Structure
The broadcast transmission is composed of two image channels, telemetry information, and synchronization data, with the image channels typically referred to as Video A and Video B. All this data is transmitted as horizontal scan lines. A complete line is 2080 pixels long, with each image using 909 pixels and the remainder going to the telemetry and synchronization. Lines are transmitted at two per second, which equates to 4160 words per second, or 4160 baud.
Images
On NOAA POES system satellites, the two images are 4 km / pixel smoothed 8-bit images derived from two channels of the advanced very-high-resolution radiometer (AVHRR) sensor. The images are corrected for nearly constant geometric resolution prior to being broadcast; as such, the images are free of distortion caused by the curvature of the Earth.
Of the two images, one is typically long-wave infrared (10.8 micrometers) with the second switching between near-visible (0.86 micrometers) and mid-wave infrared (3.75 micrometers) depending on whether the ground is illuminated by sunlight. However, NOAA can configure the satellite to transmit any two of the AVHRR's image channels.
Synchronization and telemetry
Included in the transmission are a series of synchronization pulses, minute markers, and telemetry information.
The synchronization information, transmitted at the start of each video channel, allows the receiving software to align its sampling with the baud rate of the signal, which can vary slightly over time. The minute markers are four lines of alternating bla
|
https://en.wikipedia.org/wiki/Lin%20Wang
|
Lin Wang (; 1917 – 26 February 2003) was an Asian elephant that served with the Chinese Expeditionary Force during the Second Sino-Japanese War (1937–1945) and later relocated to Taiwan with the Kuomintang forces. Lin Wang lived out most of his life in the Taipei Zoo and was the most famous animal in Taiwan. Many adults and children alike affectionately called the bull elephant "Grandpa Lin Wang."
Sino-Japanese War
After Japan attacked Pearl Harbor in 1941, the Sino-Japanese War, which began in 1937, became a part of the greater conflict of World War II. When the Japanese proceeded to attack British colonies in Burma, Generalissimo Chiang Kai-shek formed the Chinese Expeditionary Force under the leadership of General Sun Li-jen to fight in the Burma Campaign. After a battle at a Japanese camp in 1943, Lin Wang was captured by the Chinese, along with twelve other elephants. These elephants had been used by the Japanese army to transport supplies and pull artillery pieces. The Allied forces also used these elephants for similar tasks. At this time, Lin Wang was named "Ah Mei" (阿美), meaning "The Beautiful".
In 1945, the Expeditionary Force was recalled to China. The elephants and their handlers marched along the Burma Road, and six elephants died during the difficult trek. By the time they arrived in Guangdong, the war had ended. However, the elephants' service with the army was not over. They participated in building some monuments for the martyrs of the war, and in the spring of 1946, they also performed for a circus to raise money for famine relief in Hunan province.
Later, four elephants in the group were sent to the zoos of Beijing, Shanghai, Nanjing, and Changsha. The remaining three elephants, including Lin Wang, were relocated to a park in Guangzhou.
In Taiwan
In 1947, Sun Li-jen was sent to Taiwan to train new troops. He took the three elephants with him, though one sick elephant died during the trip across the strait. The two remaining elephants were us
|
https://en.wikipedia.org/wiki/Polarizability
|
Polarizability usually refers to the tendency of matter, when subjected to an electric field, to acquire an electric dipole moment in proportion to that applied field. It is a property of all matter, considering that matter is made up of elementary particles which have an electric charge, namely protons and electrons. When subject to an electric field, the negatively charged electrons and positively charged atomic nuclei are subject to opposite forces and undergo charge separation. Polarizability is responsible for a material's dielectric constant and, at high (optical) frequencies, its refractive index.
The polarizability of an atom or molecule is defined as the ratio of its induced dipole moment to the local electric field; in a crystalline solid, one considers the dipole moment per unit cell. Note that the local electric field seen by a molecule is generally different from the macroscopic electric field that would be measured externally. This discrepancy is taken into account by the Clausius–Mossotti relation (below) which connects the bulk behaviour (polarization density due to an external electric field according to the electric susceptibility $\chi_e$) with the molecular polarizability due to the local field.
Magnetic polarizability likewise refers to the tendency for a magnetic dipole moment to appear in proportion to an external magnetic field. Electric and magnetic polarizabilities determine the dynamical response of a bound system (such as a molecule or crystal) to external fields, and provide insight into a molecule's internal structure. "Polarizability" should not be confused with the intrinsic magnetic or electric dipole moment of an atom, molecule, or bulk substance; these do not depend on the presence of an external field.
Electric polarizability
Definition
Electric polarizability is the relative tendency of a charge distribution, like the electron cloud of an atom or molecule, to be distorted from its normal shape by an external electric field.
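In symbols, for the linear regime the definition above reads (using conventional notation rather than symbols from this excerpt):
$$\mathbf{p} = \alpha\, \mathbf{E}_{\text{local}},$$
where $\mathbf{p}$ is the induced dipole moment, $\mathbf{E}_{\text{local}}$ is the local electric field at the atom or molecule, and $\alpha$ is the polarizability (in general a tensor, reducing to a scalar for an isotropic response).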
The p
|
https://en.wikipedia.org/wiki/Weyl%20transformation
|
See also Wigner–Weyl transform, for another definition of the Weyl transform.
In theoretical physics, the Weyl transformation, named after Hermann Weyl, is a local rescaling of the metric tensor,
$$g_{\mu\nu} \;\to\; e^{-2\omega(x)}\, g_{\mu\nu}$$
(for an arbitrary smooth function $\omega(x)$ of the coordinates),
which produces another metric in the same conformal class. A theory or an expression invariant under this transformation is called conformally invariant, or is said to possess Weyl invariance or Weyl symmetry. The Weyl symmetry is an important symmetry in conformal field theory. It is, for example, a symmetry of the Polyakov action. When quantum mechanical effects break the conformal invariance of a theory, it is said to exhibit a conformal anomaly or Weyl anomaly.
The ordinary Levi-Civita connection and associated spin connections are not invariant under Weyl transformations. Weyl connections are a class of affine connections that is invariant, although no individual Weyl connection is invariant under Weyl transformations.
Conformal weight
A quantity $\varphi$ has conformal weight $k$ if, under the Weyl transformation, it transforms via
$$\varphi \;\to\; e^{k\omega}\varphi.$$
Thus conformally weighted quantities belong to certain density bundles; see also conformal dimension. Let be the connection one-form associated to the Levi-Civita connection of . Introduce a connection that depends also on an initial one-form via
Then is covariant and has conformal weight .
Formulas
For the transformation
We can derive the following formulas
Note that the Weyl tensor is invariant under a Weyl rescaling.
References
Conformal geometry
Differential geometry
Scaling symmetries
Symmetry
Theoretical physics
|
https://en.wikipedia.org/wiki/Z22%20%28computer%29
|
The Z22 was the seventh computer model Konrad Zuse developed (the first six being the Z1, Z2, Z3, Z4, Z5 and Z11, respectively). One of the early commercial computers, the Z22's design was finished about 1955. The major version jump from Z11 to Z22 was due to the use of vacuum tubes, as opposed to the electromechanical systems used in earlier models. The first machines built were shipped to Berlin and Aachen.
By the end of 1958 the ZMMD-group had built a working ALGOL 58 compiler for the Z22 computer. ZMMD was an abbreviation for Zürich (where Rutishauser worked), München (workplace of Bauer and Samelson), Mainz (location of the Z22 computer), Darmstadt (workplace of Bottenbruch).
In 1961, the Z22 was followed by a logically very similar transistorized version, the Z23. Already in 1954, Zuse had come to an agreement with Heinz Zemanek that his Zuse KG would finance the work of Rudolf Bodo, who helped Zemanek build the early European transistorized computer Mailüfterl, and that after that project Bodo should work for the Zuse KG—there he helped build the transistorized Z23. Furthermore, all circuit diagrams of the Z22 were supplied to Bodo and Zemanek.
The University of Applied Sciences, Karlsruhe still has an operational Z22 which is on permanent loan at the ZKM in Karlsruhe.
Altogether 55 Z22 computers were produced.
In the 1970s, clones of the Z22 using TTL were built by the company Thiemicke Computer.
Technical data
The typical setup of a Z22 was:
14 words of 38 bits each as fast-access RAM, implemented as core memory
8192 words (38 bits each) of magnetic drum memory as RAM
One teletype as console and main input/output device
Additional punch tape devices as fast input/output devices
600 tubes working as flip-flops
An electric cooling unit requiring a tap-water connection (effectively water cooling)
380 V 16 A three-phase power supply
The Z22 operated at a clock frequency of 3 kHz, which was synchronous with the speed of the drum storage. The input of data and
|
https://en.wikipedia.org/wiki/Geophagia
|
Geophagia (), also known as geophagy (), is the intentional practice of eating earth or soil-like substances such as clay, chalk, or termite mounds. It is a behavioural adaptation that occurs in many non-human animals and has been documented in more than 100 primate species. Geophagy in non-human primates is primarily used for protection from parasites, to provide mineral supplements and to help metabolize toxic compounds from leaves. Geophagy also occurs in humans and is most commonly reported among children and pregnant women.
Human geophagia is a form of pica – the craving and purposive consumption of non-food items – and is classified as an eating disorder in the Diagnostic and Statistical Manual of Mental Disorders (DSM) if not socially or culturally appropriate. Sometimes geophagy is a consequence of carrying a hookworm infection. Although its etiology remains unknown, geophagy has many potential adaptive health benefits as well as negative consequences.
Animals
Geophagia is widespread in the animal kingdom. Galen, the Greek philosopher and physician, was the first to record the use of clay by sick or injured animals, in the second century AD. This type of geophagia has been documented in "many species of mammals, birds, reptiles, butterflies and isopods, especially among herbivores".
Birds
Many species of South American parrots have been observed at clay licks, and sulphur-crested cockatoos have been observed ingesting clays in Papua New Guinea. Analysis of soils consumed by wild birds show that they often prefer soils with high clay content, usually with the smectite clay families being well represented.
The preference for certain types of clay or soil can lead to unusual feeding behaviour. For example, Peruvian Amazon rainforest parrots congregate not just at one particular bend of the Manu River but at one specific layer of soil which runs hundreds of metres horizontally along that bend. The parrots avoid eating the substrate in layers one metre abo
|
https://en.wikipedia.org/wiki/M3U
|
M3U (MP3 URL or Moving Picture Experts Group Audio Layer 3 Uniform Resource Locator in full) is a computer file format for a multimedia playlist. One common use of the M3U file format is creating a single-entry playlist file pointing to a stream on the Internet. The created file provides easy access to that stream and is often used in downloads from a website, for emailing, and for listening to Internet radio.
Although originally designed for audio files, such as MP3, it is commonly used to point media players to audio and video sources, including online sources. M3U was originally developed by Fraunhofer for use with their Winplay3 software, but numerous media players and software applications now support the format.
Careless handling of M3U playlists has been the cause of vulnerabilities in many music players such as VLC media player, iTunes, Winamp, and many others.
File format
There is no formal specification for the M3U format; it is a de facto standard.
An M3U file is a plain text file that specifies the locations of one or more media files. The file is saved with the "m3u" filename extension if the text is encoded in the local system's default non-Unicode encoding (e.g., a Windows codepage), or with the "m3u8" extension if the text is UTF-8 encoded.
Each entry carries one specification. The specification can be any one of the following:
an absolute local pathname; e.g., C:\My Music\Heavysets.mp3
a local pathname relative to the M3U file location; e.g. Heavysets.mp3
a URL
Each entry ends with a line break which separates it from the following one. Furthermore, some devices only accept line breaks represented as CR LF, but do not recognize a single LF.
Extended M3U
The M3U file can also include comments, prefaced by the "#" character. In extended M3U, "#" also introduces extended M3U directives which are terminated by a colon ":" if they support parameters.
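For illustration, a small hypothetical extended M3U playlist combining the directive and entry types described above (titles and durations are invented; the first path reuses the example given earlier):

#EXTM3U
#EXTINF:123,Sample Artist - Sample Title
C:\My Music\Heavysets.mp3
#EXTINF:321,Another Artist - Another Title
Favourites\Another.mp3
http://example.com/stream.mp3

Here #EXTM3U marks the file as extended M3U, and each #EXTINF directive gives a length in seconds and a display title for the entry that follows it.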
Apple used the extended M3U format as a base for their HTTP Live Streaming (HLS) which was d
|
https://en.wikipedia.org/wiki/Permutation%20automaton
|
In automata theory, a permutation automaton, or pure-group automaton, is a deterministic finite automaton such that each input symbol permutes the set of states.
Formally, a deterministic finite automaton A may be defined by the tuple (Q, Σ, δ, q0, F),
where Q is the set of states of the automaton, Σ is the set of input symbols, δ is the transition function that takes a state q and an input symbol x to a new state δ(q,x), q0 is the initial state of the automaton, and F is the set of accepting states (also: final states) of the automaton. A is a permutation automaton if and only if, for every two distinct states qi and qj in Q and every input symbol x in Σ, δ(qi,x) ≠ δ(qj,x).
A formal language is p-regular (also: a pure-group language) if it is accepted by a permutation automaton. For example, the set of strings of even length forms a p-regular language: it may be accepted by a permutation automaton with two states in which every transition replaces one state by the other.
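A minimal sketch of that two-state example in Python (names are illustrative): every input symbol acts on the state set {0, 1} by the swap permutation, and the automaton accepts exactly the strings of even length.

def accepts_even_length(word, alphabet=("a", "b")):
    # transition table: each symbol permutes the states by the swap (0 1)
    delta = {symbol: {0: 1, 1: 0} for symbol in alphabet}
    state = 0                          # initial state q0
    for symbol in word:
        state = delta[symbol][state]
    return state == 0                  # accepting states F = {q0}

assert accepts_even_length("abab")      # length 4: accepted
assert not accepts_even_length("aba")   # length 3: rejected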
Applications
The pure-group languages were the first interesting family of regular languages for which the star height problem was proved to be computable.
Another mathematical problem on regular languages is the separating words problem, which asks for the size of a smallest deterministic finite automaton that distinguishes between two given words of length at most n – by accepting one word and rejecting the other. The known upper bound in the general case is . The problem was later studied for the restriction to permutation automata. In this case, the known upper bound changes to .
References
Permutations
Finite automata
|
https://en.wikipedia.org/wiki/Postural%20orthostatic%20tachycardia%20syndrome
|
Postural orthostatic tachycardia syndrome (POTS) is a condition characterized by an abnormally large increase in heart rate upon standing. POTS is a disorder of the autonomic nervous system that can lead the individual to experience a variety of symptoms. Symptoms may include lightheadedness, brain fog, blurred vision, weakness, fatigue, headaches, heart palpitations, exercise intolerance, nausea, diminished concentration, tremulousness (shaking), syncope (fainting), coldness or pain in the extremities, chest pain and shortness of breath. Other conditions associated with POTS include Ehlers–Danlos syndrome, mast cell activation syndrome, irritable bowel syndrome, insomnia, chronic headaches, chronic fatigue syndrome, fibromyalgia, and amplified musculoskeletal pain syndrome. POTS symptoms may be treated with lifestyle changes such as increasing fluid and salt intake, wearing compression stockings, gentler and slow postural changes, avoiding prolonged bedrest, medication and physical therapy.
The causes of POTS are varied. POTS may develop after a viral infection, surgery, trauma or pregnancy. It has been shown to emerge in previously healthy patients after COVID-19, or in rare cases after COVID-19 vaccination. Risk factors include a family history of the condition. A POTS diagnosis in adults is characterized by an increased heart rate of 30 beats per minute within ten minutes of standing up, while accompanied by symptoms. This increased heart rate should occur in the absence of orthostatic hypotension (>20 mm Hg drop in systolic blood pressure) to be considered POTS. A spinal fluid leak (called spontaneous intracranial hypotension) may have the same signs and symptoms as POTS and should be excluded. Prolonged bedrest may lead to multiple symptoms, including blood volume loss and postural tachycardia. Other conditions which can cause similar symptoms, such as dehydration, orthostatic hypotension, heart problems, adrenal insufficiency, epilepsy, and Parkinson's dise
|
https://en.wikipedia.org/wiki/Matrix%20unit
|
In linear algebra, a matrix unit is a matrix with only one nonzero entry with value 1. The matrix unit with a 1 in the ith row and jth column is denoted as $E_{ij}$. For example, the 3 by 3 matrix unit with i = 1 and j = 2 is
$$E_{12} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$
A vector unit is a standard unit vector.
A single-entry matrix generalizes the matrix unit for matrices with only one nonzero entry of any value, not necessarily of value 1.
Properties
The set of m by n matrix units is a basis of the space of m by n matrices.
The product of two matrix units of the same square shape satisfies the relation
$$E_{ij}E_{kl} = \delta_{jk}\,E_{il},$$
where $\delta_{jk}$ is the Kronecker delta.
The group of scalar n-by-n matrices over a ring R is the centralizer of the subset of n-by-n matrix units in the set of n-by-n matrices over R.
The matrix norm (induced by the same two vector norms) of a matrix unit is equal to 1.
When multiplied by another matrix, it isolates a specific row or column in arbitrary position. For example, for any 3-by-3 matrix A, $E_{23}A$ has the third row of A as its second row and zeros elsewhere, while $AE_{23}$ has the second column of A as its third column and zeros elsewhere.
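A quick numerical illustration of this property (a sketch using NumPy; the matrix A is an arbitrary example):

import numpy as np

E23 = np.zeros((3, 3))
E23[1, 2] = 1                          # 1-based (i, j) = (2, 3) as in the text

A = np.arange(1, 10).reshape(3, 3)     # [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

print(E23 @ A)                         # third row of A appears as the second row, zeros elsewhere
print(A @ E23)                         # second column of A appears as the third column, zeros elsewhere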
References
Sparse matrices
1 (number)
|
https://en.wikipedia.org/wiki/Smoothwall
|
Smoothwall (formerly styled as SmoothWall) is a Linux distribution designed to be used as an open source firewall. Smoothwall is configured via a web-based GUI and requires little or no knowledge of Linux to install or use.
Smoothwall is also a private software company based in the UK who specializes in the development of web content filtering, safeguarding and internet security solutions, which also maintains the SmoothWall open source project.
History
Smoothwall began life as Smoothwall GPL, a freely redistributable open source version released in August 2000, with a proprietary version sold by Smoothwall Ltd from November 2001. Smoothwall still maintains its open source roots with Smoothwall Express still available today (latest release, V3.1, in 2014); however, the main Smoothwall solution is now a paid product in use by millions of users worldwide in both the public and private sector. Smoothwall's filtering and safeguarding products are typically sold to educational organisations and businesses.
In 2017, Smoothwall announced a management buyout backed by private equity fund, Tenzing. The new management team was led by Georg Ell, previously Director of Western Europe at Tesla, who was appointed as Group CEO in May 2018. Georg was joined by existing Board members Gavin Logan, Douglas Hanley and Manprit Randhawan in the management of the Smoothwall business, along with Lisa Stone, who took the position as chairperson.
On 5 August 2021, Tenzing announced they had agreed to sell their investment in Smoothwall to Australian security firm Family Zone Cyber Safety for £75.5 million (A$142m) in cash consideration. The deal was completed on 17 August 2021, with a £10.5m deferred balance of the sale price paid on 1 September 2021.
Smoothwall Express
Smoothwall Express, originally Smoothwall GPL, is the freely distributable version of Smoothwall, developed by the Smoothwall Open Source Project team and members of Smoothwall Ltd.
Released in August 2000, Smoothwall GPL was de
|
https://en.wikipedia.org/wiki/REFSMMAT
|
REFSMMAT is a term used by guidance, navigation, and control system flight controllers during the Apollo program, which carried over into the Space Shuttle program. REFSMMAT stands for "Reference to Stable Member Matrix". It is a numerical definition of a fixed orientation in space and is usually (but not always) defined with respect to the stars. It was used by the Apollo Primary Guidance, Navigation and Control System (PGNCS) as a reference to which the gimbal-mounted platform at its core should be oriented. Every operation within the spacecraft that required knowledge of direction was carried out with respect to the orientation of the guidance platform, itself aligned according to a particular REFSMMAT.
During an Apollo flight, the REFSMMAT being used, and therefore the orientation of the guidance platform, would change as operational needs required it, but never during a guidance process—that is, one REFSMMAT might be in use from launch through Trans-Lunar Injection, another from TLI to Midpoint, but would not change during the middle of a burn or set of maneuvers.
One consideration in choosing each respective REFSMMAT was to avoid taking the spacecraft near the gimbal lock zone of its Inertial Measurement Unit during any expected spacecraft maneuvers, since the exact orientation of the "forbidden" range of spacecraft attitudes would depend on the current REFSMMAT.
Additionally, it was considered good practice to have the spacecraft displays show some meaningful attitude value that would be easy to monitor during an important engine burn. Flight controllers at mission control in Houston would calculate what attitude the spacecraft had to be at for that burn and would devise a REFSMMAT that matched it in some way. Then, when it came time for the burn, if the spacecraft was in its correct attitude, the crew would see their 8-ball display a simple attitude that would be easy to interpret, allowing errors to be easily tracked and corrected.
In the hallowed halls
|
https://en.wikipedia.org/wiki/Cardiac%20marker
|
Cardiac markers are biomarkers measured to evaluate heart function. They can be useful in the early prediction or diagnosis of disease. Although they are often discussed in the context of myocardial infarction, other conditions can lead to an elevation in cardiac marker level.
Most of the early markers identified were enzymes, and as a result, the term "cardiac enzymes" is sometimes used. However, not all of the markers currently used are enzymes. For example, in formal usage, troponin would not be listed as a cardiac enzyme.
Applications of measurement
Measuring cardiac biomarkers can be a step toward making a diagnosis for a condition. Whereas cardiac imaging often confirms a diagnosis, simpler and less expensive cardiac biomarker measurements can advise a physician whether more complicated or invasive procedures are warranted. In many cases medical societies advise doctors to make biomarker measurements an initial testing strategy especially for patients at low risk of cardiac death.
Many acute cardiac marker IVD products are targeted at nontraditional markets, e.g., the hospital ER instead of traditional hospital or clinical laboratory environments. Competition in the development of cardiac marker diagnostic products and their expansion into new markets is intense.
Recently, the intentional destruction of myocardium by alcohol septal ablation has led to the identification of additional potential markers.
Types
Types of cardiac markers include the following:
Limitations
Depending on the marker, it can take between 2 and 24 hours for the level to increase in the blood. Additionally, determining the levels of cardiac markers in the laboratory - like many other lab measurements - takes substantial time. Cardiac markers are therefore not useful in diagnosing a myocardial infarction in the acute phase. The clinical presentation and results from an ECG are more appropriate in the acute situation.
However, in 2010, research at the Baylor College of Medicine reve
|
https://en.wikipedia.org/wiki/Dirichlet%27s%20principle
|
In mathematics, and particularly in potential theory, Dirichlet's principle is the assumption that the minimizer of a certain energy functional is a solution to Poisson's equation.
Formal statement
Dirichlet's principle states that, if the function $u(x)$ is the solution to Poisson's equation
$\Delta u + f = 0$
on a domain $\Omega$ of $\mathbb{R}^n$ with boundary condition
$u = g$ on the boundary $\partial\Omega$,
then u can be obtained as the minimizer of the Dirichlet energy
$E[v(x)] = \int_\Omega \left( \tfrac{1}{2} |\nabla v|^2 - v f \right) \mathrm{d}x$
amongst all twice differentiable functions $v$ such that $v = g$ on $\partial\Omega$ (provided that there exists at least one function making the Dirichlet's integral finite). This concept is named after the German mathematician Peter Gustav Lejeune Dirichlet.
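A short, standard calculation (a sketch using the sign conventions written above) shows why a minimizer of the Dirichlet energy must solve Poisson's equation: perturb the minimizer $u$ by a smooth function $\varphi$ vanishing on $\partial\Omega$ and require the first variation to vanish,

\begin{aligned}
0 &= \left.\frac{\mathrm{d}}{\mathrm{d}\varepsilon} E[u + \varepsilon\varphi]\right|_{\varepsilon=0}
   = \int_\Omega \left(\nabla u \cdot \nabla \varphi - f\,\varphi\right)\mathrm{d}x
   = \int_\Omega \left(-\Delta u - f\right)\varphi\,\mathrm{d}x ,
\end{aligned}

where the last step is integration by parts using $\varphi = 0$ on $\partial\Omega$. Since this holds for every such $\varphi$, it forces $\Delta u + f = 0$ in $\Omega$; the boundary condition $u = g$ is built into the class of admissible functions.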
History
The name "Dirichlet's principle" is due to Riemann, who applied it in the study of complex analytic functions.
Riemann (and others such as Gauss and Dirichlet) knew that Dirichlet's integral is bounded below, which establishes the existence of an infimum; however, he took for granted the existence of a function that attains the minimum. Weierstrass published the first criticism of this assumption in 1870, giving an example of a functional that has a greatest lower bound which is not a minimum value. Weierstrass's example was the functional
$J(\varphi) = \int_{-1}^{1} \left( x \frac{d\varphi}{dx} \right)^2 \, dx$
where $\varphi$ is continuous on $[-1, 1]$, continuously differentiable on $(-1, 1)$, and subject to boundary conditions $\varphi(-1) = a$, $\varphi(1) = b$, where $a$ and $b$ are constants and $a \ne b$. Weierstrass showed that $\inf J = 0$, but no admissible function $\varphi$ can make $J(\varphi)$ equal 0. This example did not disprove Dirichlet's principle per se, since the example integral is different from Dirichlet's integral. But it did undermine the reasoning that Riemann had used, and spurred interest in proving Dirichlet's principle as well as broader advancements in the calculus of variations and ultimately functional analysis.
In 1900, Hilbert justified Riemann's use of Dirichlet's principle by developing the direct method in the calculus of variations.
See also
Dirichlet problem
Hilbert's twentieth problem
Plateau's problem
Green's first identity
Notes
|
https://en.wikipedia.org/wiki/List%20of%20canal%20engineers
|
A canal engineer is a civil engineer responsible for planning (architectural and otherwise) related to the construction of a canal.
Canal engineers include:
China
Yu the Great (c. 2200 BCE – c. 2100 BCE), founder of China's first dynasty, who dedicated his life to building flood control structures, including canals, across the flood-ruined competing kingdoms, establishing the new hegemony in the process.
Ximen Bao
Li Bing (c. 3rd century BC), Dujiangyan
France
Louis Maurice Adolphe Linant de Bellefonds (1799-1883), Suez Canal
Ferdinand de Lesseps (1805-1894), Suez Canal and the failed first attempt at a canal in Panama
Hungary
István Türr (1825-1908), Corinth Canal
United Kingdom
James Brindley
James Dadford
John Dadford
Thomas Dadford
Thomas Dadford, Jr.
Hugh Henshall
John Hore
Josias Jessop
William Jessop
Benjamin Outram
John Rennie the Elder
Thomas Sheasby
John Smeaton
William Smith
Thomas Telford
United States
James Geddes, Ohio and Erie Canal
John B. Jervis, Delaware and Hudson Canal
Loammi Baldwin, Middlesex Canal to Boston
Orlando Metcalfe Poe, Poe Lock at Soo Locks
William Weston
Benjamin Wright, Erie Canal and the Chesapeake and Ohio Canal
See also
List of civil engineers
Lists of canals
Lists of engineers
|
https://en.wikipedia.org/wiki/Symmetry%20breaking
|
In physics, symmetry breaking is a phenomenon where a disordered but symmetric state collapses into an ordered, but less symmetric state. This collapse is often one of many possible bifurcations that a particle can take as it approaches a lower energy state. Due to the many possibilities, an observer may assume the result of the collapse to be arbitrary. This phenomenon is fundamental to quantum field theory (QFT) and, further, to contemporary understandings of physics. Specifically, it plays a central role in the Glashow–Weinberg–Salam model, which forms part of the Standard Model describing the electroweak sector. In an infinite system (Minkowski spacetime) symmetry breaking occurs; however, in a finite system (that is, any real super-condensed system) the system is less predictable, and in many cases quantum tunneling occurs instead. Symmetry breaking and tunneling relate through the collapse of a particle into a non-symmetric state as it seeks a lower energy.
Symmetry breaking can be distinguished into two types, explicit and spontaneous. They are characterized by whether the equations of motion fail to be invariant, or the ground state fails to be invariant.
Non-technical description
This section describes spontaneous symmetry breaking. In layman's terms, this is the idea that for a physical system, the lowest energy configuration (the vacuum state) is not the most symmetric configuration of the system. Roughly speaking there are three types of symmetry that can be broken: discrete, continuous and gauge, ordered in increasing technicality.
An example of a system with discrete symmetry is given by the figure with the red graph: consider a particle moving on this graph, subject to gravity. A similar graph could be given by the function $h(x) = x^4 - x^2$. This system is symmetric under reflection in the y-axis. There are three possible stationary states for the particle: the top of the hill at $x = 0$, or the bottom, at $x = \pm 1/\sqrt{2}$. When the particle is at the top, the configuration respects the reflection sym
|
https://en.wikipedia.org/wiki/Scotochromogenic
|
Scotochromogenic bacteria develop pigment in the dark. Runyon Group II nontuberculous mycobacteria such as Mycobacterium gordonae are examples but the term could apply to many other organisms.
References
Bacteria
|
https://en.wikipedia.org/wiki/TFT%20LCD
|
A thin-film-transistor liquid-crystal display (TFT LCD) is a variant of a liquid-crystal display that uses thin-film-transistor technology to improve image qualities such as addressability and contrast. A TFT LCD is an active matrix LCD, in contrast to passive matrix LCDs or simple, direct-driven (i.e. with segments directly connected to electronics outside the LCD) LCDs with a few segments.
TFT LCDs are used in appliances including television sets, computer monitors, mobile phones, handheld devices, video game systems, personal digital assistants, navigation systems, projectors, and dashboards in some automobiles and in medium to high end motorcycles.
History
In February 1957, John Wallmark of RCA filed a patent for a thin film MOSFET. Paul K. Weimer, also of RCA, implemented Wallmark's ideas and developed the thin-film transistor (TFT) in 1962, a type of MOSFET distinct from the standard bulk MOSFET. It was made with thin films of cadmium selenide and cadmium sulfide. The idea of a TFT-based liquid-crystal display (LCD) was conceived by Bernard Lechner of RCA Laboratories in 1968. In 1971, Lechner, F. J. Marlowe, E. O. Nester and J. Tults demonstrated a 2-by-18 matrix display driven by a hybrid circuit using the dynamic scattering mode of LCDs. In 1973, T. Peter Brody, J. A. Asars and G. D. Dixon at Westinghouse Research Laboratories developed a CdSe (cadmium selenide) TFT, which they used to demonstrate the first CdSe thin-film-transistor liquid-crystal display (TFT LCD). Brody and Fang-Chen Luo demonstrated the first flat active-matrix liquid-crystal display (AM LCD) using CdSe TFTs in 1974, and then Brody coined the term "active matrix" in 1975. All modern high-resolution and high-quality electronic visual display devices use TFT-based active matrix displays.
Construction
The liquid crystal displays used in calculators and other devices with similarly simple displays have direct-driven image elements, and therefore a voltage can be easily applied across j
|
https://en.wikipedia.org/wiki/Lickorish%E2%80%93Wallace%20theorem
|
In mathematics, the Lickorish–Wallace theorem in the theory of 3-manifolds states that any closed, orientable, connected 3-manifold may be obtained by performing Dehn surgery on a framed link in the 3-sphere with ±1 surgery coefficients. Furthermore, each component of the link can be assumed to be unknotted.
The theorem was proved in the early 1960s by W. B. R. Lickorish and Andrew H. Wallace, independently and by different methods. Lickorish's proof rested on the Lickorish twist theorem, which states that any orientation-preserving automorphism of a closed orientable surface is generated by Dehn twists along 3g − 1 specific simple closed curves in the surface, where g denotes the genus of the surface. Wallace's proof was more general and involved adding handles to the boundary of a higher-dimensional ball.
A corollary of the theorem is that every closed, orientable 3-manifold bounds a simply-connected compact 4-manifold.
By using his work on automorphisms of non-orientable surfaces, Lickorish also showed that every closed, non-orientable, connected 3-manifold is obtained by Dehn surgery on a link in the non-orientable 2-sphere bundle over the circle. Similar to the orientable case, the surgery can be done in a special way which allows the conclusion that every closed, non-orientable 3-manifold bounds a compact 4-manifold.
References
3-manifolds
Theorems in topology
Theorems in geometry
|
https://en.wikipedia.org/wiki/Parity%20%28physics%29
|
In physics, a parity transformation (also called parity inversion) is the flip in the sign of one spatial coordinate. In three dimensions, it can also refer to the simultaneous flip in the sign of all three spatial coordinates (a point reflection):
$\mathbf{P}: \begin{pmatrix} x \\ y \\ z \end{pmatrix} \mapsto \begin{pmatrix} -x \\ -y \\ -z \end{pmatrix}.$
It can also be thought of as a test for chirality of a physical phenomenon, in that a parity inversion transforms a phenomenon into its mirror image.
All fundamental interactions of elementary particles, with the exception of the weak interaction, are symmetric under parity. As established by the Wu experiment conducted at the US National Bureau of Standards by Chinese-American scientist Chien-Shiung Wu, the weak interaction is chiral and thus provides a means for probing chirality in physics. In her experiment, Wu took advantage of the controlling role of weak interactions in radioactive decay of atomic isotopes to establish the chirality of the weak force.
By contrast, in interactions that are symmetric under parity, such as electromagnetism in atomic and molecular physics, parity serves as a powerful controlling principle underlying quantum transitions.
A matrix representation of P (in any number of dimensions) has determinant equal to −1, and hence is distinct from a rotation, which has a determinant equal to 1. In a two-dimensional plane, a simultaneous flip of all coordinates in sign is not a parity transformation; it is the same as a 180° rotation.
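These determinant statements are easy to verify numerically; the short NumPy sketch below checks that the three-dimensional point reflection has determinant −1, while flipping both coordinates in the plane has determinant +1 and coincides with a 180° rotation.

import numpy as np

P3 = -np.eye(3)                    # point reflection in three dimensions
print(np.linalg.det(P3))           # -1.0: not a rotation

P2 = -np.eye(2)                    # flipping both coordinates in the plane
theta = np.pi
R180 = np.array([[np.cos(theta), -np.sin(theta)],
                 [np.sin(theta),  np.cos(theta)]])
print(np.linalg.det(P2))           # +1.0
print(np.allclose(P2, R180))       # True: it is just a 180-degree rotation

# A genuine 2D parity transformation flips only one coordinate:
P2_parity = np.diag([1.0, -1.0])
print(np.linalg.det(P2_parity))    # -1.0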
In quantum mechanics, wave functions that are unchanged by a parity transformation are described as even functions, while those that change sign under a parity transformation are odd functions.
Simple symmetry relations
Under rotations, classical geometrical objects can be classified into scalars, vectors, and tensors of higher rank. In classical physics, physical configurations need to transform under representations of every symmetry group.
Quantum theory predicts that states in a Hilbert space do not need to transform under representations of the group of rot
|
https://en.wikipedia.org/wiki/Adjunction%20space
|
In mathematics, an adjunction space (or attaching space) is a common construction in topology where one topological space is attached or "glued" onto another. Specifically, let X and Y be topological spaces, and let A be a subspace of Y. Let f : A → X be a continuous map (called the attaching map). One forms the adjunction space X ∪f Y (sometimes also written as X +f Y) by taking the disjoint union of X and Y and identifying a with f(a) for all a in A. Formally,
$X \cup_f Y = (X \sqcup Y) / \sim$
where the equivalence relation ~ is generated by a ~ f(a) for all a in A, and the quotient is given the quotient topology. As a set, X ∪f Y consists of the disjoint union of X and (Y − A). The topology, however, is specified by the quotient construction.
Intuitively, one may think of Y as being glued onto X via the map f.
Examples
A common example of an adjunction space is given when Y is a closed n-ball (or cell) and A is the boundary of the ball, the (n−1)-sphere. Inductively attaching cells along their spherical boundaries to this space results in an example of a CW complex.
Adjunction spaces are also used to define connected sums of manifolds. Here, one first removes open balls from X and Y before attaching the boundaries of the removed balls along an attaching map.
If A is a space with one point then the adjunction is the wedge sum of X and Y.
If X is a space with one point then the adjunction is the quotient Y/A.
Properties
The continuous maps h : X ∪f Y → Z are in 1-1 correspondence with the pairs of continuous maps hX : X → Z and hY : Y → Z that satisfy hX(f(a))=hY(a) for all a in A.
In the case where A is a closed subspace of Y one can show that the map X → X ∪f Y is a closed embedding and (Y − A) → X ∪f Y is an open embedding.
Categorical description
The attaching construction is an example of a pushout in the category of topological spaces. That is to say, the adjunction space is universal with respect to the following commutative diagram:
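In standard notation, with $\phi_X$ and $\phi_Y$ the canonical maps into the quotient, the square in question is:

\begin{array}{ccc}
A & \xrightarrow{\;\;f\;\;} & X \\
{\scriptstyle i}\,\big\downarrow & & \big\downarrow\,{\scriptstyle \phi_X} \\
Y & \xrightarrow{\;\;\phi_Y\;\;} & X \cup_f Y
\end{array}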
Here i is the inclusion map and ϕX, ϕY are the map
|
https://en.wikipedia.org/wiki/Pipeline%20%28software%29
|
In software engineering, a pipeline consists of a chain of processing elements (processes, threads, coroutines, functions, etc.), arranged so that the output of each element is the input of the next; the name is by analogy to a physical pipeline. Usually some amount of buffering is provided between consecutive elements. The information that flows in these pipelines is often a stream of records, bytes, or bits, and the elements of a pipeline may be called filters; this is also called the pipe(s) and filters design pattern. Connecting elements into a pipeline is analogous to function composition.
Narrowly speaking, a pipeline is linear and one-directional, though sometimes the term is applied to more general flows. For example, a primarily one-directional pipeline may have some communication in the other direction, known as a return channel or backchannel, as in the lexer hack, or a pipeline may be fully bi-directional. Flows with one-directional tree and directed acyclic graph topologies behave similarly to (linear) pipelines – the lack of cycles makes them simple – and thus may be loosely referred to as "pipelines".
Implementation
Pipelines are often implemented in a multitasking OS, by launching all elements at the same time as processes, and automatically servicing the data read requests by each process with the data written by the upstream process – this can be called a multiprocessed pipeline. In this way, the CPU will be naturally switched among the processes by the scheduler so as to minimize its idle time. In other common models, elements are implemented as lightweight threads or as coroutines to reduce the OS overhead often involved with processes. Depending upon the OS, threads may be scheduled directly by the OS or by a thread manager. Coroutines are always scheduled by a coroutine manager of some form.
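As a minimal illustration of the pipes-and-filters idea (a Python sketch using generators as lightweight, coroutine-like elements rather than OS processes; the file path and filter stages are hypothetical), each stage lazily consumes the output of the previous one, so records flow through the chain one at a time:

def read_lines(path):
    """Source filter: stream lines from a file."""
    with open(path) as handle:
        for line in handle:
            yield line.rstrip("\n")

def grep(pattern, lines):
    """Transform filter: keep only lines containing the pattern."""
    for line in lines:
        if pattern in line:
            yield line

def to_upper(lines):
    """Transform filter: upper-case every line."""
    for line in lines:
        yield line.upper()

def run_pipeline(path, pattern):
    # Connecting the stages is ordinary function composition:
    # the output iterator of each element is the input of the next.
    pipeline = to_upper(grep(pattern, read_lines(path)))
    for record in pipeline:
        print(record)

# run_pipeline("/var/log/example.log", "error")  # hypothetical input file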
Usually, read and write requests are blocking operations, which means that the execution of the source process, upon writing, is suspended until all dat
|
https://en.wikipedia.org/wiki/Drilling%20rig
|
A drilling rig is an integrated system that drills wells, such as oil or water wells, or holes for piling and other construction purposes, into the earth's subsurface. Drilling rigs can be massive structures housing equipment used to drill water wells, oil wells, or natural gas extraction wells, or they can be small enough to be moved manually by one person and such are called augers. Drilling rigs can sample subsurface mineral deposits, test rock, soil and groundwater physical properties, and also can be used to install sub-surface fabrications, such as underground utilities, instrumentation, tunnels or wells. Drilling rigs can be mobile equipment mounted on trucks, tracks or trailers, or more permanent land or marine-based structures (such as oil platforms, commonly called 'offshore oil rigs' even if they don't contain a drilling rig). The term "rig" therefore generally refers to the complex equipment that is used to penetrate the surface of the Earth's crust.
Small to medium-sized drilling rigs are mobile, such as those used in mineral exploration drilling, blast-hole, water wells and environmental investigations. Larger rigs are capable of drilling through thousands of metres of the Earth's crust, using large "mud pumps" to circulate drilling mud (slurry) through the drill bit and up the casing annulus, for cooling and removing the "cuttings" while a well is drilled. Hoists in the rig can lift hundreds of tons of pipe. Other equipment can force acid or sand into reservoirs to facilitate extraction of the oil or natural gas; and in remote locations there can be permanent living accommodation and catering for crews (which may be more than a hundred). Marine rigs may operate thousands of miles distant from the supply base with infrequent crew rotation or cycle.
History
Until internal combustion engines were developed in the late 19th century, the main method for drilling rock was muscle power of man or animal. The technique of oil drilling through percussion or
|
https://en.wikipedia.org/wiki/Medieval%20weights%20and%20measures
|
The following systems arose from earlier systems, and in many cases utilise parts of much older systems. For the most part they were used to varying degrees in the Middle Ages and surrounding time periods. Some of these systems found their way into later systems, such as the Imperial system and even SI.
English system
Before Roman units were reintroduced in 1066 by William the Conqueror, there was an Anglo-Saxon (Germanic) system of measure, of which few details survive. It probably included the following units of length:
fingerbreadth or digit
inch
ell or cubit
foot
perch, used variously to measure length or area
acre and acre's breadth
furlong
mile
The best-attested of these is the perch, which varied in length from 10 to 25 feet, with the most common value (16 feet or 5.03 m) remaining in use until the twentieth century.
Later development of the English system continued in 1215 in the Magna Carta. Standards were renewed in 1496, 1588 and 1758.
Some of these units would go on to be used in later Imperial units and in the US system, which are based on the English system from the 1700s.
Danish system
From May 1, 1683, King Christian V of Denmark introduced an office to oversee weights and measures, a justervæsen, to be led by Ole Rømer. The definition of the alen was set to 2 Rhine feet. Rømer later discovered that differing standards for the Rhine foot existed, and in 1698 an iron Copenhagen standard was made. A pendulum definition for the foot was first suggested by Rømer, introduced in 1820, and changed in 1835. The metric system was introduced in 1907.
Length
skrupel – Scruple, linie
linie – Line, tomme
tomme – Inch, fod
palme – Palm, for circumference, 8.86 cm
kvarter – Quarter, alen
fod – Defined as a Rheinfuss 31.407 cm from 1683, before that 31.41 cm with variations.
alen – Forearm, 2 fod
mil – Danish mile. Towards the end of the 17th century, Ole Rømer connected the mile to the circumference of the earth, and defined it as 1
|
https://en.wikipedia.org/wiki/Shipping%20%28fandom%29
|
Shipping (derived from the word relationship) is the desire by followers of a fandom for two or more people, either real-life people or fictional characters (in film, literature, television series, etc.), to be in a romantic or sexual relationship. Shipping often takes the form of unofficial creative works, including fanfiction and fan art.
Etymology
The usage of the term "ship" in its relationship sense appears to have been originated around 1995 by Internet fans of the TV show The X-Files, who believed that the two main characters, Fox Mulder and Dana Scully, should be engaged in a romantic relationship. They called themselves "relationshippers" at first; then "R'shipper", and finally just "shipper".
The oldest recorded uses of the noun ship and the noun shipper, according to the Oxford English Dictionary, date back to 1996 postings on the Usenet group alt.tv.x-files; shipping is first attested slightly later, in 1997 and the verb to ship in 1998.
Notation and terminology
"Ship" and its derivatives in this context have since come to be in widespread usage. "Shipping" refers to the phenomenon; a "ship" is the concept of a fictional couple; to "ship" a couple means to have an affinity for it in one way or another; a "shipper" or a "fangirl/boy" is somebody significantly involved with such an affinity; and a "shipping war" is when two ships contradict each other, causing fans of each ship to argue. A ship that a particular fan prefers over all others is called an OTP, which stands for one true pairing.
When discussing shipping, a ship that has been confirmed by its series is called a canon ship or sailed ship, whereas a sunk ship is a ship that has been proven unable to exist in canon, or in other words, will never be real nor confirmed.
Naming conventions
Various naming conventions have developed in different online communities to refer to shipped couples, likely due to the ambiguity and cumbersomeness of the "Character 1 and Character 2" format.
The first me
|
https://en.wikipedia.org/wiki/Exaptation
|
Exaptation and the related term co-option describe a shift in the function of a trait during evolution. For example, a trait can evolve because it served one particular function, but subsequently it may come to serve another. Exaptations are common in both anatomy and behaviour.
Bird feathers are a classic example. Initially they may have evolved for temperature regulation, but later were adapted for flight. When feathers were first used to aid in flight, that was an exaptive use. They have since then been shaped by natural selection to improve flight, so in their current state they are best regarded as adaptations for flight. So it is with many structures that initially took on a function as an exaptation: once molded for a new function, they become further adapted for that function.
Interest in exaptation relates to both the process and products of evolution: the process that creates complex traits and the products (functions, anatomical structures, biochemicals, etc.) that may be imperfectly developed. The term "exaptation" was proposed by Stephen Jay Gould and Elisabeth Vrba, as a replacement for 'pre-adaptation', which they considered to be a teleologically loaded term.
History and definitions
The idea that the function of a trait might shift during its evolutionary history originated with Charles Darwin. For many years the phenomenon was labeled "preadaptation", but since this term suggests teleology in biology, appearing to conflict with natural selection, it has been replaced by the term exaptation.
The idea had been explored by several scholars when in 1982 Stephen Jay Gould and Elisabeth Vrba introduced the term "exaptation". However, this definition had two categories with different implications for the role of adaptation.
(1) A character, previously shaped by natural selection for a particular function (an adaptation), is coopted for a new use—cooptation.
(2) A character whose origin cannot be ascribed to the direct action of natural selection (
|
https://en.wikipedia.org/wiki/Pointed%20set
|
In mathematics, a pointed set (also based set or rooted set) is an ordered pair $(X, x_0)$ where $X$ is a set and $x_0$ is an element of $X$ called the base point, also spelled basepoint.
Maps between pointed sets $(X, x_0)$ and $(Y, y_0)$, called based maps, pointed maps, or point-preserving maps, are functions from $X$ to $Y$ that map one basepoint to another, i.e. maps $f : X \to Y$ such that $f(x_0) = y_0$. Based maps are usually denoted $f : (X, x_0) \to (Y, y_0)$.
Pointed sets are very simple algebraic structures. In the sense of universal algebra, a pointed set is a set together with a single nullary operation which picks out the basepoint. Pointed maps are the homomorphisms of these algebraic structures.
The class of all pointed sets together with the class of all based maps forms a category. Every pointed set can be converted to an ordinary set by forgetting the basepoint (the forgetful functor is faithful), but the reverse is not true. In particular, the empty set cannot be pointed, because it has no element that can be chosen as the basepoint.
Categorical properties
The category of pointed sets and based maps is equivalent to the category of sets and partial functions. The base point serves as a "default value" for those arguments for which the partial function is not defined. One textbook notes that "This formal completion of sets and partial maps by adding 'improper', 'infinite' elements was reinvented many times, in particular, in topology (one-point compactification) and in theoretical computer science." This category is also isomorphic to the coslice category ($\mathbf{1} \downarrow \mathbf{Set}$), where $\mathbf{1}$ is (a functor that selects) a singleton set, and $\mathbf{Set}$ (the identity functor of) the category of sets. This coincides with the algebraic characterization, since the unique map extends the commutative triangles defining arrows of the coslice category to form the commutative squares defining homomorphisms of the algebras.
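The "default value" point of view can be made concrete in a few lines of Python (a sketch, with None playing the role of the adjoined basepoint): a partial function between two sets corresponds to a based map between the pointed sets obtained by adjoining a basepoint to each.

# Model a pointed set as (set_of_elements, basepoint), with None as the adjoined basepoint.
def adjoin_basepoint(xs):
    return set(xs) | {None}, None

def partial_to_based(partial_map):
    """Turn a partial function (given as a dict) into a based map that sends
    undefined inputs, and the basepoint itself, to the basepoint None."""
    def based(x):
        if x is None:
            return None            # based maps preserve the basepoint
        return partial_map.get(x)  # None when the partial function is undefined
    return based

X, x0 = adjoin_basepoint({1, 2, 3})
Y, y0 = adjoin_basepoint({"a", "b"})

f = partial_to_based({1: "a", 3: "b"})   # a partial function defined only on 1 and 3
assert f(x0) == y0                       # basepoint goes to basepoint
assert f(2) == y0                        # "undefined" collapses to the default value
assert f(1) == "a"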
There is a faithful functor from pointed sets to usual sets, but it is not full and these categories are not equivalent.
The category of pointed sets is a poi
|
https://en.wikipedia.org/wiki/Graphical%20Models
|
Graphical Models is an academic journal in computer graphics and geometry processing published by Elsevier. Its editor-in-chief is Bedrich Benes of Purdue University.
History
This journal has gone through multiple names. Founded in 1972 as Computer Graphics and Image Processing by Azriel Rosenfeld, it became the first journal to focus on computer image analysis.
Its first change of name came in 1983, when it became Computer Vision, Graphics, and Image Processing. In 1991 it split into two journals, CVGIP: Graphical Models and Image Processing,
and CVGIP: Image Understanding, which later became Computer Vision and Image Understanding. Meanwhile, in 1995, the journal Graphical Models and Image Processing removed the "CVGIP" prefix from its former name, and finally took its current title, Graphical Models, in 2002.
Ranking
Although initially ranked by SCImago Journal Rank as a top-quartile journal in 1999 in its main topic areas, computer graphics and computer-aided design, and then for many years ranked as second-quartile, by 2020 it had fallen to the third quartile.
References
Geometry processing
Computer science journals
|
https://en.wikipedia.org/wiki/MP3%20blog
|
An MP3 blog is a type of blog in which the creator makes music files, normally in the MP3 format, available for download. They are also known as musicblogs, audioblogs or soundblogs (the latter two can also mean podcasts). MP3 blogs have become increasingly popular since 2003. The music posted ranges from hard-to-find rarities that have not been issued in many years to more contemporary offerings, and selections are often restricted to a particular musical genre or theme. Some musicblogs offer music in Advanced Audio Coding (AAC) or Ogg formats.
History
Among the first MP3 blogs were Tonspion, Buzzgrinder, Fluxblog, Stereogum and Said the Gramophone. Tonspion is the first MP3 blog in Germany and started in 1998 with reviews and downloads that international artists and labels gave out free on the web. Buzzgrinder began in 2001 as a way for musician SethW to fill time on the road. Stereogum began as a music-related LiveJournal in 2002, though its format was focused on indie/pop gossip rather than MP3s. Fluxblog (also founded in 2002) trumpeted LCD Soundsystem's "Yeah (Stupid Version)" in early 2004, bringing increased attention to MP3 blogs, while Montreal-based Said the Gramophone, founded in 2003, was among the first websites to write about artists like Arcade Fire, Wolf Parade and Tune-Yards. A July 2004 story by Reuters and an August 2004 story on National Public Radio further galvanized the trend, and today there are thousands of MP3 blogs covering a cornucopia of musical styles.
A significant number of indie music labels, promotional agencies and hundreds of artists regularly send promo CDs to MP3 blogs in the hopes of gaining free publicity. Major labels with small acts to promote have also attempted to use MP3 blogs. In 2004, Warner Bros. gave permission for a song by their act The Secret Machines to be posted by the MP3 blog Music (For Robots). This drew attention not only for the song and the label granting permissions, but also because several co
|
https://en.wikipedia.org/wiki/A%20Mathematician%27s%20Apology
|
A Mathematician's Apology is a 1940 essay by British mathematician G. H. Hardy, which offers a defence of the pursuit of mathematics. Central to Hardy's "apology" – in the sense of a formal justification or defence (as in Plato's Apology of Socrates) – is an argument that mathematics has value independent of possible applications. Hardy located this value in the beauty of mathematics, and gave some examples of and criteria for mathematical beauty. The book also includes a brief autobiography, and gives the layman an insight into the mind of a working mathematician.
Background
Hardy felt the need to justify his life's work in mathematics at this time mainly for two reasons. Firstly, at age 62, Hardy felt the approach of old age (he had survived a heart attack in 1939) and the decline of his mathematical creativity and skills.
By devoting time to writing the Apology, Hardy was admitting that his own time as a creative mathematician was finished. In his foreword to the 1967 edition of the book, C. P. Snow describes the Apology as
"a passionate lament for creative powers that used to be and that will never come again".
In Hardy's words, "Exposition, criticism, appreciation, is work for second-rate minds. [...] It is a melancholy experience for a professional mathematician to find himself writing about mathematics. The function of a mathematician is to do something, to prove new theorems, to add to mathematics, and not to talk about what he or other mathematicians have done."
Secondly, at the start of World War II, Hardy, a committed pacifist, wanted to justify his belief that mathematics should be pursued for its own sake rather than for the sake of its applications. He began writing on this subject when he was invited to contribute an article to Eureka, the journal of The Archimedeans (the Cambridge University student mathematical society). One of the topics the editor suggested was "something about mathematics and the war", and the result was the article "Mathemati
|
https://en.wikipedia.org/wiki/13th%20root
|
Extracting the 13th root of a number is a famous category for the mental calculation world records. The challenge consists of being given a large number (possibly over 100 digits) and asked to return the number that, when taken to the 13th power, equals the given number. For example, the 13th root of 8,192 is 2 and the 13th root of 96,889,010,407 is 7.
Properties of the challenge
Extracting the 13th root has certain properties. One is that the 13th root of a number is much smaller: a 13th root will have approximately 1/13th the number of digits. Thus, the 13th root of a 100-digit number only has 8 digits and the 13th root of a 200-digit number will have 16 digits. Furthermore, the last digit of the 13th root is always the same as the last digit of the power.
For the 13th root of a 100-digit number there are 7,992,563 possibilities, in the range 41,246,264 – 49,238,826. This is considered a relatively easy calculation. There are 393,544,396,177,593 possibilities, in the range 2,030,917,620,904,736 – 2,424,462,017,082,328, for the 13th root of a 200-digit number. This is considered a difficult calculation.
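All of these figures are easy to reproduce with exact integer arithmetic; the Python sketch below verifies the quoted examples, the last-digit property, and the range of integer 13th roots of 100-digit numbers.

# Verify the quoted examples with exact integer arithmetic.
assert 2 ** 13 == 8_192
assert 7 ** 13 == 96_889_010_407

# Last digit of the 13th root equals the last digit of the power (x**13 is congruent to x mod 10).
assert all((x ** 13) % 10 == x % 10 for x in range(10))

def iroot13(n):
    """Smallest integer m with m**13 >= n, found by binary search."""
    lo, hi = 1, 1 << (n.bit_length() // 13 + 2)
    while lo < hi:
        mid = (lo + hi) // 2
        if mid ** 13 < n:
            lo = mid + 1
        else:
            hi = mid
    return lo

smallest = iroot13(10 ** 99)        # smallest root whose 13th power has 100 digits
largest = iroot13(10 ** 100) - 1    # largest root whose 13th power has 100 digits
print(smallest, largest, largest - smallest + 1)
# 41246264 49238826 7992563, matching the figures quoted above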
Records
The Guinness Book of World Records has published records for extracting the 13th root of a 100-digit number. All world records for mentally extracting a 13th root have been for numbers with an integer root:
The first record was 23 minutes by De Grote (Mexico).
For some time the most publicized record was 88.8 seconds, set by Klein (Netherlands).
Mittring calculated it in 39 seconds.
Alexis Lemaire has broken this record with 13.55 seconds. This is the last official world record for extracting the 13th root of a 100-digit number.
Mittring attempted to break this record with 11.8 seconds, but it was rejected by all organizations (Saxonia Record club, Guinness, 13th root group).
Lemaire broke this record unofficially 6 times, twice within 4 seconds: the best was 3.625 seconds.
Lemaire has also set the first world record for the 13th root of a 200-dig
|
https://en.wikipedia.org/wiki/USB%20mass%20storage%20device%20class
|
The USB mass storage device class (also known as USB MSC or UMS) is a set of computing communications protocols, specifically a USB Device Class, defined by the USB Implementers Forum that makes a USB device accessible to a host computing device and enables file transfers between the host and the USB device. To a host, the USB device acts as an external hard drive; the protocol set interfaces with a number of storage devices.
Uses
Devices connected to computers via this standard include:
External magnetic hard drives
External optical drives, including CD and DVD reader and writer drives
USB flash drives
Solid-state drives
Adapters between standard flash memory cards and USB connections
Digital cameras
Portable media players
Card readers
PDAs
Mobile phones
Devices supporting this standard are known as MSC (Mass Storage Class) devices. While MSC is the original abbreviation, UMS (Universal Mass Storage) has also come into common use.
Operating system support
Most mainstream operating systems include support for USB mass storage devices; support on older systems is usually available through patches.
Microsoft Windows
Microsoft Windows has supported MSC since Windows 2000. There is no support for USB supplied by Microsoft in Windows before Windows 95 and Windows NT 4.0. Windows 95 OSR2.1, an update to the operating system, featured limited support for USB. During that time no generic USB mass-storage driver was produced by Microsoft (including for Windows 98), and a device-specific driver was needed for each type of USB storage device. Third-party, freeware drivers became available for Windows 98 and Windows 98SE, and third-party drivers are also available for Windows NT 4.0. Windows 2000 has support (via a generic driver) for standard USB mass-storage devices; Windows Me and all later Windows versions also include support.
Windows Mobile supports accessing most USB mass-storage devices formatted with FAT on devices with USB Host. However, portable de
|
https://en.wikipedia.org/wiki/Thom%20space
|
In mathematics, the Thom space, Thom complex, or Pontryagin–Thom construction (named after René Thom and Lev Pontryagin) of algebraic topology and differential topology is a topological space associated to a vector bundle, over any paracompact space.
Construction of the Thom space
One way to construct this space is as follows. Let
$p : E \to B$
be a rank n real vector bundle over the paracompact space B. Then for each point b in B, the fiber $E_b$ is an $n$-dimensional real vector space. Choose an orthogonal structure on E, a smoothly varying inner product on the fibers; we can do this using partitions of unity. Let $D(E)$ be the unit ball bundle with respect to our orthogonal structure, and let $S(E)$ be the unit sphere bundle, then the Thom space $T(E)$ is the quotient $T(E) := D(E)/S(E)$ of topological spaces. $T(E)$ is a pointed space with the image of $S(E)$ in the quotient as basepoint. If B is compact, then $T(E)$ is the one-point compactification of E.
For example, if E is the trivial bundle $B \times \mathbb{R}^n$, then $D(E) = B \times D^n$ and $S(E) = B \times S^{n-1}$. Writing $B_+$ for B with a disjoint basepoint, $T(E)$ is the smash product of $B_+$ and $S^n$; that is, the n-th reduced suspension of $B_+$.
The Thom isomorphism
The significance of this construction begins with the following result, which belongs to the subject of cohomology of fiber bundles. (We have stated the result in terms of $\mathbb{Z}_2$ coefficients to avoid complications arising from orientability; see also Orientation of a vector bundle#Thom space.)
Let $p : E \to B$ be a real vector bundle of rank n. Then there is an isomorphism, now called a Thom isomorphism
$\Phi : H^k(B; \mathbb{Z}_2) \to \tilde{H}^{k+n}(T(E); \mathbb{Z}_2),$
for all k greater than or equal to 0, where the right hand side is reduced cohomology.
This theorem was formulated and proved by René Thom in his famous 1952 thesis.
We can interpret the theorem as a global generalization of the suspension isomorphism on local trivializations, because the Thom space of a trivial bundle on B of rank k is isomorphic to the kth suspension of $B_+$, B with a disjoint point added (cf. #Construction of the Thom space.) This can be more easily seen in the formulation of the theorem t
|
https://en.wikipedia.org/wiki/Dither
|
Dither is an intentionally applied form of noise used to randomize quantization error, preventing large-scale patterns such as color banding in images. Dither is routinely used in processing of both digital audio and video data, and is often one of the last stages of mastering audio to a CD.
A common use of dither is converting a grayscale image to black and white, such that the density of black dots in the new image approximates the average gray level in the original.
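As a minimal sketch of that grayscale-to-black-and-white use (NumPy, with a hypothetical image array), compare a hard threshold with a randomly dithered threshold: the dithered result keeps the local density of dots close to the original gray level instead of banding.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical grayscale image in [0, 1]: a smooth horizontal ramp.
gray = np.tile(np.linspace(0.0, 1.0, 256), (64, 1))

# Hard threshold: every mid-gray column collapses to all-black or all-white (banding).
hard = (gray > 0.5).astype(float)

# Dithered threshold: add noise before quantizing, so the *density* of white pixels
# in each column approximates the original gray level.
dithered = (gray + rng.uniform(-0.5, 0.5, gray.shape) > 0.5).astype(float)

column = 64                              # a column that is about 25% gray
print(gray[0, column])                   # ~0.25
print(hard[:, column].mean())            # 0.0  (all black)
print(dithered[:, column].mean())        # ~0.25 (about a quarter of the dots are white)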
Etymology
The term dither was published in books on analog computation and hydraulically controlled guns shortly after World War II. Though he did not use the term dither, the concept of dithering to reduce quantization patterns was first applied by Lawrence G. Roberts in his 1961 MIT master's thesis and 1962 article. By 1964 dither was being used in the modern sense described in this article. The technique was in use at least as early as 1915, though not under the name dither.
In digital processing and waveform analysis
Dither is utilized in many different fields where digital processing and analysis are used. These uses include systems using digital signal processing, such as digital audio, digital video, digital photography, seismology, radar and weather forecasting systems.
Quantization yields error. If that error is correlated to the signal, the result is potentially cyclical or predictable. In some fields, especially where the receptor is sensitive to such artifacts, cyclical errors yield undesirable artifacts. In these fields introducing dither converts the error to random noise. The field of audio is a primary example of this. The human ear functions much like a Fourier transform, wherein it hears individual frequencies. The ear is therefore very sensitive to distortion, or additional frequency content, but far less sensitive to additional random noise at all frequencies such as found in a dithered signal.
Digital audio
In an analog system, the signal is continuous, but in a PCM digita
|
https://en.wikipedia.org/wiki/Noise%20shaping
|
Noise shaping is a technique typically used in digital audio, image, and video processing, usually in combination with dithering, as part of the process of quantization or bit-depth reduction of a signal. Its purpose is to increase the apparent signal-to-noise ratio of the resultant signal. It does this by altering the spectral shape of the error that is introduced by dithering and quantization, so that the noise power is lower in the frequency bands where noise is more objectionable and correspondingly higher in the bands where it is more tolerable. A popular noise shaping algorithm used in image processing is known as Floyd–Steinberg dithering, and many noise shaping algorithms used in audio processing are based on an absolute threshold of hearing model.
Operation
Any feedback loop functions as a filter. Noise shaping works by putting quantization noise in a feedback loop designed to filter the noise as desired.
Low-pass boxcar filter example
For example, consider the feedback system:
$y[n] = x[n] + c\,e[n-1]$
where $c$ is a constant, $n$ is the cycle number, $x[n]$ is the input sample value, $y[n]$ is the value being quantized, and $e[n]$ is its quantization error:
$e[n] = \operatorname{Q}(y[n]) - y[n]$
In this model, when any sample's bit depth is reduced, the quantization error is measured and on the next cycle added with the next sample prior to quantization. The effect is that the quantization error is low-pass filtered by a 2-sample boxcar filter (also known as a simple moving average filter). As a result, compared to before, the quantization error has lower power at higher frequencies and higher power at lower frequencies. The filter's cutoff frequency can be adjusted by modifying $c$, the proportion of error from the previous sample that is fed back.
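A direct implementation of this loop (a Python/NumPy sketch, quantizing by rounding to integers and using the notation above) makes the effect visible: compared with plain rounding, the shaped error has less energy in the upper half of the spectrum.

import numpy as np

def noise_shape(x, c=1.0):
    """First-order error feedback: y[n] = x[n] + c*e[n-1], e[n] = Q(y[n]) - y[n]."""
    out = np.empty_like(x)
    e_prev = 0.0
    for n, sample in enumerate(x):
        y = sample + c * e_prev          # add the previous quantization error
        q = np.round(y)                  # quantize: round to the nearest integer
        e_prev = q - y                   # measure this sample's quantization error
        out[n] = q
    return out

rng = np.random.default_rng(1)
x = 100.0 * np.sin(2 * np.pi * 0.01 * np.arange(4096)) + rng.normal(0.0, 0.3, 4096)

err_plain = np.round(x) - x           # plain rounding error (roughly white)
err_shaped = noise_shape(x) - x       # error after the feedback loop

def high_band_energy(err):
    spectrum = np.abs(np.fft.rfft(err)) ** 2
    return float(spectrum[len(spectrum) // 2:].sum())

print(high_band_energy(err_plain), high_band_energy(err_shaped))
# With c = 1 the shaped error has noticeably less energy in the upper half of the
# spectrum (and more at low frequencies), as the boxcar-filtering argument predicts.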
Impulse response filters in general
More generally, any FIR filter or IIR filter can be used to create a more complex frequency response curve. Such filters can be designed using the weighted least squares method. In the case of digi
|
https://en.wikipedia.org/wiki/Moment%20problem
|
In mathematics, a moment problem arises as the result of trying to invert the mapping that takes a measure μ to the sequence of moments
$m_n = \int_{-\infty}^{\infty} x^n \, d\mu(x).$
More generally, one may consider
$m_n = \int_{-\infty}^{\infty} M_n(x) \, d\mu(x)$
for an arbitrary sequence of functions $M_n$.
Introduction
In the classical setting, μ is a measure on the real line, and M is the sequence { $x^n$ : n = 0, 1, 2, ... }. In this form the question appears in probability theory, asking whether there is a probability measure having specified mean, variance and so on, and whether it is unique.
There are three named classical moment problems: the Hamburger moment problem in which the support of μ is allowed to be the whole real line; the Stieltjes moment problem, for [0, +∞); and the Hausdorff moment problem for a bounded interval, which without loss of generality may be taken as [0, 1].
Existence
A sequence of numbers $m_n$ is the sequence of moments of a measure μ if and only if a certain positivity condition is fulfilled; namely, the Hankel matrices $H_n$,
$(H_n)_{ij} = m_{i+j},$
should be positive semi-definite. This is because a positive-semidefinite Hankel matrix corresponds to a linear functional $\Lambda$ such that $\Lambda(x^n) = m_n$ and $\Lambda(f^2) \geq 0$ (non-negative for sum of squares of polynomials). Assume $\Lambda$ can be extended to $\mathbb{R}[x]^*$. In the univariate case, a non-negative polynomial can always be written as a sum of squares. So the linear functional is positive for all the non-negative polynomials in the univariate case. By Haviland's theorem, the linear functional has a measure form, that is $\Lambda(x^n) = \int_{-\infty}^{\infty} x^n \, d\mu(x)$. A condition of similar form is necessary and sufficient for the existence of a measure supported on a given interval [a, b].
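As a concrete check of this condition (a NumPy sketch), take the moments of Lebesgue measure on [0, 1], namely $m_k = 1/(k+1)$; the resulting Hankel matrices are the Hilbert matrices, and they are indeed positive semi-definite (in fact positive definite):

import numpy as np

def hankel_matrix(moments, n):
    """(H_n)_{ij} = m_{i+j} for 0 <= i, j <= n."""
    return np.array([[moments[i + j] for j in range(n + 1)] for i in range(n + 1)])

# Moments of Lebesgue measure on [0, 1]: m_k = integral of x^k over [0, 1] = 1/(k+1).
m = [1.0 / (k + 1) for k in range(13)]

for n in range(1, 7):
    H = hankel_matrix(m, n)
    eigenvalues = np.linalg.eigvalsh(H)
    assert eigenvalues.min() > -1e-12   # positive semi-definite (here positive definite)
    print(n, eigenvalues.min())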
One way to prove these results is to consider the linear functional $\varphi$ that sends a polynomial
$P(x) = \sum_k a_k x^k$
to
$\varphi(P) = \sum_k a_k m_k.$
If $m_k$ are the moments of some measure μ supported on [a, b], then evidently $\varphi(P) \geq 0$ for all polynomials $P$ that are non-negative on [a, b].
Vice versa, if () holds, one can apply the M. Riesz extension theorem and extend to a functional on the space of continuous functions with compact support C0([a, b]), so that
By the Riesz representation theorem, () hol
|
https://en.wikipedia.org/wiki/Analytic%20hierarchy%20process
|
In the theory of decision making, the analytic hierarchy process (AHP), also analytical hierarchy process, is a structured technique for organizing and analyzing complex decisions, based on mathematics and psychology. It was developed by Thomas L. Saaty in the 1970s; Saaty partnered with Ernest Forman to develop Expert Choice software in 1983, and AHP has been extensively studied and refined since then. It represents an accurate approach to quantifying the weights of decision criteria. Individual experts’ experiences are utilized to estimate the relative magnitudes of factors through pair-wise comparisons. Each of the respondents compares the relative importance of each pair of items using a specially designed questionnaire.
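In Saaty's formulation the pairwise judgments for one set of items are collected in a reciprocal comparison matrix, and the priority weights are read off from its principal eigenvector. The NumPy sketch below illustrates that step with a made-up 3-criterion matrix (the entries are illustrative only, not drawn from any cited study).

import numpy as np

# Hypothetical pairwise comparison matrix for three criteria (Saaty's 1-9 scale):
# entry [i, j] says how much more important criterion i is than criterion j,
# and the matrix is reciprocal: A[j, i] = 1 / A[i, j].
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigenvalues, eigenvectors = np.linalg.eig(A)
principal = np.argmax(eigenvalues.real)
weights = np.abs(eigenvectors[:, principal].real)
weights /= weights.sum()                      # normalized priority weights

# Consistency index: CI = (lambda_max - n) / (n - 1); small values mean consistent judgments.
n = A.shape[0]
lambda_max = eigenvalues.real[principal]
consistency_index = (lambda_max - n) / (n - 1)

print(weights)            # roughly [0.65, 0.23, 0.12] for this matrix
print(consistency_index)  # close to 0, so the judgments are nearly consistent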
Uses and applications
AHP is targeted at group decision making, and is used for decision situations in fields such as government, business, industry, healthcare and education.
Rather than prescribing a "correct" decision, the AHP helps decision makers find the decision that best suits their goal and their understanding of the problem. It provides a comprehensive and rational framework for structuring a decision problem, for representing and quantifying its elements, for relating those elements to overall goals, and for evaluating alternative solutions.
Users of the AHP first decompose their decision problem into a hierarchy of more easily comprehended sub-problems, each of which can be analyzed independently. The elements of the hierarchy can relate to any aspect of the decision problem—tangible or intangible, carefully measured or roughly estimated, well or poorly understood—anything at all that applies to the decision at hand.
Once the hierarchy is built, the decision makers evaluate its various elements by comparing them to each other two at a time, with respect to their impact on an element above them in the hierarchy. In making the comparisons, the decision makers can use concrete data about the elements, and they can also use their ju
|
https://en.wikipedia.org/wiki/5%CE%B1-Reductase
|
5α-Reductases, also known as 3-oxo-5α-steroid 4-dehydrogenases, are enzymes involved in steroid metabolism. They participate in three metabolic pathways: bile acid biosynthesis, androgen and estrogen metabolism. There are three isozymes of 5α-reductase encoded by the genes SRD5A1, SRD5A2, and SRD5A3.
5α-Reductases catalyze the following generalized chemical reaction:
a 3-oxo-5α-steroid + acceptor ⇌ a 3-oxo-Δ4-steroid + reduced acceptor
Where a 3-oxo-5α-steroid and acceptor are substrates, and a corresponding 3-oxo-Δ4-steroid and the reduced acceptor are products. An instance of this generalized reaction that 5α-reductase type 2 catalyzes is:
dihydrotestosterone + NADP+ ⇌ testosterone + NADPH + H+
where dihydrotestosterone is the 3-oxo-5α-steroid, NADP+ is the acceptor and testosterone is the 3-oxo-Δ4-steroid and NADPH the reduced acceptor.
Production and activity
The enzyme is produced in many tissues in both males and females, in the reproductive tract, testes and ovaries, skin, seminal vesicles, prostate, epididymis and many organs, including the nervous system. There are three isoenzymes of 5α-reductase: steroid 5α-reductase 1, 2, and 3 (SRD5A1, SRD5A2 and SRD5A3).
5α-Reductases act on 3-oxo (3-keto), Δ4,5 C19/C21 steroids as their substrates; "3-keto" refers to the double bond from the third carbon to oxygen. Carbons 4 and 5 also have a double bond, represented by 'Δ4,5'. The reaction involves a stereospecific and permanent break of the Δ4,5 double bond with the help of NADPH as a cofactor. A hydride anion (H−) is also placed on the α face at the fifth carbon, and a proton on the β face at carbon 4.
Distribution with age
5α-R1 is expressed in fetal scalp and nongenital skin of the back, anywhere from 5 to 50 times less than in the adult. 5α-R2 is expressed in fetal prostates similar to adults. 5α-R1 is expressed mainly in the epithelium and 5α-R2 the stroma of the fetal prostate. Scientists looked for 5α-R2 expression in fetal liver, adrenal, testis, ovary, brain, scalp,
|
https://en.wikipedia.org/wiki/Conditional-access%20module
|
A conditional access module (CAM) is an electronic device, usually incorporating a slot for a smart card, which equips an integrated digital television or set-top box with the appropriate hardware facility to view conditional access content that has been encrypted using a conditional access system. They are normally used with direct-broadcast satellite (DBS) services, although digital terrestrial pay TV suppliers also use CAMs. PC Card form factor is used as the Common Interface form of Conditional Access Modules for DVB broadcasts. Major CAM manufacturers include: Neotion, SmarDTV and SMIT.
Some encryption systems for which CAMs are available are Logiways, Nagravision, Viaccess, Mediaguard, Irdeto, KeyFly, Verimatrix, Cryptoworks, Mascom, Safeview, Diablo CAM and Conax. NDS VideoGuard encryption, the preferred choice of Sky Digital, can only be externally emulated by a Dragon brand CAM. The NDS CAM that the Sky viewing card ordinarily uses is built into the Sky Digibox and thus not visible. Dragon and Matrix, two popular CAMs with satellite television enthusiasts, are multicrypt, meaning each is capable of handling more than one encryption system. Matrix CAMs can be upgraded via the PC Card port in a laptop personal computer, whereas a Dragon CAM update is done via separate programmer hardware. Although not officially supported or acknowledged, multicrypt and programmable modules are a grey market in the pay-TV industry.
The primary purpose of the CAM is to derive control words, which are short-term decryption keys for video. The effectiveness of a CAM depends on the tamper resistance of the hardware; if the hardware is broken, the functionality of the CAM can be emulated, enabling the content to be decrypted by non-subscribers. CAMs are normally removable so that they can be replaced after the hardware security is breached. Replacement of the CAMs in a system is called a card swap-out.
CAM Modules come in two types: standard, intended for a single TV consumer, and
|
https://en.wikipedia.org/wiki/Phytochemistry
|
Phytochemistry is the study of phytochemicals, which are chemicals derived from plants. Phytochemists strive to describe the structures of the large number of secondary metabolites found in plants, the functions of these compounds in human and plant biology, and the biosynthesis of these compounds. Plants synthesize phytochemicals for many reasons, including to protect themselves against insect attacks and plant diseases. The compounds found in plants are of many kinds, but most can be grouped into four major biosynthetic classes: alkaloids, phenylpropanoids, polyketides, and terpenoids.
Phytochemistry can be considered a subfield of botany or chemistry. Activities can be led in botanical gardens or in the wild with the aid of ethnobotany. Phytochemical studies directed toward human (i.e. drug discovery) use may fall under the discipline of pharmacognosy, whereas phytochemical studies focused on the ecological functions and evolution of phytochemicals likely fall under the discipline of chemical ecology. Phytochemistry also has relevance to the field of plant physiology.
Techniques
Techniques commonly used in the field of phytochemistry are extraction, isolation, and structural elucidation (MS, 1D and 2D NMR) of natural products, as well as various chromatography techniques (MPLC, HPLC, and LC-MS).
Phytochemicals
Many plants produce chemical compounds for defence against herbivores. The major classes of pharmacologically active phytochemicals are described below, with examples of medicinal plants that contain them. Human settlements are often surrounded by weeds containing phytochemicals, such as nettle, dandelion and chickweed.
Many phytochemicals, including curcumin, epigallocatechin gallate, genistein, and resveratrol are pan-assay interference compounds and are not useful in drug discovery.
Alkaloids
Alkaloids are bitter-tasting chemicals, widespread in nature, and often toxic. There are several classes with different modes of action as drugs, both recre
|
https://en.wikipedia.org/wiki/Mandelbrot%20Competition
|
Named in honor of Benoit Mandelbrot, the Mandelbrot Competition was a mathematics competition founded by Sam Vandervelde, Richard Rusczyk and Sandor Lehoczky that operated from 1990 to 2019. It allowed high school students to compete individually and in four-person teams.
Competition
The Mandelbrot was a "correspondence competition," meaning that the competition was sent to a school's coach and students competed at their own school on a predetermined date. Individual results and team answers were then sent back to the contest coordinators. The most notable aspects of the Mandelbrot competition were the difficulty of the problems (much like the American Mathematics Competition and harder American Invitational Mathematics Examination problems) and the proof-based team round. Many past medalists at the International Mathematics Olympiad first tried their skills on the Mandelbrot Competition.
History
The Mandelbrot Competition was started by Sam Vandervelde, Richard Rusczyk, and Sandor Lehoczky while they were undergraduates in the early 1990s. Vandervelde ran the competition until its completion in 2019. Rusczyk now manages Art of Problem Solving Inc. and Lehoczky enjoys a successful career on Wall Street.
Contest format
The individual competition consisted of seven questions of varying value, worth a total of 14 points, that students had 40 minutes to answer. The team competition was a proof-based competition, where many questions were asked about a particular situation, and a team of four students was given 60 minutes to answer.
Divisions
The Mandelbrot Competition had two divisions, referred to as National and Regional. Questions at the National level were more difficult than those at the Regional level, but generally had overlap or concerned similar topics. For example, in the individual competition, the National competition would remove some of the easier Regional questions, and add some harder questions. In the team competition, the topic would be
|
https://en.wikipedia.org/wiki/Dynamic%20link%20matching
|
Dynamic link matching is a graph-based system for image recognition. It uses wavelet transformations to encode incoming image data.
References
External links
Original paper on Dynamic Link Matching
Wavelets
Pattern recognition
Graph algorithms
|
https://en.wikipedia.org/wiki/Marcel%20Grossmann
|
Marcel Grossmann (April 9, 1878 – September 7, 1936) was a Swiss mathematician and a friend and classmate of Albert Einstein. Grossmann was a member of an old Swiss family from Zurich. His father managed a textile factory. He became a Professor of Mathematics at the Federal Polytechnic School in Zurich, today the ETH Zurich, specializing in descriptive geometry.
Career
In 1900 Grossmann graduated from the Federal Polytechnic School (ETH) and became an assistant to the geometer Wilhelm Fiedler. He continued to do research on non-Euclidean geometry and taught in high schools for the next seven years. In 1902, he earned his doctorate from the University of Zurich with the thesis Ueber die metrischen Eigenschaften kollinearer Gebilde (translated On the Metrical Properties of Collinear Structures) with Fiedler as advisor. In 1907, he was appointed full professor of descriptive geometry at the Federal Polytechnic School.
As a professor of geometry, Grossmann organized summer courses for high school teachers. In 1910, he became one of the founders of the Swiss Mathematical Society. He was an Invited Speaker of the ICM in 1912 at Cambridge and in 1920 at Strasbourg.
Collaborations with Albert Einstein
Albert Einstein's friendship with Grossmann began with their school days in Zurich. Grossmann's careful and complete lecture notes at the Federal Polytechnic School proved to be a salvation for Einstein, who missed many lectures. Grossmann's father helped Einstein get his job at the Swiss Patent Office in Bern, and it was Grossmann who helped to conduct the negotiations to bring Einstein back from Prague as a professor of physics at the Zurich Polytechnic. Grossmann was an expert in differential geometry and tensor calculus; just the mathematical tools providing a proper mathematical framework for Einstein's work on gravity. Thus, it was natural that Einstein would enter into a scientific collaboration with Grossmann.
It was Grossmann who emphasized the importance of a non
|
https://en.wikipedia.org/wiki/Game%20Over%20%28video%20game%29
|
Game Over is an action video game developed by Dinamic Software and published by Imagine Software in 1987. It was released for the Amstrad CPC, Commodore 64, MSX, Thomson TO7, and ZX Spectrum. The game includes some adventure game elements. A sequel, Game Over II, was released in 1987.
Plot
Arkos, a former loyal lieutenant of the beautiful but evil galactic empress Queen Gremla, became a rebel dedicated to end her cruel tyranny. The first part of the game takes place on the prison planet Hypsis, from which Arkos must try to escape. In the second part, Arkos arrives in the jungle swamp planet Sckunn to infiltrate the queen's palace, defeat her Giant Guardian robot, and assassinate her.
Reception
Controversy arose around the presence of a visible nipple on the advertising and inlay artwork, which had originally appeared on the cover of Heavy Metal (May 1984 - Vol.8 No.2) called Cover Ere Comprimee and is attributed to Luis Royo. Oliver Frey, the art editor for Crash magazine, painted over the original bare-breasted image with a thin grey corset so that it could be printed, but retailers demanded that logos be placed over the nipple. Game Over won the awards for best advert and best inlay of the year, according to the readers of Crash.
The game itself was mostly well received. Computer & Video Games awarded it 8/10 for the ZX Spectrum and 7/10 for the Amstrad CPC versions. The MSX version was rated an overall 8/10 by MSX Extra, and the Commodore 64 version was given a score of 68% by Zzap!64. Your Sinclair rated the ZX Spectrum version 9/10, but the 1990 re-release edition from Alternative Software, which featured a sanitised version of the original cover described as "tragically modified", was given only 52% for its very high difficulty level.
Legacy
Game Over was followed by Game Over II (also known as Phantis in its native Spain), which was developed and published by Dinamic Software in 1987.
References
External links
Game Over at Lemon64
|
https://en.wikipedia.org/wiki/Volume%20integral
|
In mathematics (particularly multivariable calculus), a volume integral (∭) refers to an integral over a 3-dimensional domain; that is, it is a special case of multiple integrals. Volume integrals are especially important in physics for many applications, for example, to calculate flux densities, or to calculate mass from a corresponding density function.
In coordinates
It can also mean a triple integral within a region $D \subseteq \mathbb{R}^3$ of a function $f(x,y,z)$, and is usually written as:
$\iiint_D f(x,y,z)\,dx\,dy\,dz.$
A volume integral in cylindrical coordinates is
$\iiint_D f(\rho,\varphi,z)\,\rho\,d\rho\,d\varphi\,dz,$
and a volume integral in spherical coordinates (using the ISO convention for angles, with $\varphi$ as the azimuth and $\theta$ measured from the polar axis; see more on conventions) has the form
$\iiint_D f(r,\theta,\varphi)\,r^2 \sin\theta\,dr\,d\theta\,d\varphi.$
Example
Integrating the constant function $f(x,y,z) = 1$ over a unit cube yields the following result:
$\int_0^1 \int_0^1 \int_0^1 1 \,dx\,dy\,dz = 1.$
So the volume of the unit cube is 1 as expected. This is rather trivial, however, and a volume integral is far more powerful. For instance, if we have a scalar density function on the unit cube, then the volume integral will give the total mass of the cube. For example, for the density function
$f(x,y,z) = x + y + z,$
the total mass of the cube is:
$m = \int_0^1 \int_0^1 \int_0^1 (x + y + z)\,dx\,dy\,dz = \frac{3}{2}.$
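To make the example concrete, the integral above can be checked numerically. The following Python sketch is an illustration added here, not part of the original article; the midpoint-rule approximation and the grid resolution n are arbitrary choices for the demonstration.

```python
import numpy as np

def density(x, y, z):
    # Scalar density on the unit cube from the example above: f(x, y, z) = x + y + z
    return x + y + z

def volume_integral_unit_cube(f, n=200):
    """Approximate the triple integral of f over [0,1]^3 with a midpoint rule."""
    h = 1.0 / n
    pts = (np.arange(n) + 0.5) * h           # midpoints of each sub-interval
    x, y, z = np.meshgrid(pts, pts, pts, indexing="ij")
    return np.sum(f(x, y, z)) * h**3         # sum of samples times the cell volume

print(volume_integral_unit_cube(lambda x, y, z: np.ones_like(x)))  # ~1.0, the volume
print(volume_integral_unit_cube(density))                          # ~1.5, the mass
```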
See also
Divergence theorem
Surface integral
Volume element
External links
Multivariable calculus
|
https://en.wikipedia.org/wiki/Teleomorph%2C%20anamorph%20and%20holomorph
|
In mycology, the terms teleomorph, anamorph, and holomorph apply to portions of the life cycles of fungi in the phyla Ascomycota and Basidiomycota:
Teleomorph: the sexual reproductive stage (morph), typically a fruiting body.
Anamorph: an asexual reproductive stage (morph), often mold-like. When a single fungus produces multiple morphologically distinct anamorphs, these are called synanamorphs.
Holomorph: the whole fungus, including anamorphs and teleomorph.
Dual naming of fungi
Fungi are classified primarily based on the structures associated with sexual reproduction, which tend to be evolutionarily conserved. However, many fungi reproduce only asexually, and cannot easily be classified based on sexual characteristics; some produce both asexual and sexual states. These species are often members of the Ascomycota, but a few of them belong to the Basidiomycota. Even among fungi that reproduce both sexually and asexually, often only one method of reproduction can be observed at a specific point in time or under specific conditions. Additionally, fungi typically grow in mixed colonies and sporulate amongst each other. These facts have made it very difficult to link the various states of the same fungus.
Fungi that are not known to produce a teleomorph were historically placed into an artificial phylum, the "Deuteromycota," also known as "fungi imperfecti," simply for convenience. Some workers hold that this is an obsolete concept, and that molecular phylogeny allows accurate placement of species which are known from only part of their life cycle. Others retain the term "deuteromycetes," but give it a lowercase "d" and no taxonomic rank.
Historically, Article 59 of the International Code of Botanical Nomenclature permitted mycologists to give asexually reproducing fungi (anamorphs) separate names from their sexual states (teleomorphs); but this practice was discontinued as of 1 January 2013.
The dual naming system can be confusing. However, it is essential for work
|
https://en.wikipedia.org/wiki/Sterile%20fungi
|
The sterile fungi, or mycelia sterilia, are a group of fungi that do not produce any known spores, either sexual or asexual. This is considered a form group, not a taxonomic division, and is used as a matter of convenience only, as various isolates within such morphotypes could include distantly related taxa or different morphotypes of the same species, leading to incorrect identifications. Because these fungi do not produce spores, it is impossible to use traditional methods of morphological comparison to classify them. However, molecular techniques can be applied to determine their evolutionary history, with ITS testing being the preferred method.
References
Mycology
Reproduction
|
https://en.wikipedia.org/wiki/Electronic%20news%20gathering
|
Electronic news gathering (ENG) or electronic journalism (EJ) is usage of electronic video and audio technologies by reporters to gather and present news instead of using film cameras. The term was coined during the rise of videotape technology in the 1970s. ENG can involve anything from a single reporter with a single professional video camera, to an entire television crew taking a truck on location.
Beginnings
Shortcomings of film
The term ENG was created as television news departments moved from film-based news gathering to electronic field production technology in the 1970s. Since film requires chemical processing before it can be viewed and edited, it generally took at least an hour from the time the film arrived back at the television station or network news department until it was ready to be broadcast. Film editing was done by hand on what was known as "color reversal" film, usually Kodak Ektachrome, meaning there were no negatives. Color reversal film had replaced black-and-white film as television itself evolved from black-and-white to color broadcasting. Filmo cameras were most commonly used for silent filming, while Auricon cameras were used for filming with synchronized sound. Since editing required cutting the film into segments and then splicing them together, a common problem was film breaking during the newscast. News stories were often transferred to bulky 2-inch videotape for distribution and playback, which made the content cumbersome to access.
Film remained important in daily news operations until the late 1960s, when news outlets adopted portable professional video cameras, portable recorders, wireless microphones and joined those with various microwave- and satellite truck-linked delivery systems. By the mid-1980s, film had all but disappeared from use in television journalism.
Transition to ENG
As one cameraman of the era tells it,
This portability greatly contributed to the rise of electronic news gathering as it made portable news
|
https://en.wikipedia.org/wiki/Manhattan%20wiring
|
Manhattan wiring (also known as right-angle wiring) is a technique for laying out circuits in computer engineering. Inputs to a circuit (specifically, the interconnects from the inputs) are aligned into a grid, and the circuit "taps" (connects to) them perpendicularly. This may be done either virtually or physically. That is, it may be shown this way only in the documentation and the actual circuit may look nothing like that; or it may be laid out that way on the physical chip. Typically, separate lanes are used for the inverted inputs and are tapped separately.
The name Manhattan wiring relates to its Manhattan geometry: just as streets in Manhattan, New York tend to criss-cross in a very regular grid, the wires in such circuit diagrams run on a regular grid and meet at right angles.
Manhattan wiring is often used to represent a programmable logic array.
Alternatives include X-architecture wiring, or 45° wiring, and Y-architecture wiring (using wires running in the 0°, 120°, and 240° directions).
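As an aside on the geometry involved, the sketch below (illustrative only; the pin coordinates are invented) compares the rectilinear, or Manhattan, length of a two-pin connection with its straight-line length, which is the lower bound that diagonal routing schemes such as the X-architecture approach.

```python
import math

def manhattan_length(p, q):
    # Wire routed only horizontally and vertically (Manhattan/right-angle wiring)
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def euclidean_length(p, q):
    # Straight-line distance, which diagonal routing can approach
    return math.hypot(p[0] - q[0], p[1] - q[1])

a, b = (0, 0), (3, 4)          # hypothetical pin locations on a routing grid
print(manhattan_length(a, b))  # 7
print(euclidean_length(a, b))  # 5.0
```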
See also
Manhattan metric
References
Electronic circuits
|
https://en.wikipedia.org/wiki/Coulomb%20explosion
|
A Coulombic explosion is a condensed-matter physics process in which a molecule or crystal lattice is destroyed by the Coulombic repulsion between its constituent atoms. Coulombic explosions are a prominent technique in laser-based machining, and appear naturally in certain high-energy reactions.
Mechanism
A Coulombic explosion begins when an intense electric field (often from a laser) excites the valence electrons in a solid, ejecting them from the system and leaving behind positively charged ions. The chemical bonds holding the solid together are weakened by the loss of the electrons, enabling the Coulombic repulsion between the ions to overcome them. The result is an explosion of ions and electrons – a plasma.
The laser must be very intense to produce a Coulomb explosion. If it is too weak, the energy given to the electrons will be transferred to the ions via electron-phonon coupling. This will cause the entire material to heat up, melt, and thermally ablate away as a plasma. The end result is similar to Coulomb explosion, except that any fine structure in the material will be damaged by thermal melting.
It may be shown that the Coulomb explosion occurs in the same parameter regime as the superradiant phase transition i.e. when the destabilizing interactions become overwhelming and dominate over the oscillatory phonon-solid binding motions.
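A rough estimate shows why removing electrons can blow a lattice apart: the Coulomb repulsion between two neighbouring singly charged ions already rivals or exceeds typical chemical bond energies of a few electronvolts. The Python sketch below is only an illustrative back-of-the-envelope calculation; the 2.5 angstrom separation is an assumed, typical interatomic distance.

```python
from scipy.constants import e, epsilon_0, pi, angstrom

def coulomb_energy_eV(charge1_e, charge2_e, separation_angstrom):
    """Electrostatic potential energy of two point charges, in electronvolts."""
    r = separation_angstrom * angstrom
    energy_joule = (charge1_e * e) * (charge2_e * e) / (4 * pi * epsilon_0 * r)
    return energy_joule / e

# Two singly charged ions 2.5 angstroms apart: roughly 5.8 eV of repulsion,
# comparable to or larger than a typical covalent bond energy.
print(coulomb_energy_eV(1, 1, 2.5))
```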
Technological use
A Coulomb explosion is a "cold" alternative to the dominant laser etching technique of thermal ablation, which depends on local heating, melting, and vaporization of molecules and atoms using less-intense beams. Pulse brevity down only to the nanosecond regime is sufficient to localize thermal ablation – before the heat is conducted far, the energy input (pulse) has ended. Nevertheless, thermally ablated materials may seal pores important in catalysis or battery operation, and recrystallize or even burn the substrate, thus changing the physical and chemical properties at the etch site. In contrast, even
|
https://en.wikipedia.org/wiki/Foxfire
|
Foxfire, also called fairy fire and chimpanzee fire, is the bioluminescence created by some species of fungi present in decaying wood. The bluish-green glow is attributed to a luciferase, an oxidative enzyme, which emits light as it reacts with a luciferin. The phenomenon has been known since ancient times, with its source determined in 1823.
Description
Foxfire is the bioluminescence created by some species of fungi present in decaying wood. It occurs in a number of species, including Panellus stipticus, Omphalotus olearius and Omphalotus nidiformis. The bluish-green glow is attributed to luciferin, which emits light after oxidation catalyzed by the enzyme luciferase. Some believe that the light attracts insects to spread spores, or acts as a warning to hungry animals, like the bright colors exhibited by some poisonous or unpalatable animal species. Although generally very dim, in some cases foxfire is bright enough to read by.
History
The oldest recorded documentation of foxfire is from 382 B.C., by Aristotle, whose notes refer to a light that, unlike fire, was cold to the touch. The Roman thinker Pliny the Elder also mentioned glowing wood in olive groves.
Foxfire was used to illuminate the needles on the barometer and the compass of Turtle, an early submarine. This is commonly thought to have been suggested by Benjamin Franklin; a reading of the correspondence from Benjamin Gale, however, shows that Benjamin Franklin was only consulted for alternative forms of lighting when the cold temperatures rendered the foxfire inactive.
After many more literary references to foxfire by early scientists and naturalists, its cause was discovered in 1823. The glow emitted from wooden support beams in mines was examined, and it was found that the luminescence came from fungal growth.
The "fox" in foxfire may derive from the Old French word , meaning "false", rather than from the name of the animal. The association of foxes with such fires is widespread, however, and occ
|
https://en.wikipedia.org/wiki/Ideal%20theory
|
In mathematics, ideal theory is the theory of ideals in commutative rings. While the notion of an ideal exists also for non-commutative rings, a much more substantial theory exists only for commutative rings (and this article therefore only considers ideals in commutative rings.)
Throughout the articles, rings refer to commutative rings. See also the article ideal (ring theory) for basic operations such as sum or products of ideals.
Ideals in a finitely generated algebra over a field
Ideals in a finitely generated algebra over a field (that is, a quotient of a polynomial ring over a field) behave somewhat more nicely than those in a general commutative ring. First, in contrast to the general case, if $A$ is a finitely generated algebra over a field, then the radical of an ideal in $A$ is the intersection of all maximal ideals containing the ideal (because $A$ is a Jacobson ring). This may be thought of as an extension of Hilbert's Nullstellensatz, which concerns the case when $A$ is a polynomial ring.
Topology determined by an ideal
If I is an ideal in a ring A, then it determines the topology on A where a subset U of A is open if, for each x in U,
$x + I^n \subseteq U$
for some integer $n > 0$. This topology is called the I-adic topology. It is also called an a-adic topology if $I = aA$ is generated by an element $a$.
For example, take $A = \mathbb{Z}$, the ring of integers, and $I = p\mathbb{Z}$ an ideal generated by a prime number p. For each nonzero integer $x$, define $|x|_p = p^{-n}$ when $x = p^n y$ with $y$ prime to $p$. Then, clearly,
$x + p^n\mathbb{Z} = B(x, p^{-n+1}),$
where $B(x, r)$ denotes an open ball of radius $r$ with center $x$. Hence, the $p$-adic topology on $\mathbb{Z}$ is the same as the metric space topology given by $d(x, y) = |x - y|_p$. As a metric space, $\mathbb{Z}$ can be completed. The resulting complete metric space has a structure of a ring that extends the ring structure of $\mathbb{Z}$; this ring is denoted as $\mathbb{Z}_p$ and is called the ring of p-adic integers.
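The p-adic absolute value used above is straightforward to compute for integers, and doing so makes the statement about balls concrete: two integers are p-adically close exactly when their difference is divisible by a high power of p. The Python sketch below is an illustration only; it adopts the usual convention $|0|_p = 0$.

```python
def p_adic_abs(x, p):
    """|x|_p = p**(-n), where p**n is the largest power of p dividing x (|0|_p = 0)."""
    if x == 0:
        return 0.0
    n = 0
    while x % p == 0:
        x //= p
        n += 1
    return p ** (-n)

p = 5
# 7 and 7 + 5**3 differ by a multiple of 5**3, so they are 5-adically close:
print(p_adic_abs(7 - (7 + 5**3), p))   # 0.008 = 5**-3
# ...while 7 and 8 are at distance 1:
print(p_adic_abs(7 - 8, p))            # 1.0
```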
Ideal class group
In a Dedekind domain A (e.g., a ring of integers in a number field or the coordinate ring of a smooth affine curve) with the field of fractions , an ideal is invertible in the sense: there exists a f
|
https://en.wikipedia.org/wiki/Computational%20topology
|
Algorithmic topology, or computational topology, is a subfield of topology with an overlap with areas of computer science, in particular, computational geometry and computational complexity theory.
A primary concern of algorithmic topology, as its name suggests, is to develop efficient algorithms for solving problems that arise naturally in fields such as computational geometry, graphics, robotics, structural biology and chemistry, using methods from computable topology.
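As a small, self-contained taste of the kind of combinatorial computation involved (a toy illustration, not one of the algorithms discussed below), the Python sketch computes the Euler characteristic of a triangulated surface directly from its list of triangles; the tetrahedron-boundary input is an assumed example.

```python
from itertools import combinations

def euler_characteristic(triangles):
    """V - E + F for a 2-dimensional simplicial complex given by its triangles."""
    vertices, edges = set(), set()
    for tri in triangles:
        vertices.update(tri)
        edges.update(frozenset(e) for e in combinations(tri, 2))
    return len(vertices) - len(edges) + len(triangles)

# Boundary of a tetrahedron: 4 vertices, 6 edges, 4 faces -> chi = 2 (a 2-sphere)
sphere = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(euler_characteristic(sphere))  # 2
```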
Major algorithms by subject area
Algorithmic 3-manifold theory
A large family of algorithms concerning 3-manifolds revolve around normal surface theory, which is a phrase that encompasses several techniques to turn problems in 3-manifold theory into integer linear programming problems.
Rubinstein and Thompson's 3-sphere recognition algorithm. This is an algorithm that takes as input a triangulated 3-manifold and determines whether or not the manifold is homeomorphic to the 3-sphere. It has exponential run-time in the number of tetrahedral simplexes in the initial 3-manifold, and also an exponential memory profile. Moreover, it is implemented in the software package Regina. Saul Schleimer went on to show the problem lies in the complexity class NP. Furthermore, Raphael Zentner showed that the problem lies in the complexity class coNP, provided that the generalized Riemann hypothesis holds. He uses instanton gauge theory, the geometrization theorem of 3-manifolds, and subsequent work of Greg Kuperberg on the complexity of knottedness detection.
The connect-sum decomposition of 3-manifolds is also implemented in Regina, has exponential run-time and is based on a similar algorithm to the 3-sphere recognition algorithm.
Determining that the Seifert-Weber 3-manifold contains no incompressible surface has been algorithmically implemented by Burton, Rubinstein and Tillmann and based on normal surface theory.
The Manning algorithm is an algorithm to find hyperbolic structures on 3-manifolds wh
|
https://en.wikipedia.org/wiki/Water%20supply%20network
|
A water supply network or water supply system is a system of engineered hydrologic and hydraulic components that provide water supply. A water supply system typically includes the following:
A drainage basin (see water purification – sources of drinking water)
A raw water collection point (above or below ground) where the water accumulates, such as a lake, a river, or groundwater from an underground aquifer. Raw water may be transferred using uncovered ground-level aqueducts, covered tunnels, or underground water pipes to water purification facilities.
Water purification facilities. Treated water is transferred using water pipes (usually underground).
Water storage facilities such as reservoirs, water tanks, or water towers. Smaller water systems may store the water in cisterns or pressure vessels. Tall buildings may also need to store water locally in pressure vessels in order for the water to reach the upper floors.
Additional water pressurizing components such as pumping stations may need to be situated at the outlet of underground or aboveground reservoirs or cisterns (if gravity flow is impractical).
A pipe network for distribution of water to consumers (which may be private houses or industrial, commercial, or institution establishments) and other usage points (such as fire hydrants)
Connections to the sewers (underground pipes, or aboveground ditches in some developing countries) are generally found downstream of the water consumers, but the sewer system is considered to be a separate system, rather than part of the water supply system.
Water supply networks are often run by public utilities of the water industry.
Water abstraction and raw water transfer
Raw water (untreated) is from a surface water source (such as an intake on a lake or a river) or from a groundwater source (such as a water well drawing from an underground aquifer) within the watershed that provides the water resource.
The raw water is transferred to the water purification facilit
|
https://en.wikipedia.org/wiki/FutureGen
|
FutureGen was a project to demonstrate capture and sequestration of waste carbon dioxide from a coal-fired electrical generating station. The project (renamed FutureGen 2.0) was retrofitting a shuttered coal-fired power plant in Meredosia, Illinois, with oxy-combustion generators. The waste CO2 would be piped approximately to be sequestered in underground saline formations. FutureGen was a partnership between the United States government and an alliance of primarily coal-related corporations. Costs were estimated at US$1.65 billion, with $1.0 billion provided by the Federal Government.
First announced by President George W. Bush in 2003, construction started in 2014 after restructuring, canceling, relocating, and restarting. Citing an inability to commit and spend the funds by deadlines in 2015, the Department of Energy withdrew funds and suspended FutureGen 2.0 in February, 2015. The government also cited the Alliance's inability to raise the requisite amount of private funding. The Meredosia power plant that had been planned for retrofit was demolished around 2021.
FutureGen 2.0 would have been the most comprehensive Department of Energy Carbon Capture and Storage demonstration project, involving all phases from combustion to sequestration. FutureGen's initial plan involved integrated gasification combined cycle technology to produce both electricity and hydrogen. Early in the project it was to be sited in Mattoon, IL.
Original project
The original incarnation of FutureGen was as a public-private partnership to build the world's first near zero-emissions coal-fueled power plant. The 275-megawatt plant would be intended to prove the feasibility of producing electricity and hydrogen from coal while capturing and permanently storing carbon dioxide underground. The Alliance intended to build the plant in Mattoon Township, Coles County, Illinois northwest of Mattoon, Illinois, subject to necessary approvals (issuing a “Record of Decision”) by the Department of Ene
|
https://en.wikipedia.org/wiki/Nothing-up-my-sleeve%20number
|
In cryptography, nothing-up-my-sleeve numbers are any numbers which, by their construction, are above suspicion of hidden properties. They are used in creating cryptographic functions such as hashes and ciphers. These algorithms often need randomized constants for mixing or initialization purposes. The cryptographer may wish to pick these values in a way that demonstrates the constants were not selected for a nefarious purpose, for example, to create a backdoor to the algorithm. These fears can be allayed by using numbers created in a way that leaves little room for adjustment. An example would be the use of initial digits from the number π as the constants. Using digits of π millions of places after the decimal point would not be considered trustworthy because the algorithm designer might have selected that starting point because it created a secret weakness the designer could later exploit.
Digits in the positional representations of real numbers such as π, e, and irrational roots are believed to appear with equal frequency (see normal number). Such numbers can be viewed as the opposite extreme of Chaitin–Kolmogorov random numbers in that they appear random but have very low information entropy. Their use is motivated by early controversy over the U.S. Government's 1975 Data Encryption Standard, which came under criticism because no explanation was supplied for the constants used in its S-box (though they were later found to have been carefully selected to protect against the then-classified technique of differential cryptanalysis). Thus a need was felt for a more transparent way to generate constants used in cryptography.
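The idea of deriving constants from a well-known mathematical quantity can be made concrete with a short sketch. The scheme below mirrors the sine-based derivation used for the MD5 round constants (mentioned under Examples); presenting it as a general recipe is this illustration's assumption, not a specification.

```python
import math

def sine_constant(i, bits=32):
    """Constant derived transparently from the sine function: floor(|sin(i)| * 2**bits)."""
    return int(abs(math.sin(i)) * 2**bits)

# First few MD5-style round constants, derived from sin(1), sin(2), ...
for i in range(1, 5):
    print(hex(sine_constant(i)))   # 0xd76aa478, 0xe8c7b756, ...
```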
"Nothing up my sleeve" is a phrase associated with magicians, who sometimes preface a magic trick by holding open their sleeves to show they have no objects hidden inside.
Examples
Ron Rivest used the trigonometric sine function to generate constants for the widely used MD5 hash.
The U.S. National Security Agency used the square roots of sma
|
https://en.wikipedia.org/wiki/OMAP
|
The OMAP (Open Multimedia Applications Platform) family, developed by Texas Instruments, was a series of image/video processors. They are proprietary system on chips (SoCs) for portable and mobile multimedia applications. OMAP devices generally include a general-purpose ARM architecture processor core plus one or more specialized co-processors. Earlier OMAP variants commonly featured a variant of the Texas Instruments TMS320 series digital signal processor.
The platform was created after December 12, 2002, as STMicroelectronics and Texas Instruments jointly announced an initiative for Open Mobile Application Processor Interfaces (OMAPI) intended to be used with 2.5 and 3G mobile phones, that were going to be produced during 2003. (This was later merged into a larger initiative and renamed the MIPI Alliance.) The OMAP was Texas Instruments' implementation of this standard. (The STMicroelectronics implementation was named Nomadik.)
OMAP did enjoy some success in the smartphone and tablet market until 2011 when it lost ground to Qualcomm Snapdragon. On September 26, 2012, Texas Instruments announced they would wind down their operations in smartphone and tablet oriented chips and instead focus on embedded platforms. On November 14, 2012, Texas Instruments announced they would cut 1,700 jobs due to their shift from mobile to embedded platforms. The last OMAP5 chips were released in Q2 2013.
OMAP family
The OMAP family consists of three product groups classified by performance and intended application:
high-performance applications processors
basic multimedia applications processors
integrated modem and applications processors
Further, two main distribution channels exist, and not all parts are available in both channels. The genesis of the OMAP product line is from partnership with cell phone vendors, and the main distribution channel involves sales directly to such wireless handset vendors. Parts developed to suit evolving cell phone requirements are flexib
|
https://en.wikipedia.org/wiki/APEXC
|
The APE(X)C, or All Purpose Electronic (X) Computer series was designed by Andrew Donald Booth at Birkbeck College, London in the early 1950s. His work on the APE(X)C series was sponsored by the British Rayon Research Association. Although the naming conventions are slightly unclear, it seems the first model belonged to the BRRA. According to Booth, the X stood for X-company.
One of the series was also known as the APE(X)C or All Purpose Electronic X-Ray Computer and was sited at Birkbeck.
Background
From 1943 on, Booth started working on the determination of crystal structures using X-ray diffraction data. The computations involved were extremely tedious and there was ample incentive for automating the process and he developed an analogue computer to compute the reciprocal spacings of the diffraction pattern.
In 1947, along with his collaborator and future spouse Kathleen Britten, he spent a few months with von Neumann's team, which was the leading edge in computer research at the time.
ARC and SEC
Booth designed an electromechanical computer, the ARC (Automatic Relay Computer), in the late 1940s (1947-1948). Later on, they built an experimental electronic computer named SEC (Simple Electronic Computer, designed around 1948-1949) - and finally, the APE(X)C (All-Purpose Electronic Computer) series.
The computers were programmed by Kathleen.
The APE(X) C series
The APE(X)C series included the following machines:
APE(X)C: Birkbeck College, London, first time operated in May 1952, ready for use at the end of 1953
APE(N)C: Board of Mathematical Machines, Oslo ('N' likely stands for 'Norway'), also known as NUSSE
APE(H)C: British Tabulating Machine Company (it is unclear what 'H' stands for - perhaps 'Hollerith', as the company sold Hollerith unit record equipment)
APE(R)C: British Rayon Research Association ('R' stands for 'Rayon'), ready for use in June 1952
UCC: University College, London (circa January 1956)
MAC or MAGIC (Magnetic Automatic Calculator)
|
https://en.wikipedia.org/wiki/Osmotic%20stress%20technique
|
The osmotic stress technique is a method for measuring the effect of water on biological molecules, particularly enzymes. Just as the properties of molecules can depend on the presence of salts, pH, and temperature, they can depend significantly on the amount of water present. In the osmotic stress technique, flexible neutral polymers such as polyethylene glycol and dextran are added to the solution containing the molecule of interest, replacing a significant part of the water. The amount of water replaced is characterized by the chemical activity of water.
See also
Osmotic shock
References
Tables containing osmotic pressure data for use in the osmotic stress technique
Biochemistry methods
|
https://en.wikipedia.org/wiki/Dual%20piping
|
Dual piping is a system of plumbing installations used to supply both potable and reclaimed water to a home or business. Under this system, two completely separate water piping systems are used to deliver water to the user. This system prevents mixing of the two water supplies, which is undesirable, since reclaimed water is usually not intended for human consumption.
In the United States, reclaimed water is distributed in lavender (light purple) pipes, to alert users that the pipes contain non-potable water. Hong Kong has used a dual piping system for toilet flushing with sea water since the 1950s.
According to the El Dorado Irrigation District in California, the average dual-piped home used approximately of potable water in 2006. The average single family residence with traditional piping using potable water for irrigation as well as for domestic uses used between , higher elevation, and , lower elevation.
Further reading
Tang, S.L., Derek P.T. Yue, Damien C.C. Ku: Engineering and Costs of Dual Water Supply Systems, International Water Supply Association 2007,
Plumbing
|
https://en.wikipedia.org/wiki/Lee%20Felsenstein
|
Lee Felsenstein (born April 27, 1945) is an American computer engineer who played a central role in the development of the personal computer. He was one of the original members of the Homebrew Computer Club and the designer of the Osborne 1, the first mass-produced portable computer.
Before the Osborne, Felsenstein designed the Intel 8080 based Sol-20 computer from Processor Technology, the PennyWhistle modem, and other early "S-100 bus" era designs. His shared-memory alphanumeric video display design, the Processor Technology VDM-1 video display module board, was widely copied and became the basis for the standard display architecture of personal computers.
Many of his designs were leaders in reducing costs of computer technologies for the purpose of making them available to large markets. His work featured a concern for the social impact of technology and was influenced by the philosophy of Ivan Illich. Felsenstein was the engineer for the Community Memory project, one of the earliest attempts to place networked computer terminals in public places to facilitate social interactions among individuals, in the era before the commercial Internet.
Life
Felsenstein graduated from Central High School in Philadelphia as a member of class 219. As a young man, Felsenstein was a New Left radical. From October through December 1964, he was a participant in the Free Speech Movement and was one of 768 arrested in the climactic "Sproul Hall Sit-In" of December 2–3, 1964. He also wrote for the Berkeley Barb, one of the leading underground newspapers.
He had entered University of California, Berkeley first in 1963, joined the Co-operative Work-Study Program in Engineering in 1964 and dropped out at the end of 1967, working as a Junior Engineer at the Ampex Corporation from 1968 through 1971, when he re-enrolled at Berkeley. He received a B.S. in electrical engineering and computer science from the University of California, Berkeley in 1972.
From 1981–1983, Felsenstein was emp
|
https://en.wikipedia.org/wiki/Non-maskable%20interrupt
|
In computing, a non-maskable interrupt (NMI) is a hardware interrupt that standard interrupt-masking techniques in the system cannot ignore. It typically occurs to signal attention for non-recoverable hardware errors. Some NMIs may be masked, but only by using proprietary methods specific to the particular NMI.
An NMI is often used when response time is critical or when an interrupt should never be disabled during normal system operation. Such uses include reporting non-recoverable hardware errors, system debugging and profiling, and handling of special cases like system resets.
Modern computer architectures typically use NMIs to handle non-recoverable errors which need immediate attention. Therefore, such interrupts should not be masked in the normal operation of the system. These errors include non-recoverable internal system chipset errors, corruption in system memory such as parity and ECC errors, and data corruption detected on system and peripheral buses.
On some systems, a computer user can trigger an NMI through hardware and software debugging interfaces and system reset buttons.
Programmers typically use debugging NMIs to diagnose and fix faulty code. In such cases, an NMI can execute an interrupt handler that transfers control to a special monitor program. From this program, a developer can inspect the machine's memory and examine the internal state of the program at the instant of its interruption. This also allows the debugging or diagnosing of computers which appear hung.
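The distinction between maskable and non-maskable interrupts can be sketched in a few lines of code. The toy dispatcher below is purely illustrative (no real hardware behaves like a Python class); it only shows that a CPU with interrupts masked still services requests flagged as non-maskable.

```python
class ToyCPU:
    """Minimal model: pending interrupts are serviced according to maskability."""

    def __init__(self):
        self.interrupts_enabled = True   # state changed by CLI/STI-like instructions
        self.pending = []                # list of (name, non_maskable) requests

    def raise_interrupt(self, name, non_maskable=False):
        self.pending.append((name, non_maskable))

    def dispatch(self):
        for name, non_maskable in self.pending:
            if non_maskable or self.interrupts_enabled:
                print(f"servicing {name}")
            else:
                print(f"{name} deferred (interrupts masked)")

cpu = ToyCPU()
cpu.interrupts_enabled = False            # e.g. inside a critical section
cpu.raise_interrupt("timer")              # ordinary, maskable interrupt
cpu.raise_interrupt("memory parity error", non_maskable=True)
cpu.dispatch()                            # the NMI is serviced even while masked
```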
History
In older architectures, NMIs were used for interrupts which were typically never disabled because of the required response time. They were hidden signals. Examples include the floppy disk controller on the Amstrad PCW, the 8087 coprocessor on the x86 when used in the IBM PC or its compatibles (even though Intel recommended connecting it to a normal interrupt), and the Low Battery signal on the HP 95LX.
In the original IBM PC, an NMI was triggered if a parity error was de
|
https://en.wikipedia.org/wiki/Langenberg%20transmission%20tower
|
The Langenberg transmission tower (also translated as "Sender Langenberg" or "Transmission Facility Langenberg") is a broadcasting station for analog FM radio and digital TV (DVB-T2 HD) signals. It is located in Langenberg, Velbert, Germany, and is owned and operated by Westdeutscher Rundfunk (WDR).
The transmitting site has changed considerably over its history. The transmitter first went into service in 1927 with 60 kilowatts (kW) of power and a T-aerial hung between two 100-metre freestanding steel-frame towers insulated against ground.
History
In 1926 the Westdeutsche Funkstunde broadcasting company and an association of local residents agreed to build the transmitter in Langenberg. On January 15, 1927, the transmitter was inaugurated.
In the early 1930s, communist underground groups tried to manipulate the line from the studio to the transmitter in order to broadcast their own propaganda. Their attempts failed, but they did manage to attach a red star to the top of one of the towers, which was removed on the same day.
In 1934 the T-aerial was replaced by an aerial hanging from a 160-metre wood framework tower and the transmission power was increased to 100 kW. However, this tower was destroyed on October 10, 1935 by a tornado. After this a triangular aerial hung on three 45-metre freestanding towers was built; this went into service in December 1935. In 1940/41 a second aerial was installed on a 240-metre insulated guyed steel tube mast. The entire aerial system was destroyed by SS-Postschutz troops on April 12, 1945.
Post-1945
After World War II, British forces built two triangular aerials mounted on six masts, each 50 metres high. One of these aerials was removed in 1948 and an insulated radio mast built on its site. The other aerial was destroyed in a storm in 1949 which broke two of the three masts. The third mast was transformed into an AM transmitter and was in service until 1957. In 1949 a second radio mast with a height of 120 metres was built, and in 1952 a third guyed mast followe
|
https://en.wikipedia.org/wiki/Blood%20culture
|
A blood culture is a medical laboratory test used to detect bacteria or fungi in a person's blood. Under normal conditions, the blood does not contain microorganisms: their presence can indicate a bloodstream infection such as bacteremia or fungemia, which in severe cases may result in sepsis. By culturing the blood, microbes can be identified and tested for resistance to antimicrobial drugs, which allows clinicians to provide an effective treatment.
To perform the test, blood is drawn into bottles containing a liquid formula that enhances microbial growth, called a culture medium. Usually, two containers are collected during one draw, one of which is designed for aerobic organisms that require oxygen, and one of which is for anaerobic organisms, that do not. These two containers are referred to as a set of blood cultures. Two sets of blood cultures are sometimes collected from two different blood draw sites. If an organism only appears in one of the two sets, it is more likely to represent contamination with skin flora than a true bloodstream infection. False negative results can occur if the sample is collected after the person has received antimicrobial drugs or if the bottles are not filled with the recommended amount of blood. Some organisms do not grow well in blood cultures and require special techniques for detection.
The containers are placed in an incubator for several days to allow the organisms to multiply. If microbial growth is detected, a Gram stain is conducted from the culture bottle to confirm that organisms are present and provide preliminary information about their identity. The blood is then subcultured, meaning it is streaked onto an agar plate to isolate microbial colonies for full identification and antimicrobial susceptibility testing. Because it is essential that bloodstream infections are diagnosed and treated quickly, rapid testing methods have been developed using technologies like polymerase chain reaction and MALDI-TOF MS.
Procedure
|
https://en.wikipedia.org/wiki/Random%20number%20generator%20attack
|
The security of cryptographic systems depends on some secret data that is known to authorized persons but unknown and unpredictable to others. To achieve this unpredictability, some randomization is typically employed. Modern cryptographic protocols often require frequent generation of random quantities. Cryptographic attacks that subvert or exploit weaknesses in this process are known as random number generator attacks.
A high quality random number generation (RNG) process is almost always required for security, and lack of quality generally provides attack vulnerabilities and so leads to lack of security, even to complete compromise, in cryptographic systems. The RNG process is particularly attractive to attackers because it is typically a single isolated hardware or software component easy to locate. If the attacker can substitute pseudo-random bits generated in a way they can predict, security is totally compromised, yet generally undetectable by any upstream test of the bits. Furthermore, such attacks require only a single access to the system that is being compromised. No data need be sent back in contrast to, say, a computer virus that steals keys and then e-mails them to some drop point.
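A classic illustration of such an attack is recovering the output of a generator that was seeded from something guessable, such as the current time. The Python sketch below is a self-contained toy (it brute-forces Python's non-cryptographic Mersenne Twister seeded with a timestamp, purely for demonstration); real attacks follow the same outline against weaker assumptions.

```python
import random
import time

# The "victim" seeds a non-cryptographic PRNG with the current time (a common mistake)
# and derives a secret token from it.
seed = int(time.time())
rng = random.Random(seed)
secret_token = rng.getrandbits(128)

# The attacker only knows roughly when the token was generated, so they brute-force
# the small window of plausible seeds and reproduce the "random" output exactly.
now = int(time.time())
for guess in range(now - 60, now + 1):
    if random.Random(guess).getrandbits(128) == secret_token:
        print(f"seed recovered: {guess}, token reproduced")
        break
```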
Human generation of random quantities
Humans generally do poorly at generating random quantities. Magicians, professional gamblers and con artists depend on the predictability of human behavior. In World War II German code clerks were instructed to select three letters at random to be the initial rotor setting for each Enigma machine message. Instead some chose predictable values like their own or a girlfriend's initials, greatly aiding Allied breaking of these encryption systems. Another example is the often predictable ways computer users choose passwords (see password cracking).
Nevertheless, in the specific case of playing mixed strategy games, use of human gameplay entropy for randomness generation was studied by Ran Halprin and Moni Naor.
Attacks
Software RNGs
Jus
|
https://en.wikipedia.org/wiki/Edward%20Yourdon
|
Edward Nash Yourdon (April 30, 1944 – January 20, 2016) was an American software engineer, computer consultant, author and lecturer, and software engineering methodology pioneer. He was one of the lead developers of the structured analysis techniques of the 1970s and a co-developer of both the Yourdon/Whitehead method for object-oriented analysis/design in the late 1980s and the Coad/Yourdon methodology for object-oriented analysis/design in the 1990s.
Biography
Yourdon obtained his B.S. in applied mathematics from Massachusetts Institute of Technology (MIT) in 1965, and did graduate work in electrical engineering and computer science at MIT and the Polytechnic Institute of New York.
In 1964 Yourdon started working at Digital Equipment Corporation developing FORTRAN programs for the PDP-5 minicomputer and later assembler for the PDP-8. In the late 1960s and early 1970s he worked at a small consulting firm and as an independent consultant. In 1974 Yourdon founded his own consulting firm, YOURDON Inc., to provide educational, publishing, and consulting services. After he sold this firm in 1986 he served on the board of multiple IT consultancy corporations and was an advisor on several research projects in the software industry throughout the 1990s.
In June 1997, Yourdon was inducted into the Computer Hall of Fame, along with such notables as Charles Babbage, James Martin, Grace Hopper, and Gerald Weinberg. In December 1999 Crosstalk: The Journal of Defense Software Engineering named him one of the ten most influential people in the software field.
In the late 1990s, Yourdon became the center of controversy over his beliefs that Y2K-related computer problems could result in severe software failures that would culminate in widespread social collapse. Due to the efforts of Yourdon and thousands of dedicated technologists, developers and project managers, these potential critical system failure points were successfully remediated, thus avoiding the problems Yourdon and
|
https://en.wikipedia.org/wiki/MoSCoW%20method
|
The MoSCoW method is a prioritization technique used in management, business analysis, project management, and software development to reach a common understanding with stakeholders on the importance they place on the delivery of each requirement; it is also known as MoSCoW prioritization or MoSCoW analysis.
The term MOSCOW itself is an acronym derived from the first letter of each of four prioritization categories:
M - Must have,
S - Should have,
C - Could have,
W - Won't have.
The interstitial Os are added to make the word pronounceable. While the Os are usually in lower-case to indicate that they do not stand for anything, the all-capitals MOSCOW is also used.
Background
This prioritization method was developed by Dai Clegg in 1994 for use in rapid application development (RAD). It was first used extensively with the dynamic systems development method (DSDM) from 2002.
MoSCoW is often used with timeboxing, where a deadline is fixed so that the focus must be on the most important requirements, and is commonly used in agile software development approaches such as Scrum, rapid application development (RAD), and DSDM.
Prioritization of requirements
All requirements are important; however, to deliver the greatest and most immediate business benefits early, the requirements must be prioritized. Developers will initially try to deliver all the Must have, Should have and Could have requirements, but the Should and Could requirements will be the first to be removed if the delivery timescale looks threatened.
The plain English meaning of the prioritization categories has value in getting customers to better understand the impact of setting a priority, compared to alternatives like High, Medium and Low.
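For teams that track requirements in spreadsheets or code, the prioritization can be represented directly. The short Python sketch below is only an illustration (the requirement names are invented); it orders a backlog so that Must have items surface first and Won't have items drop to the end.

```python
MOSCOW_ORDER = {"Must have": 0, "Should have": 1, "Could have": 2, "Won't have": 3}

backlog = [
    ("Export to PDF", "Could have"),
    ("User login", "Must have"),
    ("Dark mode", "Won't have"),
    ("Password reset", "Should have"),
]

# Sort by MoSCoW category so the timebox is planned from the top down.
for name, category in sorted(backlog, key=lambda item: MOSCOW_ORDER[item[1]]):
    print(f"{category:12} {name}")
```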
The categories are typically understood as:
Must have
Requirements labelled as Must have are critical to the current delivery timebox in order for it to be a success. If even one Must have requirement is not included, the project delivery should be considered a
|
https://en.wikipedia.org/wiki/Calculus%20of%20structures
|
The calculus of structures is a proof calculus with deep inference for studying the structural proof theory of noncommutative logic. The calculus has since been applied to study linear logic, classical logic, modal logic, and process calculi, and many benefits are claimed to follow in these investigations from the way in which deep inference is made available in the calculus.
References
Alessio Guglielmi (2004). 'A System of Interaction and Structure'. ACM Transactions on Computational Logic.
Kai Brünnler (2004). Deep Inference and Symmetry in Classical Proofs. Logos Verlag.
External links
Calculus of structures homepage
CoS in Maude: page documenting implementations of logical systems in the calculus of structures, using the Maude system.
Logical calculi
|
https://en.wikipedia.org/wiki/Deep%20inference
|
Deep inference names a general idea in structural proof theory that breaks with the classical sequent calculus by generalising the notion of structure to permit inference to occur in contexts of high structural complexity. The term deep inference is generally reserved for proof calculi where the structural complexity is unbounded; in this article we will use non-shallow inference to refer to calculi that have structural complexity greater than the sequent calculus, but not unboundedly so, although this is not at present established terminology.
Deep inference is not important in logic outside of structural proof theory, since the phenomena that lead to the proposal of formal systems with deep inference are all related to the cut-elimination theorem. The first calculus of deep inference was proposed by Kurt Schütte, but the idea did not generate much interest at the time.
Nuel Belnap proposed display logic in an attempt to characterise the essence of structural proof theory. The calculus of structures was proposed in order to give a cut-free characterisation of noncommutative logic. Cirquent calculus was developed as a system of deep inference allowing to explicitly account for the possibility of subcomponent-sharing.
Notes
Further reading
Kai Brünnler, "Deep Inference and Symmetry in Classical Proofs" (Ph.D. thesis 2004), also published in book form by Logos Verlag ().
Deep Inference and the Calculus of Structures Intro and reference web page about ongoing research in deep inference.
Proof theory
Inference
|
https://en.wikipedia.org/wiki/Proof%20calculus
|
In mathematical logic, a proof calculus or a proof system is built to prove statements.
Overview
A proof system includes the components:
Formal language: The set L of formulas admitted by the system, for example, propositional logic or first-order logic.
Rules of inference: List of rules that can be employed to prove theorems from axioms and theorems.
Axioms: Formulas in L assumed to be valid. All theorems are derived from axioms.
A formal proof of a well-formed formula in a proof system is a derivation of that formula from the axioms using the rules of inference of the proof system; its existence establishes that the well-formed formula is a theorem of the proof system.
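As a tiny concrete instance, here is a one-line derivation in the Lean proof assistant (a sketch assuming Lean 4 syntax, added for illustration): the proof term hpq hp is an application of modus ponens, the kind of inference rule a proof calculus packages.

```lean
-- Modus ponens as a proof term: from p and p → q, derive q.
example (p q : Prop) (hp : p) (hpq : p → q) : q := hpq hp
```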
Usually a given proof calculus encompasses more than a single particular formal system, since many proof calculi are under-determined and can be used for radically different logics. For example, a paradigmatic case is the sequent calculus, which can be used to express the consequence relations of both intuitionistic logic and relevance logic. Thus, loosely speaking, a proof calculus is a template or design pattern, characterized by a certain style of formal inference, that may be specialized to produce specific formal systems, namely by specifying the actual inference rules for such a system. There is no consensus among logicians on how best to define the term.
Examples of proof calculi
The most widely known proof calculi are those classical calculi that are still in widespread use:
The class of Hilbert systems, of which the most famous example is the 1928 Hilbert–Ackermann system of first-order logic;
Gerhard Gentzen's calculus of natural deduction, which is the first formalism of structural proof theory, and which is the cornerstone of the formulae-as-types correspondence relating logic to functional programming;
Gentzen's sequent calculus, which is the most studied formalism of structural proof theory.
Many other proof calculi were, or might have been, seminal, but are not widely used today.
Aristotle's syllogistic calculus, presented in the
|
https://en.wikipedia.org/wiki/Ecological%20effects%20of%20biodiversity
|
The diversity of species and genes in ecological communities affects the functioning of these communities. These ecological effects of biodiversity are in turn affected both by climate change (through enhanced greenhouse gases, aerosols and loss of land cover) and by changes in biological diversity itself, which are causing a rapid loss of biodiversity and extinctions of species and local populations. The current rate of extinction is sometimes considered a mass extinction, with current species extinction rates on the order of 100 to 1000 times as high as in the past.
The two main areas where the effect of biodiversity on ecosystem function have been studied are the relationship between diversity and productivity, and the relationship between diversity and community stability. More biologically diverse communities appear to be more productive (in terms of biomass production) than are less diverse communities, and they appear to be more stable in the face of perturbations.
Animals that inhabit an area may also alter the conditions for survival through factors related to climate.
Definitions
In order to understand the effects that changes in biodiversity will have on ecosystem functioning, it is important to define some terms. Biodiversity is not easily defined, but may be thought of as the number and/or evenness of genes, species, and ecosystems in a region. This definition includes genetic diversity, or the diversity of genes within a species, species diversity, or the diversity of species within a habitat or region, and ecosystem diversity, or the diversity of habitats within a region.
Two things commonly measured in relation to changes in diversity are productivity and stability. Productivity is a measure of ecosystem function. It is generally measured by taking the total aboveground biomass of all plants in an area. Many assume that it can be used as a general indicator of ecosystem function and that total resource use and other indicators of ecosystem function are correlated with productivity.
|
https://en.wikipedia.org/wiki/Maude%20system
|
The Maude system is an implementation of rewriting logic. It is similar in its general approach to Joseph Goguen's OBJ3 implementation of equational logic, but based on rewriting logic rather than order-sorted equational logic, and with a heavy emphasis on powerful metaprogramming based on reflection.
Maude is free software, and tutorials are available online. It was originally developed at SRI International, but is now developed by a diverse collaboration of researchers.
Introduction
Maude sets out to solve a different set of problems than ordinary imperative languages like C, Java or Perl. It is a formal reasoning tool, which can help us verify that things are "as they should be", and show us why they are not if this is the case. In other words, Maude lets us define formally what we mean by some concept in a very abstract manner (not concerning ourselves with how the structure is internally represented and so on), while letting us describe what is considered equal in our theory (equations) and what state changes it can go through (rewrite rules).
Maude modules (rewrite theories) consist of a term-language plus sets of equations and rewrite-rules. Terms in a rewrite theory are constructed using operators (functions taking 0 or more arguments of some sort, which return a term of a specific sort). Operators taking 0 arguments are considered constants, and one constructs their term-language by these simple constructs. Maude lets the user specify whether or not operators are infix, postfix or prefix (default); this is done using underscores as place fillers for the input terms.
Reduction equations are assumed to be confluent and terminating. Rewrite rules do not have this restriction.
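The flavour of executing such a rewrite theory can be imitated in a few lines of ordinary code. The Python sketch below is not Maude syntax; it is a toy reduction engine which, assuming confluent and terminating rules, rewrites Peano-style terms such as add(s(0), s(0)) to a normal form.

```python
# Terms are nested tuples: ("add", x, y), ("s", x), or the constant "0".
# Upper-case strings in rule patterns act as variables.
RULES = [
    (("add", "0", "Y"), "Y"),                              # add(0, Y)    -> Y
    (("add", ("s", "X"), "Y"), ("s", ("add", "X", "Y"))),  # add(s(X), Y) -> s(add(X, Y))
]

def match(pattern, term, env):
    if isinstance(pattern, str) and pattern.isupper():     # pattern variable
        env[pattern] = term
        return True
    if isinstance(pattern, str) or isinstance(term, str):
        return pattern == term
    return (len(pattern) == len(term)
            and all(match(p, t, env) for p, t in zip(pattern, term)))

def substitute(template, env):
    if isinstance(template, str):
        return env.get(template, template)
    return tuple(substitute(part, env) for part in template)

def reduce_term(term):
    """Rewrite innermost-first until no rule applies (assumes confluence/termination)."""
    if isinstance(term, tuple):
        term = tuple(reduce_term(part) for part in term)
    for lhs, rhs in RULES:
        env = {}
        if match(lhs, term, env):
            return reduce_term(substitute(rhs, env))
    return term

one = ("s", "0")
print(reduce_term(("add", one, one)))   # ('s', ('s', '0')), i.e. 2
```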
When Maude "executes", it rewrites terms according to the equations and rewrite rules. Maude rewrites terms according to the equations whenever there is a match between the closed terms that one tries to rewrite (or reduce) and the left hand side of an equation in our equatio
|
https://en.wikipedia.org/wiki/Club%20set
|
In mathematics, particularly in mathematical logic and set theory, a club set is a subset of a limit ordinal that is closed under the order topology, and is unbounded (see below) relative to the limit ordinal. The name club is a contraction of "closed and unbounded".
Formal definition
Formally, if $\kappa$ is a limit ordinal, then a set $C \subseteq \kappa$ is closed in $\kappa$ if and only if, for every $\alpha < \kappa$, if $\sup(C \cap \alpha) = \alpha \neq 0$ then $\alpha \in C$. Thus, if the limit of some sequence from $C$ is less than $\kappa$, then the limit is also in $C$.
If $\kappa$ is a limit ordinal and $C \subseteq \kappa$, then $C$ is unbounded in $\kappa$ if, for any $\alpha < \kappa$, there is some $\beta \in C$ such that $\alpha < \beta$.
If a set is both closed and unbounded, then it is a club set. Closed proper classes are also of interest (every proper class of ordinals is unbounded in the class of all ordinals).
For example, the set of all countable limit ordinals is a club set with respect to the first uncountable ordinal; but it is not a club set with respect to any higher limit ordinal, since it is neither closed nor unbounded.
If $\kappa$ is an uncountable initial ordinal, then the set of all limit ordinals $\alpha < \kappa$ is closed unbounded in $\kappa$. In fact a club set is nothing else but the range of a normal function (i.e. increasing and continuous).
More generally, if $X$ is a nonempty set and $\lambda$ is a cardinal, then $C \subseteq [X]^{\lambda}$ (the set of subsets of $X$ of cardinality $\lambda$) is club if every union of a subset of $C$ is in $C$ and every subset of $X$ of cardinality less than $\lambda$ is contained in some element of $C$ (see stationary set).
The closed unbounded filter
Let $\kappa$ be a limit ordinal of uncountable cofinality $\lambda$. For some $\alpha < \lambda$, let $\langle C_\xi : \xi < \alpha \rangle$ be a sequence of closed unbounded subsets of $\kappa$. Then $\bigcap_{\xi < \alpha} C_\xi$ is also closed unbounded. To see this, one can note that an intersection of closed sets is always closed, so we just need to show that this intersection is unbounded. So fix any $\beta_0 < \kappa$, and for each n < ω choose from each $C_\xi$ an element $\beta_{n+1}^\xi > \beta_n$, which is possible because each is unbounded. Since this is a collection of fewer than $\lambda$ ordinals, all less than $\kappa$, their least upper bound must also be less than $\kappa$, so we can call it $\beta_{n+1}$. This process genera
|
https://en.wikipedia.org/wiki/Mostowski%20collapse%20lemma
|
In mathematical logic, the Mostowski collapse lemma, also known as the Shepherdson–Mostowski collapse, is a theorem of set theory introduced by Andrzej Mostowski and John Shepherdson.
Statement
Suppose that R is a binary relation on a class X such that
R is set-like: R−1[x] = {y : y R x} is a set for every x,
R is well-founded: every nonempty subset S of X contains an R-minimal element (i.e. an element x ∈ S such that R−1[x] ∩ S is empty),
R is extensional: R−1[x] ≠ R−1[y] for all distinct elements x and y of X
The Mostowski collapse lemma states that for every such R there exists a unique transitive class (possibly proper) whose structure under the membership relation is isomorphic to (X, R), and the isomorphism is unique. The isomorphism maps each element x of X to the set of images of elements y of X such that y R x (Jech 2003:69).
Generalizations
Every well-founded set-like relation can be embedded into a well-founded set-like extensional relation. This implies the following variant of the Mostowski collapse lemma: every well-founded set-like relation is isomorphic to set-membership on a (non-unique, and not necessarily transitive) class.
A mapping F such that F(x) = {F(y) : y R x} for all x in X can be defined for any well-founded set-like relation R on X by well-founded recursion. It provides a homomorphism of R onto a (non-unique, in general) transitive class. The homomorphism F is an isomorphism if and only if R is extensional.
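For a finite well-founded relation the recursion F(x) = {F(y) : y R x} can be carried out directly. The Python sketch below is an illustration on a made-up relation; it uses frozensets as the hereditarily finite sets that the elements collapse to.

```python
def mostowski_collapse(elements, relation):
    """F(x) = {F(y) : y R x}, computed by well-founded recursion on a finite relation.

    `relation` is a set of pairs (y, x) meaning y R x.
    """
    cache = {}

    def F(x):
        if x not in cache:
            cache[x] = frozenset(F(y) for (y, x2) in relation if x2 == x)
        return cache[x]

    return {x: F(x) for x in elements}

# A well-founded, extensional relation on {a, b, c}: a R b, a R c, b R c.
elements = {"a", "b", "c"}
relation = {("a", "b"), ("a", "c"), ("b", "c")}
collapse = mostowski_collapse(elements, relation)
# a collapses to the empty set, b to {0}, c to {0, {0}}, i.e. the ordinals 0, 1, 2.
for x in sorted(elements):
    print(x, collapse[x])
```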
The well-foundedness assumption of the Mostowski lemma can be alleviated or dropped in non-well-founded set theories. In Boffa's set theory, every set-like extensional relation is isomorphic to set-membership on a (non-unique) transitive class. In set theory with Aczel's anti-foundation axiom, every set-like relation is bisimilar to set-membership on a unique transitive class, hence every bisimulation-minimal set-like relation is isomorphic to a unique transitive class.
Application
Every set model of ZF is set-like and extensional. If the model is well-founded
|
https://en.wikipedia.org/wiki/Community%20Identification%20Number
|
The Official Municipality Key, formerly also known as the Official Municipality Characteristic Number or Municipality Code Number, is a number sequence for the identification of politically independent municipalities or unincorporated areas. Other classifications for the identification of areas include postal codes, NUTS codes or FIPS codes.
Germany
In Germany the Official Municipality Key serves statistical purposes and is issued by the statistics offices of individual German states. The municipality key is to be indicated in instances such as changing residence on the notice of departure or registration documents. This is done at the registration office in every town's city hall.
Structure
The municipality key consists of eight digits, which are generated as follows: the first two digits designate the individual German state. The third digit designates the government district (in areas without government districts a zero is used instead). The fourth and fifth digits designate the number of the urban area (in a district-free city) or the district (in a city with districts). The sixth, seventh, and eighth digits indicate the municipality or the number of the unincorporated area.
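Because the key is purely positional, splitting it into its components is a matter of slicing. The Python sketch below is illustrative only; the field names are labels chosen here, and Stuttgart's key 08111000 is used as the worked input.

```python
def parse_municipality_key(key: str):
    """Split an 8-digit German Official Municipality Key into its positional fields."""
    if len(key) != 8 or not key.isdigit():
        raise ValueError("expected an 8-digit key")
    return {
        "state": key[0:2],                # e.g. 08 = Baden-Wuerttemberg
        "government_district": key[2:3],  # 0 if the state has no government districts
        "district": key[3:5],             # urban area or district
        "municipality": key[5:8],         # 000 for a district-free city
    }

# Stuttgart's key, 08111000, decomposes into the fields described above.
print(parse_municipality_key("08111000"))
```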
Examples
08111000: Stuttgart
08: Baden-Württemberg
1: Government district of Stuttgart
11: Urban area of Stuttgart
000: No other municipality is available, since Stuttgart is an urban area
15352006: Aschersleben
15: Saxony-Anhalt
3: Government district of Magdeburg
52: District of Aschersleben-Staßfurt
006: City of Aschersleben
Federal States
01: Schleswig-Holstein
02: Hamburg
03: Lower Saxony
04: Bremen
05: North Rhine-Westphalia
06: Hesse
07: Rhineland-Palatinate
08: Baden-Württemberg
09: Bavaria
10: Saarland
11: Berlin
12: Brandenburg
13: Mecklenburg-Vorpommern
14: Saxony
15: Saxony-Anhalt
16: Thuringia
Austria
Structure
The municipality identifier consists of five digits in Austria, which are generated as follows: the first digit designates the number of the Austrian state, the second and third digits designate the district, and the fourth and fifth digits designate the municipality.
Examples
32521: Rappottenstein
|
https://en.wikipedia.org/wiki/Suspension%20%28topology%29
|
In topology, a branch of mathematics, the suspension of a topological space X is intuitively obtained by stretching X into a cylinder and then collapsing both end faces to points. One views X as "suspended" between these end points. The suspension of X is denoted by SX or susp(X).
There is a variation of the suspension for pointed spaces, which is called the reduced suspension and denoted by ΣX. The "usual" suspension SX is sometimes called the unreduced suspension, unbased suspension, or free suspension of X, to distinguish it from ΣX.
Free suspension
The (free) suspension of a topological space can be defined in several ways.
1. SX is the quotient space (X × [0, 1]) / (X × {0}, X × {1}). In other words, it can be constructed as follows:
Construct the cylinder X × [0, 1].
Consider the entire set X × {0} as a single point ("glue" all its points together).
Consider the entire set X × {1} as a single point ("glue" all its points together).
2. Another way to write this is: SX = v0 ∪_{p0} (X × [0, 1]) ∪_{p1} v1,
where v0 and v1 are two points, and for each i in {0, 1}, pi is the projection of X × {i} to the point vi (a function that maps everything to vi). That means, the suspension SX is the result of constructing the cylinder X × [0, 1] and then attaching it by its faces, X × {0} and X × {1}, to the points v0 and v1 along the projections p0 and p1.
3. One can view SX as two cones CX on X, glued together at their base.
4. SX can also be defined as the join X ∗ S0, where S0 is a discrete space with two points.
Properties
In rough terms, S increases the dimension of a space by one: for example, it takes an n-sphere to an (n + 1)-sphere for n ≥ 0.
Given a continuous map f : X → Y, there is a continuous map Sf : SX → SY defined by Sf([x, t]) := [f(x), t], where square brackets denote equivalence classes. This makes S into a functor from the category of topological spaces to itself.
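Spelled out in the equivalence-class notation above, functoriality of S amounts to the following identities (a standard restatement; amsmath assumed for the macros):

```latex
% Functoriality of the suspension, using the class notation [x, t] from above.
\[
  Sf([x,t]) = [f(x),\,t],
  \qquad
  S(\operatorname{id}_X) = \operatorname{id}_{SX},
  \qquad
  S(g \circ f) = Sg \circ Sf
  \quad \text{for continuous } f\colon X \to Y,\; g\colon Y \to Z .
\]
```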
Reduced suspension
If X is a pointed space with basepoint x0, there is a variation of the suspension which is sometimes more useful. The reduced suspension or based suspension ΣX of X is the quotient space:
ΣX = (X × I) / (X × {0} ∪ X × {1} ∪ {x0} × I), where I = [0, 1].
This is equivalent to taking SX and collapsing the line (x0 × I) joining the two ends to a single point.
|
https://en.wikipedia.org/wiki/Full%20scale
|
In electronics and signal processing, full scale represents the maximum amplitude a system can represent.
In digital systems, a signal is said to be at digital full scale when its magnitude has reached the maximum representable value. Once a signal has reached digital full scale, all headroom has been utilized, and any further increase in amplitude will result in an error known as clipping. The amplitude of a digital signal can be expressed as a percentage of full scale or in decibels relative to full scale (dBFS).
In analog systems, full scale may be defined by the maximum voltage available, or the maximum deflection (full scale deflection or FSD) or indication of an analog instrument such as a moving coil meter or galvanometer.
Binary representation
Since the binary integer representation range is asymmetrical, full scale is defined using the maximum positive value that can be represented. For example, 16-bit PCM audio is centered on the value 0, and can contain values from −32,768 to +32,767. A signal is at full scale if it reaches from −32,767 to +32,767. (This means that −32,768, the lowest possible value, slightly exceeds full scale.)
Signal processing in digital audio workstations often uses floating-point arithmetic, which can include values past full-scale, to avoid clipping in intermediate processing stages. In a floating-point representation, a full-scale signal is typically defined to reach from −1.0 to +1.0.
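As a concrete sketch of these conventions (assuming the common choice of normalising 16-bit samples by 32,767 so that full scale corresponds to 0 dBFS; the function names are illustrative):

```python
# Sketch: relating 16-bit PCM samples, normalised floating-point values and dBFS,
# assuming full scale is +/-32767 and corresponds to 0 dBFS.
import math

FULL_SCALE_16BIT = 32767

def pcm16_to_float(sample: int) -> float:
    """Map a 16-bit PCM sample to the nominal [-1.0, +1.0] floating-point range."""
    return sample / FULL_SCALE_16BIT

def dbfs(amplitude: float) -> float:
    """Amplitude (1.0 = full scale; floats may exceed it) in decibels full scale."""
    return 20.0 * math.log10(abs(amplitude))

print(pcm16_to_float(32767))    # 1.0 -> exactly full scale
print(round(dbfs(0.5), 2))      # -6.02 dBFS, half of full scale
print(pcm16_to_float(-32768))   # about -1.00003: slightly exceeds full scale
```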
Processing
The signal passes through an anti-aliasing, resampling, or reconstruction filter, which may increase peak amplitude slightly due to ringing.
It is possible for the analog signal represented by the digital data to exceed digital full scale even if the digital data does not, and vice versa. Converting to the analog domain, there is no clipping problem as long as the analog circuitry in the digital-to-analog converter is well designed.
If a full-scale analog signal is converted to digital with sufficient sampling frequency, and then reconstructed, the
|
https://en.wikipedia.org/wiki/Planar%20process
|
The planar process is a manufacturing process used in the semiconductor industry to build individual components of a transistor, and in turn, connect those transistors together. It is the primary process by which silicon integrated circuit chips are built, and it is the most commonly used method of producing junctions during the manufacture of semiconductor devices. The process utilizes the surface passivation and thermal oxidation methods.
The planar process was developed at Fairchild Semiconductor in 1959.
The planar process proved to be one of the most important single advances in semiconductor technology.
Overview
The key concept is to view a circuit in its two-dimensional projection (a plane), thus allowing the use of photographic processing concepts such as film negatives to mask the projection of light exposed chemicals. This allows the use of a series of exposures on a substrate (silicon) to create silicon oxide (insulators) or doped regions (conductors). Together with the use of metallization, and the concepts of p–n junction isolation and surface passivation, it is possible to create circuits on a single silicon crystal slice (a wafer) from a monocrystalline silicon boule.
The process involves the basic procedures of silicon dioxide (SiO2) oxidation, SiO2 etching and heat diffusion. The final steps involve oxidizing the entire wafer with an SiO2 layer, etching contact vias to the transistors, and depositing a covering metal layer over the oxide, thus connecting the transistors without manually wiring them together.
History
Development
At a 1958 Electrochemical Society meeting, Mohamed Atalla presented a paper about the surface passivation of PN junctions by thermal oxidation, based on his 1957 BTL memos.
Swiss engineer Jean Hoerni (one of the "traitorous eight") attended the same 1958 meeting, and was intrigued by Atalla's presentation. Hoerni came up with the "planar idea" one morning while thinking about Atalla's device. Taking advantage of silic
|
https://en.wikipedia.org/wiki/Run-time%20infrastructure%20%28simulation%29
|
In simulation, run-time infrastructure (RTI) is a middleware that is required when implementing the High Level Architecture (HLA). RTI is the fundamental component of HLA. It provides a set of software services that are necessary to support federates in coordinating their operations and data exchange during a runtime execution. In another sense, it is the implementation of the HLA interface specification, but is not itself part of the specification. Modern RTI implementations conform to the IEEE 1516 and/or HLA 1.3 API specifications. These specifications do not include a network protocol for the RTI; it is up to the implementors of an RTI to define one. Because of this, interoperability between RTI products, and often between RTI versions, should not be assumed unless the vendor specifies interoperability with other products or versions.
Known implementations
Middleware
Simulation software
|
https://en.wikipedia.org/wiki/Web%20engineering
|
The World Wide Web has become a major delivery platform for a variety of complex and sophisticated enterprise applications in several domains. In addition to their inherent multifaceted functionality, these Web applications exhibit complex behaviour and place some unique demands on their usability, performance, security, and ability to grow and evolve. However, a vast majority of these applications continue to be developed in an ad hoc way, contributing to problems of usability, maintainability, quality and reliability. While Web development can benefit from established practices from other related disciplines, it has certain distinguishing characteristics that demand special considerations. In recent years, there have been developments towards addressing these considerations.
Web engineering focuses on the methodologies, techniques, and tools that are the foundation of Web application development and which support their design, development, evolution, and evaluation. Web application development has certain characteristics that make it different from traditional software, information system, or computer application development.
Web engineering is multidisciplinary and encompasses contributions from diverse areas: systems analysis and design, software engineering, hypermedia/hypertext engineering, requirements engineering, human-computer interaction, user interface, data engineering, information science, information indexing and retrieval, testing, modelling and simulation, project management, and graphic design and presentation. Web engineering is neither a clone nor a subset of software engineering, although both involve programming and software development. While Web Engineering uses software engineering principles, it encompasses new approaches, methodologies, tools, techniques, and guidelines to meet the unique requirements of Web-based applications.
As a discipline
Proponents of Web engineering supported the establishment of Web engineering as a discipline
|
https://en.wikipedia.org/wiki/Thermionic%20converter
|
A thermionic converter consists of a hot electrode which thermionically emits electrons over a potential energy barrier to a cooler electrode, producing a useful electric power output. Caesium vapor is used to optimize the electrode work functions and provide an ion supply (by surface ionization or electron impact ionization in a plasma) to neutralize the electron space charge.
Definition
From a physical electronic viewpoint, thermionic energy conversion is the direct production of electric power from heat by thermionic electron emission. From a thermodynamic viewpoint, it is the use of electron vapor as the working fluid in a power-producing cycle. A thermionic converter consists of a hot emitter electrode from which electrons are vaporized by thermionic emission and a colder collector electrode into which they are condensed after conduction through the inter-electrode plasma. The resulting current, typically several amperes per square centimeter of emitter surface, delivers electrical power to a load at a typical potential difference of 0.5–1 volt and thermal efficiency of 5–20%, depending on the emitter temperature (1500–2000 K) and mode of operation.
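As a rough back-of-the-envelope illustration of the figures quoted above (the specific values chosen are assumptions within the quoted ranges, not measured data):

```python
# Rough sketch: electrical power density implied by the representative figures
# above -- several A/cm^2 at 0.5-1 V gives on the order of a few W/cm^2.
current_density = 5.0      # A/cm^2, assumed within "several amperes per square centimetre"
output_voltage = 0.7       # V, assumed within the quoted 0.5-1 V range
efficiency = 0.15          # assumed within the quoted 5-20 % range

electrical_power_density = current_density * output_voltage    # W/cm^2 delivered to the load
heat_input_density = electrical_power_density / efficiency     # W/cm^2 of heat required

print(f"{electrical_power_density:.1f} W/cm^2 out, "
      f"{heat_input_density:.1f} W/cm^2 of heat in at {efficiency:.0%} efficiency")
```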
History
After the first demonstration of the practical arc-mode caesium vapor thermionic converter by V. Wilson in 1957, several applications of it were demonstrated in the following decade, including its use with solar, combustion, radioisotope, and nuclear reactor heat sources. The application most seriously pursued, however, was the integration of thermionic nuclear fuel elements directly into the core of nuclear reactors for production of electrical power in space. The exceptionally high operating temperature of thermionic converters, which makes their practical use difficult in other applications, gives the thermionic converter decisive advantages over competing energy conversion technologies in the space power application where radiant heat rejection is required. Substantial thermionic space reactor d
|
https://en.wikipedia.org/wiki/Chemical-mechanical%20polishing
|
Chemical mechanical polishing (CMP) or planarization is a process of smoothing surfaces with the combination of chemical and mechanical forces. It can be thought of as a hybrid of chemical etching and free abrasive polishing.
Description
The process uses an abrasive and corrosive chemical slurry (commonly a colloid) in conjunction with a polishing pad and retaining ring, typically of a greater diameter than the wafer. The pad and wafer are pressed together by a dynamic polishing head and held in place by a plastic retaining ring. The dynamic polishing head is rotated with different axes of rotation (i.e., not concentric). This removes material and tends to even out any irregular topography, making the wafer flat or planar. This may be necessary to set up the wafer for the formation of additional circuit elements. For example, CMP can bring the entire surface within the depth of field of a photolithography system, or selectively remove material based on its position. Typical depth-of-field requirements are down to Angstrom levels for the latest 22 nm technology.
Working principles
Physical action
Typical CMP tools consist of a rotating, extremely flat plate which is covered by a pad. The wafer that is being polished is mounted upside-down in a carrier/spindle on a backing film. The retaining ring (Figure 1) keeps the wafer in the correct horizontal position. During the process of loading and unloading the wafer onto the tool, the wafer is held by vacuum by the carrier to prevent unwanted particles from building up on the wafer surface. A slurry introduction mechanism deposits the slurry on the pad, represented by the slurry supply in Figure 1. Both the plate and the carrier are then rotated and the carrier is kept oscillating; this can be better seen in the top view of Figure 2. A downward pressure/down force is applied to the carrier, pushing it against the pad; typically the down force is an average force, but local pressure
|