https://en.wikipedia.org/wiki/Essential%20infimum%20and%20essential%20supremum
|
In mathematics, the concepts of essential infimum and essential supremum are related to the notions of infimum and supremum, but adapted to measure theory and functional analysis, where one often deals with statements that are not valid for all elements in a set, but rather almost everywhere, that is, except on a set of measure zero.
While the exact definition is not immediately straightforward, intuitively the essential supremum of a function is the smallest value that is greater than or equal to the function values everywhere while ignoring what the function does at a set of points of measure zero. For example, if one takes the function $f(x)$ that is equal to zero everywhere except at $x = 1$, where $f(1) = 1$, then the supremum of the function equals one. However, its essential supremum is zero because we are allowed to ignore what the function does at the single point where $f$ is peculiar. The essential infimum is defined in a similar way.
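In symbols, for a measure space $(X, \Sigma, \mu)$ this intuition becomes (a standard formulation, consistent with the definition below):
$$\operatorname{ess\,sup} f = \inf\{a \in \mathbb{R} : \mu(\{x \in X : f(x) > a\}) = 0\},$$
with the convention that the infimum of the empty set is $+\infty$.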
Definition
As is often the case in measure-theoretic questions, the definition of essential supremum and infimum does not start by asking what a function $f$ does at points $x$ (that is, the image of $f$), but rather by asking for the set of points $x$ where $f$ equals a specific value $y$ (that is, the preimage of $y$ under $f$).
Let $f : X \to \mathbb{R}$ be a real valued function defined on a set $X.$ The supremum of the function $f$ is characterized by the following property: $f(x) \le \sup f$ for all $x \in X,$ and if for some $a \in \mathbb{R}$ we have $f(x) \le a$ for all $x \in X,$ then $\sup f \le a.$
More concretely, a real number $a$ is called an upper bound for $f$ if $f(x) \le a$ for all $x \in X,$ that is, if the set
$$f^{-1}((a, \infty)) = \{x \in X : f(x) > a\}$$
is empty. Let
$$U_f = \{a \in \mathbb{R} : f^{-1}((a, \infty)) = \varnothing\}$$
be the set of upper bounds of $f$ and define the infimum of the empty set by $\inf \varnothing = +\infty.$ Then the supremum of $f$ is
$$\sup f = \inf U_f$$
if the set of upper bounds $U_f$ is nonempty, and $\sup f = +\infty$ otherwise.
Now assume in addition that $(X, \Sigma, \mu)$ is a measure space and, for simplicity, assume that the function $f$ is measurable. Similar to the supremum, the essential supremum of a function is characterised by the following property: $f(x) \le \operatorname{ess\,sup} f$ for $\mu$-almost all $x,$ and if for some $a$ we have $f(x) \le a$ for $\mu$-almost all $x,$ then $\operatorname{ess\,sup} f \le a.$ More concretely, a number
|
https://en.wikipedia.org/wiki/E4M
|
Encryption for the Masses (E4M) is free disk encryption software for the Windows NT and Windows 9x families of operating systems. E4M is discontinued; it is no longer maintained. Its author, Paul Le Roux (who later became a criminal cartel boss), joined Shaun Hollingworth (the author of Scramdisk) to produce the commercial encryption product DriveCrypt for the security company SecurStar.
The popular source-available freeware program TrueCrypt is based on E4M's source code. However, TrueCrypt uses a different container format than E4M, which makes it impossible to use one of these programs to access an encrypted volume created by the other.
Allegation of stolen source code
Shortly after TrueCrypt version 1.0 was released in February 2004, the TrueCrypt Team reported receiving emails from Wilfried Hafner, manager of SecurStar, claiming that Paul Le Roux had stolen the source code of E4M from SecurStar as an employee. According to the TrueCrypt Team, the emails stated that Le Roux illegally distributed E4M, and authored an illegal license permitting anyone to base derivative work on E4M and distribute it freely, which Hafner alleges Le Roux did not have any right to do, claiming that all versions of E4M always belonged only to SecurStar. For a time, this led the TrueCrypt Team to stop developing and
distributing TrueCrypt.
See also
On-the-fly encryption (OTFE)
Disk encryption
Disk encryption software
Comparison of disk encryption software
References
External links
Archived version of official website
Cryptographic software
Disk encryption
Free software
|
https://en.wikipedia.org/wiki/Zeolite%20facies
|
Zeolite facies describes the mineral assemblage resulting from the pressure and temperature conditions of low-grade metamorphism.
The zeolite facies is generally considered to be transitional between diagenetic processes which turn sediments into sedimentary rocks, and prehnite-pumpellyite facies, which is a hallmark of subseafloor alteration of the oceanic crust around mid-ocean ridge spreading centres. The zeolite and prehnite-pumpellyite facies are considered burial metamorphism as the processes of orogenic regional metamorphism are not required.
Zeolite facies is most often experienced by pelitic sediments: rocks rich in aluminium, silica, potassium and sodium, but generally low in iron, magnesium and calcium. Zeolite facies metamorphism usually results in the conversion of low temperature clay minerals into higher temperature polymorphs such as kaolinite and vermiculite.
Mineral assemblages include kaolinite and montmorillonite with laumontite, wairakite, prehnite, calcite and chlorite. Phengite and adularia occur in potassium rich rocks. Minerals in this series include zeolites, albite, and quartz.
This occurs by dehydration of the clays during compaction, and heating due to blanketing of the sediments by continued deposition of sediments above. Zeolite facies is considered to start with temperatures of approximately 50 - 150 °C and some burial is required, usually 1 - 5 km.
Zeolite facies tends to correlate in clay-rich sediments with the onset of a bedding plane foliation, parallel with the bedding of the rocks, caused by alignment of platy clay minerals in a horizontal orientation which reduces their free energy state.
Generally plutonic and volcanic rocks are not greatly affected by zeolite facies metamorphism, although vesicular basalts and the like will have their vesicles filled with zeolite minerals, forming amygdaloidal texture. Tuff can also become zeolitized, as is seen in the Obispo formation on the California coast.
See also
Diagenesis
|
https://en.wikipedia.org/wiki/Raster%20interrupt
|
A raster interrupt (also called a horizontal blank interrupt) is an interrupt signal in a legacy computer system which is used for display timing. It is usually, though not always, generated by a system's graphics chip as the scan lines of a frame are being readied to send to the monitor for display. The most basic implementation of a raster interrupt is the vertical blank interrupt.
Such an interrupt provides a mechanism for graphics registers to be changed mid-frame, so they have different values above and below the interrupt point. This allows a single-color object such as the background or the screen border to have multiple horizontal color bands, for example, or a hardware sprite to be repositioned to give the illusion that there are more sprites than the system supports. The limitation is that changes only affect the portion of the display below the interrupt; they do not allow more colors or more sprites on a single scan line.
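As a rough illustration (not taken from the article), here is a minimal C sketch of the idea. The register addresses are assumptions, modelled loosely on the Commodore 64's VIC-II chip; on real 8-bit hardware such handlers were normally written in assembly.
#include <stdint.h>
/* Hypothetical memory-mapped registers (addresses modelled on the VIC-II). */
#define RASTER_COMPARE (*(volatile uint8_t *)0xD012)  /* scan line that triggers the IRQ */
#define BORDER_COLOR   (*(volatile uint8_t *)0xD020)  /* border colour register          */
/* Fires when the beam reaches the middle of the frame. */
void mid_frame_irq(void) {
    BORDER_COLOR   = 0x06;  /* lower half of the border gets a second colour      */
    RASTER_COMPARE = 0;     /* re-arm the interrupt for the top of the next frame */
}
/* Fires at the top of the frame. */
void top_frame_irq(void) {
    BORDER_COLOR   = 0x0E;  /* restore the colour used for the upper half         */
    RASTER_COMPARE = 128;   /* arm the mid-frame interrupt again                  */
}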
Modern protected-mode operating systems generally do not support raster interrupts, as access to hardware interrupts by unprivileged user programs could compromise system stability. As their most important use case, the multiplexing of hardware sprites, is no longer relevant, no modern successor to raster interrupts exists.
Systems supporting raster interrupts
Several popular home computers and video game consoles included graphics chips supporting raster interrupts or had features that could be combined to work like raster interrupts. The following list is not exhaustive.
Astrocade (two custom chips, 1977)
The Bally Astrocade supported a horizontal blank interrupt to select the four screen colors from a palette of 256 colors. The Astrocade did not support hardware sprites.
Atari 8-bit family (ANTIC chip, 1979)
The ANTIC chip used by the Atari 8-bit family includes display list interrupts (DLIs), which are triggered as the display is being drawn. The ANTIC chip itself is considerably powerful and inherently ca
|
https://en.wikipedia.org/wiki/Sphaerobacter
|
Sphaerobacter is a genus of bacteria. When originally described it was placed in its own subclass (Sphaerobacteridae) within the class Actinomycetota. Subsequently, phylogenetic studies have placed it in its own order Sphaerobacterales within the phylum Thermomicrobiota. To date only one species of this genus is known (Sphaerobacter thermophilus). The closest related cultivated organism to S. thermophilus is Thermomicrobium roseum, with 87% sequence similarity, which indicates that S. thermophilus is one of the most isolated bacterial species.[4]
References
4. Pati, A., Labutti, K., Pukall, R., Nolan, M., Glavina Del Rio, T., Tice, H., … Lapidus, A. (2010). Complete genome sequence of Sphaerobacter thermophilus type strain (S 6022). Standards in genomic sciences, 2(1), 49–56. doi:10.4056/sigs.601105
Monotypic bacteria genera
Bacteria genera
|
https://en.wikipedia.org/wiki/Van%20der%20Corput%20sequence
|
A van der Corput sequence is an example of the simplest one-dimensional low-discrepancy sequence over the unit interval; it was first described in 1935 by the Dutch mathematician J. G. van der Corput. It is constructed by reversing the base-n representation of the sequence of natural numbers (1, 2, 3, …).
The $b$-ary representation of the positive integer $n \ge 1$ is
$$n = \sum_{k=0}^{L-1} d_k(n)\, b^k,$$
where $b$ is the base in which the number $n$ is represented, and $0 \le d_k(n) < b,$ that is, $d_k(n)$ is the $k$-th digit in the $b$-ary expansion of $n.$
The $n$-th number in the van der Corput sequence is
$$g_b(n) = \sum_{k=0}^{L-1} d_k(n)\, b^{-k-1}.$$
Examples
For example, to get the decimal van der Corput sequence, we start by dividing the numbers 1 to 9 into tenths ($n/10$), then we change the denominator to 100 to begin dividing into hundredths ($n/100$). In terms of numerators, we begin with all two-digit numbers from 10 to 99, but in backwards order of digits. Consequently, we get the numerators grouped by their final digit. First come all two-digit numerators that end with 1, so the next numerators are 01, 11, 21, 31, 41, 51, 61, 71, 81, 91. Then the numerators ending with 2: 02, 12, 22, 32, 42, 52, 62, 72, 82, 92. And after that, the numerators ending in 3: 03, 13, 23 and so on.
Thus, the sequence begins
1/10, 2/10, 3/10, 4/10, 5/10, 6/10, 7/10, 8/10, 9/10, 1/100, 11/100, 21/100, 31/100, 41/100, 51/100, 61/100, 71/100, 81/100, 91/100, 2/100, 12/100, 22/100, 32/100, …,
or in decimal representation:
0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.01, 0.11, 0.21, 0.31, 0.41, 0.51, 0.61, 0.71, 0.81, 0.91, 0.02, 0.12, 0.22, 0.32, …,
The same can be done for the binary numeral system, and the binary van der Corput sequence is
0.1₂, 0.01₂, 0.11₂, 0.001₂, 0.101₂, 0.011₂, 0.111₂, 0.0001₂, 0.1001₂, 0.0101₂, 0.1101₂, 0.0011₂, 0.1011₂, 0.0111₂, 0.1111₂, …
or, equivalently,
1/2, 1/4, 3/4, 1/8, 5/8, 3/8, 7/8, 1/16, 9/16, 5/16, 13/16, 3/16, 11/16, 7/16, 15/16, …
The elements of the van der Corput sequence (in any base) form a dense set in the unit interval; that is, for any real number in , there exists a subsequence of the van der Corput sequence that converges to that number. They are also equidistributed over the unit interval.
C implementation
double corput(int n, int base) {
    double q = 0, bk = 1.0 / base;   /* bk = weight of the current reflected digit */
    while (n > 0) {
        q += (n % base) * bk;        /* mirror the next digit across the radix point */
        n /= base; bk /= base;
    }
    return q;
}
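A short usage sketch (assuming the function above); the printed values match the binary van der Corput sequence listed earlier:
#include <stdio.h>
int main(void) {
    /* Prints 0.5 0.25 0.75 0.125 0.625 0.375 0.875 */
    for (int n = 1; n <= 7; n++)
        printf("%g ", corput(n, 2));
    printf("\n");
    return 0;
}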
|
https://en.wikipedia.org/wiki/SDS%20940
|
The SDS 940 was Scientific Data Systems' (SDS) first machine designed to directly support time-sharing. The 940 was based on the SDS 930's 24-bit CPU, with additional circuitry to provide protected memory and virtual memory.
It was announced in February 1966 and shipped in April, becoming a major part of Tymshare's expansion during the 1960s. The influential Stanford Research Institute "oN-Line System" (NLS) was demonstrated on the system. This machine was later used to run Community Memory, the first bulletin board system.
After SDS was acquired by Xerox in 1969 and became Xerox Data Systems, the SDS 940 was renamed as the XDS 940.
History
The design was originally created by the University of California, Berkeley as part of their Project Genie that ran between 1964 and 1969. Genie added memory management and controller logic to an existing SDS 930 computer to give it page-mapped virtual memory, which would be heavily copied by other designs. The 940 was simply a commercialized version of the Genie design and remained backward compatible with SDS's earlier models, with the exception of the 12-bit SDS 92.
Like most systems of the era, the machine was built with a bank of core memory as the primary storage, allowing between 16 and 64 kilowords. Words were 24 bits plus a parity bit. This was backed up by a variety of secondary storage devices, including a 1376 kword drum in Genie, or hard disks in the SDS models in the form of a drum-like 2097 kword "fixed-head" disk or a traditional "floating-head" model. The SDS machines also included a paper tape punch and reader, line printer, and a real-time clock. They bootstrapped from paper tape.
A 96 MB file store was also attached. The line printer used was a Potter Model HSP-3502 chain printer with 96 printing characters and a speed of about 230 lines per minute.
Software system
The operating system developed at Project Genie was the Berkeley Timesharing System.
By August 1968 a version 2.0 was announced th
|
https://en.wikipedia.org/wiki/Ratchet%20effect
|
A ratchet effect is an instance of the restrained ability of human processes to be reversed once a specific thing has happened, analogous to the mechanical ratchet that holds the spring tight as a clock is wound up. It is related to the phenomena of featuritis and scope creep in the manufacture of various consumer goods, and of mission creep in military planning.
In sociology, "ratchet effects refer to the tendency for central controllers to base next year's targets on last year's performance, meaning that managers who expect still to be in place in the next target period have a perverse incentive not to exceed targets even if they could easily do so".
Examples
Famine cycle
Garrett Hardin, a biologist and environmentalist, used the phrase to describe how food aid keeps people alive who would otherwise die in a famine. They live and multiply in better times, making another bigger crisis inevitable, since the supply of food has not been increased.
The ratchet effect first came to light in Alan Peacock and Jack Wiseman's work, The Growth of Public Expenditure in the United Kingdom. Peacock and Wiseman found that public spending increases like a ratchet following periods of crisis. The term was later used by American historian Robert Higgs to highlight Peacock and Wiseman's research in his book, "Crisis and Leviathan". Similarly, governments have difficulty in rolling back huge bureaucratic organizations created initially for temporary needs, e.g., at times of war, natural or economic crisis. The effect may likewise afflict large business corporations with myriad layers of bureaucracy which resist reform or dismantling.
Production strategy
Jean Tirole used the concept in his pioneering work on regulation and monopolies. The ratchet effect can denote an economic strategy arising in an environment where incentive depends on both current and past production, such as in a competitive industry employing piece rates. The producers observe that since incentive is readju
|
https://en.wikipedia.org/wiki/Trusted%20Execution%20Technology
|
Intel Trusted Execution Technology (Intel TXT, formerly known as LaGrande Technology) is a computer hardware technology of which the primary goals are:
Attestation of the authenticity of a platform and its operating system.
Assuring that an authentic operating system starts in a trusted environment, which can then be considered trusted.
Provision of a trusted operating system with additional security capabilities not available to an unproven one.
Intel TXT uses a Trusted Platform Module (TPM) and cryptographic techniques to provide measurements of software and platform components so that system software as well as local and remote management applications may use those measurements to make trust decisions. It complements Intel Management Engine. This technology is based on an industry initiative by the Trusted Computing Group (TCG) to promote safer computing. It defends against software-based attacks aimed at stealing sensitive information by corrupting system or BIOS code, or modifying the platform's configuration.
Details
The Trusted Platform Module (TPM) as specified by the TCG provides many security functions including special registers (called Platform Configuration Registers – PCRs) which hold various measurements in a shielded location in a manner that prevents spoofing. Measurements consist of a cryptographic hash using a Secure Hashing Algorithm (SHA); the TPM v1.0 specification uses the SHA-1 hashing algorithm. More recent TPM versions (v2.0+) call for SHA-2.
A desired characteristic of a cryptographic hash algorithm is that (for all practical purposes) the hash result (referred to as a hash digest or a hash) of any two modules will produce the same hash value only if the modules are identical.
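For illustration only (not taken from the article), the way a measurement is typically folded into a PCR is the TPM "extend" operation, PCR_new = SHA-1(PCR_old || measurement). Below is a minimal C sketch using OpenSSL's SHA-1; a real TPM performs this step inside the chip, and newer TPMs use SHA-2.
#include <string.h>
#include <openssl/sha.h>
#define PCR_LEN SHA_DIGEST_LENGTH   /* 20 bytes, matching SHA-1-based (TPM 1.x style) PCRs */
void pcr_extend(unsigned char pcr[PCR_LEN], const unsigned char *module, size_t len) {
    unsigned char digest[PCR_LEN], buf[2 * PCR_LEN];
    SHA1(module, len, digest);               /* measurement: hash of the loaded code/data */
    memcpy(buf, pcr, PCR_LEN);               /* old PCR value ...                         */
    memcpy(buf + PCR_LEN, digest, PCR_LEN);  /* ... concatenated with the new measurement */
    SHA1(buf, sizeof buf, pcr);              /* PCR_new = SHA-1(PCR_old || measurement)   */
}
Because each new value depends on every previous one, a PCR records the entire sequence of measurements, not just the last module loaded.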
Measurements
Measurements can be of code, data structures, configuration, information, or anything that can be loaded into memory. TCG requires that code not be executed until after it has been measured. To ensure a particular sequence of measurements, has
|
https://en.wikipedia.org/wiki/OGDL
|
OGDL (Ordered Graph Data Language), is a "structured textual format that represents information in the form of graphs, where the nodes are strings and the arcs or edges are spaces or indentation."
Like XML, but unlike JSON and YAML, OGDL includes a schema notation and path traversal notation. There is also a binary representation.
Example
network
  eth0
    ip   192.168.0.10
    mask 255.255.255.0
hostname crispin
See also
Comparison of data serialization formats
References
External links
OGDL home page
Data serialization formats
Markup languages
|
https://en.wikipedia.org/wiki/Tell-tale%20%28automotive%29
|
A tell-tale, sometimes called an idiot light or warning light, is an indicator of malfunction or operation of a system, indicated by a binary (on/off) illuminated light, symbol or text legend.
The "idiot light" terminology arises from popular frustration with automakers' use of lights for crucial functions which could previously be monitored by gauges, so a troublesome condition could be detected and corrected early. Such early detection of problems with, for example, engine temperature or oil pressure or charging system operation is not possible via an idiot light, which lights only when a fault has already occurred – thus providing no advance warnings or details of the malfunction's extent. The Hudson automobile company was the first to use lights instead of gauges for oil pressure and the voltmeter, starting in the mid-1930s.
Regulation
Automotive tell-tales are regulated by automobile safety standards worldwide. In the United States, National Highway Traffic Safety Administration Federal Motor Vehicle Safety Standard 101 includes tell-tales in its specifications for vehicle controls and displays. In Canada, the analogous Canada Motor Vehicle Safety Standard 101 applies. In Europe and throughout most of the rest of the world, ECE Regulations specify various types of tell-tales.
Types
Different tell-tales can convey different kinds of information. One type lights or blinks to indicate a failure (as of oil pressure, engine temperature control, charging current, etc.); lighting and blinking indicate progression from warning to failure indication. Another type lights to alert the need for specific service after a certain amount of time or distance has elapsed (e.g., to change the oil).
Colour may also communicate information about the nature of the tell-tale; for example, red may signify that the vehicle cannot continue driving (e.g. oil pressure). Many older vehicles used schemes which were specific to the manufacturer, e.g. some British Fords of the 1960s used
|
https://en.wikipedia.org/wiki/AT%26T%20Technologies
|
AT&T Technologies, Inc., was created by AT&T in 1983 in preparation for the breakup of the Bell System, which became effective as of January 1, 1984. It assumed the corporate charter of Western Electric Co., Inc.
History
Creation
AT&T (originally American Telephone and Telegraph Company), after divesting ownership of the Bell System, restructured its remaining companies into three core units. American Bell, Bell Labs and Western Electric were fully absorbed into AT&T, and divided up as an umbrella of several specifically focused companies held by AT&T Technologies, including:
AT&T Bell Laboratories - R&D functions
AT&T Consumer Products - Consumer telephone equipment sales
AT&T International - International ventures
AT&T Network Systems International
Goldstar Semiconductor
AT&T Taiwan
AT&T Microelectronica de Espana
Lycom
AT&T Ricoh
AT&T Network Systems Espana
AT&T Network Systems - Large Business/Corporate equipment
AT&T Technology Systems - Computer-focused R&D
Telephone production
From January 1, 1984, until mid-1986, AT&T Technologies continued to manufacture telephones that had been made before 1984 by Western Electric under the Western Electric marking. "Bell System Property - Not For Sale" markings were eliminated from all telephones, replaced with "AT&T" in the plastic housing and "Western Electric" in the metal telephone bases.
Bell logos contained on the bottom of Trimline bases were filled in, leaving a giant lump next to "Western Electric".
Telephone changes
Toward the end of the Bell System, Western Electric telephones contained much more computer technology and used more plastic in place of metal, since advances in electronics and manufacturing processes made it possible, and there was no longer the need to produce heavy duty, long-lasting telephones. In 1985, the 2220 Trimline was heavily modified, including a touch-tone/pulse dial switch, eliminating the need for the 220 rotary phone, foreshadowing what was to come for other AT&T telephone
|
https://en.wikipedia.org/wiki/Infradian%20rhythm
|
In chronobiology, an infradian rhythm is a rhythm with a period longer than the period of a circadian rhythm, i.e., with a frequency of less than one cycle in 24 hours. Some examples of infradian rhythms in mammals include menstruation, breeding, migration, hibernation, molting and fur or hair growth, and tidal or seasonal rhythms. In contrast, ultradian rhythms have periods shorter than the period of a circadian rhythm. Several infradian rhythms are known to be caused by hormone stimulation or exogenous factors. For example, seasonal depression, an example of an infradian rhythm occurring once a year, can be caused by the systematic lowering of light levels during the winter.
See also
Photoperiodicity
References
Chronobiology
|
https://en.wikipedia.org/wiki/Lecher%20line
|
In electronics, a Lecher line or Lecher wires is a pair of parallel wires or rods that were used to measure the wavelength of radio waves, mainly at VHF, UHF and microwave frequencies. They form a short length of balanced transmission line (a resonant stub). When attached to a source of radio-frequency power such as a radio transmitter, the radio waves form standing waves along their length. By sliding a conductive bar that bridges the two wires along their length, the length of the waves can be physically measured. Austrian physicist Ernst Lecher, improving on techniques used by Oliver Lodge and Heinrich Hertz, developed this method of measuring wavelength around 1888. Lecher lines were used as frequency measuring devices until frequency counters became available after World War 2. They were also used as components, often called "resonant stubs", in VHF, UHF and microwave radio equipment such as transmitters, radar sets, and television sets, serving as tank circuits, filters, and impedance-matching devices. They are used at frequencies between HF/VHF, where lumped components are used, and UHF/SHF, where resonant cavities are more practical.
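A quick sketch of the arithmetic involved (standard relations, assuming the spacing d between adjacent standing-wave nodes has been measured; adjacent nodes sit half a wavelength apart, and the example values are illustrative):
#include <stdio.h>
int main(void) {
    const double c = 299792458.0;   /* speed of light in m/s; waves on the line travel close to c */
    double d = 0.375;               /* measured spacing between adjacent nodes, in metres          */
    double lambda = 2.0 * d;        /* adjacent nodes are half a wavelength apart                  */
    printf("wavelength = %.3f m, frequency = %.1f MHz\n", lambda, c / lambda / 1e6);  /* ~400 MHz */
    return 0;
}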
Wavelength measurement
A Lecher line is a pair of parallel uninsulated wires or rods held a precise distance apart. The separation is not critical but should be a small fraction of the wavelength; it ranges from less than a centimeter to over 10 cm. The length of the wires depends on the wavelength involved; lines used for measurement are generally several wavelengths long. The uniform spacing of the wires makes them a transmission line, conducting waves at a constant speed very close to the speed of light. One end of the rods is connected to the source of RF power, such as the output of a radio transmitter. At the other end the rods are connected together with a conductive bar between them. This short circuiting termination reflects the waves. The waves reflected from the short-circuited end interfere with the
|
https://en.wikipedia.org/wiki/Second-order%20arithmetic
|
In mathematical logic, second-order arithmetic is a collection of axiomatic systems that formalize the natural numbers and their subsets. It is an alternative to axiomatic set theory as a foundation for much, but not all, of mathematics.
A precursor to second-order arithmetic that involves third-order parameters was introduced by David Hilbert and Paul Bernays in their book Grundlagen der Mathematik. The standard axiomatization of second-order arithmetic is denoted by Z2.
Second-order arithmetic includes, but is significantly stronger than, its first-order counterpart Peano arithmetic. Unlike Peano arithmetic, second-order arithmetic allows quantification over sets of natural numbers as well as numbers themselves. Because real numbers can be represented as (infinite) sets of natural numbers in well-known ways, and because second-order arithmetic allows quantification over such sets, it is possible to formalize the real numbers in second-order arithmetic. For this reason, second-order arithmetic is sometimes called "analysis".
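For a concrete sense of what quantification over sets adds, the second-order induction axiom can be written as a single formula (a standard formulation, not quoted from this article):
$$\forall X\,\bigl((0 \in X \land \forall n\,(n \in X \rightarrow n+1 \in X)) \rightarrow \forall n\,(n \in X)\bigr),$$
where $X$ ranges over sets of natural numbers and $n$ over numbers.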
Second-order arithmetic can also be seen as a weak version of set theory in which every element is either a natural number or a set of natural numbers. Although it is much weaker than Zermelo–Fraenkel set theory, second-order arithmetic can prove essentially all of the results of classical mathematics expressible in its language.
A subsystem of second-order arithmetic is a theory in the language of second-order arithmetic each axiom of which is a theorem of full second-order arithmetic (Z2). Such subsystems are essential to reverse mathematics, a research program investigating how much of classical mathematics can be derived in certain weak subsystems of varying strength. Much of core mathematics can be formalized in these weak subsystems, some of which are defined below. Reverse mathematics also clarifies the extent and manner in which classical mathematics is nonconstructive.
Definition
Syntax
The language of second-order arithmetic is
|
https://en.wikipedia.org/wiki/Serous%20fluid
|
In physiology, serous fluid or serosal fluid (originating from the Medieval Latin word serosus, from Latin serum) is any of various body fluids resembling serum, that are typically pale yellow or transparent and of a benign nature. The fluid fills the inside of body cavities. Serous fluid originates from serous glands, with secretions enriched with proteins and water. Serous fluid may also originate from mixed glands, which contain both mucous and serous cells. A common trait of serous fluids is their role in assisting digestion, excretion, and respiration.
In medical fields, especially cytopathology, serous fluid is a synonym for effusion fluids from various body cavities. Examples of effusion fluid are pleural effusion and pericardial effusion. There are many causes of effusions which include involvement of the cavity by cancer. Cancer in a serous cavity is called a serous carcinoma. Cytopathology evaluation is recommended to evaluate the causes of effusions in these cavities.
Examples
Saliva consists of mucus and serous fluid; the serous fluid contains the enzyme amylase, which is important for the digestion of carbohydrates. The minor salivary glands of von Ebner present on the tongue secrete lipase. The parotid gland produces purely serous saliva. The other major salivary glands produce mixed (serous and mucus) saliva.
Another type of serous fluid is secreted by the serous membranes (serosa), two-layered membranes which line the body cavities. Serous membrane fluid collects on microvilli on the outer layer and acts as a lubricant and reduces friction from muscle movement. This can be seen in the lungs, with the pleural cavity.
Pericardial fluid is a serous fluid secreted by the serous layer of the pericardium into the pericardial cavity. The pericardium consists of two layers, an outer fibrous layer and the inner serous layer. This serous layer has two membranes which enclose the pericardial cavity into which is secreted the pericardial fluid.
Blood serum
|
https://en.wikipedia.org/wiki/Zen%20Cart
|
Zen Cart is an online store management system. It is PHP-based, using a MySQL database and HTML components. Support is provided for numerous languages and currencies, and it is freely available under the GNU General Public License.
History
Zen Cart is a software fork that branched from osCommerce in 2003. Beyond some aesthetic changes, the major differences between the two systems come from Zen Cart's architectural changes (for example, a template system) and additional included features in the core. The release of the 1.3.x series further differentiated Zen Cart by moving the template system from its historic tables-based layout approach to one that is largely CSS-based.
Plugins
As support for Zen Cart has dropped in recent years, many third-party companies have created Zen Cart plugins and modules that help users solve problems such as installing reCAPTCHA v3.
See also
Comparison of shopping cart software
References
External links
Free e-commerce software
Free software programmed in PHP
Content management systems
Software forks
|
https://en.wikipedia.org/wiki/Kato%27s%20conjecture
|
Kato's conjecture is a mathematical problem named after mathematician Tosio Kato, of the University of California, Berkeley. Kato initially posed the problem in 1953.
Kato asked whether the square roots of certain elliptic operators, defined via functional calculus, are analytic. The full statement of the conjecture as given by Auscher et al. is: "the domain of the square root of a uniformly complex elliptic operator $L$ with bounded measurable coefficients in $\mathbb{R}^n$ is the Sobolev space $H^1(\mathbb{R}^n)$ in any dimension with the estimate $\|\sqrt{L}\,f\|_2 \sim \|\nabla f\|_2$".
The problem remained unresolved for nearly a half-century, until in 2001 it was jointly solved in the affirmative by Pascal Auscher, Steve Hofmann, Michael Lacey, Alan McIntosh, and Philippe Tchamitchian.
References
Differential operators
Operator theory
Conjectures that have been proved
|
https://en.wikipedia.org/wiki/Business%20process%20interoperability
|
Business process interoperability (BPI) is a property referring to the ability of diverse business processes to work together, that is, to "inter-operate". It is a state that exists when a business process can meet a specific objective automatically, utilizing essential human labor only. Typically, BPI is present when a process conforms to standards that enable it to achieve its objective regardless of ownership, location, make, version or design of the computer systems used.
Overview
The main attraction of BPI is that a business process can start and finish at any point worldwide regardless of the types of hardware and software required to automate it. Because of its capacity to offload human "mind" labor, BPI is considered by many as the final stage in the evolution of business computing. BPI's twin criteria of specific objective and essential human labor are both subjective.
The objectives of BPI vary, but tend to fall into the following categories:
Enable end-to-end straight-through processing ("STP") by interconnecting data and procedures trapped in information silos
Let systems and products work with other systems or products without special effort on the part of the customer
Increase productivity by automating human labor
Eliminate redundant business processes and data replications
Minimize errors inherent in manual processes
Introduce mainstream enterprise software-as-a-service
Give top managers a practical means of overseeing processes used to run business operations
Encourage development of innovative Internet-based business processes
Place emphasis on business processes rather than on the systems required to operate them
Strengthen security by eliminating gaps among proprietary software systems
Improve privacy by giving users complete control over their data
Enable realtime enterprise scenarios and forecasts
Business process interoperability is limited to enterprise software systems in which functions are designed to work together, such
|
https://en.wikipedia.org/wiki/Frederick%20Remsen%20Hutton
|
Frederick Remsen Hutton, M.E., Sc.D. (1853 – May 14, 1918, New York City) was an American mechanical engineer, consulting engineer, educator, editor of the Engineering Magazine and president of the American Society of Mechanical Engineers in the year 1907–08.
Biography
Hutton was born in New York City, graduated from Columbia College in 1873, and from Columbia School of Mines in 1876. He was employed there in several positions until he retired in 1907. Columbia gave him the honorary degree of Sc.D. in 1904.
In 1892 he became associate editor of the Engineering Magazine. From 1883 to 1906 he was secretary of the American Society of Mechanical Engineers; and he became president of the organization in 1907. In 1911 he was consulting engineer for the department of water, gas, and electricity of New York City, and he served as chairman of the technical committee of the Automobile Club of America for many terms. He wrote reports on machine tools for the census of 1880 and multiple books.
Sinclair and Hull (1980) reflected that "Frederick Hutton was eager to have the Society also determine a standard for rating steam-boiler capability, and observed 'it is part of our duty, no doubt, to establish gauges and standards.' In the drive to rationalize American industry that began to gather force in the last quarter of the nineteenth century, standardization was to the engineer what administration was to the manager. Within the technologically complex mechanical industries, especially, the creation of standard parts and uniform practices gave the engineer control over anomaly."
Publications, a selection
Frederick Remsen Hutton, Mechanical Engineering of Power Plants (1897; third edition, 1909);
Frederick Remsen Hutton, Heat and Heat Engines (1899);
Frederick Remsen Hutton, The Gas-Engine (1903; third edition, 1908).
Frederick Remsen Hutton, A history of the American Society of Mechanical Engineers from 1880 to 1915, 1915
References
1853 births
1918 deaths
Ame
|
https://en.wikipedia.org/wiki/Tacheometry
|
Tacheometry (; from Greek for "quick measure") is a system of rapid surveying, by which the horizontal and vertical positions of points on the earth's surface relative to one another are determined without using a chain or tape, or a separate levelling instrument.
Instead of the pole normally employed to mark a point, a staff similar to a level staff is used. This is marked with heights from the base or foot, and is graduated according to the form of tacheometer in use.
The horizontal distance S is inferred from the vertical angle subtended between two well-defined points on the staff and the known distance 2L between them. Alternatively, it may be obtained from readings of the staff indicated by two fixed stadia wires in the diaphragm (reticle) of the telescope. The difference of height Δh is computed from the angle of depression z or angle of elevation α of a fixed point on the staff and the horizontal distance S already obtained.
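A small sketch of that computation, assuming a vertical staff and a level instrument axis (the function names and example values are illustrative, not from the article):
#include <math.h>
#include <stdio.h>
/* A staff interval of known length 2L subtends vertical angles a1 and a2 (radians). */
double horizontal_distance(double L, double a1, double a2) {
    return 2.0 * L / (tan(a2) - tan(a1));   /* S = 2L / (tan a2 - tan a1) */
}
/* Height difference to a staff mark seen under elevation angle a. */
double height_difference(double S, double a) {
    return S * tan(a);                      /* dh = S tan a */
}
int main(void) {
    double S = horizontal_distance(1.0, 0.010, 0.030);   /* L = 1 m, i.e. a 2 m staff interval */
    printf("S = %.1f m, dh = %.2f m\n", S, height_difference(S, 0.020));  /* ~100 m, ~2 m */
    return 0;
}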
The azimuth angle is determined in the usual way. Thus, all the measurements requisite to locate a point both vertically and horizontally with reference to the point where the tacheometer is centred are determined by an observer at the instrument without any assistance beyond that of a person to hold the level staff.
The ordinary methods of surveying with a theodolite, chain, and levelling instrument are fairly satisfactory when the ground is relatively clear of obstructions and not very precipitous, but it becomes extremely cumbersome when the ground is covered with bush, or broken up by ravines. Chain measurements then become slow and liable to considerable error; the levelling, too, is carried on at great disadvantage in point of speed, though without serious loss of accuracy. These difficulties led to the introduction of tacheometry.
In western countries, tacheometry is primarily of historical interest in surveying, as professional measurement nowadays is usually carried out using total stations and recorded using data collectors. Location positions
|
https://en.wikipedia.org/wiki/LAN%20messenger
|
A LAN Messenger is an instant messaging program for computers designed for use within a single local area network (LAN).
Many LAN messengers offer basic functionality for sending private messages, file transfer, chatrooms and graphical smileys. The advantage of using a simple LAN messenger over a normal instant messenger is that no active Internet connection or central server is required, and only people inside the firewall will have access to the system.
History
A precursor of LAN Messengers is the Unix talk command, and similar facilities on earlier systems, which enabled multiple users on one host system to directly talk with each other. At the time, computers were usually shared between multiple users, who accessed them through serial or telephone lines.
Novell NetWare featured a trivial person-to-person chat program for DOS, which used the IPX/SPX protocol suite. NetWare for Windows also included broadcast and targeted messages similar to WinPopup and the Windows Messenger service.
On Windows, WinPopup was a small utility included with Windows 3.11. WinPopup uses SMB/NetBIOS protocol and was intended to receive and send short text messages.
Windows NT/2000/XP improved upon this with the Windows Messenger service, a Windows service compatible with WinPopup. On systems where this service is running, received messages "pop up" as simple message boxes. Any software compatible with WinPopup, like the console utility NET SEND, can send such messages. However, due to security concerns, the Messenger service is off by default in Windows XP SP2 and blocked by Windows XP's firewall.
On Apple's macOS-based computers, the iChat program has allowed LAN messaging over the Bonjour protocol since 2005. The multi-protocol messenger Pidgin has support for the Bonjour protocol, including on Windows.
See also
Comparison of instant messaging protocols
Comparison of cross-platform instant messaging clients
Comparison of LAN messengers
Friend-to-friend
IRC on LANs
Talker
|
https://en.wikipedia.org/wiki/Charge%20amplifier
|
A charge amplifier is an electronic current integrator that produces a voltage output proportional to the integrated value of the input current, or the total charge injected.
The amplifier offsets the input current using a feedback reference capacitor, and produces an output voltage inversely proportional to the value of the reference capacitor but proportional to the total input charge flowing during the specified time period.
The circuit therefore acts as a charge-to-voltage converter. The gain of the circuit depends on the value of the feedback capacitor.
The charge amplifier was invented by Walter Kistler in 1950.
Design
Charge amplifiers are usually constructed using an operational amplifier or other high gain semiconductor circuit with a negative feedback capacitor Cf.
Into the inverting node flow the input charge signal $q_{in}$ and the feedback charge $q_f$ from the output. According to Kirchhoff's circuit laws they compensate each other:
$$q_{in} + q_f = q_{in} + C_f\, V_{out} = 0, \qquad \text{so} \qquad V_{out} = -\frac{q_{in}}{C_f}.$$
The input charge and the output voltage are proportional with inverted sign. The feedback capacitor $C_f$ sets the amplification.
The input impedance of the circuit is almost zero because of the Miller effect. Hence all the stray capacitances (the cable capacitance, the amplifier input capacitance, etc.) are virtually grounded and they have no influence on the output signal.
The feedback resistor Rf discharges the capacitor. Without Rf the DC gain would be very high so that even the tiny DC input offset current of the operational amplifier would appear highly amplified at the output. Rf and Cf set the lower frequency limit of the charge amplifier.
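A back-of-the-envelope sketch of those two relations, assuming an ideal op-amp (the component values are illustrative):
#include <stdio.h>
int main(void) {
    const double PI  = 3.141592653589793;
    double Cf  = 1e-9;     /* 1 nF feedback capacitor                  */
    double Rf  = 10e9;     /* 10 gigaohm feedback (discharge) resistor */
    double Qin = 100e-12;  /* 100 pC of injected charge                */
    double Vout  = -Qin / Cf;                  /* charge-to-voltage conversion: Vout = -Qin/Cf */
    double f_low = 1.0 / (2.0 * PI * Rf * Cf); /* lower frequency limit set by Rf and Cf       */
    printf("Vout = %.3f V, lower cut-off = %.4f Hz\n", Vout, f_low);  /* -0.100 V, ~0.016 Hz */
    return 0;
}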
Due to the described DC effects and the finite isolation resistances in practical charge amplifiers the circuit is not suitable for the measurement of static charges. High quality charge amplifiers allow, however, quasistatic measurements at frequencies below 0.1 Hz. Some manufacturers also use a reset switch instead of Rf to manually discharge Cf before a measurement.
Practic
|
https://en.wikipedia.org/wiki/VisSim
|
VisSim is a visual block diagram program for simulation of dynamical systems and model-based design of embedded systems, with its own visual language. It is developed by Visual Solutions of Westford, Massachusetts. Visual Solutions was acquired by Altair in August 2014 and its products have been rebranded as Altair Embed, part of Altair's Model Based Development Suite. Embed is used to develop virtual prototypes of dynamic systems. Models are built by sliding blocks into the work area and wiring them together with the mouse. Embed automatically converts the control diagrams into C code ready to be downloaded to the target hardware.
VisSim, now Altair Embed, uses a graphical data flow paradigm to implement dynamic systems based on differential equations. Version 8 adds interactive OMG UML 2-compliant state chart graphs that are placed in VisSim diagrams. This allows the modeling of state-based systems such as startup sequencing of process plants or serial protocol decoding.
Applications
VisSim/Altair Embed is used in control system design and digital signal processing for multidomain simulation and design. It includes blocks for arithmetic, Boolean, and transcendental functions, as well as digital filters, transfer functions, numerical integration and interactive plotting. The most commonly modeled systems are aeronautical, biological/medical, digital power, electric motor, electrical, hydraulic, mechanical, process, thermal/HVAC and econometric.
Distributing VisSim models
A read-only version of the software, VisSim Viewer, is available free of charge and provides a way for people not licensed to use VisSim to run VisSim models. This program is intended to allow models to be more widely shared while preserving the model in its published form. The viewer will execute any VisSim model, and only allows changes to block and simulation parameters to illustrate different design scenarios. Sliders and buttons may be activated if included in the model.
Code gene
|
https://en.wikipedia.org/wiki/Punctured%20code
|
In coding theory, puncturing is the process of removing some of the parity bits after encoding with an error-correction code. This has the same effect as encoding with an error-correction code with a higher rate, or less redundancy. However, with puncturing the same decoder can be used regardless of how many bits have been punctured, thus puncturing considerably increases the flexibility of the system without significantly increasing its complexity.
In some cases, a pre-defined pattern of puncturing is used in an encoder. Then, the inverse operation, known as depuncturing, is implemented by the decoder.
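A minimal sketch of that forward operation (the function name and pattern here are illustrative, not from any particular standard):
#include <stddef.h>
/* Keep only the coded bits where the repeating pattern holds a 1; the decoder
   later re-inserts erasures at the dropped positions (depuncturing).
   Returns the number of bits written to `out`. */
size_t puncture(const unsigned char *coded, size_t n,
                const unsigned char *pattern, size_t plen,
                unsigned char *out) {
    size_t kept = 0;
    for (size_t i = 0; i < n; i++)
        if (pattern[i % plen])
            out[kept++] = coded[i];
    return kept;
}
For example, applying the pattern {1, 1, 1, 0} to the output of a rate-1/2 encoder keeps three of every four coded bits, turning it into an effective rate-2/3 code.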
Puncturing is used in UMTS during the rate matching process. It is also used in Wi-Fi, Wi-SUN, GPRS, EDGE, DVB-T and DAB, as well as in the DRM Standards.
Puncturing is often used with the Viterbi algorithm in coding systems.
During the Radio Resource Control (RRC) connection setup procedure, the NBAP radio link setup message carries the uplink puncturing limit to the Node B, along with the uplink spreading factor and uplink scrambling code.
Puncturing was introduced by Gustave Solomon and J. J. Stiffler in 1964.
See also
Singleton bound, an upper bound in coding theory
References
Coding theory
|
https://en.wikipedia.org/wiki/Radio%20transmitter%20design
|
A radio transmitter or just transmitter is an electronic device which produces radio waves with an antenna. Radio waves are electromagnetic waves with frequencies between about 30 Hz and 300 GHz. The transmitter itself generates a radio frequency alternating current, which is applied to the antenna. When excited by this alternating current, the antenna radiates radio waves. Transmitters are necessary parts of all systems that use radio: radio and television broadcasting, cell phones, wireless networks, radar, two way radios like walkie talkies, radio navigation systems like GPS, remote entry systems, among numerous other uses.
A transmitter can be a separate piece of equipment, or an electronic circuit within another device. Most transmitters consist of an electronic oscillator which generates an oscillating carrier wave, a modulator which impresses an information bearing modulation signal on the carrier, and an amplifier which increases the power of the signal. To prevent interference between different users of the radio spectrum, transmitters are strictly regulated by national radio laws, and are restricted to certain frequencies and power levels, depending on use. The design must usually be type approved before sale. An important legal requirement is that the circuit does not radiate significant radio wave power outside its assigned frequency band, called spurious emission.
Design issues
A radio transmitter design has to meet certain requirements. These include the frequency of operation, the type of modulation, the stability and purity of the resulting signal, the efficiency of power use, and the power level required to meet the system design objectives. High-power transmitters may have additional constraints with respect to radiation safety, generation of X-rays, and protection from high voltages.
Typically a transmitter design includes generation of a carrier signal, which is normally sinusoidal, optionally one or more frequency multiplication stages
|
https://en.wikipedia.org/wiki/AI%20winter
|
In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research. The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or even decades later.
The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association of Artificial Intelligence"). Roger Schank and Marvin Minsky—two leading AI researchers who experienced the "winter" of the 1970s—warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. They described a chain reaction, similar to a "nuclear winter", that would begin with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research. Three years later the billion-dollar AI industry began to collapse.
There were two major winters approximately 1974–1980 and 1987–2000 and several smaller episodes, including the following:
1966: failure of machine translation
1969: criticism of perceptrons (early, single-layer artificial neural networks)
1971–75: DARPA's frustration with the Speech Understanding Research program at Carnegie Mellon University
1973: large decrease in AI research in the United Kingdom in response to the Lighthill report
1973–74: DARPA's cutbacks to academic AI research in general
1987: collapse of the LISP machine market
1988: cancellation of new spending on AI by the Strategic Computing Initiative
1990s: many expert systems were abandoned
1990s: end of the Fifth Generation computer project's original goals
Enthusiasm and optimism about AI has generally increased since its low point in the early 1990s. Beginning about 2012, interest in artificial intelligence (and especially the sub-field of machine learning) from the research and corporat
|
https://en.wikipedia.org/wiki/VLSI%20Project
|
The VLSI Project was a DARPA-program initiated by Robert Kahn in 1978 that provided research funding to a wide variety of university-based teams in an effort to improve the state of the art in microprocessor design, then known as Very Large Scale Integration (VLSI).
The VLSI Project is one of the most influential research projects in modern computer history. Its offspring include Berkeley Software Distribution (BSD) Unix, the reduced instruction set computer (RISC) processor concept, many computer-aided design (CAD) tools still in use today, 32-bit graphics workstations, fabless manufacturing and design houses, and its own semiconductor fabrication plant (fab), MOSIS, starting in 1981. A similar DARPA project partnering with industry, VHSIC had little or no impact.
The VLSI Project was central in promoting the Mead and Conway revolution throughout industry.
Project
New design rules
In 1975, Carver Mead, Tom Everhart and Ivan Sutherland of Caltech wrote a report for ARPA on the topic of microelectronics. Over the previous few years, Mead had coined the term "Moore's law" to describe Gordon Moore's 1965 prediction for the growth rate of complexity, and in 1974, Robert Dennard of IBM noted that the scale shrinking that formed the basis of Moore's law also affected the performance of the systems. These combined effects implied a massive increase in computing power was about to be unleashed on the industry. The report, published in 1976, suggested that ARPA fund development across a number of fields in order to deal with the complexity that was about to appear due to these "very-large-scale integrated circuits".
Later that year, Sutherland wrote a letter to his brother Bert who was at that time working at Xerox PARC. He suggested a joint effort between PARC and Caltech to begin studying these issues. Bert agreed to form a team, inviting Lynn Conway and Doug Fairbairn to join. Conway had previously worked at IBM on a supercomputer project known as ACS-1. After consid
|
https://en.wikipedia.org/wiki/Stanford%20MIPS
|
MIPS, an acronym for Microprocessor without Interlocked Pipeline Stages, was a research project conducted by John L. Hennessy at Stanford University between 1981 and 1984. MIPS investigated a type of instruction set architecture (ISA) now called reduced instruction set computer (RISC), its implementation as a microprocessor with very large scale integration (VLSI) semiconductor technology, and the effective exploitation of RISC architectures with optimizing compilers. MIPS, together with the IBM 801 and Berkeley RISC, was one of the three research projects that pioneered and popularized RISC technology in the mid-1980s. In recognition of the impact MIPS made on computing, Hennessy was awarded the IEEE John von Neumann Medal in 2000 by the Institute of Electrical and Electronics Engineers (IEEE) (shared with David A. Patterson), the Eckert–Mauchly Award in 2001 by the Association for Computing Machinery, the Seymour Cray Computer Engineering Award in 2001 by the IEEE Computer Society, and, again with David Patterson, the Turing Award in 2017 by the ACM.
The project was initiated in 1981 in response to reports of similar projects at IBM (the 801) and the University of California, Berkeley (the RISC). MIPS was conducted by Hennessy and his graduate students until its conclusion in 1984. Hennessy founded MIPS Computer Systems in the same year to commercialize the technology developed by the project. In 1985, MIPS Computer Systems announced a new ISA, also called MIPS, and its first implementation, the R2000 microprocessor. The commercial MIPS ISA and its implementations went on to be widely used, appearing in embedded computers, personal computers, workstations, servers, and supercomputers. As of May 2017, the commercial MIPS ISA is owned by Imagination Technologies, and is used mainly in embedded computers. In the late 1980s, a follow-up project called MIPS-X was conducted by Hennessy at Stanford.
The MIPS ISA was based on a 32-bit word. It supported 32-bit addressing,
|
https://en.wikipedia.org/wiki/Louis%20Alan%20Hazeltine
|
Louis Alan Hazeltine (August 7, 1886 – May 24, 1964) was an engineer and physicist, the inventor of the Neutrodyne circuit, and the Hazeltine-Fremodyne Superregenerative circuit. He was the founder of the Hazeltine Corporation.
Biography
Louis Alan Hazeltine was born in Morristown, New Jersey, in 1886 and attended the Stevens Institute of Technology in Hoboken, New Jersey, majoring in electrical engineering. He graduated in 1906 and accepted a job with General Electric corporation.
Hazeltine returned to Stevens to teach, eventually becoming chair of the electrical engineering department in 1917.
The following year he became a consultant for the United States Navy. The Navy job eventually led to a position as an advisor to the U.S. government on radio broadcasting regulation, and later, a position on the National Defense Research Committee during World War II.
Hazeltine was president of the Institute of Radio Engineers in 1936.
References
Further reading
"Adventures in Cybersound: Louis Alan Hazeltine : 1886 - 1964"
Reiman, Dick, "Scanning the Past: A History of Electrical Engineering from the Past: Louis Alan Hazeltine", Proceedings of the IEEE, Vol. 81, No. 4, April 1993.
External links
"The Neutrodyne Radio", Arcane Radio Trivia, Tuesday, October 2, 2007
1886 births
1964 deaths
People from Morristown, New Jersey
Stevens Institute of Technology alumni
American electronics engineers
Engineers from New Jersey
Fellows of the American Physical Society
Presidents of the Institute of Radio Engineers
|
https://en.wikipedia.org/wiki/EMedicine
|
eMedicine is an online clinical medical knowledge base founded in 1996 by doctors Scott Plantz and Jonathan Adler, and computer engineer Jeffrey Berezin. The eMedicine website consists of approximately 6,800 medical topic review articles, each of which is associated with a clinical subspecialty "textbook". The knowledge base includes over 25,000 clinical multimedia files.
Each article is authored by board-certified specialists in the subspecialty to which the article belongs and undergoes three levels of physician peer review, plus review by a Doctor of Pharmacy. The article's authors are identified with their current faculty appointments. Each article is updated yearly, or more frequently as changes in practice occur, and the date is published on the article. eMedicine.com was sold to WebMD in January 2006 and is available as the Medscape Reference.
History
Plantz, Adler and Berezin evolved the concept for eMedicine.com in 1996 and deployed the initial site via Boston Medical Publishing, Inc., a corporation in which Plantz and Adler were principals. A Group Publishing System 1 (GPS 1) was developed that allowed large numbers of contributors to collaborate simultaneously. That system was first used to create a knowledge base in emergency medicine with 600 contributing MDs creating over 630 chapters in just over a year. In 1997 eMedicine.com, Inc. was legally spun off from Boston Medical Publishing. eMedicine attracted angel-level investment from Tenet Healthcare in 1999 and a significant VC investment in 2000 (Omnicom Group, HIG Capital).
Several years were spent creating the tables of contents, recruiting expert physicians and in the creation of the additional 6,100+ medical and surgical articles. The majority of operations were based out of the Omaha, Nebraska, office.
In the early 2000s Plantz and Lorenzo also spearheaded an alliance with the University of Nebraska Medical Center to accredit eMedicine content for physician, nursing, and pharmacy con
|
https://en.wikipedia.org/wiki/Layer%20four%20traceroute
|
Layer Four Traceroute (LFT) is a fast, multi-protocol traceroute engine, that also implements numerous other features including AS number lookups through regional Internet registries and other reliable sources, Loose Source Routing, firewall and load balancer detection, etc. LFT is best known for its use by network security practitioners to trace a route to a destination host through many configurations of packet-filters / firewalls, and to detect network connectivity, performance or latency problems.
How it works
LFT sends various TCP SYN and FIN probes (differing from Van Jacobson's UDP-based method) or UDP probes utilizing the IP protocol time to live field and attempts to elicit an ICMP TIME_EXCEEDED response from each gateway along the path to some host. LFT also listens for various TCP, UDP, and ICMP messages along the way to assist network managers in ascertaining per-protocol heuristic routing information, and can optionally retrieve various information about the networks it traverses. The operation of layer four traceroute is described in detail in several prominent security books.
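The TTL mechanics can be sketched in a few lines of C (illustrative only; LFT itself builds raw TCP SYN/FIN probes and parses the ICMP replies directly):
#include <netinet/in.h>
#include <sys/socket.h>
/* Send each probe toward the target with TTL = hop number, so the router at
   that hop decrements the TTL to zero and answers with ICMP TIME_EXCEEDED,
   revealing its address. */
int set_probe_ttl(int sock, int hop) {
    int ttl = hop;
    return setsockopt(sock, IPPROTO_IP, IP_TTL, &ttl, sizeof ttl);
}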
Origins
The lft command first appeared in 1998 as fft. Renamed as a result of confusion with fast Fourier transforms, lft stands for layer four traceroute. Results are often referred to as a layer four trace.
See also
Prefix WhoIs
Sources
External links
Layer Four Traceroute Project
Network analyzers
Free network management software
|
https://en.wikipedia.org/wiki/1mdc
|
1mdc was a digital gold currency (DGC) that existed from 2001 to 2007 in which users traded digital currency backed by reserves of e-gold, rather than physical bullion reserves.
The website appeared to switch between various offshore hosting locations, and used software designed by Interesting Software Ltd, an Anguilla company.
As of April 27, 2007, a US court order had forced e-gold to liquidate a large number of e-gold accounts totalling some 10 to 20 million US dollars' worth of gold; a small part of this seizure was 1mdc's accounts and assets. If the court order is reversed, a user's e-gold grams remaining in 1mdc will "unbail" normally to the user's e-gold account. Although 1mdc itself had no connection to the US and most of its users were outside the USA, e-gold was ultimately owned and operated by US citizens from the USA, so 1mdc users had to accept the decisions of US courts and US authorities regarding the disposition of e-gold.
Features
As with any digital gold currency, one used 1mdc to keep assets away from fiat currencies and avoid inflationary risks associated with them. To open an account, 1mdc required the user to have a functioning e-mail address, an e-gold account, a password, initials and a PIN.
1mdc charged 0.05 gold grams per spend for accounts that received 100 or more spends (totalling over 500 grams) in any given calendar month. There were no spend fees for accounts that received 99 or fewer spends in a calendar month, and no storage fees on any account. This was in sharp contrast to e-gold, which charged a storage fee of 1% per annum. Coupled with the quick and easy transfer of funds between e-gold and 1mdc accounts, this made 1mdc attractive to persons with large amounts of e-gold, whose balances gradually shrank due to e-gold's storage fees. 1mdc also offered virtually fee-free exchange from Pecunix gold to 1mdc, and a 5% fee to exchange fro
|
https://en.wikipedia.org/wiki/Noise%20control
|
Noise control or noise mitigation is a set of strategies to reduce noise pollution or to reduce the impact of that noise, whether outdoors or indoors.
Overview
The main areas of noise mitigation or abatement are: transportation noise control, architectural design, urban planning through zoning codes, and occupational noise control. Roadway noise and aircraft noise are the most pervasive sources of environmental noise. Social activities may generate noise levels that consistently affect the health of populations residing in or occupying areas, both indoor and outdoor, near entertainment venues that feature amplified sound and music; such venues present significant challenges for effective noise mitigation strategies.
Multiple techniques have been developed to address interior sound levels, many of which are encouraged by local building codes. In the best case of project designs, planners are encouraged to work with design engineers to examine trade-offs of roadway design and architectural design. These techniques include design of exterior walls, party walls, and floor and ceiling assemblies; moreover, there are a host of specialized means for damping reverberation from special-purpose rooms such as auditoria, concert halls, entertainment and social venues, dining areas, audio recording rooms, and meeting rooms.
Many of these techniques rely upon material science applications of constructing sound baffles or using sound-absorbing liners for interior spaces. Industrial noise control is a subset of interior architectural control of noise, with emphasis on specific methods of sound isolation from industrial machinery and for protection of workers at their task stations.
Sound masking is the active addition of noise to reduce the annoyance of certain sounds, the opposite of soundproofing.
Standards, recommendations, and guidelines
Organizations each have their own standards, recommendations/guidelines, and directives for what levels of noise workers are permitted to be ar
|
https://en.wikipedia.org/wiki/List%20of%20cancer%20mortality%20rates%20in%20the%20United%20States
|
Cancer mortality rates are determined by the complex relationship of a population's health and lifestyle with their healthcare system. In the United States during 2013–2017, the age-adjusted mortality rate for all types of cancer was 189.5/100,000 for males, and 135.7/100,000 for females. Below is an incomplete list of age-adjusted mortality rates for different types of cancer in the United States from the Surveillance, Epidemiology, and End Results program.
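The age-adjusted figures above are produced by weighting age-specific death rates by a standard population distribution; the minimal sketch below illustrates the arithmetic with made-up numbers rather than SEER data.

```python
# Sketch of direct age adjustment: weight hypothetical age-specific death rates
# (deaths per 100,000 in each age band) by a standard population distribution.
# The numbers below are illustrative only, not SEER data.
age_specific_rates = [2.0, 10.0, 60.0, 300.0, 1200.0]   # per 100,000, by age band
standard_weights   = [0.30, 0.25, 0.20, 0.15, 0.10]     # standard population shares

assert abs(sum(standard_weights) - 1.0) < 1e-9

age_adjusted_rate = sum(r * w for r, w in zip(age_specific_rates, standard_weights))
print(f"Age-adjusted rate: {age_adjusted_rate:.1f} per 100,000")
```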
References
Cancer mortality rates
|
https://en.wikipedia.org/wiki/Ensoniq%20AudioPCI
|
The Ensoniq AudioPCI is a Peripheral Component Interconnect (PCI)-based sound card released in 1997. It was Ensoniq's last sound card product before they were acquired by Creative Technology. The card represented a shift in Ensoniq's market positioning. Whereas the Soundscape line had been made up primarily of low-volume high-end products full of features, the AudioPCI was designed to be a very simple, low-cost product to appeal to system OEMs and thus hopefully sell in mass quantities.
Low cost
Towards the end of the 1990s, Ensoniq was struggling financially. Their cards were very popular with PC OEMs, but their costs were too high and their musical instrument division was fading in revenue. Pressure from intense competition, especially with the dominant Creative Labs, was forcing audio card makers to try to keep their prices low.
The AudioPCI, released in July 1997, was designed primarily to be cheap. In comparison to the wide variety of chips on and sheer size of the older Soundscape boards, the highly integrated two chip design of the AudioPCI is an obvious shift in design philosophy. The board consists only of a very small software-driven audio chip (one of the following: S5016, ES1370, ES 1371) and a companion digital-to-analog converter (DAC). In another cost-cutting move, the previously typical ROM chip used for storage of samples for sample-based synthesis was replaced with the facility to use system RAM as storage for this audio data. This was made possible by the move to the PCI bus, with its far greater bandwidth and more efficient bus mastering interface when compared to the older ISA bus standard.
Features
AudioPCI, while designed to be cheap, is still quite functional. It offers many of the audio capabilities of the Soundscape ELITE card, including several digital effects (reverb, chorus, and spatial enhancement) when used with Microsoft Windows 95 and later versions of Windows.
AudioPCI was one of the first cards to have Microsoft DirectSound3
|
https://en.wikipedia.org/wiki/Marine%20engineering
|
Marine engineering is the engineering of boats, ships, submarines, and any other marine vessel. Here it is also taken to include the engineering of other ocean systems and structures – referred to in certain academic and professional circles as “ocean engineering.”
Marine engineering applies a number of engineering sciences, including mechanical engineering, electrical engineering, electronic engineering, and computer science, to the development, design, operation and maintenance of watercraft propulsion and ocean systems. It includes but is not limited to power and propulsion plants, machinery, piping, automation and control systems for marine vehicles of any kind, as well as coastal and offshore structures.
History
Archimedes is traditionally regarded as the first marine engineer, having developed a number of marine engineering systems in antiquity. Modern marine engineering dates back to the beginning of the Industrial Revolution (early 1700s).
In 1807, Robert Fulton successfully used a steam engine to propel a vessel through the water. Fulton's ship used the engine to power a small wooden paddle wheel as its marine propulsion system. The integration of a steam engine into a watercraft to create a marine steam engine was the start of the marine engineering profession. Only twelve years after Fulton's Clermont made her first voyage, the Savannah marked the first sea voyage from America to Europe. Around 50 years later, steam-powered paddle wheels reached their peak with the creation of the Great Eastern, which at 700 feet in length and 22,000 tons was as big as a cargo ship of today. Paddle steamers remained the front runners of the steamship industry for the next thirty years, until the next type of propulsion came along.
Relevance and Scope
There are many ways to become a marine engineer, but all include a university or college degree. Primarily, training includes a Bachelor of Engineering (B.Eng. or B.E.), Bachelor of Science (B.Sc. or B.S.), Bac
|
https://en.wikipedia.org/wiki/List%20of%20permutation%20topics
|
This is a list of topics on mathematical permutations.
Particular kinds of permutations
Alternating permutation
Circular shift
Cyclic permutation
Derangement
Even and odd permutations—see Parity of a permutation
Josephus permutation
Parity of a permutation
Separable permutation
Stirling permutation
Superpattern
Transposition (mathematics)
Unpredictable permutation
Combinatorics of permutations
Bijection
Combination
Costas array
Cycle index
Cycle notation
Cycles and fixed points
Cyclic order
Direct sum of permutations
Enumerations of specific permutation classes
Factorial
Falling factorial
Permutation matrix
Generalized permutation matrix
Inversion (discrete mathematics)
Major index
Ménage problem
Permutation graph
Permutation pattern
Permutation polynomial
Permutohedron
Rencontres numbers
Robinson–Schensted correspondence
Sum of permutations:
Direct sum of permutations
Skew sum of permutations
Stanley–Wilf conjecture
Symmetric function
Szymanski's conjecture
Twelvefold way
Permutation groups and other algebraic structures
Groups
Alternating group
Automorphisms of the symmetric and alternating groups
Block (permutation group theory)
Cayley's theorem
Cycle index
Frobenius group
Galois group of a polynomial
Jucys–Murphy element
Landau's function
Oligomorphic group
O'Nan–Scott theorem
Parker vector
Permutation group
Place-permutation action
Primitive permutation group
Rank 3 permutation group
Representation theory of the symmetric group
Schreier vector
Strong generating set
Symmetric group
Symmetric inverse semigroup
Weak order of permutations
Wreath product
Young symmetrizer
Zassenhaus group
Zolotarev's lemma
Other algebraic structures
Burnside ring
Mathematical analysis
Conditionally convergent series
Riemann series theorem
Lévy–Steinitz theorem
Mathematics applicable to physical sciences
Antisymmetrizer
Identical particles
Levi-Civita symbol
Number theory
Permutable prime
Algorithms and information processing
Bit-reversal permutation
Claw-
|
https://en.wikipedia.org/wiki/Identity%20theorem
|
In real analysis and complex analysis, branches of mathematics, the identity theorem for analytic functions states: given functions f and g analytic on a domain D (an open and connected subset of $\mathbb{R}$ or $\mathbb{C}$), if f = g on some subset $S \subseteq D$, where $S$ has an accumulation point in D, then f = g on D.
Thus an analytic function is completely determined by its values on a single open neighborhood in D, or even a countable subset of D (provided this contains a converging sequence together with its limit). This is not true in general for real-differentiable functions, even infinitely real-differentiable functions. In comparison, analytic functions are a much more rigid notion. Informally, one sometimes summarizes the theorem by saying analytic functions are "hard" (as opposed to, say, continuous functions which are "soft").
The underpinning fact from which the theorem is established is the expandability of a holomorphic function into its Taylor series.
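For reference, that local expandability can be written as the usual power-series representation (the notation here is chosen for illustration, not taken from the article):

```latex
% Taylor expansion of a holomorphic function f about a point z_0 of the domain D,
% valid on any open disk around z_0 contained in D:
\[
  f(z) \;=\; \sum_{n=0}^{\infty} \frac{f^{(n)}(z_0)}{n!}\,(z - z_0)^n .
\]
```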
The connectedness assumption on the domain D is necessary. For example, if D consists of two disjoint open sets, $f$ can be $0$ on one open set and $1$ on the other, while $g$ is $0$ on one and $2$ on the other.
Lemma
If two holomorphic functions $f$ and $g$ on a domain D agree on a set S which has an accumulation point $c$ in $D$, then $f = g$ on a disk in $D$ centered at $c$.
To prove this, it is enough to show that $f^{(n)}(c) = g^{(n)}(c)$ for all $n \geq 0$.
If this is not the case, let $m$ be the smallest nonnegative integer with $f^{(m)}(c) \neq g^{(m)}(c)$. By holomorphy, we have the following Taylor series representation in some open neighborhood U of $c$:

$(f - g)(z) = (z - c)^m \left[ \frac{(f - g)^{(m)}(c)}{m!} + \frac{(f - g)^{(m+1)}(c)}{(m+1)!}\,(z - c) + \cdots \right] = (z - c)^m \, h(z).$

By continuity, $h$ is non-zero in some small open disk $B$ around $c$. But then $f - g$ is non-zero on the punctured set $B \setminus \{c\}$. This contradicts the assumption that $c$ is an accumulation point of $\{f = g\}$.
This lemma shows that for a complex number $a$, the fiber $f^{-1}(a)$ is a discrete (and therefore countable) set, unless $f \equiv a$.
Proof
Define the set on which $f$ and $g$ have the same Taylor expansion:

$S = \{ z \in D \mid f^{(k)}(z) = g^{(k)}(z) \text{ for all } k \geq 0 \}.$

We'll show $S$ is nonempty, open, and closed. Then by connectedness of $D$, $S$ must be all of $D$, which implies $f = g$ on $D$.
By the lemma, $f = g$ in a disk centered at
|
https://en.wikipedia.org/wiki/Tap%20code
|
The tap code, sometimes called the knock code, is a way to encode text messages on a letter-by-letter basis in a very simple way. The message is transmitted using a series of tap sounds, hence its name.
The tap code has been commonly used by prisoners to communicate with each other. The method of communicating is usually by tapping either the metal bars, pipes or the walls inside a cell.
Design
The tap code is based on a Polybius square using a 5×5 grid of letters representing all the letters of the Latin alphabet, except for K, which is represented by C.
Each letter is communicated by tapping two numbers, the first designating the row and the second (after a pause) designating the column. For example, to specify the letter "B", one taps once, pauses, and then taps twice. The listener only needs to discriminate the timing of the taps to isolate letters.
To communicate the word "hello", the cipher would be the following (with the pause between each number in a pair being shorter than the pause between letters): H (2,3), E (1,5), L (3,1), L (3,1), O (3,4).
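A short sketch of the encoding just described, using the 5×5 grid with K merged into C; the function name and grid layout below are illustrative.

```python
# Encode a message into tap-code (row, column) pairs using the 5x5 Polybius
# square described above, in which K is merged into C.
GRID = ["ABCDE", "FGHIJ", "LMNOP", "QRSTU", "VWXYZ"]  # row initials: A F L Q V

def tap_encode(message):
    pairs = []
    for ch in message.upper():
        if not ch.isalpha():
            continue
        ch = "C" if ch == "K" else ch          # K is represented by C
        for row, letters in enumerate(GRID, start=1):
            col = letters.find(ch)
            if col != -1:
                pairs.append((row, col + 1))
                break
    return pairs

print(tap_encode("hello"))   # [(2, 3), (1, 5), (3, 1), (3, 1), (3, 4)]
```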
The letter "X" is used to break up sentences, and "K" for acknowledgements.
Because of the difficulty and length of time required for specifying a single letter, prisoners often devise abbreviations and acronyms for common items or phrases, such as "GN" for Good night, or "GBU" for God bless you.
By comparison, Morse code is harder to send by tapping or banging because a single tap will fade out and thus has no discernible length. Morse code, however, requires the ability to create two distinguishable lengths (or types) of taps. To simulate Morse by tapping therefore requires either two different sounds (pitch, volume), or very precise timing, so that a dash within a character (e.g. the character N, ) remains distinguishable from a dot at the end of a character (e.g. E-E, ). Morse code also takes longer to learn. Learning the tap system simply requires one to know the alphabet and the short sequence "AFLQV" (the initial letter of each row), without mem
|
https://en.wikipedia.org/wiki/Valuation%20of%20options
|
In finance, a price (premium) is paid or received for purchasing or selling options. This article discusses the calculation of this premium in general. For further detail, see Mathematical finance for discussion of the mathematics and Financial engineering for the implementation.
Premium components
This price can be split into two components: intrinsic value, and time value (also called "extrinsic value").
Intrinsic value
The intrinsic value is the difference between the underlying spot price and the strike price, to the extent that this is in favor of the option holder. For a call option, the option is in-the-money if the underlying spot price is higher than the strike price; then the intrinsic value is the underlying price minus the strike price. For a put option, the option is in-the-money if the strike price is higher than the underlying spot price; then the intrinsic value is the strike price minus the underlying spot price. Otherwise the intrinsic value is zero.
For example, when the strike of a DJI call (bullish/long) option is 18,000 and the underlying DJI Index is priced at $18,050, then there is a $50 advantage even if the option were to expire today. This $50 is the intrinsic value of the option.
In summary, intrinsic value:
= current stock price − strike price (call option)
= strike price − current stock price (put option)
Extrinsic (Time) value
The option premium is always greater than the intrinsic value up to the expiration event. This extra money is for the risk which the option writer/seller is undertaking. This is called the time value.
Time value is the amount the option trader is paying for a contract above its intrinsic value, with the belief that prior to expiration the contract value will increase because of a favourable change in the price of the underlying asset. The longer the length of time until the expiry of the contract, the greater the time value. So,
Time value = option premium − intrinsic value
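A minimal sketch of the split described above; the premium figure is hypothetical and the helper names are ours.

```python
# Split an option premium into intrinsic value and time value, as described above.
def intrinsic_value(spot, strike, kind):
    if kind == "call":
        return max(spot - strike, 0.0)
    if kind == "put":
        return max(strike - spot, 0.0)
    raise ValueError("kind must be 'call' or 'put'")

def time_value(premium, spot, strike, kind):
    return premium - intrinsic_value(spot, strike, kind)

# Illustrative numbers echoing the DJI example above (the premium is hypothetical):
spot, strike, premium = 18050.0, 18000.0, 170.0
print(intrinsic_value(spot, strike, "call"))       # 50.0
print(time_value(premium, spot, strike, "call"))   # 120.0
```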
Other factors affecting prem
|
https://en.wikipedia.org/wiki/Rosenbrock%20function
|
In mathematical optimization, the Rosenbrock function is a non-convex function, introduced by Howard H. Rosenbrock in 1960, which is used as a performance test problem for optimization algorithms. It is also known as Rosenbrock's valley or Rosenbrock's banana function.
The global minimum is inside a long, narrow, parabolic shaped flat valley. To find the valley is trivial. To converge to the global minimum, however, is difficult.
The function is defined by

$f(x, y) = (a - x)^2 + b\,(y - x^2)^2.$

It has a global minimum at $(x, y) = (a, a^2)$, where $f(x, y) = 0$. Usually, these parameters are set such that $a = 1$ and $b = 100$. Only in the trivial case where $a = 0$ is the function symmetric and the minimum at the origin.
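A brief sketch of the two-dimensional function and a derivative-free minimization with SciPy; the choice of the Nelder-Mead solver and the starting point are illustrative, not prescribed by the text.

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(v, a=1.0, b=100.0):
    """2D Rosenbrock function f(x, y) = (a - x)^2 + b*(y - x^2)^2."""
    x, y = v
    return (a - x) ** 2 + b * (y - x ** 2) ** 2

# Derivative-free minimization starting inside the flat "banana" valley.
result = minimize(rosenbrock, x0=np.array([-1.2, 1.0]), method="Nelder-Mead")
print(result.x)    # should approach the global minimum (1, 1)
print(result.fun)  # should approach 0
```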
Multidimensional generalizations
Two variants are commonly encountered.
One is the sum of uncoupled 2D Rosenbrock problems, and is defined only for even $N$:

$f(\mathbf{x}) = f(x_1, x_2, \dots, x_N) = \sum_{i=1}^{N/2} \left[ 100\,(x_{2i-1}^2 - x_{2i})^2 + (x_{2i-1} - 1)^2 \right].$
This variant has predictably simple solutions.
A second, more involved variant is

$f(\mathbf{x}) = \sum_{i=1}^{N-1} \left[ 100\,(x_{i+1} - x_i^2)^2 + (1 - x_i)^2 \right], \qquad \mathbf{x} = (x_1, \dots, x_N) \in \mathbb{R}^N.$

It has exactly one minimum for $N = 3$ (at $(1, 1, 1)$) and exactly two minima for $4 \le N \le 7$—the global minimum at $(1, 1, \dots, 1)$ and a local minimum near $\hat{\mathbf{x}} = (-1, 1, \dots, 1)$. This result is obtained by setting the gradient of the function equal to zero and noticing that the resulting equation is a rational function of $x$. For small $N$ the polynomials can be determined exactly and Sturm's theorem can be used to determine the number of real roots, while the roots can be bounded in the region of $|x_i| < 2.4$. For larger $N$ this method breaks down due to the size of the coefficients involved.
Stationary points
Many of the stationary points of the function exhibit a regular pattern when plotted. This structure can be exploited to locate them.
Optimization examples
The Rosenbrock function can be efficiently optimized by adapting an appropriate coordinate system without using any gradient information and without building local approximation models (in contrast to many derivative-free optimizers). The following figure illustrates an example of 2-dimensional Rosenbrock function optimization by
adaptive coordinate descent from starting point . The solution with the function va
|
https://en.wikipedia.org/wiki/Kappa%20effect
|
The kappa effect or perceptual time dilation is a temporal perceptual illusion that can arise when observers judge the elapsed time between sensory stimuli applied sequentially at different locations. In perceiving a sequence of consecutive stimuli, subjects tend to overestimate the elapsed time between two successive stimuli when the distance between the stimuli is sufficiently large, and to underestimate the elapsed time when the distance is sufficiently small.
In different sensory modalities
The kappa effect can occur with visual (e.g., flashes of light), auditory (e.g., tones), or tactile (e.g. taps to the skin) stimuli. Many studies of the kappa effect have been conducted using visual stimuli. For example, suppose three light sources, X, Y, and Z, are flashed successively in the dark with equal time intervals between each of the flashes. If the light sources are placed at different positions, with X and Y closer together than Y and Z, the temporal interval between the X and Y flashes is perceived to be shorter than that between the Y and Z flashes. The kappa effect has also been demonstrated with auditory stimuli that move in frequency. However, in some experimental paradigms the auditory kappa effect has not been observed. For example, Roy et al. (2011) found that, opposite to the prediction of the kappa effect, "Increasing the distance between sound sources marking time intervals leads to a decrease of the perceived duration". In touch, the kappa effect was first described as the "S-effect" by Suto (1952). Goldreich (2007) refers to the kappa effect as "perceptual time dilation" in analogy with the physical time dilation of the theory of relativity.
Theories based in velocity expectation
Physically, traversed space and elapsed time are linked by velocity. Accordingly, several theories regarding the brain's expectations about stimulus velocity have been put forward to account for the kappa effect.
Constant velocity expectation
According to the constant velo
|
https://en.wikipedia.org/wiki/Exergy%20efficiency
|
Exergy efficiency (also known as the second-law efficiency or rational efficiency) computes the effectiveness of a system relative to its performance in reversible conditions. For heat engines, it is defined as the ratio of the thermal efficiency of an actual system to that of an idealized or reversible version of the system. It can also be described as the ratio of the useful work output of the system to the reversible work output for work-consuming systems. For refrigerators and heat pumps, it is the ratio of the actual COP to the reversible COP.
Motivation
The reason the second-law efficiency is needed is that first-law efficiencies fail to take into account an idealized version of the system for comparison. Using first-law efficiencies alone can lead one to believe a system is more efficient than it is in reality, so second-law efficiencies are needed to gain a more realistic picture of a system's effectiveness. From the second law of thermodynamics it can be demonstrated that no system can ever be 100% efficient.
Definition
The exergy ($B$) balance of a process gives:

$B_{in} = B_{out} + B_{lost},$

with exergy efficiency defined as:

$\eta_B = \frac{B_{out}}{B_{in}} = 1 - \frac{B_{lost}}{B_{in}}.$
For many engineering systems this can be rephrased as:

$\eta_B = \frac{\dot{W}_{net}}{\dot{m}\,\left(-\Delta G^{0}\right)},$

where $\Delta G^{0}$ is the standard Gibbs (free) energy of reaction at temperature $T^{0}$ and pressure $p^{0}$ (also known as the standard Gibbs function change), $\dot{W}_{net}$ is the net work output and $\dot{m}$ is the mass flow rate of fuel.
In the same way the energy efficiency can be defined as:

$\eta = \frac{\dot{W}_{net}}{\dot{m}\,\left(-\Delta H^{0}\right)},$

where $\Delta H^{0}$ is the standard enthalpy of reaction at temperature $T^{0}$ and pressure $p^{0}$.
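As a small worked example of the idea for a heat engine, the sketch below compares an assumed actual thermal efficiency with the reversible (Carnot) limit between the same reservoirs; all numbers are illustrative.

```python
# Second-law efficiency of a heat engine: ratio of actual thermal efficiency
# to the reversible (Carnot) efficiency between the same two reservoirs.
# Temperatures in kelvin; numbers are illustrative.
T_hot, T_cold = 900.0, 300.0           # reservoir temperatures (K)
eta_actual = 0.35                      # assumed first-law (thermal) efficiency

eta_reversible = 1.0 - T_cold / T_hot  # Carnot efficiency = 0.666...
eta_second_law = eta_actual / eta_reversible

print(f"Reversible efficiency: {eta_reversible:.3f}")
print(f"Second-law efficiency: {eta_second_law:.3f}")   # ~0.525
```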
Application
The destruction of exergy is closely related to the creation of entropy, and as such any system containing highly irreversible processes will have a low exergy efficiency. As an example, the combustion process inside a power station's gas turbine is highly irreversible, and approximately 25% of the exergy input will be destroyed here.
For fossil fuels the free enthalpy of reaction is usually only slightly less than the enthalpy of reaction so from equations ()
|
https://en.wikipedia.org/wiki/Canalisation%20%28genetics%29
|
Canalisation is a measure of the ability of a population to produce the same phenotype regardless of variability of its environment or genotype. It is a form of evolutionary robustness. The term was coined in 1942 by C. H. Waddington to capture the fact that "developmental reactions, as they occur in organisms submitted to natural selection...are adjusted so as to bring about one definite end-result regardless of minor variations in conditions during the course of the reaction". He used this word rather than robustness to consider that biological systems are not robust in quite the same way as, for example, engineered systems.
Biological robustness or canalisation comes about when developmental pathways are shaped by evolution. Waddington introduced the concept of the epigenetic landscape, in which the state of an organism rolls "downhill" during development. In this metaphor, a canalised trait is illustrated as a valley (which he called a creode) enclosed by high ridges, safely guiding the phenotype to its "fate". Waddington claimed that canals form in the epigenetic landscape during evolution, and that this heuristic is useful for understanding the unique qualities of biological robustness.
Genetic assimilation
Waddington used the concept of canalisation to explain his experiments on genetic assimilation. In these experiments, he exposed Drosophila pupae to heat shock. This environmental disturbance caused some flies to develop a crossveinless phenotype. He then selected for crossveinless. Eventually, the crossveinless phenotype appeared even without heat shock. Through this process of genetic assimilation, an environmentally induced phenotype had become inherited. Waddington explained this as the formation of a new canal in the epigenetic landscape.
It is, however, possible to explain genetic assimilation using only quantitative genetics and a threshold model, with no reference to the concept of canalisation. However, theoretical models that incorporate a com
|
https://en.wikipedia.org/wiki/Convex%20cone
|
In linear algebra, a cone—sometimes called a linear cone for distinguishing it from other sorts of cones—is a subset of a vector space that is closed under positive scalar multiplication; that is, $C$ is a cone if $x \in C$ implies $sx \in C$ for every positive scalar $s$.
When the scalars are real numbers, or belong to an ordered field, one generally calls a cone a subset of a vector space that is closed under multiplication by a positive scalar. In this context, a convex cone is a cone that is closed under addition, or, equivalently, a subset of a vector space that is closed under linear combinations with positive coefficients. It follows that convex cones are convex sets.
In this article, only the case of scalars in an ordered field is considered.
Definition
A subset C of a vector space V over an ordered field F is a cone (or sometimes called a linear cone) if for each x in C and positive scalar α in F, the product αx is in C. Note that some authors define cone with the scalar α ranging over all non-negative scalars (rather than all positive scalars, which does not include 0).
A cone C is a convex cone if $\alpha x + \beta y$ belongs to C, for any positive scalars α, β, and any x, y in C.
A cone C is convex if and only if C + C ⊆ C.
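A quick numerical illustration of this definition, checking closure under positive combinations for the nonnegative orthant in the plane; the choice of cone and the random sampling are ours, not taken from the article.

```python
# Numerically illustrate the convex-cone definition: for the nonnegative orthant
# C = {(x1, x2) : x1 >= 0, x2 >= 0}, alpha*x + beta*y stays in C for all
# positive alpha, beta and all x, y in C.
import random

def in_cone(p, tol=1e-12):
    return p[0] >= -tol and p[1] >= -tol

random.seed(0)
for _ in range(10_000):
    x = (random.uniform(0, 10), random.uniform(0, 10))          # points in C
    y = (random.uniform(0, 10), random.uniform(0, 10))
    a, b = random.uniform(0.01, 10), random.uniform(0.01, 10)   # positive scalars
    combo = (a * x[0] + b * y[0], a * x[1] + b * y[1])
    assert in_cone(combo)

print("All sampled positive combinations stayed in the cone.")
```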
This concept is meaningful for any vector space that allows the concept of "positive" scalar, such as spaces over the rational, algebraic, or (more commonly) the real numbers. Also note that the scalars in the definition are positive meaning that the origin does not have to belong to C. Some authors use a definition that ensures the origin belongs to C. Because of the scaling parameters α and β, cones are infinite in extent and not bounded.
If C is a convex cone, then for any positive scalar α and any x in C the vector $\alpha x = \tfrac{\alpha}{2}x + \tfrac{\alpha}{2}x$ is in C. It follows that a convex cone C is a special case of a linear cone.
It follows from the above property that a convex cone can also be defined as a linear cone that is closed under convex combinations, or just under additions. More succinctly, a set C is a convex cone i
|
https://en.wikipedia.org/wiki/R%C3%B6ntgen%20equivalent%20physical
|
The Röntgen equivalent physical or rep (symbol rep) is a legacy unit of absorbed dose first introduced by Herbert Parker in 1945 to replace an improper application of the roentgen unit to biological tissue. It is the absorbed energetic dose before the biological efficiency of the radiation is factored in. The rep has variously been defined as 83 or 93 ergs per gram of tissue (8.3/9.3 mGy) or per cm3 of tissue.
At the time, this was thought to be the amount of energy deposited by 1 roentgen. Improved measurements have since found that one roentgen of air kerma deposits 8.77 mGy in dry air, or 9.6 mGy in soft tissue, but the rep was defined as a fixed number of ergs per unit gram.
A 1952 handbook from the US National Bureau of Standards affirms that "The numerical coefficient of the rep has been deliberately changed to 93, instead of the earlier 83, to agree with L. H. Gray's 'energy-unit'." Gray's "energy unit" was defined such that "one roentgen of hard gamma resulted in about 93 ergs per gram energy absorption in water". The lower range value of 83.8 ergs was the value in air corresponding to wet tissue. The rep was commonly used until the 1960s, but was gradually displaced by the rad starting in 1954 and later by the gray starting in 1977.
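The erg-per-gram definitions convert to SI as follows (1 erg/g = 1e-4 Gy); a quick check of the 8.3/9.3 mGy figures quoted above.

```python
# Convert the rep's historical definitions from erg/gram to gray.
# 1 erg = 1e-7 J and 1 g = 1e-3 kg, so 1 erg/g = 1e-4 J/kg = 1e-4 Gy.
ERG_PER_GRAM_IN_GRAY = 1e-4

for ergs_per_gram in (83, 93):
    dose_mgy = ergs_per_gram * ERG_PER_GRAM_IN_GRAY * 1000.0  # in milligray
    print(f"{ergs_per_gram} erg/g = {dose_mgy:.1f} mGy")
# 83 erg/g = 8.3 mGy, 93 erg/g = 9.3 mGy
```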
References
See also
Radiation poisoning
Röntgen equivalent man (rem)
Units of radiation dose
Radiobiology
Equivalent units
|
https://en.wikipedia.org/wiki/Chiral%20symmetry%20breaking
|
In particle physics, chiral symmetry breaking generally refers to the dynamical spontaneous breaking of a chiral symmetry associated with massless fermions. This is usually associated with a gauge theory such as quantum chromodynamics, the quantum field theory of the strong interaction, and it also occurs through the Brout-Englert-Higgs mechanism in the electroweak interactions of the standard model. This phenomenon is analogous to magnetization and superconductivity in condensed matter physics. The basic idea was introduced to particle physics by Yoichiro Nambu, in particular, in the Nambu–Jona-Lasinio model, which is a solvable theory of composite bosons that exhibits dynamical spontaneous chiral symmetry when a 4-fermion coupling constant becomes sufficiently large. Nambu was awarded the 2008 Nobel prize in physics "for the discovery of the mechanism of spontaneous broken symmetry in subatomic physics."
Overview
Quantum chromodynamics
Massless fermions in 4 dimensions are described by either left or right-handed spinors
that each have 2 complex components. These have spin either aligned (right-handed chirality) or counter-aligned (left-handed chirality) with their momenta. In this case the chirality is a conserved quantum number of the given fermion, and the left- and right-handed spinors can be independently phase transformed. More generally they can form multiplets under some symmetry group $G$.
A Dirac mass term explicitly breaks the chiral symmetry. In quantum electrodynamics (QED) the electron mass unites left and right handed spinors forming a 4 component Dirac spinor. In the absence
of mass and quantum loops, QED would have a chiral symmetry, but the Dirac mass of the electron breaks this to a single symmetry that allows a common phase rotation of left and right together, which is the gauge symmetry of electrodynamics. (At the quantum loop level, the chiral symmetry is broken, even for massless electrons, by the chiral anomaly, but the gauge s
|
https://en.wikipedia.org/wiki/DUAL%20%28cognitive%20architecture%29
|
DUAL is a general cognitive architecture integrating the connectionist and symbolic approaches at the micro level. DUAL is based on decentralized representation and emergent computation. It was inspired by the Society of Mind idea proposed by Marvin Minsky, but departs from the initial proposal in many ways. Computations in DUAL emerge from the interaction of many micro-agents, each of which is a hybrid symbolic/connectionist device. The agents exchange messages and activation via links that can be learned and modified, and they form coalitions which collectively represent concepts, episodes, and facts.
Several models have been developed on the basis of DUAL. These include: AMBR (a model of analogy-making and memory), JUDGEMAP (a model of judgment), PEAN (a model of perception), etc.
DUAL is developed by a team at the New Bulgarian University led by Boicho Kokinov. The second version was co-authored by Alexander Petrov. The third version is co-authored by Georgi Petkov and Ivan Vankov.
External links
Cognitive architecture
Emergence
|
https://en.wikipedia.org/wiki/Moisture%20recycling
|
In hydrology, moisture recycling or precipitation recycling refers to the process by which a portion of the precipitated water that evapotranspired from a given area contributes to the precipitation over the same area. Moisture recycling is thus a component of the hydrologic cycle. The ratio of the locally derived precipitation ($P_m$) to the total precipitation ($P$) is known as the recycling ratio, $\rho$:

$\rho = \frac{P_m}{P}.$
The recycling ratio is a diagnostic measure of the potential for interactions between land surface hydrology and regional climate. Land use changes, such as deforestation or agricultural intensification, have the potential to change the amount of precipitation that falls in a region. The recycling ratio for the entire world is one, and for a single point is zero. Estimates for the recycling ratio for the Amazon basin range from 24% to 56%, and for the Mississippi basin from 21% to 24%.
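The recycling ratio follows directly from the definition above; a one-line computation with illustrative precipitation totals.

```python
# Recycling ratio: fraction of total precipitation P that is locally derived (P_m).
# Values are illustrative (e.g., annual basin totals in mm).
P_m = 480.0    # locally derived precipitation
P   = 2000.0   # total precipitation

rho = P_m / P
print(f"Recycling ratio rho = {rho:.2f}")   # 0.24, comparable to the Mississippi-basin estimates above
```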
The concept of moisture recycling has been integrated into the concept of the precipitationshed. A precipitationshed is the upwind ocean and land surface that contributes evaporation to a given, downwind location's precipitation. In much the same way that a watershed is defined by a topographically explicit area that provides surface runoff, the precipitationshed is a statistically defined area within which evaporation, traveling via moisture recycling, provides precipitation for a specific point.
See also
Water cycle
Precipitationshed
Land surface effects on climate
Al Baydha Project
References
Climatology
Hydrology
Recycling
|
https://en.wikipedia.org/wiki/Protocol%20spoofing
|
Protocol spoofing is used in data communications to improve performance in situations where an existing protocol is inadequate, for example due to long delays or high error rates.
Spoofing techniques
In most applications of protocol spoofing, a communications device such as a modem or router simulates ("spoofs") the remote endpoint of a connection to a locally attached host, while using a more appropriate protocol to communicate with a compatible remote device that performs the equivalent spoof at the other end of the communications link.
File transfer spoofing
Error correction and file transfer protocols typically work by calculating a checksum or CRC for a block of data known as a packet, and transmitting the resulting number at the end of the packet. At the other end of the connection, the receiver re-calculates the number based on the data it received and compares that result to what was sent from the remote machine. If the two match, the packet was transmitted correctly, and the receiver sends an ACK to signal that it's ready to receive the next packet.
The time to transmit the ACK back to the sender is a function of the phone lines, as opposed to the modem's speed, and is typically about 1/10 of a second on short links and may be much longer on long-distance links or data networks like X.25. For a protocol using small packets, this delay can be larger than the time needed to send a packet. For instance, the UUCP "g" protocol and Kermit both use 64-byte packets, which on a 9600 bit/s link take about 1/20 of a second to send. XMODEM used a slightly larger 128-byte packet, which takes about 1/10 of a second to send.
The next packet of data cannot be sent until the ACK for the previous packet is received. In the case of XMODEM, for instance, that means it takes a minimum of 1/5 of a second for the entire cycle to complete for a single packet. This means that the overall speed is only half the theoretical maximum, a 50% channel efficiency.
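The arithmetic behind this efficiency limit can be sketched as follows; the figures mirror the XMODEM example above, with framing overhead ignored.

```python
# Stop-and-wait channel efficiency: each packet must be acknowledged before the
# next one is sent, so efficiency = packet_time / (packet_time + ack_delay).
# Numbers follow the XMODEM example above (128-byte packets at 9600 bit/s,
# roughly 0.1 s ACK latency); framing overhead is ignored for simplicity.
def channel_efficiency(packet_bytes, bit_rate, ack_delay_s):
    packet_time = packet_bytes * 8 / bit_rate
    return packet_time / (packet_time + ack_delay_s)

eff = channel_efficiency(packet_bytes=128, bit_rate=9600, ack_delay_s=0.1)
print(f"Efficiency without spoofing: {eff:.0%}")   # roughly 50%

# If a local device acknowledges immediately (ack_delay ~ 0), as in spoofing,
# the link approaches full utilisation.
print(f"Efficiency with local ACKs:  {channel_efficiency(128, 9600, 0.0):.0%}")
```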
Protocol spoofing addresses this pr
|
https://en.wikipedia.org/wiki/Hardware%20architect
|
(In the automation and engineering environments, the hardware engineer or architect encompasses the electronics engineering and electrical engineering fields, with subspecialities in analog, digital, or electromechanical systems.)
The hardware systems architect or hardware architect is responsible for:
Interfacing with a systems architect or client stakeholders. Nowadays it is extraordinarily rare for a hardware system that is sufficiently large or complex to require a hardware architect not to also require substantial software and a systems architect. The hardware architect will therefore normally interface with a systems architect, rather than directly with user(s), sponsor(s), or other client stakeholders. However, in the absence of a systems architect, the hardware systems architect must be prepared to interface directly with the client stakeholders in order to determine their (evolving) needs to be realized in hardware. The hardware architect may also need to interface directly with a software architect or engineer(s), or with other mechanical or electrical engineers.
Generating the highest level of hardware requirements, based on the user's needs and other constraints such as cost and schedule.
Ensuring that this set of high level requirements is consistent, complete, correct, and operationally defined.
Performing cost–benefit analyses to determine the best methods or approaches for meeting the hardware requirements; making maximum use of commercial off-the-shelf or already developed components.
Developing partitioning algorithms (and other processes) to allocate all present and foreseeable (hardware) requirements into discrete hardware partitions such that a minimum of communications is needed among partitions, and between the user and the system.
Partitioning large hardware systems into (successive layers of) subsystems and components each of which can be handled by a single hardware engineer or team of engineers.
Ensuring that maximally robust hardware architec
|
https://en.wikipedia.org/wiki/Garou%3A%20Mark%20of%20the%20Wolves
|
Garou: Mark of the Wolves is a 1999 fighting game produced by SNK, originally for the Neo Geo system and then released as Fatal Fury: Mark of the Wolves for the Dreamcast. It is the eighth (or ninth, if one counts Fatal Fury: Wild Ambition) installment of the Fatal Fury series.
Gameplay
The two-plane system in which characters would fight from two different planes was removed from the game. The game introduces the "Tactical Offense Position" (T.O.P.), which is a special area on the life gauge. When the gauge reaches this area, the character enters the T.O.P. mode, granting the player's character the ability to use a T.O.P. attack, gradual life recovery, and increased attack damage. The game also introduces the "Just Defend" system, which rewards the player who successfully blocks an attack at the last moment with a small amount of health recovery and the ability to immediately counterattack out of block stun. Just Defend was later added as a feature of the K-Groove in Capcom's Capcom vs. SNK 2. Similar to previous titles, the player is given a fighting rank after every round. If the player manages to win all rounds from the Arcade Mode with at least an "AAA" rank, they will face the boss Kain R. Heinlein, which unlocks an ending after he is defeated. If the requirements are not met, then Grant will be the final boss and there will be no special endings. Additionally, through Arcade Mode, before facing Grant, the player will face a mid-boss which can be any character from the cast depending on the character they use.
Playable characters
Plot
Ten years after crime lord Geese Howard's death, the city of Southtown has become more peaceful, leading it to be known as the Second Southtown in reference to having formerly been corrupted by Geese. A new fighting tournament called "King of Fighters: Maximum Mayhem" starts in the area, and several characters related with the fighters from the previous King of Fighters tournaments participate in it.
Development
Multiple changes to Garou were made to show a
|
https://en.wikipedia.org/wiki/Life%20Sciences%20Research%20Office
|
Life Sciences Research Organization (LSRO) is a non-profit organization based in Maryland, United States, that specializes in assembling "ad hoc" expert panels to evaluate scientific literature, data, systems, and proposals in the biomedical sciences.
Overview
LSRO was founded in 1962 as an office within the Federation of American Societies for Experimental Biology (FASEB) to fulfill a US military need for independent scientific counsel. In 2000, LSRO became an independent non-profit organization. It changed its name from Life Sciences Research Office to Life Sciences Research Organization in 2010, and in that same year announced the formation of LSRO Solutions which along with LSRO provides independent, impartial scientific analysis and advice. The organization has a reputation for conducting studies on politically charged issues which are of concern to federal agencies or corporations. Some issues include the dental amalgam controversy, dietary supplement monitoring, and "reduced risk" cigarette products.
It has faced scrutiny for its private clients, particularly in relation to tobacco research.
Past and current clients
Federal government
Centers for Disease Control and Prevention (CDC)
Federal Aviation Administration (FAA)
U.S. Food and Drug Administration (FDA)
NASA
National Center for Health Services Research
National Institutes of Health (NIH)
Office of Naval Research
U.S. Army Medical Research and Materiel Command
United States Department of Agriculture (USDA)
United States Department of Health and Human Services
United States National Library of Medicine
Private sector
American Physiological Society
American Society for Nutritional Sciences
American Society for Pharmacology and Experimental Therapeutics
Amoco BioProducts Corp
Biothera
California Walnut Commission
Calorie Control Council
ChemiNutra
Dow AgroSciences
Kellogg Company
Keller and Heckman LLP
Monsanto Company
Philip Morris
Porter Novelli
Procter & Gamble
Researc
|
https://en.wikipedia.org/wiki/Palamedes%20%28video%20game%29
|
Palamedes is a puzzle video game released by Taito in 1990.
Gameplay
Palamedes is a puzzle game requiring the players to match the dice they are holding to the dice at the top of the screen. Using the "B" button, the player can change the number on their dice, then throw it using the "A" button when it matches the dice at the top of the screen, which wipes the target dice off the board. By matching dice in certain combinations, such as the same number several times in a row or a 1-to-6 sequence, the player is awarded a special move that eliminates three to five lines of dice on the game field. At regular time intervals (which get smaller as the game progresses) new dice lines are added, and when a die touches the bottom of the screen, the game ends.
The player can play in "solitaire" mode against the computer or another player, or in "tournament" mode against AI opponents. Each die has six sides and numbers, which makes matching and eliminating all the numbers on the screen a challenge.
Ports
Ports of the game were published for the NES, MSX, FM Towns and Game Boy by HOT-B. The Japan-only sequel, Palamedes 2: Star Twinkles, was released in 1991 for the NES by HOT-B. It featured most of the same basic gameplay elements as the original but with the play field scrolling in the opposite direction.
Reception
In Japan, Game Machine listed Palamedes on their December 15, 1990 issue as being the sixteenth most-successful table arcade unit of the month.
David Wilson of Your Sinclair magazine reviewed the arcade game, giving it an 80% score. Zero magazine rated it three out of five.
Famitsu magazine reviewed the Game Boy version, scoring the game a 22 out of 40.
References
External links
Palamedes at Arcade History
1990 video games
Arcade video games
FM Towns games
Game Boy games
MSX2 games
Nintendo Entertainment System games
Takara video games
Multiplayer and single-player video games
Taito arcade games
Taito L System games
Video g
|
https://en.wikipedia.org/wiki/Yips
|
In sports, the yips are a sudden and unexplained loss of the ability to execute certain skills in experienced athletes. Symptoms of the yips include a loss of fine motor skills and psychological issues that affect the muscle memory and decision-making of athletes, leaving them unable to perform the basic skills of their sport.
Common treatments include clinical sport psychology therapy as well as refocusing attention on the underlying biomechanics of their physical actions. The impact varies widely. A yips event may last a short time before the athlete regains their composure or it can require longer term adjustments to technique before recovery occurs. The worst cases are those where the athlete does not recover at all, forcing the player to abandon the sport at the highest level.
In golf
In golf, the yips is a movement disorder known to interfere with putting. The term yips is said to have been popularized by Tommy Armour—a golf champion and later golf teacher—to explain the difficulties that led him to abandon tournament play. In describing the yips, golfers have used terms such as twitches, staggers, jitters and jerks. The yips affects between a quarter and a half of all mature golfers. Researchers at the Mayo Clinic found that 33% to 48% of all serious golfers have experienced the yips. Golfers who have played for more than 25 years appear most prone to the condition.
Although the exact cause of the yips has yet to be determined, one possibility is biochemical changes in the brain that accompany aging. Excessive use of the involved muscles and intense demands of coordination and concentration may exacerbate the problem. Giving up golf for a month sometimes helps. Focal dystonia has been mentioned as another possibility for the cause of yips.
Professional golfers seriously afflicted by the yips include Ernie Els, David Duval, Pádraig Harrington, Bernhard Langer, Ben Hogan, Harry Vardon, Sam Snead, Ian Baker-Finch and Keegan Bradley, who missed a six-inch putt in the fi
|
https://en.wikipedia.org/wiki/Inverse%20polymerase%20chain%20reaction
|
Inverse polymerase chain reaction (Inverse PCR) is a variant of the polymerase chain reaction that is used to amplify DNA with only one known sequence. One limitation of conventional PCR is that it requires primers complementary to both termini of the target DNA, but this method allows PCR to be carried out even if only one sequence is available from which primers may be designed.
Inverse PCR is especially useful for the determination of insert locations. For example, various retroviruses and transposons randomly integrate into genomic DNA. To identify the sites where they have entered, the known, "internal" viral or transposon sequences can be used to design primers that will amplify a small portion of the flanking, "external" genomic DNA. The amplified product can then be sequenced and compared with DNA databases to locate the sequence which has been disrupted.
The inverse PCR method involves a series of restriction digests and ligation, resulting in a looped fragment that can be primed for PCR from a single section of known sequence. Then, like other polymerase chain reaction processes, the DNA is amplified by the thermostable DNA polymerase:
A target region with an internal section of known sequence and unknown flanking regions is identified
Genomic DNA is digested into fragments of a few kilobases by a restriction enzyme that cuts relatively infrequently (typically one with a 6-8 base recognition site).
Under low DNA concentrations or quick ligation conditions, self-ligation is induced to give a circular DNA product.
PCR is carried out as usual with the circular template, with primers complementary to sections of the known internal sequence pointing outwards.
Finally, the PCR product is sequenced and compared against sequence databases.
The method is also used in chromosome crawling.
References
Laboratory techniques
Molecular biology
Polymerase chain reaction
|
https://en.wikipedia.org/wiki/ITSEC
|
The Information Technology Security Evaluation Criteria (ITSEC) is a structured set of criteria for evaluating computer security within products and systems. The ITSEC was first published in May 1990 in France, Germany, the Netherlands, and the United Kingdom based on existing work in their respective countries. Following extensive international review, Version 1.2 was subsequently published in June 1991 by the Commission of the European Communities for operational use within evaluation and certification schemes.
Since the launch of the ITSEC in 1990, a number of other European countries have agreed to recognize the validity of ITSEC evaluations.
The ITSEC has been largely replaced by Common Criteria, which provides similarly-defined evaluation levels and implements the target of evaluation concept and the Security Target document.
Concepts
The product or system being evaluated, called the target of evaluation, is subjected to a detailed examination of its security features culminating in comprehensive and informed functional and penetration testing. The degree of examination depends upon the level of confidence desired in the target. To provide different levels of confidence, the ITSEC defines evaluation levels, denoted E0 through E6. Higher evaluation levels involve more extensive examination and testing of the target.
Unlike earlier criteria, notably the TCSEC developed by the US defense establishment, the ITSEC did not require evaluated targets to contain specific technical features in order to achieve a particular assurance level. For example, an ITSEC target might provide authentication or integrity features without providing confidentiality or availability. A given target's security features were documented in a Security Target document, whose contents had to be evaluated and approved before the target itself was evaluated. Each ITSEC evaluation was based exclusively on verifying the security features identified in the Security Target.
Use
The formal Z
|
https://en.wikipedia.org/wiki/Nested%20polymerase%20chain%20reaction
|
Nested polymerase chain reaction (nested PCR) is a modification of polymerase chain reaction intended to reduce non-specific binding in products due to the amplification of unexpected primer binding sites.
Polymerase chain reaction
Polymerase chain reaction itself is the process used to amplify DNA samples, via a temperature-mediated DNA polymerase. The products can be used for sequencing or analysis, and this process is a key part of many genetics research laboratories, along with uses in DNA fingerprinting for forensics and other human genetic cases. Conventional PCR requires primers complementary to the termini of the target DNA. The amount of product from the PCR increases with the number of temperature cycles that the reaction is subjected to. A commonly occurring problem is primers binding to incorrect regions of the DNA, giving unexpected products. This problem becomes more likely with an increased number of cycles of PCR.
Primers
Nested polymerase chain reaction involves two sets of primers, used in two successive runs of polymerase chain reaction, the second set intended to amplify a secondary target within the first run product. This allows amplification for a low number of runs in the first round, limiting non-specific products. The second nested primer set should only amplify the intended product from the first round of amplification and not non-specific product. This allows running more total cycles while minimizing non-specific products. This is useful for rare templates or PCR with high background.
Processes
The target DNA undergoes the first run of polymerase chain reaction with the first set of primers, shown in green. The selection of alternative and similar primer binding sites gives a selection of products, only one containing the intended sequence.
The product from the first reaction undergoes a second run with the second set of primers, shown in red. It is very unlikely that any of the unwanted PCR products contain binding sites for both
|
https://en.wikipedia.org/wiki/Radio%20receiver%20design
|
Radio receiver design includes the electronic design of different components of a radio receiver which processes the radio frequency signal from an antenna in order to produce usable information such as audio. The complexity of a modern receiver and the possible range of circuitry and methods employed are more generally covered in electronics and communications engineering. The term radio receiver is understood in this article to mean any device which is intended to receive a radio signal in order to generate useful information from the signal, most notably a recreation of the so-called baseband signal (such as audio) which modulated the radio signal at the time of transmission in a communications or broadcast system.
Fundamental considerations
Design of a radio receiver must consider several fundamental criteria to produce a practical result. The main criteria are gain, selectivity, sensitivity, and stability. The receiver must contain a detector to recover the information initially impressed on the radio carrier signal, a process called modulation.
Gain is required because the signal intercepted by an antenna will have a very low power level, on the order of picowatts or femtowatts. To produce an audible signal in a pair of headphones requires this signal to be amplified a trillion-fold or more. The magnitudes of the required gain are so great that the logarithmic unit decibel is preferred - a gain of 1 trillion times the power is 120 decibels, which is a value achieved by many common receivers. Gain is provided by one or more amplifier stages in a receiver design; some of the gain is applied at the radio-frequency part of the system, and the rest at the frequencies used by the recovered information (audio, video, or data signals).
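The decibel figure quoted above follows directly from the definition of power gain in decibels; a quick check:

```python
import math

# Power gain expressed in decibels: G_dB = 10 * log10(P_out / P_in).
power_gain = 1e12                      # "a trillion-fold" power amplification
gain_db = 10 * math.log10(power_gain)
print(gain_db)                         # 120.0 dB, as quoted above
```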
Selectivity is the ability to "tune in" to just one station of the many that may be transmitting at any given time. An adjustable bandpass filter is a typical stage of a receiver. A receiver may include several stages of bandpas
|
https://en.wikipedia.org/wiki/Ciena
|
Ciena Corporation is an American telecommunications networking equipment and software services supplier based in Hanover, Maryland. The company has been described by The Baltimore Sun as the "world's biggest player in optical connectivity". The company reported revenues of $3.63 billion for 2022. Ciena had over 8,000 employees, as of October 2022. Gary Smith serves as president and chief executive officer (CEO).
Customers include AT&T, Deutsche Telekom, KT Corporation and Verizon Communications.
History
Early history and initial public offering
Ciena was founded in 1992 under the name HydraLite by electrical engineer David R. Huber. Huber served as chief executive officer, while Optelecom, a company building optical networking products, provided "management assistance and production facilities," and co-founder Kevin Kimberlin "provided initial equity capital during the formation of the Company". Huber engaged William K. Woodruff & Co. to raise $3.0 million in venture funding in September 1993. Woodruff presented the idea to John Bayless at Sevin Rosen in November 1993, which resulted in Sevin Rosen investing $3.0 million on April 10, 1994. William K. Woodruff & Co. was a co-manager of Ciena's IPO in February 1997. The company subsequently received funding from Sevin Rosen Funds as a result of a demonstration at its laboratory attended by Jon Bayless, a partner at the firm, who saw the value in applying HydraLite's fiber-optic technology to cable television. Sevin Rosen offered funding immediately, investing $1.25 million in April 1994.
Ciena received $40 million in venture capital financing, including $3.3 million from Sevin Rosen Funds. Other early investors in the company included Charles River Ventures, Japan Associated Finance Co., Star Venture, and Vanguard Venture Partners. Bayless also recruited physicist Patrick Nettles, a former colleague at the telecommunications company Optilink, to serve as Ciena's first CEO, and Lawrence P. Huang, another former
|
https://en.wikipedia.org/wiki/Autodesk%20Vault
|
Autodesk Vault is a data management tool integrated with Autodesk Inventor Series, Autodesk Inventor Professional, AutoCAD Mechanical, AutoCAD Electrical, Autodesk Revit and Civil 3D products. It helps design teams track work in progress and maintain version control in multi-user environments. It allows them to organize and reuse designs by consolidating product information and reducing the need to re-create designs from scratch. Users can store and search both CAD data (such as Autodesk Inventor, DWG, and DWF files) and non-CAD documents (such as Microsoft Word and Microsoft Excel files).
Overview
The Vault environment functions as a client server application with the central SQL database and Autodesk Data Management Server (ADMS) applications installed on a Windows-based server with client access granted via various clients such as: Thick Client (Vault Explorer) and Application Integrations. ADMS acts as the middleware that handles client transactions with the SQL database. Vault Explorer functions as the client application and is intended to run alongside the companion CAD software. The Vault Explorer UI (User Interface) is intended to have an appearance similar to Microsoft Outlook and can display the Vault folder structure, file metadata in the form of a grid and a preview pane for more detailed information.
Autodesk Vault is a file versioning system that "records" the progression of all edits a file has undergone. All files and their associated metadata are indexed in the SQL-based data management system and are searchable from the Vault client interface. Other information stored about each file includes its version history, "Uses" (a list of its children), "Where Used" (a list of all its parents), as well as a lightweight viewable in the form of an Autodesk Design Web Format (DWF) file, which is automatically published upon check-in. When users intend to edit a file, the file is checked out and edits are made. When the user is satisfied with the changes the file chec
|
https://en.wikipedia.org/wiki/Cross-linking%20immunoprecipitation
|
Cross-linking and immunoprecipitation (CLIP, or CLIP-seq) is a method used in molecular biology that combines UV crosslinking with immunoprecipitation in order to identify RNA binding sites of proteins on a transcriptome-wide scale, thereby increasing our understanding of post-transcriptional regulatory networks. CLIP can be used either with antibodies against endogenous proteins, or with common peptide tags (including FLAG, V5, HA, and others) or affinity purification, which makes it possible to profile model organisms or RBPs otherwise lacking suitable antibodies.
Workflow
CLIP begins with the in-vivo cross-linking of RNA-protein complexes using ultraviolet light (UV). Upon UV exposure, covalent bonds are formed between proteins and nucleic acids that are in close proximity (on the order of Angstroms apart). The cross-linked cells are then lysed, RNA is fragmented, and the protein of interest is isolated via immunoprecipitation. In order to allow for priming of reverse transcription, RNA adapters are ligated to the 3' ends, and RNA fragments are labelled to enable the analysis of the RNA-protein complexes after they have been separated from free RNA using gel electrophoresis and membrane transfer. Proteinase K digestion is then performed in order to remove protein from the crosslinked RNA, which leaves a few amino acids at the crosslink site. This often leads to truncation of cDNAs at the crosslinked nucleotide, which is exploited in variants such as iCLIP to increase the resolution of the method. cDNA is then synthesized via RT-PCR and subjected to high-throughput sequencing, after which the reads are mapped back to the transcriptome and further computational analyses are used to study the interaction sites.
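As an illustration of that final computational step only (not a protocol from the article): a minimal sketch, assuming aligned reads are already available in a BAM file (here hypothetically named "clip.bam") and using the third-party pysam library, that tallies putative crosslink sites from cDNA truncation positions in the iCLIP style.

from collections import Counter

import pysam  # third-party HTSlib bindings; install with `pip install pysam`

def truncation_sites(bam_path):
    """Tally the position one base upstream of each read start, which in
    iCLIP-style data approximates the protein-RNA crosslink site."""
    sites = Counter()
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam:
            if read.is_unmapped or read.is_secondary or read.is_supplementary:
                continue
            if read.is_reverse:
                # On the minus strand the 5' end of the cDNA is the alignment end.
                pos = read.reference_end  # 0-based, one past the last aligned base
                strand = "-"
            else:
                pos = read.reference_start - 1
                strand = "+"
            sites[(read.reference_name, pos, strand)] += 1
    return sites

# Example usage (file name is hypothetical):
# for site, count in truncation_sites("clip.bam").most_common(10):
#     print(site, count)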
History and applications
CLIP was originally undertaken to study interactions between the neuron-specific RNA-binding protein and splicing factors NOVA1 and NOVA2 in the mouse brain, identifying RNA binding sites that contained the expected Nova-binding motifs. Sequen
|
https://en.wikipedia.org/wiki/Hinge%20theorem
|
In geometry, the hinge theorem (sometimes called the open mouth theorem) states that if two sides of one triangle are congruent to two sides of another triangle, and the included angle of the first is larger than the included angle of the second, then the third side of the first triangle is longer than the third side of the second triangle. This theorem is given as Proposition 24 in Book I of Euclid's Elements.
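In symbols, with triangles ABC and DEF: if AB = DE, AC = DF and ∠A > ∠D, then BC > EF. A one-line justification via the law of cosines (a standard modern argument, not Euclid's own proof):

BC^{2} = AB^{2} + AC^{2} - 2\,AB \cdot AC \,\cos\angle A,
\qquad
EF^{2} = DE^{2} + DF^{2} - 2\,DE \cdot DF \,\cos\angle D .

Since AB = DE and AC = DF, the right-hand sides differ only in the cosine term; cosine is strictly decreasing on (0°, 180°), so ∠A > ∠D gives cos∠A < cos∠D and hence BC > EF.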
Scope and generalizations
The hinge theorem holds in Euclidean spaces and more generally in simply connected non-positively curved space forms.
It can be also extended from plane Euclidean geometry to higher dimension Euclidean spaces (e.g., to tetrahedra and more generally to simplices), as has been done for orthocentric tetrahedra (i.e., tetrahedra in which altitudes are concurrent) and more generally for orthocentric simplices (i.e., simplices in which altitudes are concurrent).
Converse
The converse of the hinge theorem is also true: If two sides of one triangle are congruent to two sides of another triangle, and the third side of the first triangle is greater than the third side of the second triangle, then the included angle of the first triangle is larger than the included angle of the second triangle.
In some textbooks, the theorem and its converse are written as the SAS Inequality Theorem and the SSS Inequality Theorem respectively.
References
Elementary geometry
Theorems about triangles
|
https://en.wikipedia.org/wiki/Energy%20Conversion%20Devices
|
Energy Conversion Devices (ECD) was an American photovoltaics manufacturer of thin-film solar cells made of amorphous silicon used in flexible laminates and in building-integrated photovoltaics. The company was also a manufacturer of rechargeable batteries and other renewable energy related products. ECD was headquartered in Rochester Hills, Michigan.
Through its wholly owned Auburn Hills, Michigan, subsidiary United Solar Ovonic, LLC, better known as Uni-Solar, ECD was at one time the world's largest producer of flexible solar panels. Uni-Solar panels consisted of long rectangular strips with wiring at one end, which could be glued to any suitable supporting surface. They were widely used on flat roofs, motorhomes, semi-trailer cabs and similar roles.
On February 14, 2012, Energy Conversion Devices, Inc. and its subsidiaries, United Solar Ovonic LLC and Solar Integrated Technologies, Inc., filed for bankruptcy in the United States District Court for the Eastern District of Michigan.
Company
Energy Conversion Devices, Inc. (ECD), through its United Solar Ovonic (USO) subsidiary, was engaged in building-integrated and rooftop photovoltaics (PV). The Company manufactured, sold and installed thin-film solar laminates that converted sunlight to electrical energy.
The Company operated in two segments: United Solar Ovonic and Ovonic Materials. The Company's USO segment consisted of its wholly owned subsidiary, United Solar Ovonic LLC, which was engaged in manufacturing of PV laminates designed to be integrated directly with roofing materials. The Ovonic Materials segment invented, designed and developed materials and products based on ECD's materials science technology. ECD, through its subsidiaries, commercialized materials, products and production processes for the alternative energy generation (primarily solar energy), energy storage and information technology markets.
Ovonics (coined from "Ovshinsky" and "electronics") is a field of electronics that uses m
|
https://en.wikipedia.org/wiki/Socle%20%28mathematics%29
|
In mathematics, the term socle has several related meanings.
Socle of a group
In the context of group theory, the socle of a group G, denoted soc(G), is the subgroup generated by the minimal normal subgroups of G. It can happen that a group has no minimal non-trivial normal subgroup (that is, every non-trivial normal subgroup properly contains another such subgroup) and in that case the socle is defined to be the subgroup generated by the identity. The socle is a direct product of minimal normal subgroups.
As an example, consider the cyclic group Z_12 with generator u, which has two minimal normal subgroups, one generated by u^4 (which gives a normal subgroup with 3 elements) and the other by u^6 (which gives a normal subgroup with 2 elements). Thus the socle of Z_12 is the group generated by u^4 and u^6, which is just the group generated by u^2.
The socle is a characteristic subgroup, and hence a normal subgroup. It is not necessarily transitively normal, however.
If a group G is a finite solvable group, then the socle can be expressed as a product of elementary abelian p-groups. Thus, in this case, it is just a product of copies of Z/pZ for various p, where the same p may occur multiple times in the product.
Socle of a module
In the context of module theory and ring theory the socle of a module M over a ring R is defined to be the sum of the minimal nonzero submodules of M. It can be considered as a dual notion to that of the radical of a module. In set notation,
soc(M) = Σ { N : N is a minimal submodule of M }.
Equivalently,
soc(M) = ∩ { E : E is an essential submodule of M }.
The socle of a ring R can refer to one of two sets in the ring. Considering R as a right R-module, soc(R_R) is defined, and considering R as a left R-module, soc(_R R) is defined. Both of these socles are ring ideals, and it is known they are not necessarily equal.
If M is an Artinian module, soc(M) is itself an essential submodule of M.
A module is semisimple if and only if soc(M) = M. Rings for which soc(M) = M for all M are precisely semisimple rings.
soc(soc(M)) = soc(M).
M is a finit
|
https://en.wikipedia.org/wiki/National%20Infrastructure%20Protection%20Plan
|
The National Infrastructure Protection Plan (NIPP) is a document called for by Homeland Security Presidential Directive 7, which aims to unify Critical Infrastructure and Key Resource (CIKR) protection efforts across the country. The latest version of the plan was produced in 2013. The NIPP's goals are to protect critical infrastructure and key resources and to ensure resiliency. It is generally considered unwieldy and not an actual plan to be carried out in an emergency, but it is useful as a mechanism for developing coordination between government and the private sector. The NIPP is based on the model laid out in the 1998 Presidential Decision Directive-63, which identified critical sectors of the economy and tasked relevant government agencies to work with them on sharing information and on strengthening responses to attack.
The NIPP is structured to create partnerships between Government Coordinating Councils (GCC) from the public sector and Sector Coordinating Councils (SCC) from the private sector for the eighteen sectors DHS has identified as critical.
Sector Specific Agencies
United States Department of Agriculture
United States Department of Defense
United States Department of Energy
United States Department of Health and Human Services
United States Department of the Interior
United States Department of the Treasury
United States Environmental Protection Agency
United States Department of Homeland Security
Cybersecurity and Infrastructure Security Agency
Transportation Security Administration
United States Coast Guard
United States Immigration and Customs Enforcement
Federal Protective Service
Sector Coordinating Councils
Agriculture and Food
Defense Industrial Base
Energy
Public Health and Healthcare
Financial Services
Water and Wastewater Systems
Chemical
Commercial Facilities
Dams
Emergency Services
Nuclear Reactors, Materials, and Waste
Information Technology
Communications
Postal and Shipping
Transportation Systems
Gove
|
https://en.wikipedia.org/wiki/Stop%20the%20Express
|
Stop the Express (also known as Bousou Tokkyuu SOS (暴走特急SOS, "Runaway Express SOS") in Japan) is a video game developed by Hudson Soft and published in 1983. It was written for the Sharp X1 and later ported to the ZX Spectrum, Commodore 64, and MSX.
It was remade for Nintendo Family Computer as Challenger (チャレンジャー) in 1985.
Gameplay
In Stage 1, the player runs along the top of an express train, jumping between carriages while avoiding enemy knives and obstacles. Halfway along the train, the player enters the train, and Stage 2 begins. The player must then proceed through the carriages, towards the front of the train, so that it can be stopped.
Upon the completion of each level, the game displays the Engrish message "Congraturation! You Sucsess!". The game then repeats from Stage 1, with more enemies. Enemies, known as "redmen", initially pursue from the rear while the player is on the roof of the train, and from the front once inside, and will throw knives which the player must dodge by ducking under or jumping over them. In addition, once inside the train, the player can jump up and hang from the overhead straps out of the way of the redmen. However, ghosts flit up and down the carriages, making it extremely dangerous to stay there too long. Once a few levels have been completed, redmen will approach from both front and rear.
The player has only two weapons at his disposal. When on the roof of the train, he can catch birds that fly overhead and then release them to run along the carriage and knock the redmen off, as well as high kicking them. Whilst inside, the high kick is the only option.
Reception
Stop the Express was rated as the 4th best Spectrum game by Your Sinclair, in their list of the top 100 Spectrum games. Retro Gamer, meanwhile, ranked it as the twelfth best game for the Spectrum.
Legacy
An NES/Famicom port was planned, but due to only having the first train level, three levels were added and became Challenger, which was released only in Japan.
References
External
|
https://en.wikipedia.org/wiki/Methyl%20cellulose
|
Methyl cellulose (or methylcellulose) is a compound derived from cellulose. It is sold under a variety of trade names and is used as a thickener and emulsifier in various food and cosmetic products, and also as a bulk-forming laxative. Like cellulose, it is not digestible, not toxic, and not an allergen.
In addition to culinary uses, it is used in arts and crafts such as Papier-mâché and is often the main ingredient of wallpaper paste.
In 2020, it was the 422nd most commonly prescribed medication in the United States, with more than 100 thousand prescriptions.
Uses
Methyl cellulose has a wide range of uses.
Medical
Constipation
Methyl cellulose is used to treat constipation. Effects generally occur within three days. It is taken by mouth and is recommended to be taken with sufficient water. Side effects may include abdominal pain. It is classified as a bulk-forming laxative. It works by increasing the amount of stool present, which improves intestinal contractions.
It is available over the counter. It is sold under the brand name Citrucel among others.
Artificial tears and saliva
The lubricating property of methylcellulose is of particular benefit in the treatment of dry eyes. Solutions containing methyl cellulose or similar cellulose derivatives are used as substitute for tears or saliva if the natural production of these fluids is disturbed.
Medication manufacturing
Methyl cellulose is used in the manufacture of drug capsules; its edible and nontoxic properties provide a vegetarian alternative to the use of gelatin.
Consumer products
Thickener and emulsifier
Methyl cellulose is occasionally added to hair shampoos, toothpastes and liquid soaps to give them their characteristic thick consistency. This is also done for foods, for example ice cream or croquettes. Methyl cellulose is also an important emulsion stabilizer, preventing the separation of two mixed liquids.
Food
The E number of methyl cellulose as food additive is E461. E464 is
|
https://en.wikipedia.org/wiki/Tsallis%20entropy
|
In physics, the Tsallis entropy is a generalization of the standard Boltzmann–Gibbs entropy.
Overview
The concept was introduced in 1988 by Constantino Tsallis as a basis for generalizing the standard statistical mechanics and is identical in form to Havrda–Charvát structural α-entropy, introduced in 1967 within information theory. In scientific literature, the physical relevance of the Tsallis entropy has been debated. However, from the years 2000 on, an increasingly wide spectrum of natural, artificial and social complex systems have been identified which confirm the predictions and consequences that are derived from this nonadditive entropy, such as nonextensive statistical mechanics, which generalizes the Boltzmann–Gibbs theory.
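For reference, the standard discrete form of the entropy (not quoted in the excerpt above, but the usual textbook definition) is

S_q = \frac{k}{q-1}\left(1 - \sum_i p_i^{\,q}\right),
\qquad
\lim_{q \to 1} S_q = -k \sum_i p_i \ln p_i ,

so the Boltzmann–Gibbs entropy is recovered as q → 1. For q ≠ 1 the entropy is nonadditive: for independent systems A and B, S_q(A+B) = S_q(A) + S_q(B) + (1−q) S_q(A) S_q(B)/k, which is the sense in which the associated statistical mechanics is called nonextensive.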
Among the various experimental verifications and applications presently available in the literature, the following ones deserve a special mention:
The distribution characterizing the motion of cold atoms in dissipative optical lattices predicted in 2003 and observed in 2006.
The fluctuations of the magnetic field in the solar wind enabled the calculation of the q-triplet (or Tsallis triplet).
The velocity distributions in a driven dissipative dusty plasma.
Spin glass relaxation.
Trapped ion interacting with a classical buffer gas.
High energy collisional experiments at LHC/CERN (CMS, ATLAS and ALICE detectors) and RHIC/Brookhaven (STAR and PHENIX detectors).
Among the various available theoretical results which clarify the physical conditions under which Tsallis entropy and associated statistics apply, the following ones can be selected:
Anomalous diffusion.
Uniqueness theorem.
Sensitivity to initial conditions and entropy production at the edge of chaos.
Probability sets that make the nonadditive Tsallis entropy to be extensive in the thermodynamical sense.
Strongly quantum entangled systems and thermodynamics.
Thermostatistics of overdamped motion of interacting particles.
Nonlinear generalizations of the Schroedinger
|
https://en.wikipedia.org/wiki/Wireless%20Intelligent%20Network
|
Wireless Intelligent Network (also referred to as a WIN) is a concept developed by the TR-45 Mobile and Personal Communications Systems Standards engineering committee of the Telecommunications Industry Association (TIA). Its objective is to transport the resources of the Intelligent Network to the wireless network, utilizing the TIA-41 set of technical standards. Basing WIN standards on this protocol allows changing to an intelligent network without making current network infrastructure obsolete.
Overview
Today's wireless subscribers are much more sophisticated telecommunications users than they were five years ago. No longer satisfied with just completing a clear call, today's subscribers demand innovative ways to use the wireless phone. They want multiple services that allow them to handle or select incoming calls in a variety of ways.
Enhanced services are very important to wireless customers. They have come to expect, for instance, services such as caller ID and voice messaging bundled in the package when they buy and activate a cellular or personal communications service (PCS) phone. Whether prepaid, voice/data messaging, Internet surfing, or location-sensitive billing, enhanced services will become an important differentiator in an already crowded, competitive service-provider market. Enhanced services will also entice potentially new subscribers to sign up for service and will drive up airtime through increased usage of PCS or cellular services. As the wireless market becomes increasingly competitive, rapid deployment of enhanced services becomes critical to a successful wireless strategy.
Intelligent Network (IN) solutions have revolutionized wireline networks. Rapid creation and deployment of services has become the hallmark of a wireline network based on IN concepts. Wireless Intelligent Network (WIN) will bring those same successful strategies into the wireless networks.
The evolution of wireless networks to a WIN concept of service deployment deliv
|
https://en.wikipedia.org/wiki/Branching%20quantifier
|
In logic a branching quantifier, also called a Henkin quantifier, finite partially ordered quantifier or even nonlinear quantifier, is a partial ordering
⟨Qx_1, …, Qx_n⟩
of quantifiers for Q ∈ {∀,∃}. It is a special case of a generalized quantifier. In classical logic, quantifier prefixes are linearly ordered, so that the value of a variable y_m bound by a quantifier Q_m depends on the values of the variables
y_1, …, y_{m−1}
bound by the quantifiers
Qy_1, …, Qy_{m−1}
preceding Q_m. In a logic with (finite) partially ordered quantification this is not in general the case.
Branching quantification first appeared in a 1959 conference paper of Leon Henkin. Systems of partially ordered quantification are intermediate in strength between first-order logic and second-order logic. They are being used as a basis for Hintikka's and Gabriel Sandu's independence-friendly logic.
Definition and properties
The simplest Henkin quantifier is the prefix
(∀x ∃y)
(∀z ∃w) φ(x, y, z, w),
in which the choice of y depends only on x and the choice of w depends only on z.
It (in fact every formula with a Henkin prefix, not just the simplest one) is equivalent to its second-order Skolemization, i.e.
∃f ∃g ∀x ∀z φ(x, f(x), z, g(z)).
It is also powerful enough to define the quantifier ∃^{≥ω} ("there are infinitely many"), essentially by using the branching prefix to assert that the set of witnesses admits an injective self-map that misses some element, i.e. is Dedekind-infinite.
Several things follow from this, including the nonaxiomatizability of first-order logic extended with the Henkin quantifier (first observed by Ehrenfeucht), and its equivalence to the Σ^1_1-fragment of second-order logic (existential second-order logic)—the latter result published independently in 1970 by Herbert Enderton and W. Walkoe.
The following quantifiers are also definable by the Henkin quantifier.
Rescher: "The number of φs is less than or equal to the number of ψs"
Härtig: "The φs are equinumerous with the ψs"
Chang: "The number of φs is equinumerous with the domain of the model"
The Henkin quantifier can itself be expressed as a type (4) Lindström quantifier.
Relation to natural languages
Hintikka in a 1973 paper advanced the hypothesis that some sentences in natural languages are best understood in terms of branching quantifiers, for example: "some relative of each
|
https://en.wikipedia.org/wiki/Interconnect%20agreement
|
An interconnect agreement is a business contract between telecommunications organizations for the purpose of interconnecting their networks and exchanging telecommunications traffic. Interconnect agreements are found both in the public switched telephone network and the Internet.
In the public switched telephone network, an interconnect agreement invariably involves settlement fees based on call source and destination, connection times and duration, when these fees do not cancel out between operators.
On the Internet, where the concept of a "call" is generally hard to define, settlement-free peering and Internet transit are common forms of interconnection. A contract for interconnection within the Internet is usually called a peering agreement.
Interconnect agreements are typically complex contractual agreements involving payment schemes and schedules, coordination of routing policies, acceptable use policies, traffic balancing requirements, technical standards, coordination of network operations, dispute resolution, etc. Legal and regulatory requirements are often an issue. For example, network operators may be forced by law to interconnect with their competitors. In the United States, the Telecommunications Act of 1996 mandated methods of interconnection and the compensation models for doing so.
External links
Telecommunications law
Internet architecture
Contract law
|
https://en.wikipedia.org/wiki/Electrical%20resonance
|
Electrical resonance occurs in an electric circuit at a particular resonant frequency when the impedances or admittances of circuit elements cancel each other. In some circuits, this happens when the impedance between the input and output of the circuit is almost zero and the transfer function is close to one.
Resonant circuits exhibit ringing and can generate higher voltages or currents than are fed into them. They are widely used in wireless (radio) transmission for both transmission and reception.
LC circuits
Resonance of a circuit involving capacitors and inductors occurs because the collapsing magnetic field of the inductor generates an electric current in its windings that charges the capacitor, and then the discharging capacitor provides an electric current that builds the magnetic field in the inductor. This process is repeated continually. An analogy is a mechanical pendulum, and both are a form of simple harmonic oscillator.
At resonance, the series impedance of the LC circuit is at a minimum and the parallel impedance is at a maximum. Resonance is used for tuning and filtering, because it occurs at a particular frequency for given values of inductance and capacitance. It can be detrimental to the operation of communications circuits by causing unwanted sustained and transient oscillations that may cause noise, signal distortion, and damage to circuit elements.
Parallel resonance or near-to-resonance circuits can be used to prevent the waste of electrical energy, which would otherwise occur while the inductor built its field or the capacitor charged and discharged. As an example, asynchronous motors waste inductive current while synchronous ones waste capacitive current. The use of the two types in parallel makes the inductor feed the capacitor, and vice versa, maintaining the same resonant current in the circuit, and converting all the current into useful work.
Since the inductive reactance and the capacitive reactance are of equal magnitude,
ωL = 1/(ωC),
so
ω = 1/√(LC),
where ω = 2πf is the resonant angular frequency in radians per second, f is the resonant frequency in hertz, L is the inductance in henries, and C is the capacitance in farads.
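As a quick numerical illustration of the formula (a minimal sketch; the component values below are arbitrary assumptions, not taken from the article):

import math

def resonant_frequency(inductance_h, capacitance_f):
    """Resonant frequency in hertz of an ideal LC circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# A 100 µH inductor with a 253 pF capacitor resonates near 1 MHz,
# a typical medium-wave (AM broadcast band) tuning point.
print(resonant_frequency(100e-6, 253e-12))  # ≈ 1.0e6 Hz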
|
https://en.wikipedia.org/wiki/Ecological%20pyramid
|
An ecological pyramid (also trophic pyramid, Eltonian pyramid, energy pyramid, or sometimes food pyramid) is a graphical representation designed to show the biomass or bioproductivity at each trophic level in an ecosystem.
A pyramid of energy shows how much energy is retained in the form of new biomass from each trophic level, while a pyramid of biomass shows how much biomass (the amount of living or organic matter present in an organism) is present in the organisms. There is also a pyramid of numbers representing the number of individual organisms at each trophic level. Pyramids of energy are normally upright, but other pyramids can be inverted (such as a pyramid of biomass for a marine region) or take other shapes (such as a spindle-shaped pyramid).
Ecological pyramids begin with producers on the bottom (such as plants) and proceed through the various trophic levels (such as herbivores that eat plants, then carnivores that eat flesh, then omnivores that eat both plants and flesh, and so on). The highest level is the top of the food chain.
Biomass can be measured by a bomb calorimeter.
Pyramid of Energy
A pyramid of energy or pyramid of productivity shows the production or turnover (the rate at which energy or mass is transferred from one trophic level to the next) of biomass at each trophic level. Instead of showing a single snapshot in time, productivity pyramids show the flow of energy through the food chain. Typical units are grams per square meter per year or calories per square meter per year. As with the others, this graph shows producers at the bottom and higher trophic levels on top.
When an ecosystem is healthy, this graph produces a standard ecological pyramid. This is because, in order for the ecosystem to sustain itself, there must be more energy at lower trophic levels than there is at higher trophic levels. This allows organisms on the lower levels to not only maintain a stable population, but also to transfer energy up the pyramid. The exception to this generalizati
|
https://en.wikipedia.org/wiki/Education%20and%20training%20of%20electrical%20and%20electronics%20engineers
|
Both electrical and electronics engineers typically possess an academic degree with a major in electrical/electronics engineering. The length of study for such a degree is usually three or four years and the completed degree may be designated as a Bachelor of Engineering, Bachelor of Science or Bachelor of Applied Science depending upon the university.
Scope of undergraduate education
The degree generally includes units covering physics, mathematics, project management and specific topics in electrical and electronics engineering. Initially such topics cover most, if not all, of the sub fields of electrical engineering. Students then choose to specialize in one or more sub fields towards the end of the degree. In most countries, a bachelor's degree in engineering represents the first step towards certification and the degree program itself is certified by a professional body. After completing a certified degree program the engineer must satisfy a range of requirements (including work experience requirements) before being certified. Once certified the engineer is designated the title of Professional Engineer (in the United States and Canada), Chartered Engineer (in the United Kingdom, Ireland, India, Pakistan, South Africa and Zimbabwe), Chartered Professional Engineer (in Australia) or European Engineer (in much of the European Union).
Post graduate studies
Electrical engineers can also choose to pursue a postgraduate degree such as a master of engineering, a doctor of philosophy in engineering or an engineer's degree. The master and engineer's degree may consist of either research, coursework or a mixture of the two. The doctor of philosophy consists of a significant research component and is often viewed as the entry point to academia. In the United Kingdom and various other European countries, the master of engineering is often considered an undergraduate degree of slightly longer duration than the bachelor of engineering.
Typical electrical/electronics engin
|
https://en.wikipedia.org/wiki/Loudness%20war
|
The loudness war (or loudness race) is a trend of increasing audio levels in recorded music, which reduces audio fidelity and—according to many critics—listener enjoyment. Increasing loudness was first reported as early as the 1940s, with respect to mastering practices for 7-inch singles. The maximum peak level of analog recordings such as these is limited by varying specifications of electronic equipment along the chain from source to listener, including vinyl and Compact Cassette players. The issue garnered renewed attention starting in the 1990s with the introduction of digital signal processing capable of producing further loudness increases.
With the advent of the compact disc (CD), music is encoded to a digital format with a clearly defined maximum peak amplitude. Once the maximum amplitude of a CD is reached, loudness can be increased still further through signal processing techniques such as dynamic range compression and equalization. Engineers can apply an increasingly high ratio of compression to a recording until it more frequently peaks at the maximum amplitude. In extreme cases, efforts to increase loudness can result in clipping and other audible distortion. Modern recordings that use extreme dynamic range compression and other measures to increase loudness therefore can sacrifice sound quality to loudness. The competitive escalation of loudness has led music fans and members of the musical press to refer to the affected albums as "victims of the loudness war".
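To make the mechanism concrete, the following is a minimal sketch (NumPy-based, using a synthetic tone in place of a real recording; none of it comes from the article) of the crudest form of the practice: raise the gain, hard-limit whatever would exceed digital full scale, and observe that the average (RMS) level rises while the peak stays pinned at the maximum.

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 44_100, endpoint=False)
# A synthetic "recording": a 440 Hz tone plus a little noise, peaking well below full scale.
signal = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.05 * rng.standard_normal(t.size)

def rms_dbfs(x):
    """Average level in decibels relative to digital full scale (dBFS)."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# "Winning" the loudness war: push the level up 12 dB and clip anything above 0 dBFS.
louder = np.clip(signal * 4.0, -1.0, 1.0)

print(f"original: peak {np.max(np.abs(signal)):.2f}, RMS {rms_dbfs(signal):.1f} dBFS")
print(f"limited : peak {np.max(np.abs(louder)):.2f}, RMS {rms_dbfs(louder):.1f} dBFS")
# The second version is markedly louder on average, but the flattened, clipped
# peaks are exactly the audible distortion the article describes.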
History
The practice of focusing on loudness in audio mastering can be traced back to the introduction of the compact disc, but also existed to some extent when the vinyl phonograph record was the primary released recording medium and when 7-inch singles were played on jukebox machines in clubs and bars. The so-called wall of sound (not to be confused with the Phil Spector Wall of Sound) formula preceded the loudness war, but achieved its goal using a variety of techniques, such as instrument
|
https://en.wikipedia.org/wiki/Aposymbiosis
|
Aposymbiosis occurs when symbiotic organisms live apart from one another (for example, a clownfish living independently of a sea anemone). Studies have shown that the lifecycles of both the host and the symbiont are affected in some way, usually negative, and that for obligate symbiosis the effects can be drastic. Aposymbiosis is distinct from exsymbiosis, which occurs when organisms are recently separated from a symbiotic association. Because symbionts can be vertically transmitted from parent to offspring or horizontally transmitted from the environment, the presence of an aposymbiotic state suggests that transmission of the symbiont is horizontal. A classical example of a symbiotic relationship with an aposymbiotic state is the Hawaiian bobtail squid Euprymna scolopes and the bioluminescent bacterium Vibrio fischeri. While the nocturnal squid hunts, the bacteria emit light of similar intensity to moonlight, which camouflages the squid from predators. Juveniles are colonized within hours of hatching, and Vibrio must outcompete other bacteria in the seawater through a system of recognition and infection.
Use in research
Aposymbiotic organisms can be used as models to observe a variety of processes. Aposymbiotic Euprymna juveniles have been studied throughout colonization in order to determine the system of recognizing Vibrio fischeri in seawater. Coral polyps without their symbiont algae are models for coral calcification and the effects of the algae on coral pH regulation.
Aposymbiotic insects are used to model insect-bacteria relationships and modes of infection. These models are also used in arthropod vectors and disease transmission. Wolbachia species are common insect endosymbionts and investigation into this species has yielded potential human health implications. Additionally, aposymbiotic wasps without Wolbachia are unable to reproduce. This relationship between Asobara tabida wasps and Wolbachia is an important model for insect microbiome study.
Health
|
https://en.wikipedia.org/wiki/Microsoft%20Write
|
Microsoft Write is a basic word processor included with Windows 1.0 and later, until Windows NT 3.51. Throughout its lifespan it was minimally updated, and is comparable to early versions of MacWrite. Early versions of Write only work with Write Document (.wri) files, which are a subset of the Rich Text Format (RTF). After Windows 3.0, Write became capable of reading and composing early Word Document (.doc) files. With Windows 3.1, Write became OLE capable. In Windows 95, Write was replaced with WordPad; attempting to open Write from the Windows folder will open WordPad instead.
Being a word processor, Write features additional document formatting features that are not found in Notepad (a simple text editor), such as a choice of font, text decorations and paragraph indentation for different parts of the document. Unlike versions of WordPad before Windows 7, Write could justify a paragraph.
Platforms
Atari ST
In 1986, Atari announced an agreement with Microsoft to bring Microsoft Write to the Atari ST.
Unlike the Windows version, Microsoft Write for the Atari ST is the Atari version of Microsoft Word 1.05 released for the Apple Macintosh, while sharing the same name as the program included with Microsoft Windows during the 1980s and early 1990s. While the program was announced in 1986, various delays caused it to arrive in 1988. The Atari version was a one-time release and was never updated.
Microsoft Write for the Atari ST retailed at $129.95 and is one of two high-profile PC word processors that were released on the Atari platform. The other application is WordPerfect.
Macintosh
In October 1987, Microsoft released Microsoft Write for Macintosh. Write is a version of Microsoft Word with limited features that Microsoft hoped would replace aging MacWrite in the Macintosh word processor market. Write was priced well below Word, though at the time MacWrite was included with new Macintoshes. Write is best described as Word locked in "Short Menus" mode, and a
|
https://en.wikipedia.org/wiki/Defense%20Data%20Network
|
The Defense Data Network (DDN) was a computer networking effort of the United States Department of Defense from 1983 through 1995. It was based on ARPANET technology.
History
As an experiment, from 1971 to 1977, the Worldwide Military Command and Control System (WWMCCS) purchased and operated an ARPANET-type system from BBN Technologies for the Prototype WWMCCS Intercomputer Network (PWIN). The experiments proved successful enough that it became the basis of the much larger WIN system. Six initial WIN sites in 1977 increased to 20 sites by 1981.
In 1975, the Defense Communications Agency (DCA) took over operation of the ARPANET as it became an operational tool in addition to an ongoing research project. At that time, the Automatic Digital Network (AUTODIN) carried most of the Defense Department's message traffic. Starting in 1972, attempts had been made to introduce some packet switching into its planned replacement, AUTODIN II. AUTODIN II development proved unsatisfactory, however, and in 1982, AUTODIN II was canceled, to be replaced by a combination of several packet-based networks that would connect military installations.
The DCA used "Defense Data Network" (DDN) as the program name for this new network. Under its initial architecture, as developed by the Institute for Defense Analysis, the DDN would consist of two separate instances: the unclassified MILNET, which would be split off the ARPANET; and a classified network, also based on ARPANET technology, which would provide services for WIN, DODIIS, and SACDIN. C/30 packet switches, developed by BBN Technologies as upgraded Interface Message Processors, would provide the network technology. End-to-end encryption would be provided by ARPANET encryption devices, namely the Internet Private Line Interface (IPLI) or Blacker.
After MILNET was split away, the ARPANET would continue to be used as an Internet backbone for researchers, but would be slowly phased out. Both networks carried unclassified information, and we
|
https://en.wikipedia.org/wiki/Hardware%20architecture
|
In engineering, hardware architecture refers to the identification of a system's physical components and their interrelationships. This description, often called a hardware design model, allows hardware designers to understand how their components fit into a system architecture and provides to software component designers important information needed for software development and integration. Clear definition of a hardware architecture allows the various traditional engineering disciplines (e.g., electrical and mechanical engineering) to work more effectively together to develop and manufacture new machines, devices and components.
Hardware is also an expression used within the computer engineering industry to explicitly distinguish the (electronic computer) hardware from the software that runs on it. But hardware, within the automation and software engineering disciplines, need not simply be a computer of some sort. A modern automobile runs vastly more software than the Apollo spacecraft. Also, modern aircraft cannot function without running tens of millions of computer instructions embedded and distributed throughout the aircraft and resident in both standard computer hardware and in specialized hardware components such as IC wired logic gates, analog and hybrid devices, and other digital components. The need to effectively model how separate physical components combine to form complex systems is important over a wide range of applications, including computers, personal digital assistants (PDAs), cell phones, surgical instrumentation, satellites, and submarines.
Hardware architecture is the representation of an engineered (or to be engineered) electronic or electromechanical hardware system, and the process and discipline for effectively implementing the design(s) for such a system. It is generally part of a larger integrated system encompassing information, software, and device prototyping.
It is a representation because it is used to convey information abou
|
https://en.wikipedia.org/wiki/Lacandon%20Jungle
|
The Lacandon Jungle (Spanish: Selva Lacandona) is an area of rainforest which stretches from Chiapas, Mexico, into Guatemala. The heart of this rainforest is located in the Montes Azules Biosphere Reserve in Chiapas near the border with Guatemala in the Montañas del Oriente region of the state. Although much of the jungle outside the reserve has been cleared, the Lacandon is still one of the largest montane rainforests in Mexico. It contains 1,500 tree species, 33% of all Mexican bird species, 25% of all Mexican animal species, 56% of all Mexican diurnal butterflies and 16% of all Mexico's fish species.
The Lacandon in Chiapas is also home to a number of important Mayan archaeological sites including Palenque, Yaxchilan and Bonampak, with numerous smaller sites which remain partially or fully unexcavated. This rainforest, especially the area inside the Biosphere Reserve, is a source of political tension, pitting the EZLN or Zapatistas and their indigenous allies, who want to farm the land, against international environmental groups and the Lacandon Maya, the original indigenous group of the area and the one that holds the title to most of the lands in Montes Azules.
Environment
The jungle has approximately 1.9 million hectares stretching from southeast Chiapas into northern Guatemala and into the southern Yucatán Peninsula. The Chiapas portion is located on the Montañas del Oriente (Eastern Mountains), centered on a series of canyon-like valleys called the Cañadas, between smaller mountain ridges oriented from northwest to southeast. It is bordered by Guatemala on two sides, with Comitán de Domínguez to the southwest and the city of Palenque to the north. Dividing the Chiapas part of the forest from the Guatemalan side is the Usumacinta River, which is the largest river in Mexico and the seventh largest in the world based on volume of water.
The core of the Chiapas forest is the Montes Azules Biosphere Reserve, but it also includes some other protected areas
|
https://en.wikipedia.org/wiki/Design%20load
|
In a general sense, the design load is either the maximum amount of something a system is designed to handle or the maximum amount of something that the system can produce, which are very different meanings. For example, a crane with a design load of 20 tons is designed to be able to lift loads that weigh 20 tons or less. However, when a failure could be catastrophic, such as a crane dropping its load or collapsing entirely, a factor of safety is necessary. Applying a safety factor of between 4 and 10, for instance, such a crane would be rated to lift only about 2 to 5 tons at most.
In structural design, a design load is greater than the load which the system is expected to support. This is because engineers incorporate a safety factor in their design, in order to ensure that the system will be able to support at least the expected loads (called specified loads), despite any problems with construction, materials, etc. that go unnoticed during construction.
A heater would have a general design load, meaning the maximum amount of heat it can produce. A bridge would have a specified load, with the design load being determined by engineers and applied as a theoretical load intended to ensure that the real-world capacity exceeds the specified load.
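A minimal sketch of the relationship just described (the helper name and the numbers are illustrative assumptions, not values from the article):

def design_load(specified_load, safety_factor):
    """Design load = specified (expected) load multiplied by a factor of safety."""
    return specified_load * safety_factor

# A bridge expected to carry at most 400 tonnes, designed with a safety factor of 2,
# is engineered to support 800 tonnes.
print(design_load(400.0, 2.0))  # 800.0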
See also
Limit states design
Factor of safety
Specified load
Engineering concepts
|
https://en.wikipedia.org/wiki/Orthotropic%20material
|
In material science and solid mechanics, orthotropic materials have material properties at a particular point which differ along three orthogonal axes, where each axis has twofold rotational symmetry. These directional differences in strength can be quantified with Hankinson's equation.
They are a subset of anisotropic materials, because their properties change when measured from different directions.
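As a concrete illustration of Hankinson's equation in its common form (exponent 2), the sketch below computes the strength of an orthotropic material loaded at an angle to the grain; the function name and the sample strengths are assumptions for the example, not values from the article.

import math

def hankinson_strength(parallel, perpendicular, theta_deg, n=2.0):
    """Hankinson's formula: strength at an angle theta (degrees) to the grain,
    given the strengths parallel and perpendicular to the grain."""
    s = math.sin(math.radians(theta_deg))
    c = math.cos(math.radians(theta_deg))
    return (parallel * perpendicular) / (parallel * s ** n + perpendicular * c ** n)

# Example: a wood with 40 MPa strength along the grain and 5 MPa across it.
for angle in (0, 30, 60, 90):
    print(angle, round(hankinson_strength(40.0, 5.0, angle), 1))
# 0 degrees gives 40.0 MPa, 90 degrees gives 5.0 MPa, with a smooth drop in between.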
A familiar example of an orthotropic material is wood. In wood, one can define three mutually perpendicular directions at each point in which the properties are different. It is most stiff (and strong) along the grain (axial direction), because most cellulose fibrils are aligned that way. It is usually least stiff in the radial direction (between the growth rings), and is intermediate in the circumferential direction. This anisotropy was provided by evolution, as it best enables the tree to remain upright.
Because the preferred coordinate system is cylindrical-polar, this type of orthotropy is also called polar orthotropy.
Another example of an orthotropic material is sheet metal formed by squeezing thick sections of metal between heavy rollers. This flattens and stretches its grain structure. As a result, the material becomes anisotropic — its properties differ between the direction it was rolled in and each of the two transverse directions. This method is used to advantage in structural steel beams, and in aluminium aircraft skins.
If orthotropic properties vary between points inside an object, it possesses both orthotropy and inhomogeneity. This suggests that orthotropy is the property of a point within an object rather than for the object as a whole (unless the object is homogeneous). The associated planes of symmetry are also defined for a small region around a point and do not necessarily have to be identical to the planes of symmetry of the whole object.
Orthotropic materials are a subset of anisotropic materials; their properties depend on the directio
|
https://en.wikipedia.org/wiki/Trijet
|
A trijet is a jet aircraft powered by three jet engines. In general, passenger airline trijets are considered to be second-generation jet airliners, due to their innovative engine locations, in addition to the advancement of turbofan technology. Trijets are more efficient than quadjets, but not as efficient as twinjets, which replaced trijets as larger and more reliable turbofan engines became available.
Design
One consideration with trijets is positioning the central engine. This is usually accomplished by placing the engine along the centerline, but still poses difficulties. The most common configuration is having the central engine located in the rear fuselage and supplied with air by an S-shaped duct; this is used on the Hawker Siddeley Trident, Boeing 727, Tupolev Tu-154, Lockheed L-1011 TriStar, and, more recently, the Dassault Falcon 7X. The S-duct has low drag, and since the third engine is mounted closer to the centerline, the aircraft will normally be easy to handle in the event of an engine failure. However, S-duct designs are more complex and costlier, particularly for an airliner. Furthermore, the central engine bay would require structural changes in the event of a major re-engining (replacement of the engines with a newer type). For example, the 727's central bay was only wide enough to fit a low-bypass turbofan and not the newer high-bypass turbofans which were quieter and more powerful. Boeing decided that a redesign was too expensive and ended its production instead of pursuing further development. The Lockheed TriStar's tail section was too short to fit an existing two-spool engine as it was designed only to accommodate the new three-spool Rolls-Royce RB211 engine, and delays in the RB211's development, in turn, pushed back the TriStar's entry into service, which affected sales.
The McDonnell Douglas DC-10 and related MD-11 use an alternative "straight-through" central engine layout, which allows for easier installation, modification, and access. It also has the a
|
https://en.wikipedia.org/wiki/Kistler%20Prize
|
The Kistler Prize (1999-2011) was awarded annually to recognize original contributions "to the understanding of the connection between human heredity and human society," and was named after its benefactor, physicist and inventor Walter Kistler. The prize was awarded by the Foundation For the Future and it included a cash award of US$100,000 and a 200-gram gold medallion.
Recipients
The recipients have been:
2000 – Edward O. Wilson
2001 – Richard Dawkins
2002 – Luigi Luca Cavalli-Sforza
2003 – Arthur Jensen
2004 – Vincent Sarich
2005 – Thomas J. Bouchard
2006 – Doreen Kimura
2007 – Spencer Wells
2008 – Craig Venter
2009 – Svante Pääbo
2010 – Leroy Hood
2011 – Charles Murray
Walter P. Kistler Book Award
The Walter P. Kistler Book Award was established in 2003 to recognize authors of science books that "significantly increase the knowledge and understanding of the public regarding subjects that will shape the future of our species." The award includes a cash prize of US$10,000 and is formally presented in ceremonies that are open to the public.
The recipients have been:
2003 – Gregory Stock for Redesigning Humans: Our Inevitable Genetic Future
2004 – Spencer Wells for The Journey of Man: A Genetic Odyssey
2005 – Steven Pinker for The Blank Slate
2006 – William H. Calvin for A Brain for All Seasons: Human Evolution and Abrupt Climate Change
2007 – Eric Chaisson for Epic of Evolution: Seven Ages of the Cosmos
2008 – Christopher Stringer for Homo britannicus: The Incredible Story of Human Life in Britain
2009 – David Archer for The Long Thaw: How Humans are Changing the Next 100,000 Years of Earth's Climate
2011 – Laurence C. Smith for The World in 2050: Four Forces Shaping Civilization's Northern Future
Foundation For the Future
The mission of the Foundation For the Future is to increase and diffuse knowledge concerning the long-term future of humanity. It conducts a broad range of programs and activities to promote an understanding of the factors that
|
https://en.wikipedia.org/wiki/4Pi%20microscope
|
A 4Pi microscope is a laser scanning fluorescence microscope with an improved axial resolution. With it the typical range of the axial resolution of 500–700 nm can be improved to 100–150 nm, which corresponds to an almost spherical focal spot with 5–7 times less volume than that of standard confocal microscopy.
Working principle
The improvement in resolution is achieved by using two opposing objective lenses, both of which are focused on the same geometrical location. The difference in optical path length through the two objective lenses is also carefully adjusted to be minimal. By this method, molecules residing in the common focal area of both objectives can be illuminated coherently from both sides and the reflected or emitted light can also be collected coherently, i.e. coherent superposition of emitted light on the detector is possible. The solid angle that is used for illumination and detection is increased and approaches its maximum. In this case the sample is illuminated and detected from all sides simultaneously.
The operation mode of a 4Pi microscope is shown in the figure. The laser light is divided by a beam splitter and directed by mirrors towards the two opposing objective lenses. At the common focal point superposition of both focused light beams occurs. Excited molecules at this position emit fluorescence light, which is collected by both objective lenses, combined by the same beam splitter and deflected by a dichroic mirror onto a detector. There superposition of both emitted light pathways can take place again.
In the ideal case each objective lens can collect light from a solid angle of 2π. With two objective lenses one can collect from every direction (solid angle 4π). The name of this type of microscopy is derived from the maximal possible solid angle for excitation and detection. Practically, one can achieve only aperture angles of about 140° for an objective lens, which corresponds to a solid angle of about 1.3π.
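The 1.3π figure follows from the standard spherical-cap expression for the solid angle of a cone of full aperture angle α; as a quick check (the numbers here are derived from this formula rather than quoted from the article):

\Omega = 2\pi\left(1 - \cos\frac{\alpha}{2}\right),
\qquad
\alpha = 140^{\circ} \;\Rightarrow\; \Omega = 2\pi\,(1 - \cos 70^{\circ}) \approx 2\pi \times 0.658 \approx 1.3\,\pi .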
The microscope can be operated in three different w
|
https://en.wikipedia.org/wiki/Mercury%20%28cipher%20machine%29
|
Mercury was a British cipher machine used by the Air Ministry from 1950 until at least the early 1960s. Mercury was an online rotor machine descended from Typex, but modified to achieve a longer cycle length using a so-called double-drum basket system.
History
Mercury was designed by Wing Commander E. W. Smith and F. Rudd, who were awarded £2,250 and £750 respectively in 1960 for their work in the design of the machine. E. W. Smith, one of the developers of TypeX, had designed the double-drum basket system in 1943, on his own initiative, to fulfil the need for an on-line system.
Mercury prototypes were operational by 1948, and the machine was in use by 1950. Over 200 Mercury machines had been made by 1959, with over £250,000 spent on its production. Mercury links were installed between the UK and various overseas stations, including in Canada, Australia, Singapore, Cyprus, Germany, France, the Middle East, Washington, Nairobi and Colombo. The machine was used for UK diplomatic messaging for more or less a decade, but saw almost no military use.
In 1960, it was anticipated that the machine would remain in use until 1963, when it would be made obsolete by the arrival of BID 610 (Alvis) equipment.
A miniaturised version of Mercury was designed, named Ariel, but this machine appears not to have been adopted for operational use.
Design
In the Mercury system, two series of rotors were used. The first series, dubbed the control maze, had four rotors, and stepped cyclometrically as in Typex. Five outputs from the control maze were used to determine the stepping of five rotors in the second series of rotors, the message maze, the latter used to encrypt and decrypt the plaintext and ciphertext. A sixth rotor in the message maze was controlled independently and stepped in the opposite direction to the others. All ten rotors were interchangeable in any part of either maze. Using rotors to control the stepping of other rotors was a feature of an earlier cipher machine, the US
|
https://en.wikipedia.org/wiki/Scaffold%20protein
|
In biology, scaffold proteins are crucial regulators of many key signalling pathways. Although scaffolds are not strictly defined in function, they are known to interact and/or bind with multiple members of a signalling pathway, tethering them into complexes. In such pathways, they regulate signal transduction and help localize pathway components (organized in complexes) to specific areas of the cell such as the plasma membrane, the cytoplasm, the nucleus, the Golgi, endosomes, and the mitochondria.
History
The first signaling scaffold protein discovered was the Ste5 protein from the yeast Saccharomyces cerevisiae. Three distinct domains of Ste5 were shown to associate with the protein kinases Ste11, Ste7, and Fus3 to form a multikinase complex.
Function
Scaffold proteins act in at least four ways: tethering signaling components, localizing these components to specific areas of the cell, regulating signal transduction by coordinating positive and negative feedback signals, and insulating correct signaling proteins from competing proteins.
Tethering signaling components
This particular function is considered a scaffold's most basic function. Scaffolds assemble signaling components of a cascade into complexes. This assembly may be able to enhance signaling specificity by preventing unnecessary interactions between signaling proteins, and enhance signaling efficiency by increasing the proximity and effective concentration of components in the scaffold complex. A common example of how scaffolds enhance specificity is a scaffold that binds a protein kinase and its substrate, thereby ensuring specific kinase phosphorylation. Additionally, some signaling proteins require multiple interactions for activation and scaffold tethering may be able to convert these interactions into one interaction that results in multiple modifications. Scaffolds may also be catalytic as interaction with signaling proteins may result in allosteric changes of these signaling component
|
https://en.wikipedia.org/wiki/Scalar%20%28mathematics%29
|
A scalar is an element of a field which is used to define a vector space.
In linear algebra, real numbers, or more generally elements of a field, are called scalars and relate to vectors in an associated vector space through the operation of scalar multiplication (defined in the vector space), in which a vector can be multiplied by a scalar in the defined way to produce another vector. Generally speaking, a vector space may be defined by using any field instead of real numbers (such as complex numbers). Then scalars of that vector space will be elements of the associated field (such as complex numbers).
A scalar product operation – not to be confused with scalar multiplication – may be defined on a vector space, allowing two vectors to be multiplied in the defined way to produce a scalar. A vector space equipped with a scalar product is called an inner product space.
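A small numerical sketch of the two operations just described (NumPy, with arbitrary example values): scalar multiplication returns another vector, while the scalar (inner) product of two vectors returns a scalar.

import numpy as np

v = np.array([1.0, 2.0, 3.0])    # a vector in R^3
w = np.array([4.0, 0.0, -1.0])   # another vector
k = 2.5                          # a scalar, i.e. an element of the field R

print(k * v)         # scalar multiplication: a vector, [ 2.5  5.   7.5]
print(np.dot(v, w))  # scalar (inner) product: a single number, 1.0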
A quantity described by multiple scalars, such as having both direction and magnitude, is called a vector.
The term scalar is also sometimes used informally to mean a vector, matrix, tensor, or other, usually "compound", value that is actually reduced to a single component. Thus, for example, the product of a 1 × n matrix and an n × 1 matrix, which is formally a 1 × 1 matrix, is often said to be a scalar.
The real component of a quaternion is also called its scalar part.
The term scalar matrix is used to denote a matrix of the form kI where k is a scalar and I is the identity matrix.
Etymology
The word scalar derives from the Latin word scalaris, an adjectival form of scala (Latin for "ladder"), from which the English word scale also comes. The first recorded usage of the word "scalar" in mathematics occurs in François Viète's Analytic Art (In artem analyticem isagoge) (1591):
Magnitudes that ascend or descend proportionally in keeping with their nature from one kind to another may be called scalar terms.
(Latin: Magnitudines quae ex genere ad genus sua vi proportionaliter adscendunt vel descendunt, voce
|
https://en.wikipedia.org/wiki/NUbuntu
|
nUbuntu or Network Ubuntu was a project to take the existing Ubuntu operating system LiveCD and full installer and remaster them with the tools needed for penetration testing of servers and networks. The main idea was to keep Ubuntu's ease of use and mix it with popular penetration testing tools. Besides its use for network and server testing, nUbuntu was also intended to become a desktop distribution for advanced Linux users.
Contents
nUbuntu uses the light window manager Fluxbox.
It includes some of the most used security programs for Linux, such as Wireshark, nmap, dSniff, and Ettercap.
History
2005-12-18 - nUbuntu Project is born, developers release Testing 1
2006-01-16 - nUbuntu Live developers release Stable 1
2006-06-26 - nUbuntu Live developers release version 6.06
2006-10-16 - nUbuntu featured in Hacker Japan, a Japanese hacker magazine
2006-11-21 - nUbuntu Live developers release version 6.10
As of April 4, 2010, the official website is closed with no explanation.
Releases
Below is a list of previous and current releases.
Further reading
Russ McRee (Nov 2007) Security testing with nUbuntu, Linux Magazine, issue 84
External links
Operating system distributions bootable from read-only media
Ubuntu derivatives
Pentesting software toolkits
Linux distributions
|
https://en.wikipedia.org/wiki/Hardness
|
In materials science, hardness (antonym: softness) is a measure of the resistance to plastic deformation, such as an indentation (over an area) or a scratch (linear), induced mechanically either by pressing or abrasion. In general, different materials differ in their hardness; for example hard metals such as titanium and beryllium are harder than soft metals such as sodium and metallic tin, or wood and common plastics. Macroscopic hardness is generally characterized by strong intermolecular bonds, but the behavior of solid materials under force is complex; therefore, hardness can be measured in different ways, such as scratch hardness, indentation hardness, and rebound hardness. Hardness is dependent on ductility, elastic stiffness, plasticity, strain, strength, toughness, viscoelasticity, and viscosity. Common examples of hard matter are ceramics, concrete, certain metals, and superhard materials, which can be contrasted with soft matter.
Measures
There are three main types of hardness measurements: scratch, indentation, and rebound. Within each of these classes of measurement there are individual measurement scales. For practical reasons conversion tables are used to convert between one scale and another.
Scratch hardness
Scratch hardness is the measure of how resistant a sample is to fracture or permanent plastic deformation due to friction from a sharp object. The principle is that an object made of a harder material will scratch an object made of a softer material. When testing coatings, scratch hardness refers to the force necessary to cut through the film to the substrate. The most common test is the Mohs scale, which is used in mineralogy. One tool to make this measurement is the sclerometer.
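As a hedged illustration of the scratch-test principle, a toy Python sketch (not a tool described in the article; the table lists the standard Mohs reference minerals):
<nowiki>
# Toy sketch: a harder material scratches a softer one (Mohs reference minerals).
MOHS = {"talc": 1, "gypsum": 2, "calcite": 3, "fluorite": 4, "apatite": 5,
        "orthoclase": 6, "quartz": 7, "topaz": 8, "corundum": 9, "diamond": 10}

def scratches(a, b):
    """Return True if mineral a can scratch mineral b."""
    return MOHS[a] > MOHS[b]

print(scratches("quartz", "apatite"))   # True  (7 > 5)
print(scratches("gypsum", "calcite"))   # False (2 < 3)
</nowiki>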
Another tool used to make these tests is the pocket hardness tester. This tool consists of a scale arm with graduated markings attached to a four-wheeled carriage. A scratch tool with a sharp rim is mounted at a predetermined angle to the testing surface. In order to
|
https://en.wikipedia.org/wiki/Heisenbug
|
In computer programming jargon, a heisenbug is a software bug that seems to disappear or alter its behavior when one attempts to study it. The term is a pun on the name of Werner Heisenberg, the physicist who first asserted the observer effect of quantum mechanics, which states that the act of observing a system inevitably alters its state. In electronics, the traditional term is probe effect, where attaching a test probe to a device changes its behavior.
Similar terms, such as bohrbug, mandelbug, hindenbug, and schrödinbug (see the section on related terms) have been occasionally proposed for other kinds of unusual software bugs, sometimes in jest.
Examples
Heisenbugs occur because common attempts to debug a program, such as inserting output statements or running it with a debugger, usually have the side-effect of altering the behavior of the program in subtle ways, such as changing the memory addresses of variables and the timing of its execution.
One common example of a heisenbug is a bug that appears when the program is compiled with an optimizing compiler, but not when the same program is compiled without optimization (as is often done for the purpose of examining it with a debugger). While debugging, values that an optimized program would normally keep in registers are often pushed to main memory. This may affect, for instance, the result of floating-point comparisons, since the value in memory may have smaller range and accuracy than the value in the register. Similarly, heisenbugs may be caused by side-effects in test expressions used in runtime assertions in languages such as C and C++, where the test expression is not evaluated when assertions are turned off in production code using the NDEBUG macro.
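An analogous illustration in Python rather than C/C++ (a hypothetical sketch; Python's assert statements are skipped when the interpreter runs with the -O option, much like assert() compiled with NDEBUG):
<nowiki>
# Hypothetical heisenbug: a side effect hidden inside an assertion.
# Run normally, drain([1, 2, 3]) returns 3; run with "python -O",
# the assert (and its pop()) is skipped and the loop never terminates.

def drain(queue):
    processed = 0
    while queue:
        assert queue.pop() is not None   # side effect: mutates the queue!
        processed += 1
    return processed

if __name__ == "__main__":
    print(drain([1, 2, 3]))
</nowiki>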
Other common causes of heisenbugs are using the value of a non-initialized variable (which may change its address or initial value during debugging), or following an invalid pointer (which may point to a different place when debugging). Debuggers also co
|
https://en.wikipedia.org/wiki/STEbus
|
The STEbus (also called the IEEE-1000 bus) is a non-proprietary, processor-independent, computer bus with 8 data lines and 20 address lines. It was popular for industrial control systems in the late 1980s and early 1990s before the ubiquitous IBM PC dominated this market. STE stands for STandard Eurocard.
Although no longer competitive in its original market, it is a valid choice for hobbyists wishing to build 'home brew' computer systems. The Z80 and probably the CMOS 65C02 are possible processors to use. The standardized bus allows hobbyists to interface with each other's designs.
Origins
In the early 1980s, there were many proprietary bus systems, each with its own strengths and weaknesses. Most had grown in an ad-hoc manner, typically around a particular microprocessor. The S-100 bus was based on Intel 8080 signals, the STD Bus on Z80 signals, the SS-50 bus on the Motorola 6800, and the G64 bus on 6809 signals. This made it harder to interface other processors. Upgrading to a more powerful processor would subtly change the timings, and timing constraints were not always tightly specified. Nor were electrical parameters and physical dimensions. They usually used edge-connectors for the bus, which were vulnerable to dirt and vibration.
The VMEbus had provided a high-quality solution for high-performance 16-bit processors, using reliable DIN 41612 connectors and well-specified Eurocard board sizes and rack systems. However, these were too costly where an application only needed a modest 8-bit processor.
In the mid-1980s, the STEbus standard addressed these issues by specifying what is rather like a VMEbus simplified for 8-bit processors. The bus signals are sufficiently generic that they are easy for 8-bit processors to interface with. The board size was usually a single-height Eurocard (100 mm x 160 mm) but allowed for double-height boards (233 mm x 160 mm) as well.
The latter positioned the bus connector so that it could neatly merge into VME-bus syste
|
https://en.wikipedia.org/wiki/Concrete%20Mathematics
|
Concrete Mathematics: A Foundation for Computer Science, by Ronald Graham, Donald Knuth, and Oren Patashnik, first published in 1989, is a textbook that is widely used in computer-science departments as a substantive but light-hearted treatment of the analysis of algorithms.
Contents and history
The book provides mathematical knowledge and skills for computer science, especially for the analysis of algorithms. According to the preface, the topics in Concrete Mathematics are "a blend of CONtinuous and disCRETE mathematics". Calculus is frequently used in the explanations and exercises. The term "concrete mathematics" also denotes a complement to "abstract mathematics".
The book is based on a course begun in 1970 by Knuth at Stanford University. The book expands on the material (approximately 100 pages) in the "Mathematical Preliminaries" section of Knuth's The Art of Computer Programming. Consequently, some readers use it as an introduction to that series of books.
Concrete Mathematics has an informal and often humorous style. The authors reject what they see as the dry style of most mathematics textbooks. The margins contain "mathematical graffiti", comments submitted by the text's first editors: Knuth and Patashnik's students at Stanford.
As with many of Knuth's books, readers are invited to claim a reward for any error found in the book—in this case, whether an error is "technically, historically, typographically, or politically incorrect".
The book popularized some mathematical notation: the Iverson bracket, floor and ceiling functions, and notation for rising and falling factorials.
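For illustration, a small Python sketch (not taken from the book) of some of this notation:
<nowiki>
# Illustrative sketch of notation popularized by the book.
import math

def iverson(condition):
    """Iverson bracket [P]: 1 if P holds, 0 otherwise."""
    return 1 if condition else 0

def falling_factorial(x, n):
    """Falling factorial x(x-1)...(x-n+1)."""
    result = 1
    for k in range(n):
        result *= x - k
    return result

print(math.floor(2.7), math.ceil(2.7))   # floor and ceiling: 2 3
print(iverson(3 > 2))                    # 1
print(falling_factorial(5, 3))           # 5*4*3 = 60
</nowiki>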
Typography
Donald Knuth used the first edition of Concrete Mathematics as a test case for the AMS Euler typeface and Concrete Roman font.
Chapter outline
Recurrent Problems
Summation
Integer Functions
Number Theory
Binomial Coefficients
Special Numbers
Generating Functions
Discrete Probability
Asymptotics
Editions
Errata: (1994), (January 1998), (27th print
|
https://en.wikipedia.org/wiki/Relay%20channel
|
In information theory, a relay channel is a probability model of the communication between a sender and a receiver aided by one or more intermediate relay nodes.
General discrete-time memoryless relay channel
A discrete memoryless single-relay channel can be modelled as four finite sets X1, X2, Y1 and Y, and a conditional probability distribution p(y, y1 | x1, x2) on these sets. The probability distribution of the choice of symbols selected by the encoder and the relay encoder is represented by p(x1, x2).
<nowiki>
o------------------o
| Relay Encoder |
o------------------o
Λ |
| y1 x2 |
| V
o---------o x1 o------------------o y o---------o
| Encoder |--->| p(y,y1|x1,x2) |--->| Decoder |
o---------o o------------------o o---------o
</nowiki>
There are three main relaying schemes: Decode-and-Forward, Compress-and-Forward and Amplify-and-Forward. The first two schemes were first proposed in the pioneering article by Cover and El-Gamal.
Decode-and-Forward (DF): In this relaying scheme, the relay decodes the source message in one block and transmits the re-encoded message in the following block. The achievable rate of DF is known as R_DF = max_{p(x1,x2)} min{I(X1,X2;Y), I(X1;Y1|X2)}.
Compress-and-Forward (CF): In this relaying scheme, the relay quantizes the received signal in one block and transmits the encoded version of the quantized received signal in the following block. The achievable rate of CF is known as R_CF = max I(X1; Ŷ1, Y | X2), subject to the constraint I(X2; Y) ≥ I(Y1; Ŷ1 | X2, Y), where Ŷ1 denotes the quantized (compressed) version of Y1.
Amplify-and-Forward (AF): In this relaying scheme, the relay sends an amplified version of the received signal in the last time-slot. Compared with DF and CF, AF requires much less delay as the relay node operates time-slot by time-slot. Also, AF requires much less computing power as no decoding or quantizing operation is performed at the relay side.
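As a hedged, toy illustration of amplify-and-forward only (a Python sketch with assumed unit power constraints and Gaussian noise; it models a simple two-hop link rather than the full relay channel with a direct path):
<nowiki>
# Toy amplify-and-forward sketch: the relay rescales what it hears,
# amplifying its own received noise together with the signal.
import numpy as np

rng = np.random.default_rng(0)
P_source, P_relay, noise_var = 1.0, 1.0, 0.1   # assumed values

x1 = np.sqrt(P_source) * rng.standard_normal(10_000)            # source
y1 = x1 + np.sqrt(noise_var) * rng.standard_normal(x1.shape)    # at relay

gain = np.sqrt(P_relay / np.mean(y1 ** 2))   # meet the relay power budget
x2 = gain * y1                               # relay simply amplifies
y = x2 + np.sqrt(noise_var) * rng.standard_normal(x1.shape)     # at decoder

snr_hop = P_source / noise_var
snr_end_to_end = np.mean((gain * x1) ** 2) / np.mean((y - gain * x1) ** 2)
print(snr_hop, snr_end_to_end)   # end-to-end SNR is lower than per-hop SNR
</nowiki>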
Cut-set upper bound
The first upper bound on the capacity of the relay channel is derived in the pioneer article by Cover and El-Gama
|
https://en.wikipedia.org/wiki/Power%20optimization%20%28EDA%29
|
Power optimization is the use of electronic design automation tools to optimize (reduce) the power consumption of a digital design, such as that of an integrated circuit, while preserving the functionality.
Introduction and history
The increasing speed and complexity of today’s designs implies a significant increase in the power consumption of very-large-scale integration (VLSI) chips. To meet this challenge, researchers have developed many different design techniques to reduce power. The complexity of today’s ICs, with over 100 million transistors, clocked at over 1 GHz, means manual power optimization would be hopelessly slow and all too likely to contain errors. Computer-aided design (CAD) tools and methodologies are mandatory.
One of the key features that led to the success of complementary metal-oxide semiconductor, or CMOS, technology was its intrinsic low-power consumption. This meant that circuit designers and electronic design automation (EDA) tools could afford to concentrate on maximizing circuit performance and minimizing circuit area. Another important feature of CMOS technology is its favorable scaling properties, which have permitted a steady decrease in the feature size (see Moore's law), allowing for more and more complex systems on a single chip, working at higher clock frequencies.
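As a rough, hedged illustration of the quantity these techniques target, the standard first-order estimate of CMOS dynamic (switching) power is P_dyn ≈ α·C·Vdd²·f; the figures in the Python sketch below are invented purely for illustration:
<nowiki>
# First-order dynamic-power estimate; all numbers are assumed, not measured.
alpha = 0.1    # average switching activity
C = 1e-9       # total switched capacitance in farads
Vdd = 1.0      # supply voltage in volts
f = 1e9        # clock frequency in hertz

P_dyn = alpha * C * Vdd ** 2 * f
print(f"{P_dyn:.3f} W")   # 0.100 W; halving Vdd alone would cut this 4x
</nowiki>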
Power consumption concerns came into play with the appearance of the first portable electronic systems in the late 1980s. In this market, battery lifetime is a decisive factor for the commercial success of the product. Another fact that became apparent at about the same time was that the increasing integration of more active elements per die area would lead to prohibitively large energy consumption of an integrated circuit. A high absolute level of power is not only undesirable for economic and environmental reasons, but it also creates the problem of heat dissipation. In order to keep the device working at acceptable temperature levels, excessive heat may require expe
|
https://en.wikipedia.org/wiki/Atomic%20layer%20epitaxy
|
Atomic layer epitaxy (ALE), more generally known as atomic layer deposition (ALD), is a specialized form of thin film growth (epitaxy) that typically deposits alternating monolayers of two elements onto a substrate. The crystal lattice structure achieved is thin, uniform, and aligned with the structure of the substrate. The reactants are brought to the substrate as alternating pulses with "dead" times in between. ALE makes use of the fact that the incoming material is bound strongly until all sites available for chemisorption are occupied. The dead times are used to flush the excess material.
It is mostly used in semiconductor fabrication to grow thin films with thicknesses on the nanometer scale.
Technique
This technique was invented in 1974 and patented the same year (patent published in 1976) by Dr. Tuomo Suntola at the Instrumentarium company, Finland. Dr. Suntola's purpose was to grow thin films of zinc sulfide to fabricate electroluminescent flat panel displays. The key idea behind the technique is the use of a self-limiting chemical reaction to control the thickness of the deposited film accurately. Since the early days, ALE (ALD) has grown into a global thin-film technology which has enabled the continuation of Moore's law. In 2018, Suntola received the Millennium Technology Prize for ALE (ALD) technology.
Compared to basic chemical vapour deposition, in ALE (ALD) chemical reactants are pulsed alternately into a reaction chamber and then chemisorb in a saturating manner on the surface of the substrate, forming a chemisorbed monolayer.
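Because each saturating pulse pair ideally adds at most about one monolayer, film thickness scales with the number of cycles, as the next paragraph notes; a back-of-the-envelope Python sketch (the growth-per-cycle figure is an assumed, illustrative value):
<nowiki>
# Cycle-counted thickness control; growth per cycle is an assumed figure.
growth_per_cycle_nm = 0.1   # roughly one monolayer per cycle (illustrative)
cycles = 100

thickness_nm = cycles * growth_per_cycle_nm
print(f"{thickness_nm:.1f} nm after {cycles} cycles")   # 10.0 nm
</nowiki>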
ALD introduces two complementary precursors (e.g. Al(CH3)3 and H2O) alternately into the reaction chamber. Typically, one of the precursors will adsorb onto the substrate surface until it saturates the surface, and further growth cannot occur until the second precursor is introduced. Thus the film thickness is controlled by the number of precursor cycles rather than the deposition time as is the case for conven
|
https://en.wikipedia.org/wiki/Chemical%20beam%20epitaxy
|
Chemical beam epitaxy (CBE) forms an important class of deposition techniques for semiconductor layer systems, especially III-V semiconductor systems. This form of epitaxial growth is performed in an ultrahigh vacuum system. The reactants are in the form of molecular beams of reactive gases, typically as the hydride or a metalorganic. The term CBE is often used interchangeably with metal-organic molecular beam epitaxy (MOMBE). The nomenclature does differentiate between the two (slightly different) processes, however. When used in the strictest sense, CBE refers to the technique in which both components are obtained from gaseous sources, while MOMBE refers to the technique in which the group III component is obtained from a gaseous source and the group V component from a solid source.
Basic principles
Chemical beam epitaxy was first demonstrated by W.T. Tsang in 1984. This technique was then described as a hybrid of metal-organic chemical vapor deposition (MOCVD) and molecular beam epitaxy (MBE) that exploited the advantages of both techniques. In this initial work, InP and GaAs were grown using gaseous group III and V alkyls. While the group III elements were derived from the pyrolysis of the alkyls on the surface, the group V elements were obtained from the decomposition of the alkyls by bringing them into contact with heated tantalum (Ta) or molybdenum (Mo) at 950-1200 °C.
Typical pressure in the gas reactor is between 10^2 Torr and 1 atm for MOCVD. Here, the transport of gas occurs by viscous flow and chemicals reach the surface by diffusion. In contrast, gas pressures of less than 10^−4 Torr are used in CBE. The gas transport now occurs as a molecular beam due to the much longer mean-free paths, and the process evolves into chemical beam deposition. It is also worth noting here that MBE employs atomic beams (such as aluminium (Al) and gallium (Ga)) and molecular beams (such as As4 and P4) that are evaporated at high temperatures from solid elemental sources, while the
|
https://en.wikipedia.org/wiki/Business%20mathematics
|
Business mathematics are mathematics used by commercial enterprises to record and manage business operations. Commercial organizations use mathematics in accounting, inventory management, marketing, sales forecasting, and financial analysis.
Mathematics typically used in commerce includes elementary arithmetic, elementary algebra, statistics and probability. For some management problems, more advanced mathematics - calculus, matrix algebra, and linear programming - may be applied.
High school
Business mathematics, sometimes called commercial math or consumer math, is a group of practical subjects used in commerce and everyday life. In schools, these subjects are often taught to students who are not planning a university education. In the United States, they are typically offered in high schools and in schools that grant associate's degrees; elsewhere they may be included under business studies. The emphasis in these courses is on computational skills and their practical application, with practice being predominant. These courses often fulfill the general math credit for high school students.
A (U.S.) business math course typically includes a review of elementary arithmetic, including fractions, decimals, and percentages. Elementary algebra is often included as well, in the context of solving practical business problems. The practical applications typically include checking accounts, price discounts, markups and markdowns, payroll calculations, simple and compound interest, consumer and business credit, and mortgages and revenues.
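For illustration, a short Python sketch (figures invented) contrasting simple and compound interest, two staples of such a course:
<nowiki>
# Simple vs. compound interest on the same principal; numbers are illustrative.
principal = 1000.00   # dollars
rate = 0.05           # 5% annual interest
years = 3

simple = principal * (1 + rate * years)
compound = principal * (1 + rate) ** years
print(f"simple:   {simple:.2f}")    # 1150.00
print(f"compound: {compound:.2f}")  # about 1157.63
</nowiki>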
University level
Undergraduate
Business mathematics comprises mathematics credits taken at an undergraduate level by business students.
The course is often organized around the various business sub-disciplines, including the above applications, and usually includes a separate module on interest calculations; the mathematics comprises mainly algebraic techniques.
Many programs, as mentioned, extend to more sophisticated mathemat
|