https://en.wikipedia.org/wiki/Message%20submission%20agent
A message submission agent (MSA), or mail submission agent, is a computer program or software agent that receives electronic mail messages from a mail user agent (MUA) and cooperates with a mail transfer agent (MTA) for delivery of the mail. It uses ESMTP, a variant of the Simple Mail Transfer Protocol (SMTP), as specified in RFC 6409. Many MTAs perform the function of an MSA as well, but there are also programs that are specially designed as MSAs without full MTA functionality. Historically, in Internet mail, both MTA and MSA functions used port number 25, but the official port for MSAs is 587. The MTA accepts a user's incoming mail, while the MSA accepts a user's outgoing mail. Benefits Separation of the MTA and MSA functions produces several benefits. One benefit is that an MSA, since it is interacting directly with the author's MUA, can correct minor errors in a message format (such as missing Date, Message-ID, or To fields, or an address with a missing domain name) and/or immediately report an error to the author so that it can be corrected before it is sent to any of the recipients. An MTA accepting a message from another site cannot reliably make those kinds of corrections, and any error reports generated by such an MTA will reach the author (if at all) only after the message has already been sent. One more benefit is that with a dedicated port number, 587, it is always possible for users to connect to their domain to submit new mail. To combat spam (including spam being sent unwittingly by a victim of a botnet) many ISPs and institutional networks restrict the ability to connect to remote MTAs on port 25. The accessibility of an MSA on port 587 enables nomadic users (for example, those working on a laptop) to continue to send mail via their preferred submission servers even from within others' networks. Using a specific submission server is a requirement when sender policies or signing practices are enforced. Another benefit is that separating the MTA and
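To make the port-587 workflow concrete, here is a minimal submission sketch using Python's standard smtplib; the server name, addresses, and credentials are placeholders, and a real MSA will typically require the TLS upgrade and authentication shown.

```python
import smtplib
from email.message import EmailMessage

# Hypothetical addresses, server, and credentials, for illustration only.
msg = EmailMessage()
msg["From"] = "author@example.com"
msg["To"] = "recipient@example.org"
msg["Subject"] = "Submitted via the MSA"
msg.set_content("Sent through port 587, the official submission port.")

# Connect to the MSA on port 587, upgrade the session to TLS, and
# authenticate; requiring authentication is one practical difference
# between an MSA and an MTA relaying mail from other sites on port 25.
with smtplib.SMTP("mail.example.com", 587) as s:
    s.starttls()
    s.login("author@example.com", "app-password")
    s.send_message(msg)
```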
https://en.wikipedia.org/wiki/Matrix%20norm
In mathematics, a matrix norm is a vector norm in a vector space whose elements (vectors) are matrices (of given dimensions). Preliminaries Given a field $K$ of either real or complex numbers, let $K^{m \times n}$ be the $K$-vector space of matrices with $m$ rows and $n$ columns and entries in the field $K$. A matrix norm is a norm on $K^{m \times n}$. This article will always write such norms with double vertical bars (like so: $\|A\|$). Thus, the matrix norm is a function $\|\cdot\| : K^{m \times n} \to \mathbb{R}$ that must satisfy the following properties: For all scalars $\alpha \in K$ and matrices $A, B \in K^{m \times n}$, $\|A\| \ge 0$ (positive-valued); $\|A\| = 0 \iff A = 0$ (definite); $\|\alpha A\| = |\alpha|\,\|A\|$ (absolutely homogeneous); $\|A + B\| \le \|A\| + \|B\|$ (sub-additive or satisfying the triangle inequality). The only feature distinguishing matrices from rearranged vectors is multiplication. Matrix norms are particularly useful if they are also sub-multiplicative: $\|AB\| \le \|A\|\,\|B\|$. Every norm on $K^{n \times n}$ can be rescaled to be sub-multiplicative; in some books, the terminology matrix norm is reserved for sub-multiplicative norms. Matrix norms induced by vector norms Suppose a vector norm $\|\cdot\|_{\alpha}$ on $K^{n}$ and a vector norm $\|\cdot\|_{\beta}$ on $K^{m}$ are given. Any matrix $A \in K^{m \times n}$ induces a linear operator from $K^{n}$ to $K^{m}$ with respect to the standard basis, and one defines the corresponding induced norm or operator norm or subordinate norm on the space of all $m \times n$ matrices as follows: $\|A\|_{\alpha,\beta} = \sup\{\|Ax\|_{\beta} : x \in K^{n},\ \|x\|_{\alpha} = 1\}$, where $\sup$ denotes the supremum. This norm measures how much the mapping induced by $A$ can stretch vectors. Depending on the vector norms $\|\cdot\|_{\alpha}$, $\|\cdot\|_{\beta}$ used, notation other than $\|\cdot\|_{\alpha,\beta}$ can be used for the operator norm. Matrix norms induced by vector p-norms If the p-norm for vectors ($1 \le p \le \infty$) is used for both spaces $K^{n}$ and $K^{m}$, then the corresponding operator norm is: $\|A\|_{p} = \sup_{x \ne 0} \|Ax\|_{p} / \|x\|_{p}$. These induced norms are different from the "entry-wise" p-norms and the Schatten p-norms for matrices treated below, which are also usually denoted by $\|A\|_{p}$. In the special cases of $p = 1, \infty$ the induced matrix norms can be computed or estimated by $\|A\|_{1} = \max_{1 \le j \le n} \sum_{i=1}^{m} |a_{ij}|$, which is simply the maximum absolute column sum of the matrix; $\|A\|_{\infty} = \max_{1 \le i \le m} \sum_{j=1}^{n} |a_{ij}|$, which is simply the maximum absolute row sum of the matrix. For example, for $A = \begin{pmatrix} -3 & 5 & 7 \\ 2 & 6 & 4 \\ 0 & 2 & 8 \end{pmatrix}$ we have that $\|A\|_{1} = \max(5, 13, 19) = 19$ and $\|A\|_{\infty} = \max(15, 12, 10) = 15$. In the special case of $p = 2$ (the Eucl
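As a quick numerical check of the column-sum and row-sum formulas, here is a sketch using NumPy, whose numpy.linalg.norm computes induced matrix norms for ord=1 and ord=inf:

```python
import numpy as np

# The same 3x3 example matrix as in the text above.
A = np.array([[-3, 5, 7],
              [ 2, 6, 4],
              [ 0, 2, 8]])

# Induced 1-norm: maximum absolute column sum.
col_sums = np.abs(A).sum(axis=0)    # [5, 13, 19]
print(np.linalg.norm(A, 1), col_sums.max())       # both 19

# Induced infinity-norm: maximum absolute row sum.
row_sums = np.abs(A).sum(axis=1)    # [15, 12, 10]
print(np.linalg.norm(A, np.inf), row_sums.max())  # both 15
```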
https://en.wikipedia.org/wiki/Phase%20response
In signal processing, phase response is the relationship between the phase of a sinusoidal input and the output signal passing through any device that accepts input and produces an output signal, such as an amplifier or a filter. Amplifiers, filters, and other devices are often categorized by their amplitude and/or phase response. The amplitude response is the ratio of output amplitude to input amplitude, usually as a function of frequency. Similarly, phase response is the phase of the output with the input as reference; the input is defined as having zero phase. A phase response is not limited to lying between 0° and 360°, since phase can accumulate to arbitrarily large values as delay builds up over many cycles. See also Group delay and phase delay References Trigonometry Wave mechanics Signal processing
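As an illustration, the phase response of a digital filter can be computed numerically; this sketch uses SciPy's freqz on a simple two-tap averaging filter chosen only for demonstration:

```python
import numpy as np
from scipy.signal import freqz

# A 2-tap moving-average FIR filter, picked purely as an example.
b = [0.5, 0.5]              # filter coefficients
w, h = freqz(b, worN=512)   # complex frequency response H(e^{jw})

amplitude = np.abs(h)            # amplitude response
phase = np.unwrap(np.angle(h))   # phase response, in radians
# np.unwrap removes the artificial +/- pi jumps, letting the phase
# accumulate beyond a single cycle, as the text notes.
print(amplitude[:4], phase[:4])
```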
https://en.wikipedia.org/wiki/AGARD
The Advisory Group for Aerospace Research and Development (AGARD) was an agency of NATO that existed from 1952 to 1996. AGARD was founded as an Agency of the NATO Military Committee. It was set up in May 1952 with headquarters in Neuilly-sur-Seine, France. In the mission statement of the history it published in 1982, its purpose involved "bringing together the leading personalities of the NATO nations in the fields of science and technology relating to aerospace". The Advisory Group was organized by panels: Aerospace medical, avionics, electromagnetic wave propagation, flight mechanics, fluid dynamics, guidance and control, propulsion and energetics, structures and materials, and technical information. In 1958 Theodore von Kármán hired Moe Berg to accompany him to the AGARD conference in Paris. "AGARD's aim was to encourage European countries to develop weapons technology on their own instead of relying on the U.S. defense industry to do it for them." Activities There were annual meetings, frequently in Paris, but also in Delft, Turin, Cambridge, and Washington, DC. The Advisory Group administered a consultant and exchange program including lecture series and technical panels. The AGARD publishing program included a multilingual aeronautical dictionary and about ninety titles per year, with a normal print run of 1,200. An Agardograph is a work prepared by, or on behalf of, AGARD's panels. For example, an Agardograph on the AGARD-B wind tunnel model was prepared. Later examples of AGARD studies include such topics as non-lethal weapons, theatre ballistic missile defence, protection of large aircraft in peace support operations, and limiting collateral damage caused by air-delivered weapons. AGARD was also one of the first NATO organizations to cooperate with Russia in a mutual exchange of information dealing with flight safety. AGARD merged with the NATO Defence Research Group (DRG) in 1996 to become the NATO Research and Technology Organisation (RTO). See also Aeronautics Notes
https://en.wikipedia.org/wiki/Anisogamy
Anisogamy is a form of sexual reproduction that involves the union or fusion of two gametes that differ in size and/or form. The smaller gamete is male, a sperm cell, whereas the larger gamete is female, typically an egg cell. Anisogamy is predominant among multicellular organisms. In both plants and animals gamete size difference is the fundamental difference between females and males. Anisogamy most likely evolved from isogamy. Since the biological definition of male and female is based on gamete size, the evolution of anisogamy is viewed as the evolutionary origin of male and female sexes. Anisogamy is an outcome of both natural selection and sexual selection, and led the sexes to different primary and secondary sex characteristics including sex differences in behavior. Geoff Parker, Robin Baker, and Vic Smith were the first to provide a mathematical model for the evolution of anisogamy that was consistent with modern evolutionary theory. Their theory was widely accepted but there are alternative hypotheses about the evolution of anisogamy. Etymology Anisogamy comes from the ancient Greek words 'aniso' meaning unequal and 'gamy' meaning marriage. The first known use of the term anisogamy was in the year 1891. Definition Anisogamy is the form of sexual reproduction that involves the union or fusion of two gametes which differ in size and/or form. The smaller gamete is considered to be male (a sperm cell), whereas the larger gamete is regarded as female (typically an egg cell, if non-motile). There are several types of anisogamy. Both gametes may be flagellated and therefore motile. Alternatively, as in flowering plants, conifers and gnetophytes, neither of the gametes are flagellated. In these groups, the male gametes are non-motile cells within pollen grains, and are delivered to the egg cells by means of pollen tubes. In the red alga Polysiphonia, non-motile eggs are fertilized by non-motile sperm. The form of anisogamy that occurs in animals, includ
https://en.wikipedia.org/wiki/HDCAM
HDCAM is a high-definition video digital recording videocassette version of Digital Betacam introduced in 1997 that uses an 8-bit discrete cosine transform (DCT) compressed 3:1:1 recording, in 1080i-compatible down-sampled resolution of 1440×1080, and adding 24p and 23.976 progressive segmented frame (PsF) modes to later models. The HDCAM codec uses rectangular pixels and as such the recorded 1440×1080 content is upsampled to 1920×1080 on playback. The recorded video bit rate is 144 Mbit/s. Audio is also similar, with four channels of AES3 20-bit, 48 kHz digital audio. Like Betacam, HDCAM tapes were produced in small and large cassette sizes; the small cassette uses the same form factor as the original Betamax. The main competitor to HDCAM was the DVCPRO HD format offered by Panasonic, which uses a similar compression scheme and bit rates ranging from 40 Mbit/s to 100 Mbit/s depending on frame rate. HDCAM is standardized as SMPTE 367M, also known as SMPTE D-11. Like most videotape formats, HDCAM is no longer in widespread use, having been superseded by memory cards, disk-based recording formats, and SSDs. Sony officially discontinued the format in 2016. SMPTE 367M SMPTE 367M, also known as SMPTE D-11, is the SMPTE standard for HDCAM. The standard specifies compression of high-definition digital video. D11 source picture rates can be 24, 24/1.001, 25 or 30/1.001 frames per second progressive scan, or 50 or 60/1.001 fields per second interlaced; compression yields output bit rates ranging from 112 to 140 Mbit/s. Each D11 source frame is composed of a luminance channel at 1920 x 1080 pixels and a chrominance channel at 960 x 1080 pixels. During compression, each frame's luminance channel is subsampled at 1440 x 1080, while the chrominance channel is subsampled at 480 x 1080, meaning 3:1:1 chroma subsampling. HDCAM supports recording at 24 FPS for film production applications, but it can be configured for television production. Similar to MPEG IMX, the helical sca
https://en.wikipedia.org/wiki/RSA%20problem
In cryptography, the RSA problem summarizes the task of performing an RSA private-key operation given only the public key. The RSA algorithm raises a message to an exponent, modulo a composite number N whose factors are not known. Thus, the task can be neatly described as finding the eth roots of an arbitrary number, modulo N. For large RSA key sizes (in excess of 1024 bits), no efficient method for solving this problem is known; if an efficient method is ever developed, it would threaten the current or eventual security of RSA-based cryptosystems—both for public-key encryption and digital signatures. More specifically, the RSA problem is to efficiently compute P given an RSA public key (N, e) and a ciphertext C ≡ P^e (mod N). The structure of the RSA public key requires that N be a large semiprime (i.e., a product of two large prime numbers), that 2 < e < N, that e be coprime to φ(N), and that 0 ≤ C < N. C is chosen randomly within that range; to specify the problem with complete precision, one must also specify how N and e are generated, which will depend on the precise means of RSA random keypair generation in use. The most efficient method known to solve the RSA problem is by first factoring the modulus N, a task believed to be impractical if N is sufficiently large (see integer factorization). The RSA key setup routine already turns the public exponent e, with this prime factorization, into the private exponent d, and so exactly the same algorithm allows anyone who factors N to obtain the private key. Any C can then be decrypted with the private key. Just as there are no proofs that integer factorization is computationally difficult, there are also no proofs that the RSA problem is similarly difficult. By the above method, the RSA problem is at least as easy as factoring, but it might well be easier. Indeed, there is strong evidence pointing to this conclusion: that a method to break the RSA method cannot necessarily be converted into a method for factorin
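A toy numerical sketch of the factoring route described above, with deliberately tiny textbook numbers (p = 61, q = 53) so that the factoring step is trivial; for realistic key sizes that step is exactly what is believed infeasible:

```python
# Toy illustration of why factoring N solves the RSA problem.
p, q = 61, 53
N = p * q              # 3233, a small semiprime
e = 17                 # public exponent, coprime to phi(N)
P = 65                 # plaintext
C = pow(P, e, N)       # ciphertext C = P^e mod N

# An attacker who knows the factors can recompute the private exponent d
# exactly as key setup does, then take e-th roots mod N at will.
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)    # modular inverse, Python 3.8+
assert pow(C, d, N) == P
print(d, pow(C, d, N))
```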
https://en.wikipedia.org/wiki/Van%20Wijngaarden%20grammar
In computer science, a Van Wijngaarden grammar (also vW-grammar or W-grammar) is a formalism for defining formal languages. The name derives from the formalism invented by Adriaan van Wijngaarden for the purpose of defining the ALGOL 68 programming language. The resulting specification remains its most notable application. Van Wijngaarden grammars address the problem that context-free grammars cannot express agreement or reference, where two different parts of the sentence must agree with each other in some way. For example, the sentence "The birds was eating" is not Standard English because it fails to agree on number. A context-free grammar would parse "The birds was eating" and "The birds were eating" and "The bird was eating" in the same way. However, context-free grammars have the benefit of simplicity, whereas van Wijngaarden grammars are considered highly complex. Two levels W-grammars are two-level grammars: they are defined by a pair of grammars that operate on different levels: the hypergrammar is an attribute grammar, i.e. a set of context-free grammar rules in which the nonterminals may have attributes; and the metagrammar is a context-free grammar defining possible values for these attributes. The set of strings generated by a W-grammar is defined by a two-stage process: within each hyperrule, for each attribute that occurs in it, pick a value for it generated by the metagrammar; the result is a normal context-free grammar rule; do this in every possible way; use the resulting (possibly infinite) context-free grammar to generate strings in the normal way. The consistent substitution used in the first step is the same as substitution in predicate logic, and actually supports logic programming; it corresponds to unification in Prolog, as noted by Alain Colmerauer. W-grammars are Turing complete; hence, all decision problems regarding the languages they generate, such as whether a W-grammar generates a given string, are undecidable.
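The two-stage process can be made concrete with a small sketch in Python; the metanotion NUMBER and the toy English rules below are invented for illustration, echoing the bird example above:

```python
# Minimal sketch of the two-stage W-grammar process.
# Metagrammar: possible values of the metanotion NUMBER.
metagrammar = {"NUMBER": ["singular", "plural"]}

# Hyperrules: context-free rules with the metanotion as a placeholder.
# Consistent substitution means every occurrence of NUMBER within one
# rule gets the same value: this is what expresses agreement.
hyperrules = [
    ("sentence", ["NUMBER subject", "NUMBER verb"]),
    ("singular subject", ["the bird"]),
    ("plural subject", ["the birds"]),
    ("singular verb", ["was eating"]),
    ("plural verb", ["were eating"]),
]

# Stage 1: substitute each metanotion value into each hyperrule,
# producing an ordinary context-free grammar.
cfg = []
for lhs, rhs in hyperrules:
    if "NUMBER" in lhs or any("NUMBER" in s for s in rhs):
        for value in metagrammar["NUMBER"]:
            cfg.append((lhs.replace("NUMBER", value),
                        [s.replace("NUMBER", value) for s in rhs]))
    else:
        cfg.append((lhs, rhs))

# Stage 2: generate strings from the resulting CFG in the normal way.
def expand(symbol):
    rules = [rhs for lhs, rhs in cfg if lhs == symbol]
    if not rules:                     # no rule: treat as terminal text
        return [symbol]
    results = []
    for rhs in rules:
        combos = [""]
        for part in rhs:
            expansions = expand(part)
            combos = [(c + " " + x).strip() for c in combos for x in expansions]
        results.extend(combos)
    return results

print(expand("sentence"))
# ['the bird was eating', 'the birds were eating'] -- the disagreeing
# "the birds was eating" is never generated.
```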
https://en.wikipedia.org/wiki/Genotype%20frequency
Genetic variation in populations can be analyzed and quantified by the frequency of alleles. Two fundamental calculations are central to population genetics: allele frequencies and genotype frequencies. Genotype frequency in a population is the number of individuals with a given genotype divided by the total number of individuals in the population. In population genetics, the genotype frequency is the frequency or proportion (i.e., 0 ≤ f ≤ 1) of genotypes in a population. Although allele and genotype frequencies are related, it is important to clearly distinguish them. Genotype frequency may also be used in the future (for "genomic profiling") to predict someone's having a disease or even a birth defect. It can also be used to determine ethnic diversity. Genotype frequencies may be represented by a De Finetti diagram. Numerical example As an example, consider a population of 100 four-o'clock plants (Mirabilis jalapa) with the following genotypes:
49 red-flowered plants with the genotype AA
42 pink-flowered plants with genotype Aa
9 white-flowered plants with genotype aa
When calculating an allele frequency for a diploid species, remember that homozygous individuals have two copies of an allele, whereas heterozygotes have only one. In our example, each of the 42 pink-flowered heterozygotes has one copy of the a allele, and each of the 9 white-flowered homozygotes has two copies. Therefore, the allele frequency for a (the white color allele) equals f(a) = (2 × 9 + 42) / (2 × 100) = 60/200 = 0.3. This result tells us that the allele frequency of a is 0.3. In other words, 30% of the alleles for this gene in the population are the a allele. Compare genotype frequency: let's now calculate the genotype frequency of aa homozygotes (white-flowered plants): f(aa) = 9/100 = 0.09. Allele and genotype frequencies always sum to one (100%). Equilibrium The Hardy–Weinberg law describes the relationship between allele and genotype frequencies when a population is not evolving. Let's examine the Hardy–Weinberg equation usi
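The same arithmetic, written as a short Python sketch using the counts from the example:

```python
# Counts come from the four-o'clock example: 49 AA, 42 Aa, 9 aa.
counts = {"AA": 49, "Aa": 42, "aa": 9}
total = sum(counts.values())            # 100 individuals, 200 alleles

# Genotype frequency: individuals with the genotype / total individuals.
f_aa = counts["aa"] / total             # 0.09

# Allele frequency: homozygotes carry two copies, heterozygotes one.
f_a = (2 * counts["aa"] + counts["Aa"]) / (2 * total)   # 60/200 = 0.3
f_A = (2 * counts["AA"] + counts["Aa"]) / (2 * total)   # 140/200 = 0.7

print(f_aa, f_a, f_A)
assert abs(f_a + f_A - 1.0) < 1e-12     # allele frequencies sum to one
```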
https://en.wikipedia.org/wiki/Document%20Structure%20Description
Document Structure Description, or DSD, is a schema language for XML, that is, a language for describing valid XML documents. It's an alternative to DTD or the W3C XML Schema. An example of DSD in its simplest form: This says that element named "foo" in the XML namespace "http://example.com" may have two attributes, named "first" and "second". A "foo" element may not have any character data. It must contain one subelement, named "bar", also in the "http://example.com" namespace. A "bar" element is not allowed any attributes, character data or subelements. One XML document that would be valid according to the above DSD would be: Current Software store Prototype Java Processor from BRICS External links DSD home page Full DSD specification Comparison of DTD, W3C XML Schema, and DSD XML-based standards XML Data modeling languages
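The DSD example itself was lost in extraction, but the prose pins down what a conforming instance document would look like. The following reconstruction is hypothetical; the prefix e and the attribute values are invented:

```xml
<!-- Hypothetical instance matching the description above: a "foo"
     element with attributes "first" and "second", no character data,
     and exactly one empty "bar" subelement, both elements in the
     http://example.com namespace. -->
<e:foo xmlns:e="http://example.com" first="1" second="2">
  <e:bar/>
</e:foo>
```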
https://en.wikipedia.org/wiki/Leslie%20matrix
The Leslie matrix is a discrete, age-structured model of population growth that is very popular in population ecology, named after Patrick H. Leslie. The Leslie matrix (also called the Leslie model) is one of the most well-known ways to describe the growth of populations (and their projected age distribution), in which a population is closed to migration, growing in an unlimited environment, and where only one sex, usually the female, is considered. The Leslie matrix is used in ecology to model the changes in a population of organisms over a period of time. In a Leslie model, the population is divided into groups based on age classes. A similar model which replaces age classes with ontogenetic stages is called a Lefkovitch matrix, whereby individuals can both remain in the same stage class or move on to the next one. At each time step, the population is represented by a vector with an element for each age class, where each element indicates the number of individuals currently in that class. The Leslie matrix is a square matrix with the same number of rows and columns as the population vector has elements. The (i,j)th cell in the matrix indicates how many individuals will be in the age class i at the next time step for each individual in stage j. At each time step, the population vector is multiplied by the Leslie matrix to generate the population vector for the subsequent time step. To build a matrix, the following information must be known from the population: $n_x$, the count of individuals (n) of each age class $x$; $s_x$, the fraction of individuals that survives from age class $x$ to age class $x+1$; and $f_x$, fecundity, the per capita average number of female offspring born to a mother of age class $x$. More precisely, it can be viewed as the number of offspring produced at the next age class $b_{x+1}$ weighted by the probability of reaching the next age class. Therefore, $f_x = s_x b_{x+1}$. From the observation that the number of individuals in the first age class at time $t+1$ is simply the sum of all offspring born from the previous time
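A minimal numerical sketch of a Leslie projection; the survival and fecundity values here are invented for illustration:

```python
import numpy as np

# Three age classes; f = fecundities, s = survival fractions (made up).
f = [0.0, 1.5, 1.0]     # per-capita female offspring by age class
s = [0.5, 0.8]          # fraction surviving from class x to x+1

L = np.array([
    [f[0], f[1], f[2]],   # first row: fecundities
    [s[0], 0.0,  0.0 ],   # sub-diagonal: survival into the next class
    [0.0,  s[1], 0.0 ],
])

n = np.array([100.0, 50.0, 20.0])   # population vector at time t
for _ in range(3):
    n = L @ n                        # project one time step forward
print(n)

# The long-run growth rate is the dominant eigenvalue of L.
print(max(abs(np.linalg.eigvals(L))))
```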
https://en.wikipedia.org/wiki/Electrical%20mobility
Electrical mobility is the ability of charged particles (such as electrons or protons) to move through a medium in response to an electric field that is pulling them. The separation of ions according to their mobility in the gas phase is called ion mobility spectrometry; in the liquid phase it is called electrophoresis. Theory When a charged particle in a gas or liquid is acted upon by a uniform electric field, it will be accelerated until it reaches a constant drift velocity according to the formula $v_d = \mu E$, where $v_d$ is the drift velocity (SI units: m/s), $E$ is the magnitude of the applied electric field (V/m), and $\mu$ is the mobility (m²/(V·s)). In other words, the electrical mobility of the particle is defined as the ratio of the drift velocity to the magnitude of the electric field: $\mu = v_d / E$. For example, the mobility of the sodium ion (Na+) in water at 25 °C is 5.19×10⁻⁸ m²/(V·s). This means that a sodium ion in an electric field of 1 V/m would have an average drift velocity of 5.19×10⁻⁸ m/s. Such values can be obtained from measurements of ionic conductivity in solution. Electrical mobility is proportional to the net charge of the particle. This was the basis for Robert Millikan's demonstration that electrical charges occur in discrete units, whose magnitude is the charge of the electron. Electrical mobility is also inversely proportional to the Stokes radius of the ion, which is the effective radius of the moving ion including any molecules of water or other solvent that move with it. This is true because the solvated ion moving at a constant drift velocity is subject to two equal and opposite forces: an electrical force $qE$ and a frictional force $f v_d = 6\pi\eta r v_d$, where $f$ is the frictional coefficient and $\eta$ is the solution viscosity. For different ions with the same charge such as Li+, Na+ and K+ the electrical forces are equal, so that the drift speed and the mobility are inversely proportional to the radius $r$. In fact, conductivity measurements show that ionic mobility increases from Li+ to Cs+, and therefore that Stokes radius decrease
https://en.wikipedia.org/wiki/UNIX/32V
UNIX/32V is an early version of the Unix operating system from Bell Laboratories, released in June 1979. 32V was a direct port of the Seventh Edition Unix to the DEC VAX architecture. Overview Before 32V, Unix had primarily run on DEC PDP-11 computers. The Bell Labs group that developed the operating system was dissatisfied with DEC, so its members refused DEC's offer to buy a VAX when the machine was announced in 1977. They had already begun a Unix port to the Interdata 8/32 instead. DEC then approached a different Bell Labs group in Holmdel, New Jersey, which accepted the offer and started work on what was to become 32V. Performed by Tom London and John F. Reiser, porting Unix was made possible due to work done between the Sixth and Seventh Editions of the operating system to decouple it from its "native" PDP-11 environment. The 32V team first ported the C compiler (Johnson's pcc), adapting an assembler and loader written for the Interdata 8/32 version of Unix to the VAX. They then ported the April 15, 1978 version of Unix, finding in the process that "[t]he (Bourne) shell [...] required by far the largest conversion effort of any supposedly portable program, for the simple reason that it is not portable." UNIX/32V was released without virtual memory paging, retaining only the swapping architecture of Seventh Edition. A virtual memory system was added at Berkeley by Bill Joy and Özalp Babaoğlu in order to support Franz Lisp; this was released to other Unix licensees as the Third Berkeley Software Distribution (3BSD) in 1979. Thanks to the popularity of the two systems' successors, 4BSD and UNIX System V, UNIX/32V is an antecedent of nearly all modern Unix systems. See also Ancient UNIX References Further reading Marshall Kirk McKusick and George V. Neville-Neil, The Design and Implementation of the FreeBSD Operating System (Boston: Addison-Wesley, 2004), , pp. 4–6. External links The Unix Heritage Society, (TUHS) a website dedicated to the preservation
https://en.wikipedia.org/wiki/WinCustomize
WinCustomize is a website that provides content for users to customize Microsoft Windows. The site hosts thousands of skins, themes, icons, wallpapers, and other graphical content to modify the Windows graphical user interface. There is some premium or paid content; however, the vast majority of the content is free for users to download. Site history WinCustomize was launched in March 2001 by Brad Wardell and Pat Ford, both of whom work at Stardock. After the dot-com recession had taken down many popular skin sites, WinCustomize quickly grew in popularity due to a combination of a wide variety of content, uptime reliability, and being the preferred content destination for Stardock customers. The site has grown at a far greater pace than its founders had anticipated. It has managed to avoid having to put many limitations on users or having to resort to pop-up advertising because its corporate patron Stardock subsidizes its costs. This growth has prompted several site redesigns to offer improved functionality and reliability to users. Since launch, WinCustomize has undergone several iterations: WinCustomize 2k5 — Launched at the end of 2004, WinCustomize was redesigned for improved stability and added functionality, such as personal pages for subscribers, an articles system, tutorials, etc. WinCustomize 2k7 — Launched January 15, 2007, WC2k7 was a fundamental rewrite using ASP.NET. The focus was to build a foundation that was easier to maintain and, in the future, expand. WinCustomize v6 — Planned for late 2008/early 2009, the WC v6 project aims to be a major revision to how users navigate and interact with the site and the community as a whole. Where 2k7 was focused on the core codebase, v6 is focused on the user interface and experience. In July 2007 the WinCustomize Wiki was launched. WinCustomize 2010 — WinCustomize 2010 was launched on April 20, 2010. This revision represents a major change in the site's look and navigation for users. A guided tour o
https://en.wikipedia.org/wiki/WPSG
WPSG (channel 57), branded on-air as Philly 57, is an independent television station in Philadelphia, Pennsylvania, United States. It is owned by the CBS News and Stations group alongside CBS station KYW-TV (channel 3). Both stations share studios on Hamilton Street north of Center City Philadelphia, while WPSG's transmitter is located in the city's Roxborough section. Channel 57 was allocated for commercial use in Philadelphia at the start of the 1970s; it was fought over by two groups who sought to broadcast subscription television (STV) programming to paying customers in the metropolitan area. Radio Broadcasting Company prevailed and launched WWSG-TV on June 15, 1981. It offered limited financial news programming, which was abandoned after 18 months, and a subscription service utilizing programming from SelecTV. Two years later, the station switched to broadcasting PRISM, a premium regional sports and movies service seeking to reach potential subscribers in areas beyond cable coverage, such as the city of Philadelphia. The Grant Broadcasting System acquired the station and relaunched it in 1985 as general-entertainment independent WGBS-TV, known on air as "Philly 57". The new owners spent millions of dollars on programming and the rights to Philadelphia Flyers hockey and Villanova Wildcats basketball; the station filled the third independent void left when WKBS-TV (channel 48) folded in 1983, and its entrance into the market clipped multiple separate efforts to establish such a station. However, Grant's strategy to build "full-grown" independents with expensive acquisitions drove the company into bankruptcy in December 1986. Grant's three stations were assumed by a consortium of creditors and bondholders known as Combined Broadcasting; management was controlled from Philadelphia. Combined Broadcasting solicited offers on its stations in 1993; a deal was reached to sell to the Fox network, but an objection caused the sale to be delayed and canceled. In 1995, Pa
https://en.wikipedia.org/wiki/NetWare%20File%20System
In computing, a NetWare File System (NWFS) is a file system based on a heavily modified version of FAT. It was used in the Novell NetWare operating system. It is the default and only file system for all volumes in versions 2.x through 4.x, and the default and only file system for the SYS volume continuing through version 5.x. Novell developed two varieties of NWFS:
16-bit NWFS 286, used in NetWare 2.x
32-bit NWFS 386, used in NetWare 3.x through NetWare 6.x
Novell Storage Services (NSS, released in 1998) superseded the NWFS format. The NWFS on-disk format was never publicly documented by Novell. The published specifications for 32-bit NWFS are:
Maximum file size: 4 GB
Maximum volume size: 1 TB
Maximum files per volume: 2 million when using a single name space
Maximum files per server: 16 million
Maximum directory entries: 16 million
Maximum volumes per server: 64
Maximum volumes per partition: 8
Maximum open files per server: 100,000
Maximum directory tree depth: 100 levels
Characters used: ASCII double-byte
Maximum extended attributes: 512
Maximum data streams: 10
Support for different name spaces: Microsoft Windows Long names (a.k.a. OS/2 namespace), Unix, Apple Macintosh
Support for restoring deleted files (salvage)
Support for journaling (Novell Transaction Tracking System, a.k.a. TTS)
Support for block suballocation, starting in NetWare 4.x
For larger files the file system utilized a performance feature named Turbo FAT. Transparent file compression was also supported, although this had a significant impact on the performance of file serving. Every name space requires its own separate directory entry for each file. While the maximum number of directory entries is 16,000,000, two resident name spaces would reduce the usable maximum number of directory entries to 8,000,000, and three to 5,333,333. 16-bit NWFS could handle volumes of up to 256 MB. However, its only name-space support was a dedicated API to handle Macintosh clients. See a
https://en.wikipedia.org/wiki/Terminal%20aerodrome%20forecast
In meteorology and aviation, terminal aerodrome forecast (TAF) is a format for reporting weather forecast information, particularly as it relates to aviation. TAFs are issued at least four times a day, every six hours, for major civil airfields: 0000, 0600, 1200 and 1800 UTC, and generally apply to a 24- or 30-hour period, and an area within approximately (or in Canada) from the center of an airport runway complex. TAFs are issued every three hours for military airfields and some civil airfields and cover a period ranging from 3 hours to 30 hours. TAFs complement and use similar encoding to METAR reports. They are produced by a human forecaster based on the ground. For this reason, there are considerably fewer TAF locations than there are airports for which METARs are available. TAFs can be more accurate than numerical weather forecasts, since they take into account local, small-scale, geographic effects. In the United States, the weather forecasters responsible for the TAFs in their respective areas are located within one of the 122 Weather Forecast Offices operated by the United States' National Weather Service. In contrast, a trend type forecast (TTF), which is similar to a TAF, is always produced by a person on-site where the TTF applies. In the United Kingdom, most TAFs for military airfields are produced locally, however TAFs for civil airfields are produced at the Met Office headquarters in Exeter. The United States Air Force employs active duty enlisted personnel as TAF writers. Air Force weather personnel are responsible for providing weather support for all Air Force and Army operations. Different countries use different change criteria for their weather groups. In the United Kingdom, TAFs for military airfields use colour states as one of the change criteria. Civil airfields in the UK use slightly different criteria. Code This TAF example of a 30-hour TAF was released on November 5, 2008, at 1730 UTC: TAF KXYZ 051730Z 0518/0624 31008KT 3SM -SHRA
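Reading the fields of the example line that survives above (KXYZ is a placeholder station identifier):
TAF KXYZ: a TAF for station KXYZ.
051730Z: issued on the 5th of the month at 1730 UTC.
0518/0624: valid from the 5th at 1800 UTC through the 6th at 2400 UTC, a 30-hour period.
31008KT: surface wind from 310° at 8 knots.
3SM: prevailing visibility 3 statute miles.
-SHRA: light rain showers.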
https://en.wikipedia.org/wiki/Hans%20Reichenbach
Hans Reichenbach (September 26, 1891 – April 9, 1953) was a leading philosopher of science, educator, and proponent of logical empiricism, influential in the philosophy of science and in science education. He founded the Gesellschaft für empirische Philosophie (Society for Empirical Philosophy) in Berlin in 1928, also known as the "Berlin Circle". Carl Gustav Hempel, Richard von Mises, David Hilbert and Kurt Grelling all became members of the Berlin Circle. In 1930, Reichenbach and Rudolf Carnap became editors of the journal Erkenntnis. He also made lasting contributions to the study of empiricism based on a theory of probability; the logic and philosophy of mathematics; space, time, and relativity theory; the analysis of probabilistic reasoning; and quantum mechanics. In 1951, he authored The Rise of Scientific Philosophy, his most popular book. Early life Hans was the second son of a Jewish merchant, Bruno Reichenbach, who had converted to Protestantism. He married Selma Menzel, a schoolmistress who came from a long line of Protestant professionals going back to the Reformation. His elder brother Bernard played a significant role in the left communist movement. His younger brother, Herman, was a music educator. After completing secondary school in Hamburg, Hans Reichenbach studied civil engineering at the Hochschule für Technik Stuttgart, and physics, mathematics and philosophy at various universities, including Berlin, Erlangen, Göttingen and Munich. Among his teachers were Ernst Cassirer, David Hilbert, Max Planck, Max Born and Arnold Sommerfeld. Political activism Reichenbach was active in youth movements and student organizations. He joined the Freistudentenschaft in 1910. He attended the founding conference of the Freideutsche Jugend umbrella group at Hoher Meissner in 1913. He published articles about university reform, the freedom of research, and against anti-Semitic infiltration of student organizations. His older brother Be
https://en.wikipedia.org/wiki/C.mmp
The C.mmp was an early multiple instruction, multiple data (MIMD) multiprocessor system developed at Carnegie Mellon University (CMU) by William Wulf (1971). The notation C.mmp came from the PMS notation of Gordon Bell and Allen Newell, where a central processing unit (CPU) was designated as C, a variant was noted by the dot notation, and mmp stood for Multi-Mini-Processor. , the machine is on display at CMU, in Wean Hall, on the ninth floor. Structure Sixteen Digital Equipment Corporation PDP-11 minicomputers were used as the processing elements, named Compute Modules (CMs) in the system. Each CM had a local memory of 8K and a local set of peripheral devices. One of the challenges was that a device was only available through its unique connected processor, so the input/output (I/O) system (designed by Roy Levien) hid the connectivity of the devices and routed the requests to the hosting processor. If a processor went down, the devices connected to its Unibus became unavailable, which became a problem in overall system reliability. Processor 0 (the boot processor) had the disk drives attached. Each of the Compute Modules shared these communication pathways: An Interprocessor bus – used to distribute system-wide clock, interrupt, and process control messaging among the CMs A 16x16 crossbar switch – used to connect the 16 CMs on one side and 16 banks of shared memory on the other. If all 16 processors were accessing different banks of memory, the memory accesses would all be concurrent. If two or more processors were trying to access the same bank of memory, one of them would be granted access on one cycle and the remainder would be negotiated on subsequent memory cycles. Since the PDP-11 had a logical address space of 16-bits, another address translation unit was added to expand the address space to 25 bits for the shared memory space. The Unibus architecture provided 18 bits of physical address, and the two high-order bits were used to select one of four reloc
https://en.wikipedia.org/wiki/Conventionally%20grown
Conventionally grown is an agriculture term referring to a method of growing edible plants (such as fruit and vegetables) and other products. It is the opposite of organic growing methods, which attempt to produce without synthetic chemicals (fertilizers, pesticides, antibiotics, hormones) or genetically modified organisms. Conventionally grown products, meanwhile, often use fertilizers and pesticides which allow for higher yield, out-of-season growth, greater resistance, greater longevity and a generally greater mass. Conventionally grown fruit: the PLU code consists of 4 numbers (e.g. 4012). Organically grown fruit: the PLU code consists of 5 numbers and begins with 9 (e.g. 94012). Genetically engineered fruit: the PLU code consists of 5 numbers and begins with 8 (e.g. 84012). Food science
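The PLU prefix rules above are simple enough to express directly; a small sketch (the helper name is invented, and the codes are the article's own examples):

```python
# Classify produce by PLU code using the prefix rules described above.
def plu_category(code: str) -> str:
    if len(code) == 4:
        return "conventionally grown"
    if len(code) == 5 and code.startswith("9"):
        return "organically grown"
    if len(code) == 5 and code.startswith("8"):
        return "genetically engineered"
    return "unknown"

for code in ("4012", "94012", "84012"):
    print(code, "->", plu_category(code))
```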
https://en.wikipedia.org/wiki/Subjunctive%20possibility
Subjunctive possibility (also called alethic possibility) is a form of modality studied in modal logic. Subjunctive possibilities are the sorts of possibilities considered when conceiving counterfactual situations; subjunctive modalities are modalities that bear on whether a statement might have been or could be true—such as might, could, must, possibly, necessarily, contingently, essentially, accidentally, and so on. Subjunctive possibilities include logical possibility, metaphysical possibility, nomological possibility, and temporal possibility. Subjunctive possibility and other modalities Subjunctive possibility is contrasted with (among other things) epistemic possibility (which deals with how the world may be, for all we know) and deontic possibility (which deals with how the world ought to be). Epistemic possibility The contrast with epistemic possibility is especially important to draw, since in ordinary language the same phrases ("it's possible," "it can't be", "it must be") are often used to express either sort of possibility. But they are not the same. We do not know whether Goldbach's conjecture is true or not (no-one has come up with a proof yet); so it is (epistemically) possible that it is true and it is (epistemically) possible that it is false. But if it is, in fact, provably true (as it may be, for all we know), then it would have to be (subjunctively) necessarily true; what being provable means is that it would not be (logically) possible for it to be false. Similarly, it might not be at all (epistemically) possible that it is raining outside—we might know beyond a shadow of a doubt that it is not—but that would hardly mean that it is (subjunctively) impossible for it to rain outside. This point is also made by Norman Swartz and Raymond Bradley. Deontic possibility There is some overlap in language between subjunctive possibilities and deontic possibilities: for example, we sometimes use the statement "You can/cannot do that" to express (i) w
https://en.wikipedia.org/wiki/Injective%20sheaf
In mathematics, injective sheaves of abelian groups are used to construct the resolutions needed to define sheaf cohomology (and other derived functors, such as sheaf Ext). There is a further group of related concepts applied to sheaves: flabby (flasque in French), fine, soft (mou in French), acyclic. In the history of the subject they were introduced before the 1957 "Tohoku paper" of Alexander Grothendieck, which showed that the abelian category notion of injective object sufficed to found the theory. The other classes of sheaves are historically older notions. The abstract framework for defining cohomology and derived functors does not need them. However, in most concrete situations, resolutions by acyclic sheaves are often easier to construct. Acyclic sheaves therefore serve for computational purposes, for example in the Leray spectral sequence. Injective sheaves An injective sheaf $J$ is a sheaf that is an injective object of the category of abelian sheaves; in other words, homomorphisms from $A$ to $J$ can always be extended to any sheaf $B$ containing $A$. The category of abelian sheaves has enough injective objects: this means that any sheaf is a subsheaf of an injective sheaf. This result of Grothendieck follows from the existence of a generator of the category (it can be written down explicitly, and is related to the subobject classifier). This is enough to show that right derived functors of any left exact functor exist and are unique up to canonical isomorphism. For technical purposes, injective sheaves are usually superior to the other classes of sheaves mentioned above: they can do almost anything the other classes can do, and their theory is simpler and more general. In fact, injective sheaves are flabby (flasque), soft, and acyclic. However, there are situations where the other classes of sheaves occur naturally, and this is especially true in concrete computational situations. The dual concept, projective sheaves, is not used much, because in a general category
https://en.wikipedia.org/wiki/Villarceau%20circles
In geometry, Villarceau circles () are a pair of circles produced by cutting a torus obliquely through the center at a special angle. Given an arbitrary point on a torus, four circles can be drawn through it. One is in a plane parallel to the equatorial plane of the torus and another perpendicular to that plane (these are analogous to lines of latitude and longitude on the Earth). The other two are Villarceau circles. They are obtained as the intersection of the torus with a plane that passes through the center of the torus and touches it tangentially at two antipodal points. If one considers all these planes, one obtains two families of circles on the torus. Each of these families consists of disjoint circles that cover each point of the torus exactly once and thus forms a 1-dimensional foliation of the torus. The Villarceau circles are named after the French astronomer and mathematician Yvon Villarceau (1813–1883) who wrote about them in 1848. Mannheim (1903) showed that the Villarceau circles meet all of the parallel circular cross-sections of the torus at the same angle, a result that he said a Colonel Schoelcher had presented at a congress in 1891. Example Consider a horizontal torus in xyz space, centered at the origin and with major radius 5 and minor radius 3. That means that the torus is the locus of some vertical circles of radius three whose centers are on a circle of radius five in the horizontal xy plane. Points on this torus satisfy the equation $\left(\sqrt{x^2 + y^2} - 5\right)^2 + z^2 = 3^2$. Slicing with the z = 0 plane produces two concentric circles, $x^2 + y^2 = 2^2$ and $x^2 + y^2 = 8^2$, the inner and outer equator. Slicing with the x = 0 plane produces two side-by-side circles, $(y - 5)^2 + z^2 = 3^2$ and $(y + 5)^2 + z^2 = 3^2$. Two example Villarceau circles can be produced by slicing with the plane 3x = 4z. One is centered at (0, +3, 0) and the other at (0, −3, 0); both have radius five. They can be written in parametric form as $(x, y, z) = (4\cos\vartheta,\ 3 + 5\sin\vartheta,\ 3\cos\vartheta)$ and $(x, y, z) = (4\cos\vartheta,\ -3 + 5\sin\vartheta,\ 3\cos\vartheta)$. The slicing plane is chosen to be tangent to the torus at two poi
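A short numerical check, as a NumPy sketch, that the two circles parametrized above (one consistent choice of parametrization, verified against the torus equation) really lie on the torus:

```python
import numpy as np

# Verify both circles satisfy (sqrt(x^2 + y^2) - 5)^2 + z^2 = 3^2.
t = np.linspace(0.0, 2.0 * np.pi, 1000)

for sign in (+1.0, -1.0):            # one circle per center (0, +/-3, 0)
    x = 4.0 * np.cos(t)
    y = sign * 3.0 + 5.0 * np.sin(t)
    z = 3.0 * np.cos(t)
    residual = (np.sqrt(x**2 + y**2) - 5.0) ** 2 + z**2 - 9.0
    print(np.max(np.abs(residual)))  # ~1e-15: the points lie on the torus
```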
https://en.wikipedia.org/wiki/Mother%20Nature
Mother Nature (sometimes known as Mother Earth or the Earth Mother) is a personification of nature that focuses on the life-giving and nurturing aspects of nature by embodying it, in the form of the mother. European tradition history The word "nature" comes from the Latin word, "natura", meaning birth or character [see nature (philosophy)]. In English, its first recorded use (in the sense of the entirety of the phenomena of the world) was in 1266. "Natura" and the personification of Mother Nature were widely popular in the Middle Ages. As a concept, seated between the properly divine and the human, it can be traced to Ancient Greece, though Earth (or "Eorthe" in the Old English period) may have been personified as a goddess. The Norse also had a goddess called Jörð (Jord, or Erth). The earliest written usage is in Mycenaean Greek: Ma-ka (transliterated as ma-ga), "Mother Gaia", written in Linear B syllabic script (13th or 12th century BC). In Greece, the pre-Socratic philosophers had "invented" nature when they abstracted the entirety of phenomena of the world as singular: physis, and this was inherited by Aristotle. Later medieval Christian thinkers did not see nature as inclusive of everything, but thought that she had been created by God; her place lay on earth, below the unchanging heavens and moon. Nature lay somewhere in the center, with agents above her (angels), and below her (demons and hell). For the medieval mind she was only a personification, not a goddess. Greek myth In Greek mythology, Persephone, daughter of Demeter (goddess of the harvest), was abducted by Hades (god of the dead), and taken to the underworld as his queen. Demeter was so distraught that no crops would grow and the "entire human race [would] have perished of cruel, biting hunger if Zeus had not been concerned" (Larousse 152). Zeus forced Hades to return Persephone to her mother, but while in the underworld, Persephone had eaten pomegranate seeds, the food of the dead and thus,
https://en.wikipedia.org/wiki/Fractional-order%20integrator
A fractional-order integrator or just simply fractional integrator is an integrator device that calculates the fractional-order integral or derivative (usually called a differintegral) of an input. The order of differentiation or integration, $q$, is a real or complex parameter. The fractional integrator is useful in fractional-order control, where the history of the system under control is important to the control system output. Overview The differintegral function, ${}_a D_t^q f(t)$, includes the integer-order differentiation and integration functions, and allows a continuous range of functions around them. The differintegral parameters are $a$, $t$, and $q$. The parameters $a$ and $t$ describe the range over which to compute the result. The differintegral parameter $q$ may be any real number or complex number. If $q$ is greater than zero, the differintegral computes a derivative. If $q$ is less than zero, the differintegral computes an integral. The integer-order integration can be computed as a Riemann–Liouville differintegral, where the weight of each element in the sum is the constant unit value 1, which is equivalent to the Riemann sum. To compute an integer-order derivative, the weights in the summation would be zero, with the exception of the most recent data points, where (in the case of the first unit derivative) the weight of the data point at $t - 1$ is −1 and the weight of the data point at $t$ is 1. The sum of the points in the input function using these weights results in the difference of the most recent data points. These weights are computed using ratios of the Gamma function incorporating the number of data points in the range $[a,t]$, and the parameter $q$. Digital devices Digital devices have the advantage of being versatile, and are not susceptible to unexpected output variation due to heat or noise. The discrete nature of a computer, however, does not allow for all of history to be computed. Some finite range $[a,t]$ must exist. Therefore, the number of data points that can be stored in memory ($N$) det
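A minimal sketch of such a discrete differintegral using the Grünwald–Letnikov weight recurrence; the function name and parameters are invented, and the weights are the Gamma-function ratios mentioned above, computed here by recurrence:

```python
# Discrete Grunwald-Letnikov differintegral of order q over [a, t] with
# n steps. q > 0 differentiates, q < 0 integrates.
def gl_differintegral(f, a, t, q, n=1000):
    h = (t - a) / n
    # Weight recurrence w_0 = 1, w_k = w_{k-1} * (k - 1 - q) / k,
    # equivalent to ratios of Gamma functions. For q = -1 every weight
    # is 1 (the Riemann sum); for q = 1 the weights reduce to 1 and -1
    # on the two most recent points (the first difference).
    w, total = 1.0, f(t)
    for k in range(1, n + 1):
        w *= (k - 1 - q) / k
        total += w * f(t - k * h)
    return total * h ** (-q)

# With q = -1 this approaches the ordinary integral: for f(x) = x on
# [0, 1] the exact value is 0.5.
print(gl_differintegral(lambda x: x, 0.0, 1.0, -1.0))
```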
https://en.wikipedia.org/wiki/Menuconfig
make menuconfig is one of five similar tools that can configure Linux source, a necessary early step needed to compile the source code. make menuconfig, with a menu-driven user interface, allows the user to choose the features of Linux (and other options) that will be compiled. It is normally invoked using the command make menuconfig; menuconfig is a target in the Linux Makefile. History make menuconfig was not in the first version of Linux. The predecessor tool is a question-and-answer-based utility (make config, make oldconfig). A third tool for Linux configuration is make xconfig, which requires Qt. There is also make gconfig, which uses GTK+, and make nconfig, which is similar to make menuconfig. All these tools use the Kconfig language internally. Kconfig is also used in other projects, such as Das U-Boot, a bootloader for embedded devices, Buildroot, a tool for generating embedded Linux systems, and BusyBox, a single-executable shell utility toolbox for embedded systems. Advantages over earlier versions Despite being a simple design, make menuconfig offers considerable advantages over the question-and-answer-based configuration tool make oldconfig, the most notable being a basic search system and the ability to load and save files with filenames different from ".config". make menuconfig gives the user the ability to navigate forwards or backwards directly between features, rather than using make config, which requires pressing the Enter key to step linearly through the configuration to reach a specific feature. If the user is satisfied with a previous .config file, using make oldconfig uses this previous file to answer all questions that it can, only interactively presenting the new features. This is intended for a version upgrade, but may be appropriate at other times. make menuconfig is a light load on system resources, unlike make xconfig (which uses Qt as of version 2.6.31.1, formerly Tk) or make gconfig, which utilizes GTK+. It's possible to ignore most of the features with make config,
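For reference, a minimal sketch of what the Kconfig language that all of these front ends read looks like; the symbol names below are invented for illustration, not real kernel options:

```
# Sketch of Kconfig syntax; EXAMPLE_* symbols are hypothetical.
menu "Example subsystem"

config EXAMPLE_FEATURE
	bool "Enable the example feature"
	default y
	help
	  Help text shown by menuconfig and the other front ends.

config EXAMPLE_BUFFER_SIZE
	int "Buffer size"
	depends on EXAMPLE_FEATURE
	default 4096

endmenu
```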
https://en.wikipedia.org/wiki/Hex%20dump
In computing, a hex dump is a textual hexadecimal view (on screen or paper) of (often, but not necessarily binary) computer data, from memory or from a computer file or storage device. Looking at a hex dump of data is usually done in the context of either debugging, reverse engineering or digital forensics. In a hex dump, each byte (8 bits) is represented as a two-digit hexadecimal number. Hex dumps are commonly organized into rows of 8 or 16 bytes, sometimes separated by whitespaces. Some hex dumps have the hexadecimal memory address at the beginning. Some common names for this program function are hexdump, hd, od, xxd and simply dump or even D. Samples A sample text file:
0123456789ABCDEF
/* ********************************************** */
	Table with TABs (09)
	1		2		3
	3.14	6.28	9.42
as displayed by Unix hexdump:
0000000 30 31 32 33 34 35 36 37 38 39 41 42 43 44 45 46
0000010 0a 2f 2a 20 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a
0000020 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a
*
0000040 2a 2a 20 2a 2f 0a 09 54 61 62 6c 65 20 77 69 74
0000050 68 20 54 41 42 73 20 28 30 39 29 0a 09 31 09 09
0000060 32 09 09 33 0a 09 33 2e 31 34 09 36 2e 32 38 09
0000070 39 2e 34 32 0a
0000075
The leftmost column is the hexadecimal displacement (or address) for the values of the following columns. Each row displays 16 bytes, with the exception of the row containing a single *. The * is used to indicate multiple occurrences of the same display were omitted. The last line displays the number of bytes taken from the input. An additional column shows the corresponding ASCII character translation with hexdump -C or hd:
00000000 30 31 32 33 34 35 36 37 38 39 41 42 43 44 45 46 |0123456789ABCDEF|
00000010 0a 2f 2a 20 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a |./* ************|
00000020 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a |****************|
*
00000040 2a 2a 20 2a 2f 0a 09 54 61 62 6c 65 20 77 69 74 |** */..Table wit|
00000050 68 2
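The row layout described above is easy to reproduce; here is a minimal Python sketch of the offset / hex / ASCII format (for brevity it omits the *-style squeezing of repeated rows):

```python
# Minimal 16-bytes-per-row hex dump: offset, hex bytes, ASCII column.
def hex_dump(data: bytes) -> str:
    lines = []
    for offset in range(0, len(data), 16):
        chunk = data[offset:offset + 16]
        hex_part = " ".join(f"{b:02x}" for b in chunk)
        # Non-printable bytes are shown as "." in the ASCII column.
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{offset:08x}  {hex_part:<47}  |{text}|")
    return "\n".join(lines)

print(hex_dump(b"0123456789ABCDEF\n/* ****"))
```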
https://en.wikipedia.org/wiki/Fractional-order%20control
Fractional-order control (FOC) is a field of control theory that uses the fractional-order integrator as part of the control system design toolkit. The use of fractional calculus (FC) can improve and generalize well-established control methods and strategies. The fundamental advantage of FOC is that the fractional-order integrator weights history using a function that decays with a power-law tail. The effect is that the entire history of the system contributes to each iteration of the control algorithm. This creates a 'distribution of time constants,' the upshot of which is that there is no particular time constant, or resonance frequency, for the system. In fact, the fractional integral operator $s^{-\lambda}$ is different from any integer-order rational transfer function, in the sense that it is a non-local operator that possesses an infinite memory and takes into account the whole history of its input signal. Fractional-order control shows promise in many controlled environments that suffer from the classical problems of overshoot and resonance, as well as time-diffuse applications such as thermal dissipation and chemical mixing. Fractional-order control has also been demonstrated to be capable of suppressing chaotic behaviors in mathematical models of, for example, muscular blood vessels. Initiated in the 1980s by Prof. Oustaloup's group, the CRONE approach is one of the most developed control-system design methodologies that uses fractional-order operator properties. See also Differintegral Fractional calculus Fractional-order system External links Dr. YangQuan Chen's latest homepage for the applied fractional calculus (AFC) Dr. YangQuan Chen's page about fractional calculus on Google Sites References Control theory Cybernetics
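The power-law memory the text refers to is visible in the standard Riemann–Liouville fractional integral, stated here for reference:

```latex
% Riemann-Liouville fractional integral of order \lambda > 0: each past
% value f(\tau) is weighted by a power-law kernel in the elapsed time,
% and its Laplace transform is the operator s^{-\lambda}.
I^{\lambda} f(t) = \frac{1}{\Gamma(\lambda)} \int_{0}^{t}
    (t - \tau)^{\lambda - 1}\, f(\tau)\, d\tau ,
\qquad
\mathcal{L}\{ I^{\lambda} f \}(s) = s^{-\lambda} F(s).
```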
https://en.wikipedia.org/wiki/WRDC
WRDC (channel 28) is a television station licensed to Durham, North Carolina, United States, serving the Research Triangle area as an affiliate of MyNetworkTV. It is owned by Sinclair Broadcast Group alongside Raleigh-licensed CW affiliate WLFL (channel 22). Both stations share studios in the Highwoods Office Park, just outside downtown Raleigh, while WRDC's transmitter is located in Auburn, North Carolina. Channel 28 is the third-oldest television station in the Triangle and was the market's NBC affiliate for its first 27 years of operation. It was perennially the third-rated station in the market and did not produce local newscasts for significant portions of its tenure with NBC, which contributed to the network moving to another station. Prior use of channel 28 in Raleigh Channel 28 in Raleigh was initially occupied by WNAO-TV, the first television station in the Raleigh–Durham market and North Carolina's first UHF station. Owned by the Sir Walter Television Company, WNAO-TV broadcast from July 12, 1953, to December 31, 1957, primarily as a CBS affiliate with secondary affiliations with other networks. The station was co-owned with WNAO radio (850 AM and 96.1 FM), which Sir Walter had bought from The News & Observer newspaper after obtaining the television construction permit. After the Raleigh–Durham market received two VHF television stations in 1954 and 1956 (WTVD, channel 11, and WRAL-TV, channel 5, respectively), WNAO-TV found the going increasingly difficult, as did many early UHF stations. The station signed off December 31, 1957, and its owner entered into a joint venture with another dark UHF outlet that was successful in obtaining channel 8 in High Point. History WRDU-TV/Triangle Telecasters In 1966, a major overhaul of the UHF allocation table moved the market's channel 28 allotment from Raleigh to Durham. On November 18 of that year, Triangle Telecasters, Inc., a group led by law professor Robinson O. Everett, applied to the Federal Communicatio
https://en.wikipedia.org/wiki/Euclidean%20tilings%20by%20convex%20regular%20polygons
Euclidean plane tilings by convex regular polygons have been widely used since antiquity. The first systematic mathematical treatment was that of Kepler in his Harmonices Mundi (Latin: The Harmony of the World, 1619). Notation of Euclidean tilings Euclidean tilings are usually named using Cundy & Rollett's notation. This notation represents (i) the number of vertices, (ii) the number of polygons around each vertex (arranged clockwise) and (iii) the number of sides to each of those polygons. For example: 3⁶; 3⁶; 3⁴.6 tells us there are 3 vertices with 2 different vertex types, so this tiling would be classed as a '3-uniform (2-vertex types)' tiling. Broken down, 3⁶; 3⁶ (both of different transitivity class), or (3⁶)², tells us that there are 2 vertices (denoted by the superscript 2), each with 6 equilateral 3-sided polygons (triangles). With a final vertex 3⁴.6, 4 more contiguous equilateral triangles and a single regular hexagon. However, this notation has two main problems, related to ambiguous conformation and to uniqueness. First, when it comes to k-uniform tilings, the notation does not explain the relationships between the vertices. This makes it impossible to generate a covered plane given the notation alone. And second, some tessellations share the same nomenclature even though they are merely very similar, with the relative positions of the hexagons differing. Therefore, the second problem is that this nomenclature is not unique for each tessellation. In order to solve those problems, GomJau-Hogg's notation is a slightly modified version of the research and notation presented in 2012, about the generation and nomenclature of tessellations and double-layer grids. Antwerp v3.0, a free online application, allows for the infinite generation of regular polygon tilings through a set of shape placement stages and iterative rotation and reflection operations, obtained directly from the GomJau-Hogg's notation. Regular tilings Following Grünbaum and Sheph
https://en.wikipedia.org/wiki/Vision%20mixer
A vision mixer is a device used to select between several different live video sources and, in some cases, to composite live video sources together to create visual effects. In most of the world, both the equipment and its operator are called a vision mixer or video mixer; however, in the United States, the equipment is called a video switcher, production switcher or video production switcher, and its operator is known as a technical director (TD). The role of the vision mixer for video is similar to what a mixing console does for audio. Typically a vision mixer would be found in a video production environment such as a production control room of a television studio, production truck or post-production facility. Capabilities and usage Besides hard cuts (switching directly between two input signals), mixers can also generate a variety of transitions, from simple dissolves to pattern wipes. Additionally, most vision mixers can perform keying operations (called mattes in this context) and generate color signals. Vision mixers may include digital video effects (DVE) and still store functionality. Most vision mixers are targeted at the professional market, with newer analog models having component video connections and digital ones using serial digital interface (SDI) or SMPTE 2110. They are used in live television, such as outside broadcasting, with video tape recording (VTR) and video servers for linear video editing, even though the use of vision mixers in video editing has been largely supplanted by computer-based non-linear editing systems. Older professional mixers worked with composite video, analog signal inputs. There were a number of consumer video switchers with composite video or S-Video. These are often used for VJing, presentations, and small multi-camera productions. Operation The most basic part of a vision mixer is a bus, which is a signal path consisting of multiple video inputs that feeds a single output. On the panel, a bus is represented by a
https://en.wikipedia.org/wiki/AES47
AES47 is a standard which describes a method for transporting AES3 professional digital audio streams over Asynchronous Transfer Mode (ATM) networks. The Audio Engineering Society (AES) published AES47 in 2002. The method described by AES47 is also published by the International Electrotechnical Commission as IEC 62365. Introduction Many professional audio systems are now combined with telecommunication and IT technologies to provide new functionality, flexibility and connectivity over both local and wide area networks. AES47 was developed to provide a standardised method of transporting the standard digital audio per AES3 over telecommunications networks that provide a quality of service required by many professional low-latency live audio uses. AES47 may be used directly between specialist audio devices or in combination with telecommunication and computer equipment with suitable network interfaces. In both cases, AES47 uses the same physical structured cabling used as standard by telecommunications networks. Common network protocols like Ethernet use large packet sizes, which produce a larger minimum latency. Asynchronous transfer mode divides data into 48-byte cells which provide lower latency. History The original work was carried out at the British Broadcasting Corporation’s R&D department and published as "White Paper 074", which established that this approach provides the necessary performance for professional media production. AES47 was originally published in 2002 and was republished with minor revisions in February 2006. Amendment 1 to AES47 was published in February 2009, adding code points in the ATM Adaptation Layer Parameters Information Element to signal that the time to which each audio sample relates can be identified as specified in AES53. The change in thinking from traditional ATM network design is not to necessarily use ATM to pass IP traffic (apart from management traffic) but to use AES47 in parallel with standard Ethernet structures to d
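A back-of-the-envelope calculation makes the latency claim concrete. The audio format (48 kHz, stereo, 24-bit) and the 1500-byte Ethernet payload are illustrative assumptions, not figures taken from the standard:

```python
# A sender must wait for a packet's payload to fill before transmitting it,
# so payload size sets a floor on latency. Assumed format: 48 kHz sample
# rate, 2 channels, 3 bytes (24 bits) per sample.
AUDIO_BYTES_PER_SECOND = 48_000 * 2 * 3  # = 288,000 bytes per second

def fill_delay_ms(payload_bytes: int) -> float:
    """Time (ms) to accumulate one packet's payload at the assumed audio rate."""
    return payload_bytes / AUDIO_BYTES_PER_SECOND * 1000.0

print(f"ATM cell (48-byte payload): {fill_delay_ms(48):.3f} ms")       # ~0.167 ms
print(f"Full Ethernet frame (1500 bytes): {fill_delay_ms(1500):.3f} ms")  # ~5.208 ms
```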
https://en.wikipedia.org/wiki/Head%20over%20Heels%20%28video%20game%29
Head Over Heels is an action-adventure video game released by Ocean Software in 1987 for several 8-bit home computers. It uses an isometric engine that is similar to the Filmation technique first developed by Ultimate Play the Game. Head Over Heels is the second isometric game by Jon Ritman and Bernie Drummond, after their earlier Batman computer game was released in 1986. The game received very favourable reviews and was described as an all-time classic. In 2003, Retrospec released a remake of Head Over Heels for Microsoft Windows, Mac OS X, BeOS, and Linux. In 2019, Piko Interactive released a port of the Atari ST version of Head Over Heels for the Atari Jaguar. A Nintendo Switch port was released on October 28, 2021. Gameplay The player controls two characters instead of just one, each with different abilities. Head can jump higher than Heels, control himself in the air, and fire doughnuts from a hooter to paralyze enemies. Heels can run twice as fast as Head, climb certain staircases that Head cannot, and carry objects around a room in a bag. These abilities become complementary when the player combines them after completing roughly a sixth of the game. Compared to its predecessors, the game offers unique and revolutionary gameplay, complex puzzles, and more than 300 rooms to explore. Drummond contributed some famously surreal touches, including robots (controlled by push switches) that bore a remarkable resemblance to the head of Charles III (then Prince of Wales) on the body of a Dalek. Other surreal touches include enemies with the heads of elephants and staircases made of dogs that teleport themselves away as soon as Head enters the room. Plot Headus Mouthion (Head) and Footus Underium (Heels) are two spies from the planet Freedom. They are sent to Blacktooth to liberate the enslaved planets of Penitentiary, Safari, Book World and Egyptus, and then to defeat the Emperor to prevent further planets from falling under his rule. Captured and separated, the spies are pla
https://en.wikipedia.org/wiki/Cuisenaire%20rods
Cuisenaire rods are mathematics learning aids for students that provide an interactive, hands-on way to explore mathematics and learn mathematical concepts, such as the four basic arithmetical operations, working with fractions and finding divisors. In the early 1950s, Caleb Gattegno popularised this set of coloured number rods created by Georges Cuisenaire (1891–1975), a Belgian primary school teacher, who called the rods réglettes. According to Gattegno, "Georges Cuisenaire showed in the early 1950s that students who had been taught traditionally, and were rated 'weak', took huge strides when they shifted to using the material. They became 'very good' at traditional arithmetic when they were allowed to manipulate the rods." History The educationalists Maria Montessori and Friedrich Fröbel had used rods to represent numbers, but it was Georges Cuisenaire who introduced the rods that were to be used across the world from the 1950s onwards. In 1952 he published Les nombres en couleurs, Numbers in Color, which outlined their use. Cuisenaire, a violin player, taught music as well as arithmetic in the primary school in Thuin. He wondered why children found it easy and enjoyable to pick up a tune and yet found mathematics neither easy nor enjoyable. These comparisons with music and its representation led Cuisenaire to experiment in 1931 with a set of ten rods sawn out of wood, with lengths from 1 cm to 10 cm. He painted each length of rod a different colour and began to use these in his teaching of arithmetic. The invention remained almost unknown outside the village of Thuin for about 23 years until, in April 1953, British mathematician and mathematics education specialist Caleb Gattegno was invited to see students using the rods in Thuin. At this point he had already founded the International Commission for the Study and Improvement of Mathematics Education (CIEAEM) and the Association of Teachers of Mathematics, but this marked a turning point in his understanding:
https://en.wikipedia.org/wiki/NOTAR
NOTAR ("no tail rotor") is a helicopter system which avoids the use of a tail rotor. It was developed by McDonnell Douglas Helicopter Systems (through their acquisition of Hughes Helicopters). The system uses a fan inside the tail boom to build a high volume of low-pressure air, which exits through two slots and creates a boundary layer flow of air along the tailboom utilizing the Coandă effect. The boundary layer changes the direction of airflow around the tailboom, creating thrust opposite the motion imparted to the fuselage by the torque effect of the main rotor. Directional yaw control is gained through a vented, rotating drum at the end of the tailboom, called the direct jet thruster. Advocates of NOTAR assert that the system offers quieter and safer operation than a traditional tail rotor. Development The use of directed air to provide anti-torque control had been tested as early as 1945 in the British Cierva W.9. During 1957, a Spanish prototype designed and built by Aerotecnica flew using exhaust gases from the turbine instead of a tail rotor. This model was designated as Aerotecnica AC-14. The Fiat 7005 used a pusher propellor that blew against a cascade of tail vanes at the rear of its fuselage. Development of the NOTAR system dates back to 1975, when engineers at Hughes Helicopters began concept development work. On December 17, 1981, Hughes flew an OH-6A fitted with NOTAR for the first time. The OH-6A helicopter (serial number 65-12917) was supplied by the U.S. Army for Hughes to develop the NOTAR technology and was the second OH-6 built by Hughes for the U.S. Army. A more heavily modified version of the prototype demonstrator first flew in March 1986 (by which time McDonnell Douglas had acquired Hughes Helicopters). The original prototype last flew in June 1986 and is now at the U.S. Army Aviation Museum in Fort Rucker, Alabama. A production model NOTAR 520N (N520NT) was later produced and first flew on May 1, 1990. It collided with an Apache AH-64D
https://en.wikipedia.org/wiki/Index%20calculus%20algorithm
In computational number theory, the index calculus algorithm is a probabilistic algorithm for computing discrete logarithms. Dedicated to the discrete logarithm in (Z/qZ)*, where q is a prime, index calculus leads to a family of algorithms adapted to finite fields and to some families of elliptic curves. The algorithm collects relations among the discrete logarithms of small primes, computes them by a linear algebra procedure and finally expresses the desired discrete logarithm with respect to the discrete logarithms of small primes. Description Roughly speaking, the discrete log problem asks us to find an x such that g^x ≡ h (mod n), where g, h, and the modulus n are given. The algorithm (described in detail below) applies to the group (Z/qZ)*, where q is prime. It requires a factor base as input. This factor base is usually chosen to be the number −1 and the first r primes starting with 2. From the point of view of efficiency, we want this factor base to be small, but in order to solve the discrete log for a large group we require the factor base to be (relatively) large. In practical implementations of the algorithm, those conflicting objectives are compromised one way or another. The algorithm is performed in three stages. The first two stages depend only on the generator g and prime modulus q, and find the discrete logarithms of a factor base of r small primes. The third stage finds the discrete log of the desired number h in terms of the discrete logs of the factor base. The first stage consists of searching for a set of r linearly independent relations between the factor base and powers of the generator g. Each relation contributes one equation to a system of linear equations in r unknowns, namely the discrete logarithms of the r primes in the factor base. This stage is embarrassingly parallel and easy to divide among many computers. The second stage solves the system of linear equations to compute the discrete logs of the factor base. A system of hundreds of thousands or milli
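The relation-collection stage described above can be sketched in a few lines. This toy version (hypothetical function names, a factor base of primes only rather than −1 plus primes, and a tiny prime modulus) deliberately omits the linear-algebra stage and the final expression of the target logarithm:

```python
# Stage 1 of index calculus: find exponents k such that g^k mod q is "smooth",
# i.e. factors completely over the factor base. Each smooth value yields the
# linear relation k = sum(e_i * log_g(p_i)) (mod q - 1).
import random

def trial_factor(n: int, factor_base: list[int]):
    """Exponent vector of n over the factor base, or None if n is not smooth."""
    exponents = [0] * len(factor_base)
    for i, p in enumerate(factor_base):
        while n % p == 0:
            n //= p
            exponents[i] += 1
    return exponents if n == 1 else None

def collect_relations(g: int, q: int, factor_base: list[int], count: int):
    """Collect `count` pairs (k, exponent vector) with g^k mod q smooth."""
    relations = []
    while len(relations) < count:
        k = random.randrange(1, q - 1)
        vec = trial_factor(pow(g, k, q), factor_base)
        if vec is not None:
            relations.append((k, vec))
    return relations

# Toy example: g = 2 generates the multiplicative group mod q = 101.
for k, vec in collect_relations(2, 101, [2, 3, 5, 7], 5):
    print(f"2^{k} mod 101 has factor-base exponents {vec}")
```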
https://en.wikipedia.org/wiki/136%20%28number%29
136 (one hundred [and] thirty-six) is the natural number following 135 and preceding 137. In mathematics 136 is itself a factor of the Eddington number. With a total of 8 divisors, 8 being among them, 136 is a refactorable number. It is a composite number. 136 is a centered triangular number and a centered nonagonal number. The sum of the ninth row of Lozanić's triangle is 136. 136 is a self-descriptive number in base 4, and a repdigit in base 16. In base 10, the sum of the cubes of its digits is 1³ + 3³ + 6³ = 244. The sum of the cubes of the digits of 244 is 2³ + 4³ + 4³ = 136. 136 is a triangular number, because it's the sum of the first 16 positive integers. In the military Force 136 branch of the British organization, the Special Operations Executive (SOE), in the South-East Asian Theatre of World War II USNS Mission Soledad (T-AO-136) was a United States Navy Mission Buenaventura-class fleet oiler during World War II USS Admirable (AM-136) was a United States Navy Admirable-class minesweeper USS Ara (AK-136) was a United States Navy cargo ship during World War II was a United States Navy during World War II USS Botetourt (APA-136) was a United States Navy attack transport during World War II and the Korean War was a United States Navy tanker during World War II was a United States Navy during World War II was a United States Navy heavy cruiser during World War II USS Frederick C. Davis (DE-136) was a United States Navy destroyer escort during World War II was a United States Navy General G. O. Squier-class transport ship during World War II Electronic Attack Squadron 136 (VAQ-136) also known as "The Gauntlets" is a United States Navy attack squadron at Naval Air Station Atsugi, Japan Strike Fighter Squadron 136 (VFA-136) is a United States Navy strike fighter squadron based at Naval Air Station Oceana, Virginia In transportation London Buses route 136 is a Transport for London contracted bus route in London In TV and radio 136 kHz band is the lowest frequency band amateur radio operators are allowed to transmit
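The digit-cube facts can be checked directly (standard arithmetic; the helper name below is arbitrary):

```python
# Iterating the "sum of cubes of the decimal digits" map swaps 136 and 244.

def digit_cube_sum(n: int) -> int:
    """Sum of the cubes of n's decimal digits."""
    return sum(int(d) ** 3 for d in str(n))

print(digit_cube_sum(136))  # 1 + 27 + 216 = 244
print(digit_cube_sum(244))  # 8 + 64 + 64  = 136
```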
https://en.wikipedia.org/wiki/173%20%28number%29
173 (one hundred [and] seventy-three) is the natural number following 172 and preceding 174. In mathematics 173 is: an odd number. a deficient number. an odious number. a balanced prime. an Eisenstein prime with no imaginary part. a Sophie Germain prime. an inconsummate number. the sum of 2 squares: 2² + 13². the sum of three consecutive prime numbers: 53 + 59 + 61. Palindromic number in bases 3 (20102₃) and 9 (212₉). In astronomy 173 Ino is a large dark main belt asteroid 173P/Mueller is a periodic comet in the Solar System Arp 173 (VV 296, KPG 439) is a pair of galaxies in the constellation Boötes In the military 173rd Air Refueling Squadron unit of the Nebraska Air National Guard 173rd Airborne Brigade Combat Team of the United States Army based in Vicenza 173rd Battalion unit of the Canadian Expeditionary Force during World War I 173rd Special Operations Aviation Squadron of the Australian Army K-173 Chelyabinsk Russian submarine was a U.S. Navy Phoenix-class auxiliary ship following World War II was a U.S. Navy during World War II was a U.S. Navy during World War II was a U.S. Navy during World War II was a U.S. Navy yacht during World War I was a U.S. Navy ship during World War II was a U.S. Navy submarine chaser during World War II was a U.S. Navy Porpoise-class submarine during World War II was a U.S. Navy following World War II Vought V-173 (Flying Pancake) was a U.S. Navy experimental test aircraft during World War II In transportation The Georgia Railroad, the world's longest railroad in 1845, ran from Augusta to Marthasville (Atlanta, Georgia) United Airlines Flight 173 en route from Denver to Portland crashed on December 28, 1978 The Velocity 173 was a kit aircraft produced by Velocity Aircraft in the early 1990s. In popular culture The book 173 Hours in Captivity (2000) SCP-173, a fictional statue In other fields 173 is also: The year AD 173 or 173 BC 173 AH is a year in the Islamic calendar that corresponds
https://en.wikipedia.org/wiki/API%20gravity
The American Petroleum Institute gravity, or API gravity, is a measure of how heavy or light a petroleum liquid is compared to water: if its API gravity is greater than 10, it is lighter and floats on water; if less than 10, it is heavier and sinks. API gravity is thus an inverse measure of a petroleum liquid's density relative to that of water (also known as specific gravity). It is used to compare densities of petroleum liquids. For example, if one petroleum liquid is less dense than another, it has a greater API gravity. Although API gravity is mathematically a dimensionless quantity (see the formula below), it is referred to as being in 'degrees'. API gravity is graduated in degrees on a hydrometer instrument. API gravity values of most petroleum liquids fall between 10 and 70 degrees. In 1916, the U.S. National Bureau of Standards accepted the Baumé scale, which had been developed in France in 1768, as the U.S. standard for measuring the specific gravity of liquids less dense than water. Investigation by the U.S. National Academy of Sciences found major errors in salinity and temperature controls that had caused serious variations in published values. Hydrometers in the U.S. had been manufactured and distributed widely with a modulus of 141.5 instead of the Baumé scale modulus of 140. The scale was so firmly established that, by 1921, the remedy implemented by the American Petroleum Institute was to create the API gravity scale, recognizing the scale that was actually being used. API gravity formulas The formula to calculate API gravity from specific gravity (SG) is: API gravity = (141.5 / SG) − 131.5. Conversely, the specific gravity of petroleum liquids can be derived from their API gravity value as SG = 141.5 / (API gravity + 131.5). Thus, a heavy oil with a specific gravity of 1.0 (i.e., with the same density as pure water at 60 °F) has an API gravity of: (141.5 / 1.0) − 131.5 = 10.0 degrees. Using API gravity to calculate barrels of crude oil per metric ton In the oil industry, quantities of crude oil are often measured in metric tons. One can calculate the
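The two conversion formulas translate directly into code, as a minimal sketch (the function names are arbitrary; the 141.5 and 131.5 moduli are those discussed above):

```python
# Standard API gravity conversions, using specific gravity at 60 degrees F.

def api_from_sg(sg: float) -> float:
    """API gravity (degrees) from specific gravity."""
    return 141.5 / sg - 131.5

def sg_from_api(api: float) -> float:
    """Specific gravity from API gravity (degrees)."""
    return 141.5 / (api + 131.5)

print(api_from_sg(1.0))   # heavy oil as dense as water -> 10.0 degrees API
print(sg_from_api(45.0))  # a light crude -> about 0.802
```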
https://en.wikipedia.org/wiki/Radial%20stress
Radial stress is stress toward or away from the central axis of a component. Pressure vessels The walls of pressure vessels generally undergo triaxial loading. For cylindrical pressure vessels, the normal loads on a wall element are longitudinal stress, circumferential (hoop) stress and radial stress. The radial stress for a thick-walled cylinder is equal and opposite to the gauge pressure on the inside surface, and zero on the outside surface. The circumferential stress and longitudinal stresses are usually much larger for pressure vessels, and so for thin-walled instances, radial stress is usually neglected. Formula The radial stress for a thick-walled pipe at a point a distance r from the central axis is given by σ_r = (p_i·r_i² − p_o·r_o²)/(r_o² − r_i²) − (p_i − p_o)·r_i²·r_o²/(r²·(r_o² − r_i²)), where r_i is the inner radius, r_o is the outer radius, p_i is the inner absolute pressure and p_o is the outer absolute pressure. Maximum radial stress (in magnitude) occurs when r = r_i (at the inside surface), where σ_r = −p_i; with the pressures measured as gauge pressures, this is equal and opposite to the inner gauge pressure. References Solid mechanics
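A quick numerical check of the formula and the stated boundary behaviour, as a sketch (the variable names and example geometry are arbitrary):

```python
# Thick-walled (Lame) radial stress: sigma_r should equal -p_i at the inner
# surface and -p_o at the outer surface, matching the text above.

def radial_stress(r: float, r_i: float, r_o: float, p_i: float, p_o: float) -> float:
    """Radial stress at radius r in a thick-walled cylinder."""
    a = (p_i * r_i**2 - p_o * r_o**2) / (r_o**2 - r_i**2)
    b = (p_i - p_o) * r_i**2 * r_o**2 / (r_o**2 - r_i**2)
    return a - b / r**2

r_i, r_o, p_i, p_o = 0.05, 0.08, 10e6, 0.0  # metres and pascals (example values)
print(radial_stress(r_i, r_i, r_o, p_i, p_o))  # about -1.0e7, i.e. -p_i
print(radial_stress(r_o, r_i, r_o, p_i, p_o))  # about 0.0, i.e. -p_o
```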
https://en.wikipedia.org/wiki/Strong%20antichain
In order theory, a subset A of a partially ordered set P is a strong downwards antichain if it is an antichain in which no two distinct elements have a common lower bound in P, that is, for all distinct x, y ∈ A there is no z ∈ P with z ≤ x and z ≤ y. In the case where P is ordered by inclusion, and closed under subsets, but does not contain the empty set, this is simply a family of pairwise disjoint sets. A strong upwards antichain B is a subset of P in which no two distinct elements have a common upper bound in P. Authors will often omit the "upwards" and "downwards" term and merely refer to strong antichains. Unfortunately, there is no common convention as to which version is called a strong antichain. In the context of forcing, authors will sometimes also omit the "strong" term and merely refer to antichains. To resolve ambiguities in this case, the weaker type of antichain is called a weak antichain. If (P, ≤) is a partial order and there exist distinct x, y ∈ P such that {x, y} is a strong antichain, then (P, ≤) cannot be a lattice (or even a meet semilattice), since by definition, every two elements in a lattice (or meet semilattice) must have a common lower bound. Thus lattices have only trivial strong antichains (i.e., strong antichains of cardinality at most 1). References Order theory
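A small computational illustration of the definition, assuming a toy poset ({2, ..., 12} ordered by divisibility; all names below are arbitrary):

```python
# {4, 9} is a strong antichain here (no common lower bound inside the poset,
# since gcd(4, 9) = 1 and 1 is not an element), while {4, 6} is not (2 lies
# below both).
P = range(2, 13)

def leq(a: int, b: int) -> bool:
    """a <= b in the divisibility order iff a divides b."""
    return b % a == 0

def is_strong_antichain(A) -> bool:
    return all(
        not any(leq(z, x) and leq(z, y) for z in P)
        for x in A for y in A if x != y
    )

print(is_strong_antichain({4, 9}))  # True
print(is_strong_antichain({4, 6}))  # False: 2 divides both
```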
https://en.wikipedia.org/wiki/Countable%20chain%20condition
In order theory, a partially ordered set X is said to satisfy the countable chain condition, or to be ccc, if every strong antichain in X is countable. Overview There are really two conditions: the upwards and downwards countable chain conditions. These are not equivalent. The countable chain condition means the downwards countable chain condition; in other words, it is the condition that every family of elements, no two of which have a common lower bound, is countable. This is called the "countable chain condition" rather than the more logical term "countable antichain condition" for historical reasons related to certain chains of open sets in topological spaces and chains in complete Boolean algebras, where chain conditions sometimes happen to be equivalent to antichain conditions. For example, if κ is a cardinal, then in a complete Boolean algebra every antichain has size less than κ if and only if there is no descending κ-sequence of elements, so chain conditions are equivalent to antichain conditions. Partial orders and spaces satisfying the ccc are used in the statement of Martin's axiom. In the theory of forcing, ccc partial orders are used because forcing with any generic set over such an order preserves cardinals and cofinalities. Furthermore, the ccc property is preserved by finite support iterations (see iterated forcing). More generally, if κ is a cardinal then a poset is said to satisfy the κ-chain condition if every antichain has size less than κ. The countable chain condition is the ℵ1-chain condition. Examples and properties in topology A topological space is said to satisfy the countable chain condition, or Suslin's Condition, if the partially ordered set of non-empty open subsets of X satisfies the countable chain condition, i.e. every pairwise disjoint collection of non-empty open subsets of X is countable. The name originates from Suslin's Problem. Every separable topological space is ccc. Furthermore, the product space of at most separable
https://en.wikipedia.org/wiki/Martin%27s%20axiom
In the mathematical field of set theory, Martin's axiom, introduced by Donald A. Martin and Robert M. Solovay, is a statement that is independent of the usual axioms of ZFC set theory. It is implied by the continuum hypothesis, but it is consistent with ZFC and the negation of the continuum hypothesis. Informally, it says that all cardinals less than the cardinality of the continuum, 𝔠, behave roughly like ℵ₀. The intuition behind this can be understood by studying the proof of the Rasiowa–Sikorski lemma. It is a principle that is used to control certain forcing arguments. Statement For any cardinal κ, consider the following statement: MA(κ) For any partial order P satisfying the countable chain condition (hereafter ccc) and any family D of dense subsets of P such that |D| ≤ κ, there is a filter F on P such that F ∩ d is non-empty for every d in D. In this case (for application of ccc), an antichain is a subset A of P such that any two distinct members of A are incompatible (two elements are said to be compatible if there exists a common element below both of them in the partial order). This differs from, for example, the notion of antichain in the context of trees. MA(ℵ₀) is simply true — the Rasiowa–Sikorski lemma. MA(2^ℵ₀) is false: [0, 1] is a separable compact Hausdorff space, and so P, the poset of its non-empty open subsets under inclusion, is ccc. But now consider the following two size-2^ℵ₀ families of dense sets in P: no x ∈ [0, 1] is isolated, and so each x defines the dense subset {S : x ∉ S}; and each r ∈ (0, 1] defines the dense subset {S : diam(S) < r}. The two families combined are also of size 2^ℵ₀, and a filter meeting both must simultaneously avoid all points of [0, 1] while containing sets of arbitrarily small diameter. But a filter F containing sets of arbitrarily small diameter must contain a point in ⋂F by compactness. Martin's axiom is then that MA(κ) holds "as long as possible": Martin's axiom (MA) For every κ < 2^ℵ₀, MA(κ) holds.
https://en.wikipedia.org/wiki/Comparison%20of%20wiki%20software
The following tables compare general and technical information for a number of wiki software packages. General information Systems listed on a light purple background are no longer in active development. Target audience Features 1 Features 2 Installation See also Comparison of wiki farms notetaking software text editors HTML editors word processors wiki hosting services List of wikis wiki software personal information managers text editors outliners for desktops mobile devices web-based Footnotes Comparison Wiki software Text editor comparisons
https://en.wikipedia.org/wiki/Flat%20roof
A flat roof is a roof which is almost level in contrast to the many types of sloped roofs. The slope of a roof is properly known as its pitch and flat roofs have up to approximately 10°. Flat roofs are an ancient form mostly used in arid climates and allow the roof space to be used as a living space or a living roof. Flat roofs, or "low-slope" roofs, are also commonly found on commercial buildings throughout the world. The U.S.-based National Roofing Contractors Association defines a low-slope roof as having a slope of 3 in 12 (1:4) or less. Flat roofs exist all over the world, and each area has its own tradition or preference for materials used. In warmer climates, where there is less rainfall and freezing is unlikely to occur, many flat roofs are simply built of masonry or concrete; this is good at keeping out the heat of the sun, and cheap and easy to build where timber is not readily available. In areas where the roof could become saturated by rain and leak, or where water soaked into the brickwork could freeze to ice and thus lead to 'blowing' (breaking up of the mortar/brickwork/concrete by the expansion of ice as it forms), these roofs are not suitable. Flat roofs are characteristic of the Egyptian, Persian, and Arabian styles of architecture. Around the world, many modern commercial buildings have flat roofs. The roofs are usually clad with a deeper profile roof sheet (usually 40 mm deep or greater). This gives the roof sheet very high water carrying capacity and allows the roof sheets to be more than 100 metres long in some cases. The pitch of this type of roof is usually between 1 and 3 degrees depending upon sheet length. Construction methods Any sheet of material used to cover a flat or low-pitched roof is usually known as a membrane and the primary purpose of these membranes is to waterproof the roof area. Materials that cover flat roofs typically allow the water to run off from a slight inclination or camber into a gutter system. Water from some f
https://en.wikipedia.org/wiki/Medical%20algorithm
A medical algorithm is any computation, formula, statistical survey, nomogram, or look-up table, useful in healthcare. Medical algorithms include decision tree approaches to healthcare treatment (e.g., if symptoms A, B, and C are evident, then use treatment X) and also less clear-cut tools aimed at reducing or defining uncertainty. A medical prescription is also a type of medical algorithm. Scope Medical algorithms are part of a broader field which is usually fit under the aims of medical informatics and medical decision-making. Medical decisions occur in several areas of medical activity including medical test selection, diagnosis, therapy and prognosis, and automatic control of medical equipment. In relation to logic-based and artificial neural network-based clinical decision support systems, which are also computer applications used in the medical decision-making field, algorithms are less complex in architecture, data structure and user interface. Medical algorithms are not necessarily implemented using digital computers. In fact, many of them can be represented on paper, in the form of diagrams, nomographs, etc. Examples A wealth of medical information exists in the form of published medical algorithms. These algorithms range from simple calculations to complex outcome predictions. Most clinicians use only a small subset routinely. Examples of medical algorithms are: Calculators, e.g. an on-line or stand-alone calculator for body mass index (BMI) when stature and body weight are given; Flowcharts and drakon-charts, e.g. a binary decision tree for deciding the etiology of chest pain Look-up tables, e.g. for looking up food energy and nutritional contents of foodstuffs Nomograms, e.g. a moving circular slide to calculate body surface area or drug dosages. A common class of algorithms are embedded in guidelines on the choice of treatments produced by many national, state, financial and local healthcare organisations and provided as knowledge re
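As a minimal example of such a calculator (the BMI formula is the standard weight over height squared in SI units; the function name is arbitrary):

```python
# Body mass index: mass in kilograms divided by the square of stature in metres.

def body_mass_index(weight_kg: float, height_m: float) -> float:
    """BMI in kg/m^2."""
    return weight_kg / height_m ** 2

print(round(body_mass_index(70.0, 1.75), 1))  # 22.9
```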
https://en.wikipedia.org/wiki/Chamfer
A chamfer is a transitional edge between two faces of an object. Sometimes defined as a form of bevel, it is often created at a 45° angle between two adjoining right-angled faces. Chamfers are frequently used in machining, carpentry, furniture, concrete formwork, mirrors, and to facilitate assembly of many mechanical engineering designs. Terminology In machining the word bevel is not used to refer to a chamfer. Machinists use chamfers to "ease" otherwise sharp edges, both for safety and to prevent damage to the edges. A chamfer may sometimes be regarded as a type of bevel, and the terms are often used interchangeably. In furniture-making, a lark's tongue is a chamfer which ends short of a piece in a gradual outward curve, leaving the remainder of the edge as a right angle. Chamfers may be formed in either inside or outside adjoining faces of an object or room. By comparison, a fillet is the rounding-off of an interior corner, and a round (or radius) the rounding of an outside one. Carpentry and furniture Chamfers are used in furniture such as counters and table tops to ease their edges to keep people from bruising themselves in the otherwise sharp corner. When the edges are rounded instead, they are called bullnosed. Special tools such as chamfer mills and chamfer planes are sometimes used. Architecture Chamfers are commonly used in architecture, both for functional and aesthetic reasons. For example, the base of the Taj Mahal is a cube with chamfered corners, thereby creating an octagonal architectural footprint. Its great gate is formed of chamfered base stones and chamfered corbels for a balcony or equivalent cornice towards the roof. Urban planning Many city blocks in Barcelona, Valencia and various other cities in Spain, and street corners (curbs) in Ponce, Puerto Rico, are chamfered. The chamfering was designed as an embellishment and a modernization of urban space in Barcelona's mid-19th century Eixample or Expansion District, where the bui
https://en.wikipedia.org/wiki/Cylinder-head-sector
Cylinder-head-sector (CHS) is an early method for giving addresses to each physical block of data on a hard disk drive. It is a 3D-coordinate system made out of a vertical coordinate head, a horizontal (or radial) coordinate cylinder, and an angular coordinate sector. Head selects a circular surface: a platter in the disk (and one of its two sides). Cylinder is a cylindrical intersection through the stack of platters in a disk, centered around the disk's spindle. Combined, cylinder and head intersect to a circular line, or more precisely: a circular strip of physical data blocks called track. Sector finally selects which data block in this track is to be addressed, as the track is subdivided into several equally-sized portions, each of which is an arc of (360/n) degrees, where n is the number of sectors in the track. CHS addresses were exposed, instead of simple linear addresses (going from 0 to the total block count on disk − 1), because early hard drives didn't come with an embedded disk controller that would hide the physical layout. A separate generic controller card was used, so that the operating system had to know the exact physical "geometry" of the specific drive attached to the controller, to correctly address data blocks. The traditional limits were 512 bytes/sector × 63 sectors/track × 255 heads (tracks/cylinder) × 1024 cylinders, resulting in a limit of 8032.5 MiB for the total capacity of a disk. As the geometry became more complicated (for example, with the introduction of zone bit recording) and drive sizes grew over time, the CHS addressing method became restrictive. Since the late 1980s, hard drives began shipping with an embedded disk controller that had good knowledge of the physical geometry; they would however report a false geometry to the computer, e.g., a larger number of heads than actually present, to gain more addressable space. These logical CHS values would be translated by the controller, thus CHS addressing no longer corresponded
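The geometry described above implies the standard CHS-to-LBA translation, sketched here (the function name is arbitrary): cylinders are the slowest-varying coordinate, then heads, then sectors, which are 1-based.

```python
# Convert a CHS triple to a linear block address for a given disk geometry.

def chs_to_lba(c: int, h: int, s: int, heads: int, sectors: int) -> int:
    """LBA for cylinder c, head h, sector s, with `heads` heads per cylinder
    and `sectors` sectors per track."""
    return (c * heads + h) * sectors + (s - 1)

# With the traditional limits (1024 cylinders, 255 heads, 63 sectors/track),
# the last addressable sector is block 16,450,559:
print(chs_to_lba(1023, 254, 63, heads=255, sectors=63))
# Total capacity: 16,450,560 sectors * 512 bytes = 8,422,686,720 bytes.
print(16_450_560 * 512 / 2**20)  # 8032.5 MiB, as stated above
```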
https://en.wikipedia.org/wiki/Exposed%20node%20problem
In wireless networks, the exposed node problem occurs when a node is prevented from sending packets to other nodes because of co-channel interference with a neighboring transmitter. Consider an example of four nodes labeled R1, S1, S2, and R2, where the two receivers (R1, R2) are out of range of each other, yet the two transmitters (S1, S2) in the middle are in range of each other. Here, if a transmission between S1 and R1 is taking place, node S2 is prevented from transmitting to R2 as it concludes after carrier sense that it will interfere with the transmission by its neighbor S1. However note that R2 could still receive the transmission of S2 without interference because it is out of range of S1. IEEE 802.11 RTS/CTS mechanism helps to solve this problem only if the nodes are synchronized and packet sizes and data rates are the same for both the transmitting nodes. When a node hears an RTS from a neighboring node, but not the corresponding CTS, that node can deduce that it is an exposed node and is permitted to transmit to other neighboring nodes. If the nodes are not synchronised (or if the packet sizes are different or the data rates are different) the problem may occur that the sender will not hear the CTS or the ACK during the transmission of data of the second sender. The exposed node problem is not an issue in cellular networks as the power and distance between cells are controlled to avoid it. See also Hidden node problem IEEE 802.11 RTS/CTS Multiple Access with Collision Avoidance for Wireless (MACAW) References Further reading Wireless networking E
https://en.wikipedia.org/wiki/Delphi%20%28online%20service%29
Delphi Forums is a U.S. online service provider and since the mid-1990s has been a community internet forum site. It started as a nationwide dialup service in 1983. Delphi Forums remains active as of 2023. History The company that became Delphi was founded by Wes Kussmaul as Kussmaul Encyclopedia in 1981 and featured an encyclopedia, e-mail, and a primitive chat. Newswires, bulletin boards and better chat were added in early 1982. Kussmaul recalled: Delphi was actually launched in October 1981, at Jerry Milden's Northeast Computer Show, as the Kussmaul Encyclopedia--the world's first commercially available computerized encyclopedia. (Frank Greenagle's Arête Encyclopedia was announced at about the same time, but you couldn't buy it until much later.) The Kussmaul Encyclopedia was actually a complete home computer system (your choice of Tandy Color Computer or Apple II) with a 300-bps modem that dialed up to a VAX computer hosting our online encyclopedia database. We sold the system for about the same price and terms as Britannica. People wandered around in it and were impressed with the ease with which they could find information. We had a wonderful cross-referencing system that turned every occurrence of a word that was the name of an entry in the encyclopedia into a hypertext link—in 1981... In November 1982, Wes hired Glenn McIntyre as a software engineer primarily doing internal systems. Glenn brought in colleagues Kip Bryan and Dan Bruns. Kip wrote the software that became Delphi Conference and Delphi Forums. Dan, upon finishing his MBA at Harvard, became President and subsequently CEO when Wes moved on to form Global Villages. On March 15, 1983, the Delphi name was first used by General Videotex Corporation. Forums were text-based, and accessed via Telenet, Sprintnet, Tymnet, Uninet, and Datapac. In 1984, it had 4 million members. Delphi was extended to Argentina in 1985, through a partnership with the Argentine IT company Siscotel S.A. Delphi partnered
https://en.wikipedia.org/wiki/Rhapsody%20%28operating%20system%29
Rhapsody is an operating system that was developed by Apple Computer after its purchase of NeXT in the late 1990s. It is the fifth major release of the Mach-based operating system that was developed at NeXT in the late 1980s, previously called OPENSTEP and NEXTSTEP. Rhapsody was targeted to developers for a transition period between the Classic Mac OS and Mac OS X. More than an operating system, Rhapsody represented a new and exploratory strategy for Apple; it runs on x86-based PCs and on Power Macintosh. Rhapsody's OPENSTEP-based Yellow Box API frameworks were ported to Windows NT for creating cross-platform applications. Eventually, the non-Apple platforms were discontinued, and later versions consist primarily of the OPENSTEP operating system ported to Power Macintosh, merging the Copland-originated GUI of Mac OS 8 with that of OPENSTEP. Several existing classic Mac OS frameworks were ported, including QuickTime and AppleSearch. Rhapsody can run Mac OS 8 and its applications in a paravirtualization layer called Blue Box for backward compatibility during migration to Mac OS X. Background Naming Rhapsody follows Apple's pattern through the 1990s of music-related codenames for operating system releases (see Rhapsody (music)). Apple had canceled its previous next-generation operating system strategy of Copland (named for American composer, Aaron Copland) and its pre-announced successor Gershwin (named for George Gershwin, composer of Rhapsody in Blue). Other musical code names include Harmony (Mac OS 7.6), Tempo (Mac OS 8), Allegro (Mac OS 8.5), and Sonata (Mac OS 9). Previous attempts to develop a successor to the Classic Mac OS In the mid-1990s, Mac OS was falling behind Windows. In 1993, Microsoft had introduced the next-generation Windows NT, which was a processor-independent, multiprocessing and multi-user operating system. At the time, Mac OS was still a single-user OS, and had gained a reputation for being unstable. Apple made several attempts to devel
https://en.wikipedia.org/wiki/Video%20production
Video production is the process of producing video content. It is the equivalent of filmmaking, but with video recorded either as analog signals on videotape, digitally on videotape or as computer files stored on optical discs, hard drives, SSDs, magnetic tape or memory cards instead of film stock. There are three stages of video production: pre-production, production (also known as principal photography), and post-production. Pre-production involves all of the planning aspects of the video production process before filming begins. This includes scriptwriting, scheduling, logistics, and other administrative duties. Production is the phase of video production which captures the video content (electronic moving images) and involves filming the subject(s) of the video. Post-production is the action of selectively combining those video clips through video editing into a finished product that tells a story or communicates a message in either a live event setting (live production), or after an event has occurred (post-production). Currently, the majority of video content is captured through electronic media like an SD card for consumer grade cameras, or on solid state storage and flash storage for professional grade cameras. Video content that is distributed digitally on the internet often appears in common formats such as the MPEG container format (.mpeg, .mpg, .mp4), QuickTime (.mov), Audio Video Interleave (.avi), Windows Media Video (.wmv), and DivX (.avi, .divx). Types of videos There are many different types of video production. The most common include film and TV production, television commercials, internet commercials, corporate videos, product videos, customer testimonial videos, marketing videos, event videos, wedding videos. The term "Video Production" is reserved only for content creation that is taken through all phases of production (Pre-production, Production, and Post-production) and created with a specific audience in mind. A person filming
https://en.wikipedia.org/wiki/Brianchon%27s%20theorem
In geometry, Brianchon's theorem is a theorem stating that when a hexagon is circumscribed around a conic section, its principal diagonals (those connecting opposite vertices) meet in a single point. It is named after Charles Julien Brianchon (1783–1864). Formal statement Let P₁P₂P₃P₄P₅P₆ be a hexagon formed by six tangent lines of a conic section. Then the lines P₁P₄, P₂P₅ and P₃P₆ (the extended diagonals, each connecting opposite vertices) intersect at a single point B, the Brianchon point. Connection to Pascal's theorem The polar reciprocal and projective dual of this theorem give Pascal's theorem. Degenerations As for Pascal's theorem there exist degenerations for Brianchon's theorem, too: Let two neighbouring tangents coincide. Their point of intersection becomes a point of the conic. In the diagram three pairs of neighbouring tangents coincide. This procedure results in a statement on inellipses of triangles. From a projective point of view the two triangles lie perspectively with the Brianchon point as center. That means there exists a central collineation, which maps the one onto the other triangle. But only in special cases is this collineation an affine scaling; for example, for a Steiner inellipse, the Brianchon point is the centroid. In the affine plane Brianchon's theorem is true in both the affine plane and the real projective plane. However, its statement in the affine plane is in a sense less informative and more complicated than that in the projective plane. Consider, for example, five tangent lines to a parabola. These may be considered sides of a hexagon whose sixth side is the line at infinity, but there is no line at infinity in the affine plane. In two instances, a line from a (non-existent) vertex to the opposite vertex would be a line parallel to one of the five tangent lines. Brianchon's theorem stated only for the affine plane would therefore have to be stated differently in such a situation. The projective dual of Brianchon's theorem has exceptions in the affine plane but not in the
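A numerical spot-check of the theorem, assuming the unit circle as the conic and arbitrary tangency angles (an illustration, not a proof; all names below are hypothetical):

```python
# Take six tangent lines to the unit circle, form the hexagon of successive
# tangent intersections, and verify that the three main diagonals concur.
import math

angles = [0.1, 0.9, 2.0, 3.0, 4.2, 5.5]  # arbitrary tangency angles
# Tangent to the unit circle at angle t: x*cos(t) + y*sin(t) = 1.
lines = [(math.cos(t), math.sin(t), 1.0) for t in angles]  # (a, b, c): a*x + b*y = c

def meet(l1, l2):
    """Intersection point of two lines (a, b, c), via Cramer's rule."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Hexagon vertices P1..P6 are the intersections of successive tangents.
P = [meet(lines[i], lines[(i + 1) % 6]) for i in range(6)]

def line_through(p, q):
    """Line (a, b, c) through points p and q."""
    (x1, y1), (x2, y2) = p, q
    return (y2 - y1, x1 - x2, (y2 - y1) * x1 + (x1 - x2) * y1)

# Main diagonals P1P4, P2P5, P3P6 should meet at the Brianchon point.
d14 = line_through(P[0], P[3])
d25 = line_through(P[1], P[4])
d36 = line_through(P[2], P[5])
B = meet(d14, d25)
a, b, c = d36
print(abs(a * B[0] + b * B[1] - c))  # ~0, up to floating-point error
```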
https://en.wikipedia.org/wiki/Minimal%20realization
In control theory, given any transfer function, any state-space model that is both controllable and observable and has the same input-output behaviour as the transfer function is said to be a minimal realization of the transfer function. The realization is called "minimal" because it describes the system with the minimum number of states. The minimum number of state variables required to describe a system equals the order of the differential equation; more state variables than the minimum can be defined. For example, a second order system can be defined by two or more state variables, with two being the minimal realization. Gilbert's realization Given a matrix transfer function, it is possible to directly construct a minimal state-space realization by using Gilbert's method (also known as Gilbert's realization). References Control theory
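As a sketch of building a state-space realization in code: the example below uses SciPy's tf2ss, which returns a controller canonical form rather than Gilbert's construction; that form is minimal exactly when the transfer function's numerator and denominator are coprime (no pole-zero cancellation).

```python
# H(s) = 1 / (s^2 + 3s + 2): a second-order transfer function with no common
# factors, so its minimal realization has two states.
from scipy.signal import tf2ss

A, B, C, D = tf2ss([1.0], [1.0, 3.0, 2.0])
print(A.shape)  # (2, 2): two state variables, matching the system order
```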
https://en.wikipedia.org/wiki/Jazz%20DSP
The Jazz DSP, by Improv Systems, is a VLIW embedded digital signal processor architecture with a 2-stage instruction pipeline, and single-cycle execution units. The baseline DSP includes one arithmetic logic unit (ALU), dual memory interfaces, and the control unit (instruction decoder, branch control, task control). Most aspects of the architecture, such as the number and sizes of Memory Interface Units (MIU) or the types and number of Computation Units (CU), datapath width (16 or 32-bit), the number of interrupts and priority levels, and debugging support may be independently configured using a proprietary graphical user interface (GUI) tool. A key feature of the architecture allows the user to add custom instructions and/or custom execution units to enhance the performance of their application. Typical Jazz DSP performance can exceed 1000 million operations per second (MOPS) at a modest 100 MHz clock frequency. The EEMBC benchmark site provides more details on Jazz DSP performance as compared to other benchmarked processors. Digital signal processors Parallel computing Very long instruction word computing
https://en.wikipedia.org/wiki/Web%202.0
Web 2.0 (also known as participative (or participatory) web and social web) refers to websites that emphasize user-generated content, ease of use, participatory culture and interoperability (i.e., compatibility with other products, systems, and devices) for end users. The term was coined by Darcy DiNucci in 1999 and later popularized by Tim O'Reilly and Dale Dougherty at the first Web 2.0 Conference in 2004. Although the term mimics the numbering of software versions, it does not denote a formal change in the nature of the World Wide Web, but merely describes a general change that occurred during this period as interactive websites proliferated and came to overshadow the older, more static websites of the original Web. A Web 2.0 website allows users to interact and collaborate with each other through social media dialogue as creators of user-generated content in a virtual community. This contrasts with the first generation of Web 1.0-era websites where people were limited to viewing content in a passive manner. Examples of Web 2.0 features include social networking sites or social media sites (e.g., Facebook), blogs, wikis, folksonomies ("tagging" keywords on websites and links), video sharing sites (e.g., YouTube), image sharing sites (e.g., Flickr), hosted services, Web applications ("apps"), collaborative consumption platforms, and mashup applications. Whether Web 2.0 is substantially different from prior Web technologies has been challenged by World Wide Web inventor Tim Berners-Lee, who describes the term as jargon. His original vision of the Web was "a collaborative medium, a place where we [could] all meet and read and write". On the other hand, the term Semantic Web (sometimes referred to as Web 3.0) was coined by Berners-Lee to refer to a web of content where the meaning can be processed by machines. History Web 1.0 Web 1.0 is a retronym referring to the first stage of the World Wide Web's evolution, from roughly 1989 to 2004. According to Graham Cormode an
https://en.wikipedia.org/wiki/General%20insurance
General insurance or non-life insurance policies, including automobile and homeowners policies, provide payments depending on the loss from a particular financial event. General insurance is typically defined as any insurance that is not determined to be life insurance. It is called property and casualty insurance in the United States and Canada and non-life insurance in Continental Europe. In the United Kingdom, insurance is broadly divided into three areas: personal lines, commercial lines and London market. The London market insures large commercial risks such as supermarkets, football players, corporation risks, and other very specific risks. It consists of a number of insurers, reinsurers, P&I Clubs, brokers and other companies that are typically physically located in the City of London. Lloyd's of London is a big participant in this market. The London market also participates in personal lines and commercial lines, domestic and foreign, through reinsurance. Commercial lines products are usually designed for relatively small legal entities. These would include workers' compensation (employers liability), public liability, product liability, commercial fleet and other general insurance products sold in a relatively standard fashion to many organisations. There are many companies that supply comprehensive commercial insurance packages for a wide range of different industries, including shops, restaurants and hotels. Personal lines products are designed to be sold in large quantities. This would include autos (private car), homeowners (household), pet insurance, creditor insurance and others. ACORD, which is the insurance industry global standards organization, has standards for personal and commercial lines and has been working with the Australian General Insurers to develop those XML standards, standard applications for insurance, and certificates of currency. Types of general insurance General insurance can be categorised into the following: Motor Insuranc
https://en.wikipedia.org/wiki/Workprint
A workprint is a rough version of a motion picture, used by the film editor(s) during the editing process. Such copies generally contain original recorded sound that will later be re-dubbed, stock footage as placeholders for missing shots or special effects, and animation tests for in-production animated shots or sequences. For most of the first century of filmmaking, workprints were done using second-generation prints from the original camera negatives. After the editor and director approved of the final edit of the workprint, the same edits were made to the negative. With the conversion to digital editing, workprints are now generally created on a non-linear editing system using telecined footage from the original film or video sources (in contrast to a pirate "telecine", which is made with a much higher-generation film print). Occasionally, early digital workprints of films have been bootlegged and made available on the Internet. They sometimes appear months in advance of an official release. There are also director's cut versions of films that are only available on bootleg, such as the workprint version of Richard Williams' The Thief and the Cobbler. Although movie studios generally do not make full-length workprints readily available to the public, there are exceptions. Examples include the "Work-In-Progress" version of Beauty and the Beast (although it is just unfinished footage combined with the finalized sound mix on the DVD release), and the Denver/Dallas pre-release version of Blade Runner. Deleted scenes or bonus footage included on DVD releases are sometimes left in workprint format as well, e.g. the Scrubs DVD extras. A workprint as source for a leaked television show is rather unusual, but it happened with the third season's first episode of Homeland a month before it aired. Notable examples on the internet Hulk – Appeared on the internet two weeks before the film opening. Star Wars Episode III: Revenge of the Sith – Appeared on the web
https://en.wikipedia.org/wiki/Phase%20response%20curve
A phase response curve (PRC) illustrates the transient change (phase response) in the cycle period of an oscillation induced by a perturbation as a function of the phase at which it is received. PRCs are used in various fields; examples of biological oscillations are the heartbeat, circadian rhythms, and the regular, repetitive firing observed in some neurons in the absence of noise. In circadian rhythms In humans and animals, there is a regulatory system that governs the phase relationship of an organism's internal circadian clock to a regular periodicity in the external environment (usually governed by the solar day). In most organisms, a stable phase relationship is desired, though in some cases the desired phase will vary by season, especially among mammals with seasonal mating habits. In circadian rhythm research, a PRC illustrates the relationship between a chronobiotic's time of administration (relative to the internal circadian clock) and the magnitude of the treatment's effect on circadian phase. Specifically, a PRC is a graph showing, by convention, time of the subject's endogenous day along the x-axis and the amount of the phase shift (in hours) along the y-axis. Each curve has one peak and one trough in each 24-hour cycle. Relative circadian time is plotted against phase-shift magnitude. The treatment is usually narrowly specified as a set intensity and colour and duration of light exposure to the retina and skin, or a set dose and formulation of melatonin. These curves are often consulted in the therapeutic setting. Normally, the body's various physiological rhythms will be synchronized within an individual organism (human or animal), usually with respect to a master biological clock. Of particular importance is the sleep–wake cycle. Various sleep disorders and external stresses (such as jet lag) can interfere with this. People with non-24-hour sleep–wake disorder often experience an inability to maintain a consistent internal clock. Extreme chron
https://en.wikipedia.org/wiki/Dictator%20game
The dictator game is a popular experimental instrument in social psychology and economics, a derivative of the ultimatum game. The term "game" is a misnomer because it captures a decision by a single player: to send money to another or not. Thus, the dictator has the most power and holds the preferred position in this “game.” Although the dictator presents a take-it-or-leave-it offer, the game has mixed results based on different behavioral attributes. The results – where most "dictators" choose to send money – evidence the role of fairness and norms in economic behavior, and undermine the assumption of narrow self-interest when given the opportunity to maximise one's own profits. Description The dictator game is a derivative of the ultimatum game, in which one player (the proposer) provides a one-time offer to the other (the responder). The responder can choose to either accept or reject the proposer's bid, but rejecting the bid would result in both players receiving a payoff of 0. In the dictator game, the first player, "the dictator", determines how to split an endowment (such as a cash prize) between themselves and the second player (the recipient). The dictator's action space is complete, and the split of the endowment is therefore entirely at their own will, ranging from giving nothing to giving all of it. The recipient has no influence over the outcome of the game, which means the recipient plays a passive role. While the ultimatum game is informative, it can be considered too simple a model when discussing most real-world negotiation situations. Real-world games tend to involve offers and counteroffers while the ultimatum game is simply player one placing forward a division of an amount that player 2 has to accept or reject. Based on this limited scope, it is expected that the second player will accept any offer they are given, which is not necessarily seen in real world examples. Application The initial game was developed by D
https://en.wikipedia.org/wiki/Charles%20Yanofsky
Charles Yanofsky (April 17, 1925 – March 16, 2018) was an American geneticist on the faculty of Stanford University who contributed to the establishment of the one gene-one enzyme hypothesis and discovered attenuation, a riboswitch mechanism in which messenger RNA changes shape in response to a small molecule and thus alters its binding ability for the regulatory region of a gene or operon. Education and early life Charles Yanofsky was born on April 17, 1925, in New York. He was one of the earliest graduates of the Bronx High School of Science, then studied at the City College of New York and completed his degree in biochemistry in spite of having had his education interrupted by military service in World War II including participation in the Battle of the Bulge. In 1948, having returned and completed college, he took up graduate work towards his master's degree and PhD, both granted by Yale University. He pursued postdoctoral work at Yale for a time, completing work started during his PhD training. Career and research Yanofsky joined the Case Western Reserve Medical School faculty in 1954. He moved to the faculty at Stanford University as an Associate Professor in 1958. In 1964, Yanofsky and colleagues established that gene sequences and protein sequences are colinear in bacteria. Yanofsky showed that changes in DNA sequence can produce changes in protein sequence at corresponding positions. His work is considered the best evidence in favor of the one gene-one enzyme hypothesis. His laboratory also revealed how controlled alterations in RNA shapes allow RNA to serve as a regulatory molecule in both bacterial and animal cells. His graduate students Iwona Stroynowski and Mitzi Kuroda discovered the process of attenuation of expression based on regulated binding ability of the five-prime untranslated region of the messenger RNA for the bacterial tryptophan operon. They had thus discovered the first regulatory riboswitch, although that terminology was not used
https://en.wikipedia.org/wiki/Wake-on-ring
Wake-on-Ring (WOR), sometimes referred to as Wake-on-Modem (WOM), is a specification that allows supported computers and devices to "wake up" or turn on from a sleeping, hibernating or "soft off" state (e.g. ACPI state G1 or G2), and begin operation. The basic premise is that a special signal is sent over phone lines to the computer through its dial-up modem, telling it to fully power-on and begin operation. Common uses were archive databases and BBSes, although hobbyist use was significant. Fax machines use a similar system, in which they are mostly idle until receiving an incoming fax signal, which spurs operation. This style of remote operation has mostly been supplanted by Wake-on-LAN, which is newer but works in much the same way. See also ACPI RS-232 Signals, Ring Indicator Wake-on-LAN Additional resources "Wake on Modem" entry from Smart Computing Encyclopedia Networking standards BIOS Unified Extensible Firmware Interface Remote control
https://en.wikipedia.org/wiki/Propositional%20variable
In mathematical logic, a propositional variable (also called a sentential variable or sentential letter) is an input variable (that can either be true or false) of a truth function. Propositional variables are the basic building-blocks of propositional formulas, used in propositional logic and higher-order logics. Uses Formulas in logic are typically built up recursively from some propositional variables, some number of logical connectives, and some logical quantifiers. Propositional variables are the atomic formulas of propositional logic, and are often denoted using capital Roman letters such as P, Q and R. Example In a given propositional logic, a formula can be defined as follows: Every propositional variable is a formula. Given a formula X, the negation ¬X is a formula. Given two formulas X and Y, and a binary connective b (such as the logical conjunction ∧), the expression (X b Y) is a formula. (Note the parentheses.) Through this construction, all of the formulas of propositional logic can be built up from propositional variables as a basic unit. Propositional variables should not be confused with the metavariables, which appear in the typical axioms of propositional calculus; the latter effectively range over well-formed formulae, and are often denoted using lower-case Greek letters such as φ, ψ and χ. Predicate logic Predicate letters with no object variables (such as x and y) attached, having instead individual constants (such as a and b) attached, form propositional constants such as Pa and aRb. These propositional constants are atomic propositions, not containing propositional operators. The internal structure of propositional variables contains predicate letters such as P and Q, in association with bound individual variables (e.g., x, y) or individual constants such as a and b (singular terms from a domain of discourse D), ultimately taking forms such as Pa and aRb (or, with parentheses, P(a) and R(a, b)). Propositional l
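The recursive definition above maps naturally onto a recursive data type. The following Python sketch (illustrative, not from the article) builds formulas exactly by the three rules, including the parenthesis rule:

```python
from dataclasses import dataclass

@dataclass
class Var:              # rule 1: every propositional variable is a formula
    name: str
    def __str__(self): return self.name

@dataclass
class Not:              # rule 2: the negation of a formula is a formula
    operand: object
    def __str__(self): return f"¬{self.operand}"

@dataclass
class BinOp:            # rule 3: (X b Y) for a binary connective b
    op: str
    left: object
    right: object
    def __str__(self): return f"({self.left} {self.op} {self.right})"

P, Q = Var("P"), Var("Q")
print(BinOp("∧", P, Not(Q)))   # prints: (P ∧ ¬Q)
```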
https://en.wikipedia.org/wiki/Conservation%20genetics
Conservation genetics is an interdisciplinary subfield of population genetics that aims to understand the dynamics of genes in a population for the purpose of natural resource management and extinction prevention. Researchers involved in conservation genetics come from a variety of fields including population genetics, natural resources, molecular ecology, biology, evolutionary biology, and systematics. Genetic diversity is one of the three fundamental measures of biodiversity (along with species diversity and ecosystem diversity), so it is an important consideration in the wider field of conservation biology. Genetic diversity Genetic diversity is the total number of genetic characteristics in a species. It can be measured in several ways: observed heterozygosity, expected heterozygosity, the mean number of alleles per locus, or the percentage of polymorphic loci. Genetic diversity on the population level is a crucial focus for conservation genetics as it influences both the health and long-term survival of populations: decreased genetic diversity has been associated with reduced fitness, such as high juvenile mortality, diminished population growth, reduced immunity, and ultimately, higher extinction risk. Heterozygosity, a fundamental measurement of genetic diversity in population genetics, plays an important role in determining the chance of a population surviving environmental change, novel pathogens not previously encountered, as well as the average fitness of a population over successive generations. Heterozygosity is also deeply connected, in population genetics theory, to population size (which itself clearly has a fundamental importance to conservation). All things being equal, small populations will be less heterozygous – across their whole genomes – than comparable, but larger, populations. This lower heterozygosity (i.e. low genetic diversity) renders small populations more susceptible to the challenges mentioned above. In a small population, over succ
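As a concrete instance of the measures listed above, expected heterozygosity at a locus is commonly computed as H_e = 1 − Σ p_i², where p_i is the frequency of the i-th allele. A minimal Python sketch (the allele frequencies below are made-up examples):

```python
def expected_heterozygosity(allele_freqs):
    """Expected heterozygosity at one locus: 1 minus the sum of squared
    allele frequencies (assumes the frequencies sum to 1)."""
    assert abs(sum(allele_freqs) - 1.0) < 1e-9
    return 1.0 - sum(p * p for p in allele_freqs)

# Two loci: a diverse one and a nearly fixed one.
print(expected_heterozygosity([0.5, 0.3, 0.2]))   # 0.62
print(expected_heterozygosity([0.95, 0.05]))      # 0.095 (low diversity)
```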
https://en.wikipedia.org/wiki/Mike%20Lesk
Michael E. Lesk (born 1945) is an American computer scientist. Biography In the 1960s, Michael Lesk worked for the SMART Information Retrieval System project, wrote much of its retrieval code and did many of the retrieval experiments, as well as obtaining a BA degree in Physics and Chemistry from Harvard College in 1964 and a PhD from Harvard University in Chemical Physics in 1969. From 1970 to 1984, Lesk worked at Bell Labs in the group that built Unix. Lesk wrote Unix tools for word processing (tbl, refer, and the standard ms macro package, all for troff), for compiling (Lex), and for networking (uucp). He also wrote the Portable I/O Library (the predecessor to stdio.h in C) and contributed significantly to the development of the C language preprocessor. In 1984, he left to work for Bellcore, where he managed the computer science research group. There, Lesk worked on specific information systems applications, mostly involving geography (a system for driving directions) and dictionaries (a system for disambiguating words in context). In the 1990s, Lesk worked on a large chemical information system, the CORE project, with Cornell, Online Computer Library Center, American Chemical Society, and Chemical Abstracts Service. From 1998 to 2002, Lesk headed the National Science Foundation's Division of Information and Intelligent Systems, where he oversaw Phase 2 of the NSF's Digital Library Initiative. Currently, he is a professor on the faculty of the Library and Information Science Department, School of Communication & Information, Rutgers University. Lesk received the Flame award for lifetime achievement from Usenix in 1994, was elected a Fellow of the ACM in 1996, and in 2005 was elected to the National Academy of Engineering. He has authored a number of books. See also Lesk algorithm Bibliography Selected books by Michael Lesk: Practical Digital Libraries: Books, Bytes, and Bucks, 1997. . Understanding Digital Libraries, 2nd ed., December 2004. . References External lin
https://en.wikipedia.org/wiki/Propositional%20formula
In propositional logic, a propositional formula is a type of syntactic formula which is well formed and has a truth value. If the values of all variables in a propositional formula are given, it determines a unique truth value. A propositional formula may also be called a propositional expression, a sentence, or a sentential formula. A propositional formula is constructed from simple propositions, such as "five is greater than three" or propositional variables such as p and q, using connectives or logical operators such as NOT, AND, OR, or IMPLIES; for example: (p AND NOT q) IMPLIES (p OR q). In mathematics, a propositional formula is often more briefly referred to as a "proposition", but, more precisely, a propositional formula is not a proposition but a formal expression that denotes a proposition, a formal object under discussion, just like an expression such as "x + y" is not a value, but denotes a value. In some contexts, maintaining the distinction may be of importance. Propositions For the purposes of the propositional calculus, propositions (utterances, sentences, assertions) are considered to be either simple or compound. Compound propositions are considered to be linked by sentential connectives, some of the most common of which are "AND", "OR", "IF ... THEN ...", "NEITHER ... NOR ...", "... IS EQUIVALENT TO ...". The linking semicolon ";" and the connective "BUT" are considered to be expressions of "AND". A sequence of discrete sentences is considered to be linked by "AND"s, and formal analysis applies a recursive "parenthesis rule" with respect to sequences of simple propositions (see more below about well-formed formulas). For example: The assertion: "This cow is blue. That horse is orange but this horse here is purple." is actually a compound proposition linked by "AND"s: ( ("This cow is blue" AND "that horse is orange") AND "this horse here is purple" ) . Simple propositions are declarative in nature, that is, they make assertions about the condition
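The article's example formula (p AND NOT q) IMPLIES (p OR q) can be checked mechanically by enumerating every truth assignment. A small Python sketch (illustrative, not from the article):

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    return (not a) or b        # material implication

# Evaluate (p AND NOT q) IMPLIES (p OR q) for every assignment.
for p, q in product([True, False], repeat=2):
    value = implies(p and not q, p or q)
    print(f"p={p!s:5} q={q!s:5} -> {value}")
# The formula is True on all four rows: it is a tautology.
```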
https://en.wikipedia.org/wiki/Longeron
In engineering, a longeron or stringer is a load-bearing component of a framework. The term is commonly used in connection with aircraft fuselages and automobile chassis. Longerons are used in conjunction with stringers to form structural frameworks. Aircraft In an aircraft fuselage, stringers are attached to formers (also called frames) and run in the longitudinal direction of the aircraft. They are primarily responsible for transferring the aerodynamic loads acting on the skin onto the frames and formers. In the wings or horizontal stabilizer, longerons run spanwise (from wing root to wing tip) and attach between the ribs. The primary function here also is to transfer the bending loads acting on the wings onto the ribs and spar. Sometimes the terms "longeron" and "stringer" are used interchangeably. Historically, though, there is a subtle difference between the two terms. If the longitudinal members in a fuselage are few in number (usually 4 to 8) and run all along the fuselage length, then they are called "longerons". The longeron system also requires that the fuselage frames be closely spaced (about every ). If the longitudinal members are numerous (usually 50 to 100) and are placed just between two formers/frames, then they are called "stringers". In the stringer system the longitudinal members are smaller and the frames are spaced farther apart (about ). Generally, longerons are of larger cross-section when compared to stringers. On large modern aircraft the stringer system is more common because it is more weight-efficient, despite being more complex to construct and analyze. Some aircraft use a combination of both stringers and longerons. Longerons often carry larger loads than stringers and also help to transfer skin loads to internal structure. Longerons nearly always attach to frames or ribs. Stringers often are not attached to anything but the skin, where they carry a portion of the fuselage bending moment through axial loading. It is not unco
https://en.wikipedia.org/wiki/Web%20Services%20Resource%20Framework
Web Services Resource Framework (WSRF) is a family of OASIS-published specifications for web services. Major contributors include the Globus Alliance and IBM. A web service by itself is nominally stateless, i.e., it retains no data between invocations. This limits the things that can be done with web services. Before WSRF, no standard in the Web Services family of specifications explicitly defined how to deal with stateful interactions with remote resources. This does not mean that web services could not be stateful. Where required, a web service could read from a database, or use session state by way of cookies or WS-Session. WSRF provides a set of operations that web services can use to implement stateful interaction; web service clients communicate with resource services which allow data to be stored and retrieved. When clients talk to the web service they include the identifier of the specific resource that should be used inside the request, encapsulated within the WS-Addressing endpoint reference. This may be a simple URI address, or it may be complex XML content that helps identify or even fully describe the specific resource in question. Alongside the notion of an explicit resource reference comes a standardized set of web service operations to get/set resource properties. These can be used to read and perhaps write resource state, in a manner somewhat similar to having member variables of an object alongside its methods. The primary beneficiaries of such a model are management tools, which can enumerate and view resources, even if they have no other knowledge of them. This is the basis for WSDM. Issues with WSRF WSRF is not without controversy. The most fundamental issue is architectural: are distributed objects with state and operations the best way to represent remote resources? It is almost a port into XML of the distributed objects pattern, of which CORBA and DCOM are examples. A WSRF resource may be a stateful entity to which multiple clients have resource re
https://en.wikipedia.org/wiki/John%20Tierney%20%28journalist%29
John Marion Tierney (born March 25, 1953) is an American journalist and a contributing editor to City Journal, the Manhattan Institute's quarterly publication. Previously he was a reporter and columnist at The New York Times for three decades, beginning in 1990. A self-described contrarian, Tierney is a critic of aspects of environmentalism, the "science establishment," and big government, but he does support the goal of limiting overall emissions of carbon dioxide. Early and personal life Tierney was born in 1953 outside Chicago, and grew up in "the Midwest, South America and Pittsburgh". He graduated from Yale University in 1976. He was previously married to Dana Tierney, with whom he had one child. They later divorced; Tierney married anthropologist and love expert Helen Fisher in 2020. Career After graduating college, Tierney was a newspaper reporter for four years, first at the Bergen Record in New Jersey and then at the Washington Star. Starting in 1980, he spent ten years in magazine journalism, writing for such magazines as Atlantic Monthly, Discover, Esquire, Health, National Geographic Traveler, New York, Newsweek, Outside, and Rolling Stone. Tierney began working at The New York Times in 1990 as a "general assignment" reporter in the Metro section. Tierney writes a science column, "Findings", for the Times. He previously wrote the TierneyLab blog for the Times. In 2005, Tierney began to write for the Times Op-Ed page, and as of 2015 his writings appeared in both the Times Op-Ed page and "Findings" science column. He also writes for the conservative City Journal. In 2009, Tierney wrote about mathematics popularizer Martin Gardner and in that same year started featuring recreational mathematics problems, often curated by Pradeep Mutalik, in his New York Times TierneyLab blog. In 2010, Tierney retired from writing the blog, and Mutalik continued it under a new name (NumberPlay). In time, Gary Antonick took that over until he retired it in October 2016. Views Tierney des
https://en.wikipedia.org/wiki/Oxygen%20scavenger
Oxygen scavengers or oxygen absorbers are added to enclosed packaging to help remove or decrease the level of oxygen in the package. They are used to help maintain product safety and extend shelf life. There are many types of oxygen absorbers available to cover a wide array of applications. The components of an oxygen absorber vary according to intended use, the water activity of the product being preserved, and other factors. Often the oxygen absorber or scavenger is enclosed in a porous sachet or packet, but it can also be part of packaging films and structures. Others are part of a polymer structure. Oxygen absorbing chemicals are also commonly added to boiler feedwater used in boiler systems, to reduce corrosion of components within the system. Mechanism The first patent for an oxygen scavenger used an alkaline solution of pyrogallic acid in an air-tight vessel. Modern scavenger sachets use a mixture of iron powder and sodium chloride. Often activated carbon is also included, as it adsorbs some other gases and many organic molecules, further preserving products and removing odors. When an oxygen absorber is removed from its protective packaging, the moisture in the surrounding atmosphere begins to permeate into the iron particles inside the absorber sachet. Moisture activates the iron, and it oxidizes to form iron oxide. Typically, at least 65% relative humidity is required in the surrounding atmosphere before the rusting process can begin. To assist in the process of oxidation, sodium chloride is added to the mixture, acting as a catalyst or activator, enabling the iron powder to oxidize even at relatively low humidity. As oxygen is consumed to form iron oxide, the level of oxygen in the surrounding atmosphere is reduced. Absorber technology of this type may reduce the oxygen level in the surrounding atmosphere to below 0.01%. Complete oxidation of 1 g of iron can remove 300 cm3 of oxygen under standard conditions. Though other techn
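The 300 cm³ figure can be checked from the rusting stoichiometry (a quick supplementary verification, not from the article; it assumes complete oxidation to Fe₂O₃ and an ideal-gas molar volume of 22.4 L/mol):

```latex
\[
4\,\mathrm{Fe} + 3\,\mathrm{O_2} \longrightarrow 2\,\mathrm{Fe_2O_3}
\]
\[
n_{\mathrm{O_2}} = \tfrac{3}{4}\,n_{\mathrm{Fe}}
  = \tfrac{3}{4}\cdot\frac{1\,\mathrm{g}}{55.85\,\mathrm{g/mol}}
  \approx 0.0134\,\mathrm{mol},
\qquad
V \approx 0.0134\,\mathrm{mol}\times 22400\,\mathrm{cm^3/mol}
  \approx 300\,\mathrm{cm^3}.
\]
```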
https://en.wikipedia.org/wiki/Fast%20wavelet%20transform
The fast wavelet transform is a mathematical algorithm designed to turn a waveform or signal in the time domain into a sequence of coefficients based on an orthogonal basis of small finite waves, or wavelets. The transform can be easily extended to multidimensional signals, such as images, where the time domain is replaced with the space domain. This algorithm was introduced in 1989 by Stéphane Mallat. It has as theoretical foundation the device of a finitely generated, orthogonal multiresolution analysis (MRA). In the terms given there, one selects a sampling scale J with sampling rate of 2^J per unit interval, and projects the given signal f onto the space V_J; in theory by computing the scalar products s_n^(J) = 2^(J/2) ⟨f, φ(2^J · − n)⟩, where φ is the scaling function of the chosen wavelet transform; in practice by any suitable sampling procedure under the condition that the signal is highly oversampled, so that the result is the orthogonal projection, or at least some good approximation, of the original signal in V_J. The MRA is characterised by its scaling sequence a = (a_n) or, as Z-transform, a(z) = Σ_n a_n z^(−n), and its wavelet sequence b = (b_n) or b(z) = Σ_n b_n z^(−n) (some coefficients might be zero). These allow one to compute the wavelet coefficients d_n^(k), at least for some range k = M, ..., J−1, without having to approximate the integrals in the corresponding scalar products. Instead, one can directly, with the help of convolution and decimation operators, compute those coefficients from the first approximation s^(J). Forward DWT For the discrete wavelet transform (DWT), one computes recursively, starting with the coefficient sequence s^(J) and counting down from k = J − 1 to some M < J, s_n^(k) = Σ_m a_(m−2n) s_m^(k+1) and d_n^(k) = Σ_m b_(m−2n) s_m^(k+1), for k = J−1, J−2, ..., M and all n. In the Z-transform notation: s^(k)(z) = (↓2)(a*(z) s^(k+1)(z)) and d^(k)(z) = (↓2)(b*(z) s^(k+1)(z)). The downsampling operator (↓2) reduces an infinite sequence, given by its Z-transform, which is simply a Laurent series, to the sequence of the coefficients with even indices, (↓2)(c(z)) = Σ_n c_(2n) z^(−n). The starred Laurent polynomial a*(z) denotes the adjoint filter; it has time-reversed adjoint coefficients, a*(z) = Σ_n ā_(−n) z^(−n). (The adjoint of a real number being t
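The recursion above amounts to "filter with the time-reversed scaling/wavelet filters, then keep every second sample". A minimal Python sketch of one forward DWT step with the Haar filters (illustrative; the periodic boundary handling is an assumption, not from the article):

```python
import numpy as np

SQRT2 = np.sqrt(2.0)
A = np.array([1.0, 1.0]) / SQRT2    # Haar scaling filter a
B = np.array([1.0, -1.0]) / SQRT2   # Haar wavelet filter b

def dwt_step(s: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """One level: s_n^(k) = sum_j a_j s_(2n+j)^(k+1), likewise for d."""
    n = len(s)                       # assumes even length
    approx = np.empty(n // 2)
    detail = np.empty(n // 2)
    for i in range(n // 2):
        window = s[(2 * i + np.arange(len(A))) % n]   # periodic wrap
        approx[i] = np.dot(A, window)
        detail[i] = np.dot(B, window)
    return approx, detail

s = np.array([4.0, 6.0, 10.0, 12.0])
print(dwt_step(s))   # approx ~ [7.07, 15.56], detail ~ [-1.41, -1.41]
```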
https://en.wikipedia.org/wiki/Internet%20background%20noise
Internet background noise (IBN, also known as Internet background radiation) consists of data packets on the Internet which are addressed to IP addresses or ports where there is no network device set up to receive them. These packets often contain unsolicited commercial or network control messages, or are the result of port scans and worm activities. Smaller devices such as DSL modems may have a hard-coded IP address to look up the correct time using the Network Time Protocol. If, for some reason, the hard-coded NTP server is no longer available, faulty software might retry failed requests up to every second, which, if many devices are affected, generates a significant amount of unnecessary request traffic. Historical context In the first 10 years of the Internet, there was very little background noise, but with its commercialization in the 1990s the noise factor became a permanent feature. More recently, the Conficker worm was responsible for a large amount of background noise generated by viruses looking for new victims. In addition to malicious activities, misconfigured hardware and leaks from private networks are also sources of background noise. 2000s As of November 2010, it is estimated that 5.5 gigabits (687.5 megabytes) of background noise are generated every second. It was also estimated in the early 2000s that a dial-up modem user loses about 20 bits per second of their bandwidth to unsolicited traffic. Over the past decade, the amount of background noise for an IPv4 /8 address block (which contains 16.7 million addresses) has increased from 1 to 50 Mbit/s (0.125 to 6.25 MB/s). The newer IPv6 protocol, which has a much larger address space, will make it more difficult for viruses to scan ports and also limit the impact of misconfigured equipment. Internet background noise has been used to detect significant changes in Internet traffic and connectivity during the 2011 political unrest from IP address blocks that were geolocated to Libya. Backscatter
https://en.wikipedia.org/wiki/Intel%208259
The Intel 8259 is a Programmable Interrupt Controller (PIC) designed for the Intel 8085 and Intel 8086 microprocessors. The initial part was the 8259; a later version with an A suffix was upward compatible and usable with the 8086 or 8088 processor. The 8259 combines multiple interrupt input sources into a single interrupt output to the host microprocessor, extending the interrupt levels available in a system beyond the one or two levels found on the processor chip. The 8259A was the interrupt controller for the ISA bus in the original IBM PC and IBM PC AT. The 8259 was introduced as part of Intel's MCS 85 family in 1976. The 8259A was included in the original PC introduced in 1981 and retained in the PC/XT introduced in 1983. A second 8259A was added with the introduction of the PC/AT. The 8259 has coexisted with the Intel APIC Architecture since its introduction in Symmetric Multi-Processor PCs. Modern PCs have begun to phase out the 8259A in favor of the Intel APIC Architecture. However, while no longer a separate chip, the 8259A interface is still provided by the Platform Controller Hub or Southbridge chipset on modern x86 motherboards. Functional description The main signal pins on an 8259 are as follows: eight interrupt input request lines named IRQ0 through IRQ7, an interrupt request output line named INTR, an interrupt acknowledgment line named INTA, and D0 through D7 for communicating the interrupt level or vector offset. Other connections include CAS0 through CAS2 for cascading between 8259s. Up to eight slave 8259s may be cascaded to a master 8259 to provide up to 64 IRQs. 8259s are cascaded by connecting the INT line of one slave 8259 to an IRQ line of the master 8259. End of Interrupt (EOI) operations support specific EOI, non-specific EOI, and auto-EOI. A specific EOI specifies the IRQ level it is acknowledging in the ISR. A non-specific EOI resets the highest-priority IRQ level in the ISR. Auto-EOI resets the IRQ level in the ISR immediately after the interrupt is acknowl
https://en.wikipedia.org/wiki/Secure%20voice
Secure voice (alternatively secure speech or ciphony) is a term in cryptography for the encryption of voice communication over a range of communication types such as radio, telephone or IP. History The implementation of voice encryption dates back to World War II, when secure communication was paramount to the US armed forces. During that time, noise was simply added to a voice signal to prevent enemies from listening to the conversations. Noise was added by playing a record of noise in sync with the voice signal, and when the voice signal reached the receiver, the noise signal was subtracted out, leaving the original voice signal. In order to subtract out the noise, the receiver needed to have exactly the same noise signal, and the noise records were made only in pairs: one for the transmitter and one for the receiver. Having only two copies of records made it impossible for the wrong receiver to decrypt the signal. To implement the system, the army contracted Bell Laboratories, which developed a system called SIGSALY. With SIGSALY, ten channels were used to sample the voice frequency spectrum from 250 Hz to 3 kHz and two channels were allocated to sample voice pitch and background hiss. In the time of SIGSALY, the transistor had not been developed and the digital sampling was done by circuits using the model 2051 Thyratron vacuum tube. Each SIGSALY terminal used 40 racks of equipment weighing 55 tons and filling a large room. This equipment included radio transmitters and receivers and large phonograph turntables. The voice was keyed to two vinyl phonograph records that contained a Frequency Shift Keying (FSK) audio tone. The records were played on large precise turntables in sync with the voice transmission. From the introduction of voice encryption to today, encryption techniques have evolved drastically. Digital technology has effectively replaced old analog methods of voice encryption and by using complex algorithms, voice encryption has become much mo
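The add-then-subtract masking scheme described above is easy to see in a toy digital form. A minimal Python sketch (illustrative only; SIGSALY itself was an analog system, and the shared seeded random generator here merely stands in for the paired phonograph noise records):

```python
import numpy as np

rng = np.random.default_rng(seed=42)     # the shared "noise record"

voice = np.sin(np.linspace(0, 20 * np.pi, 1000))   # stand-in voice signal
noise = rng.normal(scale=3.0, size=voice.shape)    # masking noise

transmitted = voice + noise          # what an eavesdropper would hear
received = transmitted - noise       # receiver plays the paired record

assert np.allclose(received, voice)  # original signal recovered exactly
```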
https://en.wikipedia.org/wiki/Proof%20of%20work
Proof of work (PoW) is a form of cryptographic proof in which one party (the prover) proves to others (the verifiers) that a certain amount of a specific computational effort has been expended. Verifiers can subsequently confirm this expenditure with minimal effort on their part. The concept was invented by Moni Naor and Cynthia Dwork in 1993 as a way to deter denial-of-service attacks and other service abuses such as spam on a network by requiring some work from a service requester, usually meaning processing time by a computer. The term "proof of work" was first coined and formalized in a 1999 paper by Markus Jakobsson and Ari Juels. Proof of work was later popularized by Bitcoin as a foundation for consensus in a permissionless decentralized network, in which miners compete to append blocks and mine new currency, each miner experiencing a success probability proportional to the computational effort expended. PoW and PoS (proof of stake) remain the two best-known Sybil deterrence mechanisms; in the context of cryptocurrencies they are the most common mechanisms. A key feature of proof-of-work schemes is their asymmetry: the work – the computation – must be moderately hard (yet feasible) for the prover or requester, but easy to check for the verifier or service provider. This idea is also known as a CPU cost function, client puzzle, computational puzzle, or CPU pricing function. Another common feature is built-in incentive structures that reward allocating computational capacity to the network with value in the form of cryptocurrency. The purpose of proof-of-work algorithms is not proving that certain work was carried out or that a computational puzzle was "solved", but deterring manipulation of data by establishing large energy and hardware-control requirements to be able to do so. Proof-of-work systems have been criticized by environmentalists for their energy consumption. Background One popular system, used in Hashcash, uses partial hash inversions to p
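The prover/verifier asymmetry is easy to demonstrate with a partial-hash-inversion puzzle in the Hashcash style. A minimal Python sketch (a simplified illustration; real Hashcash defines a specific SHA-1 stamp format, which this does not reproduce):

```python
import hashlib
from itertools import count

def mine(challenge: str, difficulty_bits: int) -> int:
    """Find a nonce whose SHA-256 digest falls below the target --
    i.e. the hash starts with `difficulty_bits` zero bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(challenge: str, nonce: int, difficulty_bits: int) -> bool:
    """Checking costs a single hash, however hard the search was."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = mine("example-challenge", 16)          # ~65,000 hashes on average
assert verify("example-challenge", nonce, 16)  # one hash to verify
```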
https://en.wikipedia.org/wiki/Mobile%20QoS
A quality of service (QoS) mechanism controls the performance, reliability and usability of a telecommunications service. Mobile cellular service providers may offer mobile QoS to customers just as fixed-line PSTN service providers and Internet service providers may offer QoS. QoS mechanisms are always provided for circuit-switched services, and are essential for non-elastic services, for example streaming multimedia. They are also essential in networks dominated by such services, which is the case in today's mobile communication networks. Mobility adds complication to the QoS mechanisms, for several reasons: A phone call or other session may be interrupted after a handover, if the new base station is overloaded. Unpredictable handovers make it impossible to give an absolute QoS guarantee during a session initiation phase. The pricing structure is often based on a per-minute or per-megabyte fee rather than a flat rate, and may be different for different content services. A crucial part of QoS in mobile communications is grade of service, involving outage probability (the probability that the mobile station is outside the service coverage area, or affected by co-channel interference, i.e., crosstalk), blocking probability (the probability that the required level of QoS cannot be offered), and scheduling starvation. These performance measures are affected by mechanisms such as mobility management, radio resource management, admission control, fair scheduling, channel-dependent scheduling etc. Factors affecting QoS Many factors affect the quality of service of a mobile network. QoS is best judged mainly from the customer's point of view, that is, as the user perceives it. There are standard metrics of QoS to the user that can be measured to rate the QoS. These metrics are: the coverage, accessibility (includes GoS), and the audio quality. In coverage, the strength of the signal is measured using test equipment, and this can be used to estimate the size of the
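Blocking probability in circuit-switched systems is classically estimated with the Erlang B formula, a standard teletraffic model (the article itself does not name it, so this is a supplementary sketch in Python):

```python
def erlang_b(traffic_erlangs: float, channels: int) -> float:
    """Iterative Erlang B: probability a new call is blocked when
    `traffic_erlangs` of offered load meets `channels` circuits."""
    b = 1.0
    for k in range(1, channels + 1):
        b = traffic_erlangs * b / (k + traffic_erlangs * b)
    return b

print(round(erlang_b(4.5, 7), 3))  # ~0.09 for 4.5 E offered to 7 channels
```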
https://en.wikipedia.org/wiki/Variance%20swap
A variance swap is an over-the-counter financial derivative that allows one to speculate on or hedge risks associated with the magnitude of movement, i.e. volatility, of some underlying product, like an exchange rate, interest rate, or stock index. One leg of the swap will pay an amount based upon the realized variance of the price changes of the underlying product. Conventionally, these price changes will be daily log returns, based upon the most commonly used closing price. The other leg of the swap will pay a fixed amount, which is the strike, quoted at the deal's inception. Thus the net payoff to the counterparties will be the difference between these two and will be settled in cash at the expiration of the deal, though some cash payments will likely be made along the way by one or the other counterparty to maintain agreed-upon margin. Structure and features The features of a variance swap include: the variance strike, the realized variance, and the vega notional: like other swaps, the payoff is determined based on a notional amount that is never exchanged; however, in the case of a variance swap, the notional amount is specified in terms of vega, to convert the payoff into dollar terms. The payoff of a variance swap is given as follows: N_var × (σ_realized² − K_var), where N_var = variance notional (a.k.a. variance units), σ_realized² = annualised realised variance, and K_var = variance strike. The annualised realised variance is calculated based on a prespecified set of sampling points over the period. It does not always coincide with the classic statistical definition of variance, as the contract terms may not subtract the mean. For example, suppose that there are n+1 observed prices S_0, S_1, ..., S_n, where S_i is observed at time t_i for i = 0 to n. Define R_i = ln(S_i / S_(i−1)), the natural log returns. Then σ_realized² = (A/n) × Σ_(i=1..n) R_i², where A is an annualisation factor normally chosen to be approximately the number of sampling points in a year (commonly 252) and the swap's contract life is defined by the number n of sampling points. It can be seen that subtracting the mean return will decrease the realised varianc
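A minimal Python sketch of the payoff computation just described (illustrative; the sign convention here pays the long, receive-realized side, and the mean return is not subtracted, per the contract convention above):

```python
import numpy as np

def variance_swap_payoff(prices, strike_var, notional_var,
                         annualisation=252):
    """Payoff to the long leg: N_var * (realized variance - strike).
    Realized variance uses zero-mean daily log returns, annualised."""
    log_returns = np.diff(np.log(prices))          # R_i = ln(S_i / S_{i-1})
    realized_var = annualisation * np.mean(log_returns ** 2)
    return notional_var * (realized_var - strike_var)

prices = [100.0, 102.0, 99.0, 101.0, 100.0]        # made-up closing prices
print(variance_swap_payoff(prices, strike_var=0.04, notional_var=1000.0))
```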
https://en.wikipedia.org/wiki/Synthetic%20membrane
An artificial membrane, or synthetic membrane, is a synthetically created membrane which is usually intended for separation purposes in laboratory or in industry. Synthetic membranes have been successfully used for small and large-scale industrial processes since the middle of the twentieth century. A wide variety of synthetic membranes is known. They can be produced from organic materials such as polymers and liquids, as well as inorganic materials. Most of the commercially utilized synthetic membranes in the separation industry are made of polymeric structures. They can be classified based on their surface chemistry, bulk structure, morphology, and production method. The chemical and physical properties of synthetic membranes and separated particles, as well as the choice of driving force, define a particular membrane separation process. The most commonly used driving forces of a membrane process in industry are pressure and concentration gradients. The respective membrane process is therefore known as filtration. Synthetic membranes utilized in a separation process can be of different geometry and of respective flow configuration. They can also be categorized based on their application and separation regime. The best known synthetic membrane separation processes include water purification, reverse osmosis, dehydrogenation of natural gas, removal of cell particles by microfiltration and ultrafiltration, removal of microorganisms from dairy products, and dialysis. Membrane types and structure Synthetic membranes can be fabricated from a large number of different materials. They can be made from organic or inorganic materials including solids such as metals, ceramics, homogeneous films, polymers, heterogeneous solids (polymeric mixtures, mixed glasses), and liquids. Ceramic membranes are produced from inorganic materials such as aluminium oxides, silicon carbide, and zirconium oxide. Ceramic membranes are very resistant to the action of aggressive media (acids, strong solvents).
https://en.wikipedia.org/wiki/Acoustic%20cryptanalysis
Acoustic cryptanalysis is a type of side channel attack that exploits sounds emitted by computers or other devices. Most modern acoustic cryptanalysis focuses on the sounds produced by computer keyboards and internal computer components, but historically it has also been applied to impact printers and electromechanical deciphering machines. History Victor Marchetti and John D. Marks eventually negotiated the declassification of CIA acoustic intercepts of the sounds of cleartext printing from encryption machines. Technically, this method of attack dates to the time when FFT hardware was cheap enough to perform the task; in this case, the late 1960s to mid-1970s. However, such acoustic attacks were already being made by other, more primitive means in the mid-1950s. In his book Spycatcher, former MI5 operative Peter Wright discusses use of an acoustic attack against Egyptian Hagelin cipher machines in 1956. The attack was codenamed "ENGULF". Known attacks In 2004, Dmitri Asonov and Rakesh Agrawal of the IBM Almaden Research Center announced that computer keyboards and keypads used on telephones and automated teller machines (ATMs) are vulnerable to attacks based on the sounds produced by different keys. Their attack employed a neural network to recognize the key being pressed. By analyzing recorded sounds, they were able to recover the text of data being entered. These techniques allow an attacker using covert listening devices to obtain passwords, passphrases, personal identification numbers (PINs), and other information entered via keyboards. In 2005, a group of UC Berkeley researchers performed a number of practical experiments demonstrating the validity of this kind of threat. Also in 2004, Adi Shamir and Eran Tromer demonstrated that it may be possible to conduct timing attacks against a CPU performing cryptographic operations by analyzing variations in acoustic emissions. Analyzed emissions were ultrasonic noise emanating from capacitors and inductors on co
https://en.wikipedia.org/wiki/Color%20balance
In photography and image processing, color balance is the global adjustment of the intensities of the colors (typically red, green, and blue primary colors). An important goal of this adjustment is to render specific colors – particularly neutral colors like white or grey – correctly. Hence, the general method is sometimes called gray balance, neutral balance, or white balance. Color balance changes the overall mixture of colors in an image and is used for color correction. Generalized versions of color balance are used to correct colors other than neutrals or to deliberately change them for effect. White balance is one of the most common kinds of balancing; colors are adjusted to make a white object (such as a piece of paper or a wall) appear white rather than a shade of any other color. Image data acquired by sensors – either film or electronic image sensors – must be transformed from the acquired values to new values that are appropriate for color reproduction or display. Several aspects of the acquisition and display process make such color correction essential – including that the acquisition sensors do not match the sensors in the human eye, that the properties of the display medium must be accounted for, and that the ambient viewing conditions of the acquisition differ from the display viewing conditions. The color balance operations in popular image editing applications usually operate directly on the red, green, and blue channel pixel values, without respect to any color sensing or reproduction model. In film photography, color balance is typically achieved by using color correction filters over the lights or on the camera lens. Generalized color balance Sometimes the adjustment to keep neutrals neutral is called white balance, and the phrase color balance refers to the adjustment that in addition makes other colors in a displayed image appear to have the same general appearance as the colors in an original scene. It is particularly import
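Since, as noted above, popular editing applications operate directly on the red, green, and blue channel values, a white balance can be sketched as a simple per-channel (von Kries-style diagonal) scaling. A minimal Python illustration (the sampled white point is an assumed input, not part of the article):

```python
import numpy as np

def white_balance(image, white_rgb):
    """Scale each RGB channel so that the pixel value `white_rgb`
    (sampled from a known-white object) maps to pure white."""
    image = np.asarray(image, dtype=np.float64)
    scale = 255.0 / np.asarray(white_rgb, dtype=np.float64)
    return np.clip(image * scale, 0, 255).astype(np.uint8)

# A pixel equal to the sampled white point becomes (255, 255, 255).
pixel = np.array([[[200, 180, 160]]], dtype=np.uint8)
print(white_balance(pixel, white_rgb=(200, 180, 160)))
```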
https://en.wikipedia.org/wiki/Game%20client
A game client is a network client that connects an individual user to the main game server, used mainly in multiplayer video games. It collects data such as score, player status, position and movement from a single player and sends it to the game server, which allows the server to collect each individual's data and show every player in the game, whether it is an arena game on a smaller scale or a massive game with thousands of players on the same map. Even though the game server displays each player's information for every player in a game, players still have their own unique perspective from the information collected by the game client, so that every player's perspective of the game is different, even though the world for every player is the same. The game client also allows information sharing among users. An example would be item exchange in many MMORPG games: when a player exchanges an item they do not want for an item they want, the game clients interconnect with each other and allow the sharing of information, in this case the items being exchanged. Since many games require a centralized space for players to gather and a way for users to exchange their information, many game clients are a hybrid of client-server and peer-to-peer application structures. History The World Wide Web was born on a NeXTcube with a 25 MHz CPU, 2 GB of disk, and a grayscale monitor running the NeXTSTEP OS. Sir Tim Berners-Lee put the first web page online on August 6, 1991, while working for CERN in Geneva, Switzerland. Online gaming started in the early seventies. At that time, dial-up bulletin boards provided players with a way of playing games over phone lines. In the 1990s, new technologies enabled gaming sites to pop up all over the internet. The client-server system provided online gaming a way to function on a large scale. Functions A game client has four primary functions: receiving input, analyzing data, giving feedback, and adjusting the system. Receives input A game client receives input from an in
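The collect-and-send loop described above can be sketched in a few lines of Python (a hypothetical client with an invented line-delimited JSON protocol, purely for illustration; no real game uses exactly this):

```python
import json
import socket

class GameClient:
    """Sketch of a client: collect this player's input, send it to the
    server, and read back the authoritative merged world state."""
    def __init__(self, host: str, port: int):
        self.sock = socket.create_connection((host, port))
        self.reader = self.sock.makefile("r")

    def send_input(self, player_input: dict) -> None:
        # e.g. {"player": "p1", "move": [1, 0], "action": "jump"}
        self.sock.sendall((json.dumps(player_input) + "\n").encode())

    def receive_state(self) -> dict:
        # The server broadcasts the state of every connected player;
        # each client then renders only its own perspective of it.
        return json.loads(self.reader.readline())
```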
https://en.wikipedia.org/wiki/Statewatch
Statewatch is a non-profit organization founded in 1991 that monitors civil liberties and other issues in the European Union and encourages investigative reporting and research. The organization has three free databases, including a large database of all its news, articles and links since 1991, and the Statewatch European Monitoring and Documentation Centre (SEMDOC), which monitors all new justice and home affairs measures since 1993. The predecessor to Statewatch was "State Research" (1977-1982), which produced a bi-monthly bulletin and carried out research. Among other activities, it monitors anti-terrorist legislation, has a Passenger Name Record observatory, and covers asylum issues, data privacy, biometrics, and related topics. The organization and its director, Tony Bunyan, have received awards for their civil rights activism, including a 1998 award from the British Campaign for Freedom of Information and the 2011 "Long Walk" award at Liberty's Human Rights Awards. References External links Information privacy International organisations based in London Non-profit organisations based in London Organisations based in the City of London Organizations established in 1991 Organizations related to the European Union Watchdog journalism 1991 establishments in Europe
https://en.wikipedia.org/wiki/Principles%20and%20Standards%20for%20School%20Mathematics
Principles and Standards for School Mathematics (PSSM) are guidelines produced by the National Council of Teachers of Mathematics (NCTM) in 2000, setting forth recommendations for mathematics educators. They form a national vision for preschool through twelfth grade mathematics education in the US and Canada. It is the primary model for standards-based mathematics. The NCTM employed a consensus process that involved classroom teachers, mathematicians, and educational researchers. The resulting document sets forth a set of six principles (Equity, Curriculum, Teaching, Learning, Assessment, and Technology) that describe NCTM's recommended framework for mathematics programs, and ten general strands or standards that cut across the school mathematics curriculum. These strands are divided into mathematics content (Number and Operations, Algebra, Geometry, Measurement, and Data Analysis and Probability) and processes (Problem Solving, Reasoning and Proof, Communication, Connections, and Representation). Specific expectations for student learning are described for ranges of grades (preschool to 2, 3 to 5, 6 to 8, and 9 to 12). Origins The Principles and Standards for School Mathematics was developed by the NCTM. The NCTM's stated intent was to improve mathematics education. The contents were based on surveys of existing curriculum materials, curricula and policies from many countries, educational research publications, and government agencies such as the U.S. National Science Foundation. The original draft was widely reviewed at the end of 1998 and revised in response to hundreds of suggestions from teachers. The PSSM is intended to be "a single resource that can be used to improve mathematics curricula, teaching, and assessment." The latest update was published in 2000. The PSSM is available as a book, and in hypertext format on the NCTM web site. The PSSM replaces three prior publications by NCTM: Curriculum and Evaluation Standards for School Mathematics (1
https://en.wikipedia.org/wiki/National%20Council%20of%20Teachers%20of%20Mathematics
Founded in 1920, the National Council of Teachers of Mathematics (NCTM) is a professional organization for schoolteachers of mathematics in the United States. One of its goals is to improve the standards of mathematics in education. NCTM holds annual national and regional conferences for teachers and publishes five journals. Journals NCTM publishes five official journals. All are available in print and online versions. Teaching Children Mathematics supports improvement of pre-K–6 mathematics education by serving as a resource for teachers so as to provide more and better mathematics for all students. It is a forum for the exchange of mathematics ideas, activities, and pedagogical strategies, and for sharing and interpreting research. Mathematics Teaching in the Middle School supports the improvement of grade 5–9 mathematics education by serving as a resource for practicing and prospective teachers, as well as supervisors and teacher educators. It is a forum for the exchange of mathematics ideas, activities, and pedagogical strategies, and for sharing and interpreting research. Mathematics Teacher is devoted to improving mathematics instruction for grades 8–14 and supporting teacher education programs. It provides a forum for sharing activities and pedagogical strategies, deepening understanding of mathematical ideas, and linking mathematical education research to practice. Mathematics Teacher Educator, published jointly with the Association of Mathematics Teacher Educators, contributes to building a professional knowledge base for mathematics teacher educators that stems from, develops, and strengthens practitioner knowledge. The journal provides a means for practitioner knowledge related to the preparation and support of teachers of mathematics to be not only public, shared, and stored, but also verified and improved over time (Hiebert, Gallimore, and Stigler 2002). NCTM does not conduct research in mathematics education, but it does publish the Journal for Rese
https://en.wikipedia.org/wiki/Heesch%27s%20problem
In geometry, the Heesch number of a shape is the maximum number of layers of copies of the same shape that can surround it with no overlaps and no gaps. Heesch's problem is the problem of determining the set of numbers that can be Heesch numbers. Both are named for geometer Heinrich Heesch, who found a tile with Heesch number 1 (the union of a square, equilateral triangle, and 30-60-90 right triangle) and proposed the more general problem. For example, a square may be surrounded by infinitely many layers of congruent squares in the square tiling, while a circle cannot be surrounded by even a single layer of congruent circles without leaving some gaps. The Heesch number of the square is infinite and the Heesch number of the circle is zero. In more complicated examples, such as the one shown in the illustration, a polygonal tile can be surrounded by several layers, but not by infinitely many; the maximum number of layers is the tile's Heesch number. Formal definitions A tessellation of the plane is a partition of the plane into smaller regions called tiles. The zeroth corona of a tile is defined as the tile itself, and for k > 0 the kth corona is the set of tiles sharing a boundary point with the (k − 1)th corona. The Heesch number of a figure S is the maximum value k such that there exists a tiling of the plane, and a tile t within that tiling, such that all tiles in the zeroth through kth coronas of t are congruent to S. In some work on this problem, this definition is modified to additionally require that the union of the zeroth through kth coronas of t is a simply connected region. If there is no upper bound on the number of layers by which a tile may be surrounded, its Heesch number is said to be infinite. In this case, an argument based on Kőnig's lemma can be used to show that there exists a tessellation of the whole plane by congruent copies of the tile. Example Consider the non-convex polygon P shown in the figure to the right, which is formed from
https://en.wikipedia.org/wiki/Brian%20Randell
Brian Randell DSc FBCS FLSW (born 1936) is a British computer scientist, and emeritus professor at the School of Computing, Newcastle University, United Kingdom. He specialises in research into software fault tolerance and dependability, and is a noted authority on the early pre-1950 history of computing hardware. Biography Randell was employed at English Electric from 1957 to 1964, where he was working on compilers. His work on ALGOL 60 is particularly well known, including the development of the Whetstone compiler for the English Electric KDF9, an early stack machine. In 1964, he joined IBM, where he worked at the Thomas J. Watson Research Center on high performance computer architectures and also on operating system design methodology. In May 1969, he became a professor of computing science at the then named University of Newcastle upon Tyne, where he has worked since then in the area of software fault tolerance and dependability. He is a member of the Special Interest Group on Computers, Information and Society (SIGCIS) of the Society for the History of Technology, and a founding member of the Editorial Board of the IEEE Annals of the History of Computing journal. He was elected a Fellow of the Association for Computing Machinery in 2008, and a Fellow of the Learned Society of Wales in 2011. He was, until 1969, a member of the International Federation for Information Processing (IFIP) Working Group 2.1 (WG2.1) on Algorithmic Languages and Calculi, which specified, maintains, and supports the programming languages ALGOL 60 and ALGOL 68. He is also a founding member of IFIP WG2.3 on Programming Methodology, and of IFIP WG10.4 on Dependability and Fault Tolerance. He is married (to Liz, a teacher of French) and has four children. Work Brian Randell's main research interests are in the field of computer science, specifically on system dependability and fault tolerance. His interest in the history of computing was sparked by coming across the then almos
https://en.wikipedia.org/wiki/Contorsion%20tensor
The contorsion tensor in differential geometry is the difference between a connection with and without torsion in it. It commonly appears in the study of spin connections. Thus, for example, a vielbein together with a spin connection, when subject to the condition of vanishing torsion, gives a description of Einstein gravity. For supersymmetry, the same constraint, of vanishing torsion, gives (the field equations of) 11-dimensional supergravity. That is, the contorsion tensor, along with the connection, becomes one of the dynamical objects of the theory, demoting the metric to a secondary, derived role. The elimination of torsion in a connection is referred to as the absorption of torsion, and is one of the steps of Cartan's equivalence method for establishing the equivalence of geometric structures. Definition in metric geometry In metric geometry, the contorsion tensor expresses the difference between a metric-compatible affine connection with Christoffel symbol Γ and the unique torsion-free Levi-Civita connection for the same metric. The contorsion tensor is defined in terms of the torsion tensor as a sum of permutations of the torsion tensor (up to a sign, see below), where the indices are raised and lowered with respect to the metric. The non-obvious sum in the definition of the contorsion tensor arises from the sum-sum difference that enforces metric compatibility. The contorsion tensor is antisymmetric in the first two indices, whilst the torsion tensor itself is antisymmetric in its last two indices; this is shown below. The full metric-compatible affine connection can be written as the sum of the torsion-free Levi-Civita connection and the contorsion tensor. Definition in affine geometry In affine geometry, one does not have a metric nor a metric connection, and so one is not free to raise and lower indices on demand. One can still achieve a similar effect by making use of the solder form, allowing the bundle to be related to what is happening on its base space. This is an explicitly geometric v
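The displayed formula lost from the extract can be stated in one common convention (a reconstruction, not necessarily the article's exact index placement or sign, both of which vary across the literature): with torsion T^λ_{μν} = Γ^λ_{μν} − Γ^λ_{νμ}, demanding metric compatibility of Γ gives

```latex
\[
\Gamma^{\lambda}{}_{\mu\nu}
  = \tilde{\Gamma}^{\lambda}{}_{\mu\nu} + K^{\lambda}{}_{\mu\nu},
\qquad
K^{\lambda}{}_{\mu\nu}
  = \tfrac{1}{2}\left( T^{\lambda}{}_{\mu\nu}
      + T_{\mu}{}^{\lambda}{}_{\nu}
      + T_{\nu}{}^{\lambda}{}_{\mu} \right),
\]
```

where Γ̃ is the Levi-Civita connection. With all indices lowered this version satisfies K_{λμν} = −K_{νμλ}; sources that order the contorsion's indices differently place the antisymmetric pair in the first two positions, which is the convention the article's symmetry statement refers to.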
https://en.wikipedia.org/wiki/Ferran%20Adri%C3%A0
Fernando Adrià Acosta (; born 14 May 1962) is a Spanish chef. He was the head chef of the El Bulli restaurant in Roses on the Costa Brava and is considered one of the best chefs in the world. He has often collaborated with his brother, the renowned pastry chef Albert Adrià. Career Ferran Adrià began his culinary career in 1980 during his stint as a dishwasher at the Hotel Playafels, in the town of Castelldefels. The chef de cuisine at this hotel taught him traditional Spanish cuisine. At 19 he was drafted into military service where he worked as a cook. In 1984, at the age of 22, Adrià joined the kitchen staff of elBulli as a line cook. Eighteen months later he became the head chef. In 1994, Ferran Adrià and Juli Soler (his partner) sold 20% of their business to Miquel Horta (a Spanish millionaire and philanthropist and son of the founder of Nenuco) for 120 million pesetas. This event became a turning point for elBulli, the money was used to finance an expansion of the kitchen and the relationship with Horta opened the door to new clients, businessmen, and politicians who helped spread the word about the creative experimentation happening at the time in Cala Montjoi. Along with British chef Heston Blumenthal, Adrià is often associated with "molecular gastronomy", although like Blumenthal the Spanish chef does not consider his cuisine to be of this category. Instead, he has referred to his cooking as deconstructivist. He defines the term as 'Taking a dish that is well known and transforming all its ingredients, or part of them; then modifying the dish's texture, form and/or its temperature. Deconstructed, such a dish will preserve its essence... but its appearance will be radically different from the original's.' His stated goal is to "provide unexpected contrasts of flavour, temperature and texture. Nothing is what it seems. The idea is to provoke, surprise and delight the diner." elBulli was only open for about six months of the year, from mid-June to mid-Dece
https://en.wikipedia.org/wiki/Shekel%20function
The Shekel function is a multidimensional, multimodal, continuous, deterministic function commonly used as a test function for testing optimization techniques. The mathematical form of the function in n dimensions with m maxima is: f(x) = Σ_(i=1..m) 1 / (c_i + Σ_(j=1..n) (x_j − a_ij)²), or, similarly, f(x_1, ..., x_n) = Σ_(i=1..m) (c_i + Σ_(j=1..n) (x_j − a_ij)²)^(−1). Global minima Numerically certified global minima and the corresponding solutions were obtained using interval methods for up to . References Shekel, J. 1971. "Test Functions for Multimodal Search Techniques." Fifth Annual Princeton Conference on Information Science and Systems. See also Test functions for optimization Mathematical optimization Functions and mappings
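A minimal Python sketch of the function as written above (illustrative; the parameter values a and c shown are arbitrary, not the standard Shekel-5/7/10 tables):

```python
import numpy as np

def shekel(x, a, c):
    """Shekel function: sum over m local maxima. `a` is an m x n matrix
    of bump locations, `c` an m-vector controlling each bump's height."""
    x, a, c = np.asarray(x), np.asarray(a), np.asarray(c)
    return np.sum(1.0 / (c + np.sum((a - x) ** 2, axis=1)))

a = np.array([[4.0, 4.0], [1.0, 1.0]])   # two maxima in two dimensions
c = np.array([0.1, 0.2])

print(shekel([4.0, 4.0], a, c))  # near the first maximum: ~10.05
print(shekel([9.0, 9.0], a, c))  # far from both maxima: a small value
```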
https://en.wikipedia.org/wiki/Voltage%20regulator%20module
A voltage regulator module (VRM), sometimes called processor power module (PPM), is a buck converter that provides the microprocessor and chipset the appropriate supply voltage, converting a higher supply voltage such as +5 V or +12 V to the lower voltages required by the devices, allowing devices with different supply voltages to be mounted on the same motherboard. On personal computer (PC) systems, the VRM is typically made up of power MOSFET devices. Overview Most voltage regulator module implementations are soldered onto the motherboard. Some processors, such as Intel Haswell and Ice Lake CPUs, feature some voltage regulation components on the same CPU package, reducing the VRM requirements of the motherboard; such a design brings certain levels of simplification to complex voltage regulation involving numerous CPU supply voltages and dynamic powering up and down of various areas of a CPU. A voltage regulator integrated on-package or on-die is usually referred to as a fully integrated voltage regulator (FIVR) or simply an integrated voltage regulator (IVR). Most modern CPUs require core voltages well below the main supply rails, as CPU designers tend to use lower CPU core voltages; lower voltages help in reducing CPU power dissipation, which is often specified through thermal design power (TDP) that serves as the nominal value for designing CPU cooling systems. Some voltage regulators provide a fixed supply voltage to the processor, but most of them sense the required supply voltage from the processor, essentially acting as a continuously-variable adjustable regulator. In particular, VRMs that are soldered to the motherboard are supposed to do the sensing, according to the Intel specification. Modern video cards also use a VRM due to higher power and current requirements. These VRMs may generate a significant amount of heat and require heat sinks separate from the GPU. Voltage identification The correct supply voltage is communicated by the microprocessor to the VRM at startup via a number of bits called VID (voltage identification definition). In
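For intuition about the conversion itself (a supplementary note, not from the article): an ideal buck converter in continuous conduction relates output to input voltage through its switching duty cycle D, so stepping a 12 V rail down to a hypothetical 1.2 V core voltage needs roughly a 10% duty cycle:

```latex
\[
V_{\text{out}} = D \cdot V_{\text{in}},
\qquad
D = \frac{V_{\text{out}}}{V_{\text{in}}}
  = \frac{1.2\,\text{V}}{12\,\text{V}} = 0.1 .
\]
```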
https://en.wikipedia.org/wiki/Vertical%20market%20software
Vertical market software is aimed at addressing the needs of any given business within a discernible vertical market (specific industry or market). While horizontal market software can be useful to a wide array of industries (such as word processors or spreadsheet programs), vertical market software is developed for and customized to a specific industry's needs. Vertical market software is readily identifiable by the application-specific graphical user interface which defines it. One example of vertical market software is point-of-sale software. See also Horizontal market software Horizontal market Product software implementation method Enterprise resource planning Customer relationship management Content management system Supply chain management Resources Microsoft ships first Windows OS for vertical market from InfoWorld The Limits of Open Source - Vertical Markets Present Special Obstacles Software by type
https://en.wikipedia.org/wiki/Orienting%20response
The orienting response (OR), also called orienting reflex, is an organism's immediate response to a change in its environment, when that change is not sudden enough to elicit the startle reflex. The phenomenon was first described by Russian physiologist Ivan Sechenov in his 1863 book Reflexes of the Brain, and the term ('ориентировочный рефлекс' in Russian) was coined by Ivan Pavlov, who also referred to it as the Shto takoye? (Что такое? or What is it?) reflex. The orienting response is a reaction to novel or significant stimuli. In the 1950s the orienting response was studied systematically by the Russian scientist Evgeny Sokolov, who documented the phenomenon called "habituation", referring to a gradual "familiarity effect" and reduction of the orienting response with repeated stimulus presentations. Researchers have found a number of physiological mechanisms associated with the OR, including changes in phasic and tonic skin conductance response (SCR), electroencephalogram (EEG), and heart rate following a novel or significant stimulus. These changes all occur within seconds of stimulus onset. EEG studies have associated the OR particularly with the P300 wave and the P3a component of the event-related potential (ERP). Neural correlates Current understanding of the localization of the OR in the brain is still unclear. In one study using fMRI and SCR, researchers found that novel visual stimuli associated with SCR responses typical of an OR also corresponded to activation in the hippocampus, anterior cingulate gyrus, and ventromedial prefrontal cortex. These regions are also believed to be largely responsible for emotion, decision making, and memory. Increases in activity in the cerebellum and extrastriate cortex were also recorded; these regions are significantly implicated in visual perception and processing. Function When an individual encounters a novel environmental stimulus, such as a bright flash of light or a sudden loud noise, they will pay attentio
https://en.wikipedia.org/wiki/Castlequest
Castlequest (known in Japan as Castle Excellent) is an adventure/puzzle video game. It was developed and published by ASCII Corporation in 1985 for the FM-7, PC-88, and Sharp X1. Additional versions followed in 1986 for the Famicom and MSX, and it was subsequently released in 1989 for the NES in the United States by Nexoft Corporation (the American division of ASCII). It is the sequel to The Castle, released in 1985 for the MSX, SG-1000, and other systems (though not the NES). Like that game, it is an early example of the Metroidvania genre. Gameplay The object of the game is to navigate through Groken Castle to rescue Princess Margarita. The player can push certain objects throughout the game to make progress. In some rooms, the prince can only advance to the next room by aligning cement blocks, Honey Jars, Candle Cakes, and the Elevator Controlling Block. This can be quite time-consuming, since the prince can only open a particular door while standing beside it, meaning that he cannot open a door while jumping in mid-air. The prince must also carry a key that matches the color of the door he intends to open. The player can navigate the castle with the help of a map that can be obtained in the first room, where the player begins. The map provides the player with a 10×10 matrix of rooms and highlights the room in which the princess is located. The player must also avoid touching enemies such as Knights, Bishops, Wizards, Fire Spirits, Attack Cats and Phantom Flowers. Release In the Family Computer and NES versions, each room is wider than the screen, so the display scrolls horizontally as the player moves. Because of the different room sizes, many adjustments to the room layouts were made in comparison to the MSX version. In the Family Computer version, the player starts with 4 lives, and the game supports the Famicom Data Recorder and ASCII Turbo File peripherals for saving and loading game progress. When the game was reworked for the US NES release, the
https://en.wikipedia.org/wiki/Berlin%20Circle
The Berlin Circle was a group that maintained logical empiricist views about philosophy. History The Berlin Circle was created in the late 1920s by Hans Reichenbach, Kurt Grelling and Walter Dubislav, and was composed of philosophers and scientists such as Carl Gustav Hempel, David Hilbert and Richard von Mises. Its original name was Die Gesellschaft für empirische Philosophie, which in English translates as "the Society for Empirical Philosophy". Together with the Vienna Circle, the group published the journal Erkenntnis ("Knowledge"), edited by Rudolf Carnap and Reichenbach, and organized several congresses and colloquia concerning the philosophy of science, the first of which was held in Prague in 1929. The Berlin Circle had much in common with the Vienna Circle, but the philosophies of the two circles differed on a few subjects, such as probability and conventionalism. Reichenbach insisted on calling his philosophy logical empiricism, to distinguish it from the logical positivism of the Vienna Circle. Few people today make the distinction, and the terms are often used interchangeably. Members of the Berlin Circle were particularly active in analyzing the philosophical and logical consequences of the advances in contemporary physics, especially the theory of relativity. Apart from that, they denied the soundness of metaphysics and traditional philosophy and asserted that many philosophical problems are indeed meaningless. After the rise of Nazism, several of the group's members emigrated to other countries: Reichenbach moved to Turkey in 1933 and then to the United States in 1938; Dubislav emigrated to Prague in 1936; Hempel moved to Belgium in 1934 and then to the United States in 1939; and Grelling was killed in a concentration camp. A younger member of the Berlin Circle, or Berlin School, to leave Germany was Olaf Helmer, who joined the RAND Corporation and played an important role in the development of the Delphi method used for predicting future tr
https://en.wikipedia.org/wiki/Stag%20hunt
In game theory, the stag hunt, sometimes referred to as the assurance game, trust dilemma or common interest game, describes a conflict between safety and social cooperation. The stag hunt problem originated with philosopher Jean-Jacques Rousseau in his Discourse on Inequality. In the most common account of this dilemma, which is quite different from Rousseau's, two hunters must decide separately, and without the other knowing, whether to hunt a stag or a hare. However, both hunters know the only way to successfully hunt a stag is with the other's help. One hunter can catch a hare alone with less effort and less time, but it is worth far less than a stag and has much less meat. It would be much better for each hunter to give up the total autonomy and minimal risk of acting individually, which brings only the small reward of the hare. Instead, each hunter should separately choose the more ambitious and far more rewarding goal of getting the stag, thereby giving up some autonomy in exchange for the other hunter's cooperation and added might. This situation is often seen as a useful analogy for many kinds of social cooperation, such as international agreements on climate change. The stag hunt differs from the prisoner's dilemma in that there are two pure-strategy Nash equilibria: one where both players cooperate, and one where both players defect. In the prisoner's dilemma, in contrast, despite the fact that both players cooperating is Pareto efficient, the only pure-strategy Nash equilibrium is when both players choose to defect. An example of the payoff matrix for the stag hunt is pictured in Figure 2. Formal definition Formally, a stag hunt is a game with two pure strategy Nash equilibria—one that is risk dominant and another that is payoff dominant. The payoff matrix in Figure 1 illustrates a generic stag hunt, where $a > b \ge d > c$. Often, games with a similar structure but without a risk dominant Nash equilibrium are called assurance games. For instance if a = 10, b = 5, c = 0, and d = 2. While (
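The two equilibria can be verified mechanically. The Python sketch below enumerates the pure-strategy Nash equilibria of a symmetric 2×2 game using the parameters just quoted (a = 10, b = 5, c = 0, d = 2), under the usual labeling in which a is the stag–stag payoff, d the hare–hare payoff, b the payoff for playing Hare against Stag, and c the payoff for playing Stag against Hare.

```python
from itertools import product

# Row player's payoffs; the game is symmetric, so the column player's payoff
# in cell (row, col) is payoff[col][row]. Strategy 0 = Stag, 1 = Hare.
a, b, c, d = 10, 5, 0, 2           # satisfies a > b >= d > c
payoff = [[a, c],                   # row plays Stag against (Stag, Hare)
          [b, d]]                   # row plays Hare against (Stag, Hare)

def is_pure_nash(row, col):
    """Neither player gains by unilaterally switching strategy."""
    row_ok = payoff[row][col] >= max(payoff[r][col] for r in (0, 1))
    col_ok = payoff[col][row] >= max(payoff[k][row] for k in (0, 1))
    return row_ok and col_ok

names = ("Stag", "Hare")
for row, col in product((0, 1), repeat=2):
    if is_pure_nash(row, col):
        print(f"({names[row]}, {names[col]}) is a pure-strategy Nash equilibrium")
# Prints (Stag, Stag) and (Hare, Hare): the two equilibria described above.
```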
https://en.wikipedia.org/wiki/Digital%20Audio%20Access%20Protocol
The Digital Audio Access Protocol (DAAP) is the proprietary protocol introduced by Apple in its iTunes software to share media across a local network. DAAP addresses the same problems for Apple as the UPnP AV standards address for members of the Digital Living Network Alliance (DLNA). Description The DAAP protocol was originally introduced in iTunes version 4.0. Initially, Apple did not officially release a protocol description, but it has been reverse-engineered to a sufficient degree that reimplementations of the protocol for non-iTunes platforms have been possible. A DAAP server is a specialized HTTP server that performs two functions: it sends a list of songs, and it streams requested songs to clients. There are also provisions to notify the client of changes to the server. Requests are sent to the server by the client in the form of URLs and are responded to with data in the application/x-dmap-tagged MIME type, which can be converted to XML by the client. iTunes uses the zeroconf (also known as Bonjour) service to announce and discover DAAP shares on a local subnet. The DAAP service uses TCP port 3689 by default. DAAP is one of two media sharing schemes that Apple has currently released. The other, Digital Photo Access Protocol (DPAP), is used by iPhoto for sharing images. They both rely on an underlying protocol, Digital Media Access Protocol (DMAP). Early versions of iTunes allowed users to connect to shares across the Internet; however, in recent versions only computers on the same subnet can share music (workarounds such as port tunneling are possible). The Register speculates that Apple made this move in response to pressure from the record labels. More recent versions of iTunes also limit the number of clients to 5 unique IP addresses within a 24-hour period. DAAP has also been implemented in other non-iTunes media applications such as Banshee, Amarok, Exaile (with a plugin), Songbird (with a plugin), Rhythmbox, and WiFiTunes. DAAP authentication Beginning with iTunes 4.2, Appl
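Since DAAP shares are announced over zeroconf, any mDNS/DNS-SD client can discover them. Below is a minimal sketch using the third-party Python zeroconf package to browse for the `_daap._tcp.local.` service type; the listener class and the five-second browse window are illustrative choices, not part of the protocol.

```python
import time
from zeroconf import ServiceBrowser, Zeroconf  # third-party: pip install zeroconf

class DaapListener:
    """Prints DAAP shares as they are announced on the local subnet."""

    def add_service(self, zc, service_type, name):
        info = zc.get_service_info(service_type, name)
        if info:
            print(f"found DAAP share {name!r} at {info.parsed_addresses()[0]}:{info.port}")

    def remove_service(self, zc, service_type, name):
        print(f"DAAP share {name!r} went away")

    def update_service(self, zc, service_type, name):
        pass  # required by recent zeroconf versions; nothing to do here

zc = Zeroconf()
browser = ServiceBrowser(zc, "_daap._tcp.local.", DaapListener())
try:
    time.sleep(5)  # browse for a few seconds; shares normally advertise port 3689
finally:
    zc.close()
```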
https://en.wikipedia.org/wiki/Universal%20instantiation
In predicate logic, universal instantiation (UI; also called universal specification or universal elimination, and sometimes confused with dictum de omni) is a valid rule of inference from a truth about each member of a class of individuals to the truth about a particular individual of that class. It is generally given as a quantification rule for the universal quantifier but it can also be encoded in an axiom schema. It is one of the basic principles used in quantification theory. Example: "All dogs are mammals. Fido is a dog. Therefore Fido is a mammal." Formally, the rule as an axiom schema is given as

$$\forall x \, A \Rightarrow A(a/x)$$

for every formula A and every term a, where $A(a/x)$ is the result of substituting a for each free occurrence of x in A; $A(a/x)$ is an instance of $\forall x \, A$. And as a rule of inference it is: from $\vdash \forall x \, A$, infer $\vdash A(a/x)$. Irving Copi noted that universal instantiation "...follows from variants of rules for 'natural deduction', which were devised independently by Gerhard Gentzen and Stanisław Jaśkowski in 1934." Quine According to Willard Van Orman Quine, universal instantiation and existential generalization are two aspects of a single principle, for instead of saying that "∀x x = x" implies "Socrates = Socrates", we could as well say that the denial "Socrates ≠ Socrates" implies "∃x x ≠ x". The principle embodied in these two operations is the link between quantifications and the singular statements that are related to them as instances. Yet it is a principle only by courtesy. It holds only in the case where a term names and, furthermore, occurs referentially. See also Existential instantiation Existential generalization Existential quantification Inference rules References Rules of inference Predicate logic
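In a proof assistant, universal instantiation is simply the application of a universally quantified hypothesis to a term. A minimal sketch in Lean 4 (the predicate and variable names are invented for the example):

```lean
-- Universal instantiation: from h : ∀ x, P x and a term a, conclude P a,
-- by applying h to a.
example (α : Type) (P : α → Prop) (h : ∀ x, P x) (a : α) : P a := h a

-- The example from the text: "All dogs are mammals. Fido is a dog.
-- Therefore Fido is a mammal."
example (Being : Type) (Dog Mammal : Being → Prop)
    (allDogsAreMammals : ∀ x, Dog x → Mammal x)
    (fido : Being) (fidoIsDog : Dog fido) : Mammal fido :=
  allDogsAreMammals fido fidoIsDog
```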
https://en.wikipedia.org/wiki/The%20Legend%20of%20Kage
The Legend of Kage is a side-scrolling hack-and-slash game developed and published by Taito in 1985. In this game, the player controls the ninja Kage, with the objective being to get through five stages in order to save the princess Kirihime. These stages are littered with enemies; however, Kage has various skills and weapons at his disposal to get through them. The arcade release was considered a success for Taito, and exceeded sales expectations at the time of its release. It has been ported to a variety of home systems, has had sequels and spinoffs, and has been featured on various Taito compilations. Gameplay The player takes the role of a young Iga ninja named Kage ("Shadow"), on a mission to rescue Princess Kiri (hime) - the Shogun's daughter - from the villainous warlord Yoshi (ro Kuyigusa) and fellow evil samurai Yuki (nosuke Riko). The player is armed with a kodachi shortsword and an unlimited number of shuriken. Kage must fight his way through a forest, along a secret passageway, up a fortress wall, and through a castle, rescuing her twice (three times in the FC/NES version) in order to win the game. Each time the princess is rescued, the seasons change from summer to fall to winter and back to summer. In home versions, grabbing a crystal ball causes the player's clothes to change to the next level in color and thereby attain certain powers (bigger shuriken or faster speed). If Kage is hit in a home version while in green or orange clothes, he does not die but reverts to his normal red clothes. Cycles repeat after five levels are completed, and play continues until all lives are gone, which ends the game. Development and release The Legend of Kage was released by Taito in Japan in 1985, with a European release later in the year, and an American release in January 1986. According to Hisayoshi Ogura, the game's composer, development came after a fast-paced period of game development within Taito, which eased up around the time of this game. Ogura specifically gave the ga
https://en.wikipedia.org/wiki/Electromagnetic%20shielding
In electrical engineering, electromagnetic shielding is the practice of reducing or redirecting the electromagnetic field (EMF) in a space with barriers made of conductive or magnetic materials. It is typically applied to enclosures, for isolating electrical devices from their surroundings, and to cables, to isolate wires from the environment through which the cable runs (shielded cables). Electromagnetic shielding that blocks radio frequency (RF) electromagnetic radiation is also known as RF shielding. EMF shielding serves to minimize electromagnetic interference. The shielding can reduce the coupling of radio waves, electromagnetic fields, and electrostatic fields. A conductive enclosure used to block electrostatic fields is also known as a Faraday cage. The amount of reduction depends very much upon the material used, its thickness, the size of the shielded volume, the frequency of the fields of interest, and the size, shape, and orientation of any holes in the shield relative to the incident electromagnetic field. Materials used Typical materials used for electromagnetic shielding include thin layers of metal, sheet metal, metal screen, and metal foam. Common sheet metals for shielding include copper, brass, nickel, silver, steel, and tin. Shielding effectiveness, that is, how well a shield reflects or absorbs/suppresses electromagnetic radiation, is affected by the physical properties of the metal. These may include conductivity, solderability, permeability, thickness, and weight. A metal's properties are an important consideration in material selection. For example, electrically dominant waves are reflected by highly conductive metals like copper, silver, and brass, while magnetically dominant waves are absorbed/suppressed by a less conductive metal such as steel or stainless steel. Further, any holes in the shield or mesh must be significantly smaller than the wavelength of the radiation that is being kept out, or the enclosure will not effectively approximate an unbroken conducting sur
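The hole-size rule can be made quantitative: an aperture only stops mattering when it is much smaller than the free-space wavelength λ = c / f of the radiation to be excluded. A small Python sketch follows; the λ/10 threshold is a common rule of thumb assumed here, not a figure from this article.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def wavelength_m(frequency_hz: float) -> float:
    """Free-space wavelength: lambda = c / f."""
    return SPEED_OF_LIGHT / frequency_hz

def hole_is_acceptable(hole_size_m: float, frequency_hz: float, margin: float = 10.0) -> bool:
    """Rule-of-thumb check (assumed lambda/10 margin): a hole should be much
    smaller than the shortest wavelength the shield must block."""
    return hole_size_m < wavelength_m(frequency_hz) / margin

# A 5 mm vent hole against 2.4 GHz radiation (lambda ~ 12.5 cm):
print(f"{wavelength_m(2.4e9):.4f} m")       # ~0.1249 m
print(hole_is_acceptable(0.005, 2.4e9))      # True: 5 mm < 12.5 mm threshold
```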
https://en.wikipedia.org/wiki/Multiple%20drug%20resistance
Multiple drug resistance (MDR), multidrug resistance or multiresistance is antimicrobial resistance shown by a species of microorganism to at least one antimicrobial drug in three or more antimicrobial categories. Antimicrobial categories are classifications of antimicrobial agents based on their mode of action and specific to target organisms. The MDR types most threatening to public health are MDR bacteria that resist multiple antibiotics; other types include MDR viruses, fungi, and parasites (resistant to multiple antifungal, antiviral, and antiparasitic drugs of a wide chemical variety). Recognizing different degrees of MDR in bacteria, the terms extensively drug-resistant (XDR) and pandrug-resistant (PDR) have been introduced. Extensively drug-resistant (XDR) is the non-susceptibility of a bacterial species to all antimicrobial agents except in two or fewer antimicrobial categories. Within XDR, pandrug-resistant (PDR) is the non-susceptibility of a bacterial species to all antimicrobial agents in all antimicrobial categories. The definitions were published in 2011 in the journal Clinical Microbiology and Infection and are openly accessible. Common multidrug-resistant organisms (MDROs) Common multidrug-resistant organisms are usually bacteria: Vancomycin-resistant enterococci (VRE) Methicillin-resistant Staphylococcus aureus (MRSA) Extended-spectrum β-lactamase (ESBL) producing Gram-negative bacteria Klebsiella pneumoniae carbapenemase (KPC) producing Gram-negatives Multidrug-resistant Gram-negative rods (MDR GNR) MDRGN bacteria such as Enterobacter species, E. coli, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa Multidrug-resistant tuberculosis Overlapping with MDRGN, a group of Gram-positive and Gram-negative bacteria of particular recent importance has been dubbed the ESKAPE group (Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa and Enterobacter species). Bacterial resistance
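The MDR/XDR/PDR definitions reduce to counting the antimicrobial categories in which an isolate remains susceptible, so they are easy to encode directly. In the Python sketch below, the category names are invented placeholders rather than the actual category lists from the 2011 paper.

```python
def classify_resistance(non_susceptible: set, all_categories: set) -> str:
    """Apply the 2011 counting rules:
    PDR: non-susceptible to all agents in all categories;
    XDR: susceptible in at most two categories;
    MDR: non-susceptible to at least one agent in three or more categories."""
    susceptible = all_categories - non_susceptible
    if not susceptible:
        return "PDR"
    if len(susceptible) <= 2:
        return "XDR"
    if len(non_susceptible) >= 3:
        return "MDR"
    return "not multidrug-resistant"

# Invented category names, for illustration only.
categories = {"cat1", "cat2", "cat3", "cat4", "cat5", "cat6"}
print(classify_resistance({"cat1", "cat2", "cat3"}, categories))   # MDR
print(classify_resistance(categories - {"cat6"}, categories))      # XDR
print(classify_resistance(set(categories), categories))            # PDR
```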