https://en.wikipedia.org/wiki/Macrocell%20array
|
Macrocell arrays in PLDs
Programmable logic devices, such as programmable array logic and complex programmable logic devices, typically have a macrocell on every output pin.
Macrocell arrays in ASICs
A macrocell array is an approach to the design and manufacture of ASICs. Essentially, it is a small step up from the otherwise similar gate array: rather than being a prefabricated array of simple logic gates, the macrocell array is a prefabricated array of higher-level logic functions such as flip-flops, ALU functions, registers, and the like. These logic functions are simply placed at regular predefined positions and manufactured on a wafer, usually called a master slice. Creation of a circuit with a specified function is accomplished by adding metal interconnects to the chips on the master slice late in the manufacturing process, allowing the function of the chip to be customised as desired.
Macrocell array master slices are usually prefabricated and stockpiled in large quantities regardless of customer orders. The fabrication according to the individual customer specifications may be finished in a shorter time compared with standard cell or full custom design. The macrocell array approach reduces the mask costs since fewer custom masks need to be produced. In addition manufacturing test tooling lead time and costs are reduced since the same test fixtures may be used for all macrocell array products manufactured on the same die size.
Drawbacks are somewhat lower density and performance than other approaches to ASIC design. However, this style is often a viable approach for low production volumes.
A standard cell library is sometimes called a "macrocell library".
References
Gate arrays
|
https://en.wikipedia.org/wiki/Serial%20decimal
|
In computers, a serial decimal numeric representation is one in which ten bits are reserved for each digit, with a different bit turned on depending on which of the ten possible digits is intended. ENIAC and CALDIC used this representation.
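As a rough illustration of the idea (a hypothetical encoding helper, not drawn from ENIAC or CALDIC documentation), each decimal digit occupies ten bits with exactly one bit set:

```python
def encode_digit(d: int) -> int:
    """Return a 10-bit field with only bit d set (0 <= d <= 9)."""
    if not 0 <= d <= 9:
        raise ValueError("decimal digit expected")
    return 1 << d

def decode_digit(bits: int) -> int:
    """Recover the digit from a valid 1-of-10 field."""
    if bits >= 1 << 10 or bin(bits).count("1") != 1:
        raise ValueError("not a valid 1-of-10 code")
    return bits.bit_length() - 1

# The number 37 as two serial 1-of-10 digit fields:
print([format(encode_digit(int(c)), "010b") for c in "37"])
# -> ['0000001000', '0010000000']
```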
See also
Bit-serial architecture
Digit-serial architecture
1-of-10 code
One-hot code
References
Computer arithmetic
|
https://en.wikipedia.org/wiki/Circus%20Charlie
|
Circus Charlie is an action game originally published in arcades by Konami in 1984. The player controls a circus clown named Charlie in six different circus-themed minigames. It was released for MSX in the same year, followed by ports to the Famicom in 1986 by Soft Pro and the Commodore 64 in 1987.
Gameplay
In the game there are six regular stages (plus an extra stage) of differing tasks that are to be completed by Charlie. Grabbing money bags, performing dangerous tricks, avoiding enemies, completing stages, and so on earn Charlie points. After the sixth stage is completed, the game starts over at a faster pace and with more difficult levels, though the tasks to be completed are exactly the same.
Charlie also races against time. Bonus points are awarded according to the time remaining, but running out of time will cost the player a life.
Levels
The standard arcade version has 6 levels in total. Levels 1, 2, 4 and 5 have 5 sublevels each, and level 3 has 7. Each sublevel gets more difficult. Level 6 also has 5 sublevels, but repeats for as long as the player has lives.
Level 1: Ride a lion and jump through flaming rings
Level 2: Walk a tightrope while jumping over monkeys
Level 3: Jump between trampolines and beware the knife throwers and fire breathers. In sublevels 3 and 6 the trampolines are placed in a swimming pool and the knife throwers and fire breathers are replaced by jumping dolphins. This level is not present in the MSX and Famicom/NES ports of the game.
Level 4: Jump from ball to ball
Level 5: Ride a horse and jump over trampolines and walls
Level 6: Swing from one trapeze to the next
Versions
In arcades, there's a "Level Select" version of the game, in which the player can choose any of the stages to play, but only a limited number of times each, whereupon the level will become unselectable. There is no "ending" to the game—after the first five levels have each been played to their limit, the player then repeats the trapeze stage until all their lives are
|
https://en.wikipedia.org/wiki/Semiconductor%20memory
|
Semiconductor memory is a digital electronic semiconductor device used for digital data storage, such as computer memory. It typically refers to devices in which data is stored within metal–oxide–semiconductor (MOS) memory cells on a silicon integrated circuit memory chip. There are numerous different types using different semiconductor technologies. The two main types of random-access memory (RAM) are static RAM (SRAM), which uses several transistors per memory cell, and dynamic RAM (DRAM), which uses a transistor and a MOS capacitor per cell. Non-volatile memory (such as EPROM, EEPROM and flash memory) uses floating-gate memory cells, which consist of a single floating-gate transistor per cell.
Most types of semiconductor memory have the property of random access, which means that it takes the same amount of time to access any memory location, so data can be efficiently accessed in any random order. This contrasts with data storage media such as hard disks and CDs which read and write data consecutively and therefore the data can only be accessed in the same sequence it was written. Semiconductor memory also has much faster access times than other types of data storage; a byte of data can be written to or read from semiconductor memory within a few nanoseconds, while access time for rotating storage such as hard disks is in the range of milliseconds. For these reasons it is used for primary storage, to hold the program and data the computer is currently working on, among other uses.
Semiconductor memory chips are sold in very large volumes each year and account for a substantial share of the semiconductor industry. Shift registers, processor registers, data buffers and other small digital registers that have no memory address decoding mechanism are typically not referred to as memory, although they also store digital data.
Description
In a semiconductor memory chip, each bit of binary data is stored in a tiny circuit called a memory cell consisting of one to several transistors. The memory cells are
|
https://en.wikipedia.org/wiki/Purchasing
|
Purchasing is the process a business or organization uses to acquire goods or services to accomplish its goals. Although there are several organizations that attempt to set standards in the purchasing process, processes can vary greatly between organizations.
Purchasing is part of the wider procurement process, which typically also includes expediting, supplier quality, transportation, and logistics.
Details
Purchasing managers/directors, procurement managers/directors, or staff based in an organization's Purchasing Office, guide the organization's acquisition procedures and standards and operational purchasing activities.
Most organizations use a three-way check as the foundation of their purchasing programs. This involves three departments in the organization completing separate parts of the acquisition process. The three departments do not all report to the same senior manager, to prevent unethical practices and lend credibility to the process. These departments can be purchasing, receiving and accounts payable; or engineering, purchasing and accounts payable; or a plant manager, purchasing and accounts payable. Combinations can vary significantly, but a purchasing department and accounts payable are usually two of the three departments involved. Organizations typically have simpler procedures in place for low-value purchasing; for example, the UK's Ministry of Defence has a separate internal policy for low-value purchases below £10,000. When the receiving department is not involved, it is typically called a two-way check or two-way purchase order. In this situation, the purchasing department issues the purchase order marked "receipt not required". When an invoice arrives against the order, the accounts payable department will then go directly to the requestor of the purchase order to verify that the goods or services were received. This is typically what is done for goods and services that will bypass the receiving department. A few examples are software deliv
|
https://en.wikipedia.org/wiki/Origin%20%28mathematics%29
|
In mathematics, the origin of a Euclidean space is a special point, usually denoted by the letter O, used as a fixed point of reference for the geometry of the surrounding space.
In physical problems, the choice of origin is often arbitrary, meaning any choice of origin will ultimately give the same answer. This allows one to pick an origin point that makes the mathematics as simple as possible, often by taking advantage of some kind of geometric symmetry.
Cartesian coordinates
In a Cartesian coordinate system, the origin is the point where the axes of the system intersect. The origin divides each of these axes into two halves, a positive and a negative semiaxis. Points can then be located with reference to the origin by giving their numerical coordinates—that is, the positions of their projections along each axis, either in the positive or negative direction. The coordinates of the origin are always all zero, for example (0,0) in two dimensions and (0,0,0) in three.
Other coordinate systems
In a polar coordinate system, the origin may also be called the pole. It does not itself have well-defined polar coordinates, because the polar coordinates of a point include the angle made by the positive x-axis and the ray from the origin to the point, and this ray is not well-defined for the origin itself.
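As a concrete illustration of why the pole lacks a well-defined angle, the standard Cartesian-to-polar conversion (textbook formulas, not taken from the article) can be written as:

```latex
% Polar coordinates of a point (x, y) \neq (0, 0):
r = \sqrt{x^{2} + y^{2}}, \qquad \theta = \operatorname{atan2}(y, x).
% At the origin r = 0, but \theta is undefined: every choice of \theta
% describes the same point, so the pole has no unique polar coordinates.
```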
In Euclidean geometry, the origin may be chosen freely as any convenient point of reference.
The origin of the complex plane can be referred to as the point where the real axis and the imaginary axis intersect. In other words, it is the complex number zero.
See also
Null vector, an analogous point of a vector space
Distance from a point to a plane
Pointed space, a topological space with a distinguished point
Radial basis function, a function depending only on the distance from the origin
References
Elementary mathematics
|
https://en.wikipedia.org/wiki/Calender
|
A calender is a series of hard pressure rollers used to finish or smooth a sheet of material such as paper, textiles, rubber, or plastics. Calender rolls are also used to form some types of plastic films and to apply coatings. Some calender rolls are heated or cooled as needed. The word is sometimes misspelled calendar.
Etymology
The word "calender" itself is a derivation of the word κύλινδρος kylindros, the Greek word that is also the source of the word "cylinder".
History
Calender mills for pressing serge were apparently introduced to the Netherlands by Flemish refugees from the Eighty Years' War.
In eighteenth century China, workers called "calenderers" in the silk- and cotton-cloth trades used heavy rollers to press and finish cloth.
In 1836, Edwin M. Chaffee, of the Roxbury India Rubber Company, patented a four-roll calender to make rubber sheet. Chaffee worked with Charles Goodyear with the intention to "produce a sheet of rubber laminated to a fabric base". Calenders were also used for paper and fabrics long before later applications for thermoplastics. With the expansion of the rubber industry the design of calenders grew as well, so when PVC was introduced the machinery was already capable of processing it into film. As recorded in an overview on the history of the development of calenders, "There was development in both Germany and the United States and probably the first successful calendering of PVC was in 1935 in Germany, where in the previous year the Hermann Berstorff Company of Hannover designed the first calender specifically to process this plastic".
In the past, for paper, sheets were worked on with a polished hammer or pressed between polished metal sheets in a press. With the continuously operating paper machine it became part of the process of rolling the paper (in this case also called web paper). The pressure between the rollers, the "nip pressure", can be reduced by heating the rolls or moistening the paper surface. This helps to kee
|
https://en.wikipedia.org/wiki/Osmometer
|
An osmometer is a device for measuring the osmotic strength of a solution, colloid, or compound.
There are several different techniques employed in osmometry:
Freezing point depression osmometers determine the osmotic strength of a solution from how far osmotically active compounds depress its freezing point. This is the most common method in clinical laboratories because it is accurate and simple (see the sketch after this list).
Vapor pressure osmometers determine the concentration of osmotically active particles that reduce the vapor pressure of a solution.
Membrane osmometers measure the osmotic pressure of a solution separated from pure solvent by a semipermeable membrane.
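The following minimal Python sketch illustrates the freezing-point-depression calculation, assuming the standard cryoscopic relation ΔTf = Kf·b and water's cryoscopic constant of roughly 1.86 K·kg/mol; the numbers are illustrative, not taken from the article.

```python
# Cryoscopic constant of water (approximate), in K·kg/mol.
KF_WATER = 1.86

def osmolality_from_freezing_point(delta_t_kelvin: float) -> float:
    """Estimate osmolality (osmol/kg) from the measured freezing point
    depression, using Delta_Tf = Kf * b."""
    return delta_t_kelvin / KF_WATER

# A plasma sample that freezes about 0.54 K below pure water corresponds to
# roughly 0.29 osmol/kg, which is in the normal physiological range.
print(round(osmolality_from_freezing_point(0.54), 3))   # -> 0.29
```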
Osmometers are useful for determining the total concentration of dissolved salts and sugars in blood or urine samples. Osmometry is also useful in determining the molecular weight of unknown compounds and polymers.
Osmometry is the measurement of the osmotic strength of a substance. This is often used by chemists for the determination of average molecular weight.
Osmometry is also useful for estimating the drought tolerance of plant leaves.
See also
Clifton nanolitre osmometer, an example of a freezing point depression osmometer.
References
Scientific techniques
Measuring instruments
Polymer chemistry
Amount of substance
|
https://en.wikipedia.org/wiki/Internet%20Systems%20Consortium
|
Internet Systems Consortium, Inc., also known as ISC, is a Delaware-registered, 501(c)(3) non-profit corporation that supports the infrastructure of the universal, self-organizing Internet by developing and maintaining core production-quality software, protocols, and operations. ISC has developed several key Internet technologies that enable the global Internet, including: BIND, ISC DHCP and Kea. Other software projects no longer in active development include OpenReg and ISC AFTR (an implementation of an IPv4/IPv6 transition protocol based on Dual-Stack Lite).
ISC operates one of the 13 global authoritative DNS root servers, F-Root.
Over the years a number of additional software systems were operated under ISC (for example: INN and Lynx) to better support the Internet's infrastructure. ISC also expanded their operational activities to include Internet hosting facilities for other open-source projects such as NetBSD, XFree86, kernel.org, secondary name-service (SNS) for more than 50 top-level domains, and a DNS OARC (Operations, Analysis and Research Center) for monitoring and reporting of the Internet's DNS.
ISC is actively involved in the community design process; it authors and participates in the development of the IETF standards, including the production of managed open-source software used as a reference implementation of the DNS.
ISC is primarily funded by the sale of technical support contracts for its open source software.
History
Originally the company was founded as the Internet Software Consortium, Inc. The founders included Paul Vixie, Rick Adams and Carl Malamud. The corporation was intended to continue the development of BIND software. The founders believed that it was necessary that BIND's maintenance and development be managed and funded by an independent organization. ISC was designated as a root name server operator by IANA, originally as NS.ISC.ORG and later as F.ROOT-SERVERS.NET.
In January 2004, ISC reorganized under the new name Internet
|
https://en.wikipedia.org/wiki/Dirac%20comb
|
In mathematics, a Dirac comb (also known as sha function, impulse train or sampling function) is a periodic function with the formula
$$Ш_T(t) = \sum_{k=-\infty}^{\infty} \delta(t - kT)$$
for some given period $T$. Here t is a real variable and the sum extends over all integers k. The Dirac delta function $\delta$ and the Dirac comb are tempered distributions. The graph of the function resembles a comb (with the $\delta$'s as the comb's teeth), hence its name and the use of the comb-like Cyrillic letter sha (Ш) to denote the function.
The symbol $Ш(t)$, where the period is omitted, represents a Dirac comb of unit period. This implies
$$Ш_T(t) = \frac{1}{T}\,Ш\!\left(\frac{t}{T}\right).$$
Because the Dirac comb function is periodic, it can be represented as a Fourier series based on the Dirichlet kernel:
$$Ш_T(t) = \frac{1}{T} \sum_{n=-\infty}^{\infty} e^{i 2\pi n t / T}.$$
The Dirac comb function allows one to represent both continuous and discrete phenomena, such as sampling and aliasing, in a single framework of continuous Fourier analysis on tempered distributions, without any reference to Fourier series. The Fourier transform of a Dirac comb is another Dirac comb. Owing to the Convolution Theorem on tempered distributions which turns out to be the Poisson summation formula, in signal processing, the Dirac comb allows modelling sampling by multiplication with it, but it also allows modelling periodization by convolution with it.
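A short numpy sketch (illustrative parameters, not from the article) showing these two roles of the comb on a finite grid, with sampling as multiplication and periodization as circular convolution, plus the fact that the transform of a comb is again a comb:

```python
import numpy as np

N, T = 1000, 50                       # grid length and comb period (samples)
t = np.arange(N)
comb = np.zeros(N)
comb[::T] = 1.0                       # unit impulses every T samples

x = np.exp(-0.5 * ((t - N / 2) / 10.0) ** 2)   # a narrow test pulse

# Sampling: multiplying by the comb keeps x only at the impulse locations.
sampled = x * comb

# Periodization: circular convolution with the comb sums shifted copies of x,
# giving a T-periodic signal (computed here via the FFT).
periodized = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(comb)))

# The DFT of the comb is itself a comb: nonzero only every N // T = 20 bins.
spectrum = np.abs(np.fft.fft(comb))
print(np.flatnonzero(spectrum > 1e-9)[:5])     # -> [ 0 20 40 60 80]
```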
Dirac-comb identity
The Dirac comb can be constructed in two ways, either by using the comb operator (performing sampling) applied to the function that is constantly $1$, or, alternatively, by using the rep operator (performing periodization) applied to the Dirac delta $\delta$. Formally, this yields
$$\operatorname{comb}_T\{1\} = Ш_T = \operatorname{rep}_T\{\delta\},$$
where
$$\operatorname{comb}_T\{f\}(t) = \sum_{k=-\infty}^{\infty} f(kT)\,\delta(t - kT)$$
and
$$\operatorname{rep}_T\{g\}(t) = \sum_{k=-\infty}^{\infty} g(t - kT).$$
In signal processing, this property on one hand allows sampling a function $f(t)$ by multiplication with $Ш_T$, and on the other hand it also allows the periodization of $f(t)$ by convolution with $Ш_T$.
The Dirac comb identity is a particular case of the Convolution Theorem for tempered distributions.
Scaling
The scaling property of the Dirac comb follows from the properties of the Dirac delta function.
Since $\delta(t) = \frac{1}{a}\,\delta\!\left(\frac{t}{a}\right)$ for positive real numbers $a$, it
|
https://en.wikipedia.org/wiki/Diagonal%20subgroup
|
In the mathematical discipline of group theory, for a given group $G$, the diagonal subgroup of the n-fold direct product $G^{n}$ is the subgroup
$$\{(g, \dots, g) \in G^{n} : g \in G\}.$$
This subgroup is isomorphic to $G$.
Properties and applications
If $G$ acts on a set $X$, the n-fold diagonal subgroup has a natural action on the Cartesian product $X^{n}$ induced by the action of $G$ on $X$, defined by
$$(x_1, \dots, x_n) \cdot (g, \dots, g) = (x_1 \cdot g, \dots, x_n \cdot g).$$
If $G$ acts $n$-transitively on $X$, then the $n$-fold diagonal subgroup acts transitively on $X^{n}$. More generally, for an integer $k$, if $G$ acts $kn$-transitively on $X$, then $G$ acts $k$-transitively on $X^{n}$.
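A tiny Python sketch of the diagonal action (an assumed example with G = S₃ acting on {0, 1, 2}, not taken from the article), illustrating the transitivity statement above:

```python
from itertools import permutations

G = list(permutations(range(3)))       # S_3, each g given as (g(0), g(1), g(2))
diagonal = [(g, g) for g in G]         # the diagonal subgroup of G x G

def act(point, g):                     # action of a permutation on a point
    return g[point]

pair = (0, 2)                          # a pair of distinct points
orbit = {(act(pair[0], g), act(pair[1], h)) for (g, h) in diagonal}
print(len(orbit))   # 6: S_3 is 2-transitive on {0, 1, 2}, so the diagonal
                    # action is transitive on ordered pairs of distinct points
```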
Burnside's lemma can be proved using the action of the twofold diagonal subgroup.
See also
Diagonalizable group
References
.
Group theory
|
https://en.wikipedia.org/wiki/Downsampling%20%28signal%20processing%29
|
In digital signal processing, downsampling, compression, and decimation are terms associated with the process of resampling in a multi-rate digital signal processing system. Both downsampling and decimation can be synonymous with compression, or they can describe an entire process of bandwidth reduction (filtering) and sample-rate reduction. When the process is performed on a sequence of samples of a signal or a continuous function, it produces an approximation of the sequence that would have been obtained by sampling the signal at a lower rate (or density, as in the case of a photograph).
Decimation is a term that historically means the removal of every tenth one. But in signal processing, decimation by a factor of 10 actually means keeping only every tenth sample. This factor multiplies the sampling interval or, equivalently, divides the sampling rate. For example, if compact disc audio at 44,100 samples/second is decimated by a factor of 5/4, the resulting sample rate is 35,280. A system component that performs decimation is called a decimator. Decimation by an integer factor is also called compression.
Downsampling by an integer factor
Rate reduction by an integer factor M can be explained as a two-step process, with an equivalent implementation that is more efficient:
Reduce high-frequency signal components with a digital lowpass filter.
Decimate the filtered signal by M; that is, keep only every Mth sample.
Step 2 alone creates undesirable aliasing (i.e. high-frequency signal components will copy into the lower frequency band and be mistaken for lower frequencies). Step 1, when necessary, suppresses aliasing to an acceptable level. In this application, the filter is called an anti-aliasing filter, and its design is discussed below. Also see undersampling for information about decimating bandpass functions and signals.
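A minimal numpy sketch of the two-step procedure (assumed factor, test signal and filter length; a simple windowed-sinc filter stands in for a properly designed anti-aliasing filter):

```python
import numpy as np

M = 4                                   # decimation factor
fs = 44100                              # original sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 1000 * t)        # 1 kHz test tone

# Step 1: anti-aliasing lowpass FIR (windowed sinc) with cutoff at the new
# Nyquist frequency, 0.5 / M cycles per original sample.
cutoff = 0.5 / M
n = np.arange(-64, 65)
h = 2 * cutoff * np.sinc(2 * cutoff * n) * np.hamming(len(n))
h /= h.sum()                            # unity gain at DC

filtered = np.convolve(x, h, mode="same")

# Step 2: keep only every M-th sample; the new rate is fs / M = 11025 Hz.
y = filtered[::M]
print(len(x), len(y))                   # -> 44100 11025
```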
When the anti-aliasing filter is an IIR design, it relies on feedback from output to input, prior to the second step. With FIR filtering,
|
https://en.wikipedia.org/wiki/General%20contractor
|
A general contractor, main contractor, prime contractor, builder (UK/AUS), or contractor (US and Asia) is responsible for the day-to-day oversight of a construction site, management of vendors and trades, and the communication of information to all involved parties throughout the course of a building project. In the USA a builder may be a sole proprietor managing a project and performing labor or carpentry work, have a small staff, or may be a very large company managing billion dollar projects. Some builders build new homes, some are remodelers, some are developers.
Description
A general contractor is a construction manager employed by a client, usually upon the advice of the project's architect or engineer. Responsible for the overall coordination of a project, general contractors may also act as building designer and foreman (a tradesman in charge of a crew).
A general contractor must first assess the project-specific documents (referred to as a bid, proposal, or tender documents). In the case of renovations, a site visit is required to get a better understanding of the project. Depending on the project delivery method, the contractor will submit a fixed price proposal or bid, cost-plus price or an estimate. The general contractor considers the cost of home office overhead, general conditions, materials, and equipment, as well as the cost of labor, to provide the owner with a price for the project.
Contract documents may include drawings, project manuals (including general, supplementary, or special conditions and specifications), and addenda or modifications issued prior to proposal/bidding and prepared by a design professional, such as an architect. The general contractor may be the construction manager or construction manager at-risk.
Prior to formal appointment, the selected contractor to whom a client proposes to award a contract is often referred to as a "preferred contractor".
Responsibilities
A general contractor is responsible for providing
|
https://en.wikipedia.org/wiki/Hotfix
|
A hotfix or quick-fix engineering update (QFE update) is a single, cumulative package that includes information (often in the form of one or more files) that is used to address a problem in a software product (i.e., a software bug). Typically, hotfixes are made to address a specific customer situation.
The term "hotfix" originally referred to software patches that were applied to "hot" systems: those which are live, currently running, and in production status rather than development status. For the developer, a hotfix implies that the change may have been made quickly and outside normal development and testing processes. This could increase the cost of the fix by requiring rapid development, overtime or other urgent measures. For the user, the hotfix could be considered riskier or less likely to resolve the problem. This could cause an immediate loss of services, so depending on the severity of the bug, it may be desirable to delay a hotfix. The risk of applying the hotfix must be weighed against the risk of not applying it, because the problem to be fixed might be so critical that it could be considered more important than a potential loss of service (e.g., a major security breach).
Similar use of the terms can be seen in hot-swappable disk drives. The more recent usage of the term is likely due to software vendors making a distinction between a hotfix and a patch.
Details
A hotfix package might contain several "encompassed" bug fixes, raising the risk of possible regression. An encompassed bug fix is a software bug fix that is not the main objective of a software patch, but rather the side effect of it. Because of this, some libraries for automatic updates like StableUpdate also offer features to uninstall the applied fixes if necessary.
Most modern operating systems and many stand-alone programs offer the capability to download and apply fixes automatically. Instead of creating this feature from scratch, the developer may choose to use a proprietary (like RT
|
https://en.wikipedia.org/wiki/Non-broadcast%20multiple-access%20network
|
A non-broadcast multiple access network (NBMA) is a computer network to which multiple hosts are attached, but data is transmitted only directly from one computer to another single host over a virtual circuit or across a switched fabric.
Examples of non-broadcast technologies
Asynchronous Transfer Mode (ATM)
Frame Relay
X.25
home power line networking
WireGuard
Replication broadcasts
Some NBMA network devices support multicast and broadcast traffic replication (pseudo-broadcasts).
This is done by sending multiple copies of a broadcast packet, one through each virtual circuit, so that the broadcast reaches all intended recipients.
Power line networks
The ITU-T G.hn standard provides a specification for creating a high-speed (up to 1 Gigabit/s) local area network using existing home power lines, phone lines and coaxial cables.
Because of multipath propagation, power lines use frequency-selective channels. Channel frequency response is different for each pair of transmitter and receiver, so modulation parameters are unique for each transmitter and receiver pair. Since each pair of devices uses a different modulation scheme for communication, other devices may not be able to demodulate the information sent between them.
Split horizon route advertisement
In NBMA networks, the split horizon rule used by distance-vector routing protocols must be disabled in order to route traffic in a hub-and-spoke topology. Split horizon dictates that a router cannot send a routing table update out of the same interface on which it was received, which would prevent proper propagation from one spoke site to another. This family of protocols relies on link-layer broadcasting for route advertisement propagation, so when this feature is absent it has to be emulated with a series of unicast transmissions, which may result in a receiver node sending a route advertisement back to the node from which it has just received it.
See also
Ope
|
https://en.wikipedia.org/wiki/Upsampling
|
In digital signal processing, upsampling, expansion, and interpolation are terms associated with the process of resampling in a multi-rate digital signal processing system. Upsampling can be synonymous with expansion, or it can describe an entire process of expansion and filtering (interpolation). When upsampling is performed on a sequence of samples of a signal or other continuous function, it produces an approximation of the sequence that would have been obtained by sampling the signal at a higher rate (or density, as in the case of a photograph). For example, if compact disc audio at 44,100 samples/second is upsampled by a factor of 5/4, the resulting sample-rate is 55,125.
Upsampling by an integer factor
Rate increase by an integer factor L can be explained as a 2-step process, with an equivalent implementation that is more efficient:
Expansion: Create a sequence, comprising the original samples, separated by L − 1 zeros. A notation for this operation is:
Interpolation: Smooth out the discontinuities with a lowpass filter, which replaces the zeros.
In this application, the filter is called an interpolation filter, and its design is discussed below. When the interpolation filter is an FIR type, its efficiency can be improved, because the zeros contribute nothing to its dot product calculations. It is an easy matter to omit them from both the data stream and the calculations. The calculation performed by a multirate interpolating FIR filter for each output sample is a dot product:
$$y[j + nL] = \sum_{k=0}^{K} x[n-k]\, h[j + kL], \qquad j = 0, 1, \dots, L-1,$$
where the h[•] sequence is the impulse response of the interpolation filter, and K is the largest value of k for which h[j + kL] is non-zero. In the case L = 2, h[•] can be designed as a half-band filter, where almost half of the coefficients are zero and need not be included in the dot products. Impulse response coefficients taken at intervals of L form a subsequence, and there are L such subsequences (called phases) multiplexed together. Each of L phases of the impulse respons
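A minimal numpy sketch of upsampling by an integer factor (assumed factor, test signal and filter length; a simple windowed-sinc filter stands in for a properly designed interpolation filter):

```python
import numpy as np

L = 4                                            # upsampling factor
x = np.sin(2 * np.pi * 0.05 * np.arange(200))    # slowly varying test signal

# Expansion: insert L - 1 zeros between the original samples.
expanded = np.zeros(len(x) * L)
expanded[::L] = x

# Interpolation: lowpass FIR (windowed sinc) with cutoff 1/(2L) cycles/sample
# at the new rate; a DC gain of L compensates for the inserted zeros.
n = np.arange(-64, 65)
h = np.sinc(n / L) * np.hamming(len(n))
h *= L / h.sum()
y = np.convolve(expanded, h, mode="same")
print(len(x), len(y))                            # -> 200 800
```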
|
https://en.wikipedia.org/wiki/K%C5%8Dhaku%20Uta%20Gassen
|
Kōhaku Uta Gassen, more commonly known simply as Kōhaku, is an annual New Year's Eve television special produced by Japanese public broadcaster NHK. It is broadcast live simultaneously on television and radio, nationally and internationally by the NHK network and by some overseas (mainly cable) broadcasters who buy the program. The show ends shortly before midnight. Before the show began broadcasting on television in late 1953, the show was held on 3 January and only consisted of a radio broadcast.
The program divides the most popular music artists of the year into competing teams of red and white. The "red" team is composed of all female artists (or groups with female vocals), while the "white" team is all male (or groups with male vocals). At the end of the show, judges and the audience vote to decide which group performed better. The honor of performing on Kōhaku is strictly by invitation, so only the most successful singing acts in the Japanese entertainment industry can perform. In addition to the actual music performances, the costumes, hair-styles, makeup, dancing, and lighting are important. Even today, a performance on Kōhaku is said to be a big highlight in a singer's career because of the show's wide reach.
Song selection process
The songs and performers are examined by a selection committee put together by NHK. Selection is based on record sales and adaptability to the edition's theme.
At the same time, a demographic survey is conducted regarding the most popular singers and the kind of music people want to hear. This and the song selection explain the amalgamation of musical genres and artists.
There are, however, exceptions to the process. Momoe Yamaguchi chose to sing her favorite song "Hito Natsu no Keiken" (ひと夏の経験) with its suggestive lyrics during the 25th edition, despite NHK's pick of a different song.
Show
When the show was first broadcast on radio in 1951, each team had a few performers, all of whom would perform within an
|
https://en.wikipedia.org/wiki/152%20%28number%29
|
152 (one hundred [and] fifty-two) is the natural number following 151 and preceding 153.
In mathematics
152 is the sum of four consecutive primes (31 + 37 + 41 + 43). It is a nontotient since there is no integer with 152 coprimes below it.
152 is a refactorable number since it is divisible by the total number of divisors it has, and in base 10 it is divisible by the sum of its digits, making it a Harshad number.
The smallest repunit probable prime in base 152 was found recently; it has 589,570 digits.
The number of surface points on a 6×6×6 cube is 152.
In the military
Focke-Wulf Ta 152 was a Luftwaffe high-altitude interceptor fighter aircraft during World War II
was a United States Navy during World War II
was a United States Navy during World War II
was a United States Navy supply ship during World War II
was a United States Navy during World War II
was a United States Navy ship during World War II
was a United States Navy during World War II
was a United States Navy during World War II
152 mm (6 in), a common medium artillery (and historically heavy tank destroyer) caliber used by Russia, China and former members of the Soviet Union, akin to the 155 mm standard caliber of NATO nations.
In transportation
The Baade 152, the first German jet passenger airliner in 1958
The Cessna 152 airplane
Garuda Indonesia Flight 152 was an Indonesian flight from Jakarta to Medan that crashed on September 26, 1997
London Buses route 152
In TV, radio, games and cinema
The aviation-frequency radio exchange (pronounced one-fifty-two), as 152 is associated with the Cessna 152
"NY152" AOL e-mail account use by Joe in the movie You've Got Mail
In other fields
152 is also:
The year AD 152 or 152 BC
152 AH is a year in the Islamic calendar that corresponds to 769 – 770 CE
152 Atala is a dark type D main belt asteroid
The atomic number of an element temporarily called Unpentbium
Sonnet 152
The Garmin GPS 152, produced in 2001
The Xerox DocuMate
|
https://en.wikipedia.org/wiki/XeTeX
|
XeTeX (see also Pronouncing and writing "TeX") is a TeX typesetting engine using Unicode and supporting modern font technologies such as OpenType, Graphite and Apple Advanced Typography (AAT). It was originally written by Jonathan Kew and is distributed under the X11 free software license.
Initially developed for Mac OS X only, it is now available for all major platforms. It natively supports Unicode and the input file is assumed to be in UTF-8 encoding by default. XeTeX can use any fonts installed in the operating system without configuring TeX font metrics, and can make direct use of advanced typographic features of OpenType, AAT and Graphite technologies such as alternative glyphs and swashes, optional or historic ligatures, and variable font weights. Support for OpenType local typographic conventions (locl tag) is also present. XeTeX even allows raw OpenType feature tags to be passed to the font. Microtypography is also supported. XeTeX also supports typesetting mathematics using Unicode fonts that contain special mathematical features, such as Cambria Math or Asana Math as an alternative to the traditional mathematical typesetting based on TeX font metrics.
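A minimal XeLaTeX sketch of these capabilities (assuming the fontspec and unicode-math packages and that the named system fonts are installed; the font choices are illustrative, not prescribed by XeTeX):

```latex
% Compile with: xelatex example.tex
\documentclass{article}
\usepackage{fontspec}      % system fonts, no TeX font metrics needed
\usepackage{unicode-math}  % Unicode/OpenType mathematics
\setmainfont[Ligatures={Common, Historic}, Numbers=OldStyle]{Linux Libertine O}
\setmathfont{Asana Math}   % an OpenType math font mentioned above
\begin{document}
Ligatures (fi, fl, ffi) and old-style numerals 0123456789 come straight from
the font's OpenType features.
\[ \int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2} \]
\end{document}
```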
Mode of operation
XeTeX processes input in two stages. In the first stage XeTeX outputs an extended DVI (xdv) file, which is then converted to PDF by a driver. In the default operating mode the xdv output is piped directly to the driver without producing any user-visible intermediate files. It is possible to run just the first stage of XeTeX and save the xdv, although there are no viewers capable of displaying the intermediate format.
Two backend drivers are available to generate PDF from an xdv file:
xdv2pdf, which uses ATSUI and QuickTime frameworks, and only works on Mac OS X.
xdvipdfmx, a modified version of dvipdfmx, which uses FreeType. This driver works on all platforms.
Starting from version 0.997, the default driver is xdvipdfmx on all platforms. As of version 0.9999, xdv2pdf is no lo
|
https://en.wikipedia.org/wiki/ICMP%20Router%20Discovery%20Protocol
|
In computer networking, the ICMP Internet Router Discovery Protocol (IRDP), also called the Internet Router Discovery Protocol, is a protocol for computer hosts to discover the presence and location of routers on their IPv4 local area network. Router discovery is useful for accessing computer systems on other nonlocal area networks. The IRDP is defined by the IETF RFC 1256 standard, with the Internet Control Message Protocol (ICMP) upon which it is based defined in IETF RFC 792. IRDP eliminates the need to manually configure routing information.
Router discovery messages
To enable router discovery, the IRDP defines two kinds of ICMP messages:
The ICMP Router Solicitation message is sent from a computer host to any routers on the local area network to request that they advertise their presence on the network.
The ICMP Router Advertisement message is sent by a router on the local area network to announce its IP address as available for routing.
When a host boots up, it sends solicitation messages to IP multicast address 224.0.0.2. In response, one or more routers may send advertisement messages. If there is more than one router, the host usually picks the first message it gets and adds that router to its routing table. Independently of a solicitation, a router may periodically send out advertisement messages. These messages are not considered a routing protocol, as they do not determine a routing path, just the presence of possible gateways.
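The following Python sketch (an illustration, not a reference implementation of RFC 1256) builds an ICMP Router Solicitation, type 10 with code 0, and sends it to the all-routers multicast address; opening a raw ICMP socket normally requires administrator privileges:

```python
import socket
import struct

def icmp_checksum(data: bytes) -> int:
    """Standard Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def build_router_solicitation() -> bytes:
    # type=10, code=0, checksum placeholder, 4 reserved bytes
    header = struct.pack("!BBHI", 10, 0, 0, 0)
    return struct.pack("!BBHI", 10, 0, icmp_checksum(header), 0)

if __name__ == "__main__":
    packet = build_router_solicitation()
    with socket.socket(socket.AF_INET, socket.SOCK_RAW,
                       socket.IPPROTO_ICMP) as sock:
        sock.sendto(packet, ("224.0.0.2", 0))   # all-routers multicast group
```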
Extensions
The IRDP strategy has been used in the development of the IPv6 neighbor discovery protocol. These use ICMPv6 messages, the IPv6 analog of ICMP messages. Neighbor discovery is governed by IETF standards RFC 4861 and RFC 4862.
IRDP plays an essential role in mobile networking through IETF standard RFC 3344. This is called MIPv4 Agent discovery.
See also
Dynamic Host Configuration Protocol
References
External links
: ICMP Router Discovery Messages
Internet Standards
Internet protocols
|
https://en.wikipedia.org/wiki/Window%20capping
|
In construction, capping or window capping (window cladding, window wrapping) refers to the application of aluminum or vinyl sheeting cut and formed with a brake to fit over the exterior wood trim of a building. The aluminum is intended to make aging trim with peeling paint look better, reduce future paint maintenance, and provide a weather-proof layer to control the infiltration of water.
Overview
The capping application must direct water away from the original under-lying wood material and prevent infiltration of water into the structure. Cladding applied to exterior window and door casing (brick-moulding) and their associated parts is often referred to as window capping or window cladding. This sort of capping is typically applied in order to eliminate the need to re-paint wood window trim. The aluminum capping helps to prevent wood rot by protecting the wood from water and snow. However, capping will exacerbate wood rot if the moisture in the wood is coming from inside the building or the capping leaks. Good installation of capping allows for an outlet for water in the event of a leak. Caulking and sealant materials may be used to help prevent leaks but these products are not considered reliable in the long-term.
A sill that has been clad should provide a "drip cap" or "drip-control" function. This will serve to direct water away from the wall surface directly underneath the sill. The leading edge of the sill must be the lowest point on the sill to ensure that water does not wick into the sill material.
Window capping may provide a marginal increase in energy efficiency by decreasing the potential for drafts by providing an extra barrier between the exterior and the interior.
The most common material used in residential window capping is factory painted aluminum. An alternative to factory painted aluminum is to use a vinyl coated aluminium material.
Aluminum capping can be painted so long as the painter is highly skilled and knowledgeable in the field of m
|
https://en.wikipedia.org/wiki/RoboTurb
|
RoboTurb is a welding robot used to repair turbine blades developed at Universidade Federal de Santa Catarina. It is a redundant robot with a flexible rail.
The RoboTurb project started in 1998 at the Universidade Federal de Santa Catarina, initially with the support of the Brazilian Government and the public power utility company COPEL – Companhia Paranaense de Energia Eletrica. Three phases followed, and the project is now mainly maintained by another public power utility company, FURNAS – Furnas Centrais Eletricas.
References
External links
RoboTurb Project (2004). The RoboTurb Project. In Portuguese. Retrieved Dec. 5, 2004.
Robotics Laboratory at UFSC. Robotics laboratory. Retrieved Oct. 12, 2006.
Industrial robots
1988 robots
Robots of Brazil
|
https://en.wikipedia.org/wiki/Robot%20kinematics
|
In robotics, robot kinematics applies geometry to the study of the movement of multi-degree of freedom kinematic chains that form the structure of robotic systems. The emphasis on geometry means that the links of the robot are modeled as rigid bodies and its joints are assumed to provide pure rotation or translation.
Robot kinematics studies the relationship between the dimensions and connectivity of kinematic chains and the position, velocity and acceleration of each of the links in the robotic system, in order to plan and control movement and to compute actuator forces and torques. The relationship between mass and inertia properties, motion, and the associated forces and torques is studied as part of robot dynamics.
Kinematic equations
A fundamental tool in robot kinematics is the kinematics equations of the kinematic chains that form the robot. These non-linear equations are used to map the joint parameters to the configuration of the robot system. Kinematics equations are also used in biomechanics of the skeleton and computer animation of articulated characters.
Forward kinematics uses the kinematic equations of a robot to compute the position of the end-effector from specified values for the joint parameters. The reverse process that computes the joint parameters that achieve a specified position of the end-effector is known as inverse kinematics. The dimensions of the robot and its kinematics equations define the volume of space reachable by the robot, known as its workspace.
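As a small worked example (a textbook planar two-link arm with assumed link lengths, not a robot discussed in the article), forward kinematics and one closed-form inverse solution can be written as:

```python
from math import atan2, cos, sin, acos, isclose

L1, L2 = 1.0, 0.8          # assumed link lengths

def forward(theta1: float, theta2: float) -> tuple:
    """Joint angles -> end-effector position (x, y)."""
    x = L1 * cos(theta1) + L2 * cos(theta1 + theta2)
    y = L1 * sin(theta1) + L2 * sin(theta1 + theta2)
    return x, y

def inverse(x: float, y: float) -> tuple:
    """Desired position -> joint angles, picking the theta2 >= 0 branch."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    theta2 = acos(max(-1.0, min(1.0, c2)))      # clamp for numerical safety
    theta1 = atan2(y, x) - atan2(L2 * sin(theta2), L1 + L2 * cos(theta2))
    return theta1, theta2

x, y = forward(0.3, 0.7)
t1, t2 = inverse(x, y)
assert isclose(forward(t1, t2)[0], x) and isclose(forward(t1, t2)[1], y)
```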
There are two broad classes of robots and associated kinematics equations: serial manipulators and parallel manipulators. Other types of systems with specialized kinematics equations are air, land, and submersible mobile robots, hyper-redundant, or snake, robots and humanoid robots.
Forward kinematics
Forward kinematics specifies the joint parameters and computes the configuration of the chain. For serial manipulators this is achieved by direct substitution of the joint parame
|
https://en.wikipedia.org/wiki/Screw%20theory
|
Screw theory is the algebraic calculation of pairs of vectors, such as forces and moments or angular and linear velocity, that arise in the kinematics and dynamics of rigid bodies. The mathematical framework was developed by Sir Robert Stawell Ball in 1876 for application in kinematics and statics of mechanisms (rigid body mechanics).
Screw theory provides a mathematical formulation for the geometry of lines which is central to rigid body dynamics, where lines form the screw axes of spatial movement and the lines of action of forces. The pair of vectors that form the Plücker coordinates of a line define a unit screw, and general screws are obtained by multiplication by a pair of real numbers and addition of vectors.
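A short numpy sketch of this construction (assumed point, direction and screw parameters, not taken from the article), forming the Plücker coordinates of a line and a general screw from them:

```python
import numpy as np

p = np.array([1.0, 2.0, 0.5])     # a point on the line (assumed)
d = np.array([0.0, 0.0, 1.0])     # unit direction of the line (assumed)

m = np.cross(p, d)                # moment of the line about the origin
assert abs(np.dot(d, m)) < 1e-12  # Plücker condition: direction is orthogonal to moment

# A general screw along this line: scale the unit screw (d, m) by a magnitude
# and add pitch * d to the moment part.
pitch, magnitude = 0.1, 2.0
screw = magnitude * np.concatenate([d, m + pitch * d])
print(screw)                      # approximately [0, 0, 2, 4, -2, 0.2]
```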
An important result of screw theory is that geometric calculations for points using vectors have parallel geometric calculations for lines obtained by replacing vectors with screws. This is termed the transfer principle.
Screw theory has become an important tool in robot mechanics, mechanical design, computational geometry and multibody dynamics.
This is in part because of the relationship between screws and dual quaternions which have been used to interpolate rigid-body motions. Based on screw theory, an efficient approach has also been developed for the type synthesis of parallel mechanisms (parallel manipulators or parallel robots).
Fundamental theorems include Poinsot's theorem (Louis Poinsot, 1806) and Chasles' theorem (Michel Chasles, 1832). Felix Klein saw screw theory as an application of elliptic geometry and his Erlangen Program. He also worked out elliptic geometry, and a fresh view of Euclidean geometry, with the Cayley–Klein metric. The use of a symmetric matrix for a von Staudt conic and metric, applied to screws, has been described by Harvey Lipkin. Other prominent contributors include Julius Plücker, W. K. Clifford, F. M. Dimentberg, Kenneth H. Hunt, J. R. Phillips.
Basic concepts
A spatial displacement of a rigid body can be d
|
https://en.wikipedia.org/wiki/Signalling%20theory
|
Within evolutionary biology, signalling theory is a body of theoretical work examining communication between individuals, both within species and across species. The central question is when organisms with conflicting interests, such as in sexual selection, should be expected to provide honest signals (no presumption being made of conscious intention) rather than cheating. Mathematical models describe how signalling can contribute to an evolutionarily stable strategy.
Signals are given in contexts such as mate selection by females, which subjects the advertising males' signals to selective pressure. Signals thus evolve because they modify the behaviour of the receiver to benefit the signaller. Signals may be honest, conveying information which usefully increases the fitness of the receiver, or dishonest. An individual can cheat by giving a dishonest signal, which might briefly benefit that signaller, at the risk of undermining the signalling system for the whole population.
The question of whether the selection of signals works at the level of the individual organism or gene, or at the level of the group, has been debated by biologists such as Richard Dawkins, arguing that individuals evolve to signal and to receive signals better, including resisting manipulation. Amotz Zahavi suggested that cheating could be controlled by the handicap principle, where the best horse in a handicap race is the one carrying the largest handicap weight. According to Zahavi's theory, signallers such as male peacocks have "tails" that are genuinely handicaps, being costly to produce. The system is evolutionarily stable as the large showy tails are honest signals. Biologists have attempted to verify the handicap principle, but with inconsistent results. The mathematical biologist Ronald Fisher analysed the contribution that having two copies of each gene (diploidy) would make to honest signalling, demonstrating that a runaway effect could occur in sexual selection. The evolutionary equ
|
https://en.wikipedia.org/wiki/Downgrade
|
In computing, downgrading refers to reverting software (or hardware) back to an older version; downgrade is the opposite of upgrade. Programs may need to be downgraded to remove introduced bugs, restore useful removed features, and to increase speed and/or ease of use. The same can occur with machinery.
An example of a downgraded program is Gmax, a downgraded version of 3ds max used by professional computer graphics artists, free to download and simplified for ease of use.
The term "downgrade" became especially popularized during the days of Windows Vista, with users wanting to return to, or downgrade to (with some even calling it an "upgrade") Windows XP because Vista had performance and familiarity issues.
Another reason could be that the user's applications do not support their new OS and they want to revert to an older version.
See also
Backporting
References
Software industry
|
https://en.wikipedia.org/wiki/172%20%28number%29
|
172 (one hundred [and] seventy-two) is the natural number following 171 and preceding 173.
In mathematics
172 is a part of a near-miss for being a counterexample to Fermat's last theorem, as 135³ + 138³ = 172³ − 1. This is only the third near-miss of this form, two cubes adding to one less than a third cube. It is also a "thickened cube number", half an odd cube (7³ = 343) rounded up to the next integer.
See also
172 (disambiguation)
References
Integers
|
https://en.wikipedia.org/wiki/On%20Growth%20and%20Form
|
On Growth and Form is a book by the Scottish mathematical biologist D'Arcy Wentworth Thompson (1860–1948). The book is long – 793 pages in the first edition of 1917, 1116 pages in the second edition of 1942.
The book covers many topics including the effects of scale on the shape of animals and plants, large ones necessarily being relatively thick in shape; the effects of surface tension in shaping soap films and similar structures such as cells; the logarithmic spiral as seen in mollusc shells and ruminant horns; the arrangement of leaves and other plant parts (phyllotaxis); and Thompson's own method of transformations, showing the changes in shape of animal skulls and other structures on a Cartesian grid.
The work is widely admired by biologists, anthropologists and architects among others, but less often read than cited. Peter Medawar explains this as being because it clearly pioneered the use of mathematics in biology, and helped to defeat mystical ideas of vitalism; but that the book is weakened by Thompson's failure to understand the role of evolution and evolutionary history in shaping living structures. Philip Ball and Michael Ruse, on the other hand, suspect that while Thompson argued for physical mechanisms, his rejection of natural selection bordered on vitalism.
Overview
D'Arcy Wentworth Thompson's most famous work, On Growth and Form was written in Dundee, mostly in 1915, but publication was put off until 1917 because of the delays of wartime and Thompson's many late alterations to the text. The central theme of the book is that biologists of its author's day overemphasized evolution as the fundamental determinant of the form and structure of living organisms, and underemphasized the roles of physical laws and mechanics. At a time when vitalism was still being considered as a biological theory, he advocated structuralism as an alternative to natural selection in governing the form of species, with the smallest hint of vitalism as the unseen driving f
|
https://en.wikipedia.org/wiki/NetFront
|
NetFront Browser is a mobile browser developed by Access Company of Japan. The first version shipped in 1995. They currently have several browser variants, both Chromium-based and WebKit-based.
Over its lifetime, various versions of NetFront have been deployed on mobile phones, multifunction printers, digital TVs, set-top boxes, PDAs, web phones, game consoles, e-mail terminals, automobile telematics systems, and other device types. This has included Sony PlayStation consoles and several Nintendo consoles.
Platforms
For Pocket PC devices, the browser converted web page tables to a vertical display, eliminating the need to scroll horizontally.
The engine was also used in the Japanese and European versions of the Dreamcast browser.
The Nintendo 3DS Internet browser uses the WebKit-based NetFront Browser NX according to the documentation included with the browser. The PlayStation 3 Internet web browser received a major upgrade with firmware version 4.10, upgrading to a custom version of the NetFront browser, adding limited HTML5 support and improved JavaScript speeds. The Wii U console is also equipped with NetFront NX, and GPL source code is available.
The Amazon Kindle e-reader uses NetFront as its web browser. Nintendo's latest console, the Nintendo Switch, is also using NetFront NX.
Performance
Netfront 3.5 had an Acid3 score of 11/100 and NetFront Browser NX v1.0 had an Acid3 score of 92/100.
See also
Internet Browser (Nintendo 3DS)
References
1995 software
Android (operating system) software
Cross-platform software
Mobile web browsers
Palm OS software
Pocket PC software
Software based on WebKit
Symbian software
Windows Mobile software
|
https://en.wikipedia.org/wiki/Spectral%20power%20distribution
|
In radiometry, photometry, and color science, a spectral power distribution (SPD) measurement describes the power per unit area per unit wavelength of an illumination (radiant exitance). More generally, the term spectral power distribution can refer to the concentration, as a function of wavelength, of any radiometric or photometric quantity (e.g. radiant energy, radiant flux, radiant intensity, radiance, irradiance, radiant exitance, radiosity, luminance, luminous flux, luminous intensity, illuminance, luminous emittance).
Knowledge of the SPD is crucial for optical-sensor system applications. Optical properties such as transmittance, reflectivity, and absorbance as well as the sensor response are typically dependent on the incident wavelength.
Physics
Mathematically, for the spectral power distribution of a radiant exitance or irradiance one may write:
$$M(\lambda) = \frac{\partial^2 \Phi}{\partial A\,\partial\lambda} \approx \frac{\Phi}{A\,\Delta\lambda},$$
where M(λ) is the spectral irradiance (or exitance) of the light (SI units: W/m³ = kg·m⁻¹·s⁻³); Φ is the radiant flux of the source (SI unit: watt, W); A is the area over which the radiant flux is integrated (SI unit: square meter, m²); and λ is the wavelength (SI unit: meter, m). (Note that it is more convenient to express the wavelength of light in terms of nanometers; spectral exitance would then be expressed in units of W·m⁻²·nm⁻¹.) The approximation is valid when the area and wavelength interval are small.
Relative SPD
The ratio of spectral concentration (irradiance or exitance) at a given wavelength to the concentration at a reference wavelength provides the relative SPD. This can be written as:
$$M_{\mathrm{rel}}(\lambda) = \frac{M(\lambda)}{M(\lambda_{\mathrm{ref}})}.$$
Because the luminance of lighting fixtures and other light sources is handled separately, a spectral power distribution may be normalized in some manner, often to unity at 555 or 560 nanometers, coinciding with the peak of the eye's luminosity function.
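A small numpy sketch of this normalization (made-up spectral data and an assumed reference wavelength of 560 nm):

```python
import numpy as np

wavelengths = np.arange(380, 781, 5)                    # nm
spd = np.exp(-0.5 * ((wavelengths - 600) / 80.0) ** 2)  # made-up broadband SPD

ref_index = np.argmin(np.abs(wavelengths - 560))        # bin nearest 560 nm
relative_spd = spd / spd[ref_index]                     # unity at the reference
print(round(float(relative_spd[ref_index]), 3))         # -> 1.0
```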
Responsivity
The SPD can be used to determine the response of a sensor at a specified wavelength. This compares the output power of the sensor to the
|
https://en.wikipedia.org/wiki/Microsoft%20DNS
|
Microsoft DNS is the name given to the implementation of domain name system services provided in Microsoft Windows operating systems.
Overview
The Domain Name System support in Microsoft Windows NT, and thus its derivatives Windows 2000, Windows XP, and Windows Server 2003, comprises two clients and a server. Every Microsoft Windows machine has a DNS lookup client, to perform ordinary DNS lookups. Some machines have a Dynamic DNS client, to perform Dynamic DNS Update transactions, registering the machines' names and IP addresses. Some machines run a DNS server, to publish DNS data, to service DNS lookup requests from DNS lookup clients, and to service DNS update requests from DNS update clients.
The server software is only supplied with the server versions of Windows.
DNS lookup client
Applications perform DNS lookups with the aid of a DLL. They call library functions in the DLL, which in turn handle all communications with DNS servers (over UDP or TCP) and return the final results of the lookup back to the applications.
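As a cross-platform illustration of such an application-level lookup (this uses Python's standard resolver interface, not the Windows DNS client DLL itself; the host name is just an example):

```python
import socket

# Ask the system's stub resolver for the addresses of a host; the OS-level
# machinery described above does the actual work.
for family, _type, _proto, _canonname, sockaddr in socket.getaddrinfo(
        "example.com", 443, type=socket.SOCK_STREAM):
    print(family.name, sockaddr[0])
```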
Microsoft's DNS client also has optional support for local caching, in the form of a DNS Client service (also known as DNSCACHE). Before they attempt to directly communicate with DNS servers, the library routines first attempt to make a local IPC connection to the DNS Client service on the machine. If there is one, and if such a connection can be made, they hand the actual work of dealing with the lookup over to the DNS Client service. The DNS Client service itself communicates with DNS servers, and caches the results that it receives.
Microsoft's DNS client is capable of talking to multiple DNS servers. The exact algorithm varies according to the version, and service pack level, of the operating system; but in general all communication is with a preferred DNS server until it fails to answer, whereupon communication switches to one of several alternative DNS servers.
The effects of running the DNS Client service
There are several mi
|
https://en.wikipedia.org/wiki/Commercial%20animal%20cloning
|
Commercial animal cloning is the cloning of animals for commercial purposes, currently including livestock, competition camels and horses, pets, endangered and extinct animals, and animals cloned for medical uses. It was first demonstrated in 1996 with Dolly the sheep.
Cloning methods
Moving or copying all (or nearly all) genes from one animal to form a second, genetically nearly identical, animal is usually done through one of three methods: the Roslin technique, the Honolulu technique, and Artificial Twinning. The first two of these involve a process known as somatic cell nuclear transfer. In this process, an oocyte is taken from a surrogate mother and put through enucleation, a process that removes the nucleus from inside the oocyte. Somatic cells are then taken from the animal that is being cloned, transferred into the blank oocyte in order to provide genetic material, and fused with the oocyte using an electrical current. The oocyte is then activated and re-inserted into the surrogate mother. The end result is the formation of an animal that is almost genetically identical to the animal the somatic cells were taken from. While somatic cell nuclear transfer was previously believed to only work using genetic material from somatic cells that were unfrozen or were frozen with cryoprotectant (to avoid cell damage caused by freezing), successful dog cloning in various breeds has now been shown using somatic cells from unprotected specimens that had been frozen for up to four days. Another method of cloning includes embryo splitting, the process of taking the blastomeres from a very early animal embryo and separating them before they become differentiated in order to create two or more separate organisms. When using embryo splitting, cloning must occur before the birth of the animal, and clones grow up at the same time (in a similar fashion to monozygotic twins).
Livestock cloning
The US Food and Drug Administration has concluded that "Food from cattle, swine, and goat clones is as safe to e
|
https://en.wikipedia.org/wiki/Homotopical%20algebra
|
In mathematics, homotopical algebra is a collection of concepts comprising the nonabelian aspects of homological algebra, and possibly the abelian aspects as special cases. The homotopical nomenclature stems from the fact that a common approach to such generalizations is via abstract homotopy theory, as in nonabelian algebraic topology, and in particular the theory of closed model categories.
This subject has received much attention in recent years due to new foundational work of Vladimir Voevodsky, Eric Friedlander, Andrei Suslin, and others resulting in the A1 homotopy theory for quasiprojective varieties over a field. Voevodsky has used this new algebraic homotopy theory to prove the Milnor conjecture (for which he was awarded the Fields Medal) and later, in collaboration with Markus Rost, the full Bloch–Kato conjecture.
References
See also
Derived algebraic geometry
Derivator
Cotangent complex - one of the first objects discovered using homotopical algebra
L∞ Algebra
A∞ Algebra
Categorical algebra
Nonabelian homological algebra
External links
An abstract for a talk on the proof of the full Bloch–Kato conjecture
Algebraic topology
Topological methods of algebraic geometry
|
https://en.wikipedia.org/wiki/Anchor%20portal
|
An anchor portal or H-frame tower is a gantry structure supporting overhead power lines in a switchyard. Their static function is similar to a dead-end tower. Anchor portals are almost always steel-tube or steel-framework constructions.
Gallery
Pylons
Electric power infrastructure
|
https://en.wikipedia.org/wiki/Joint%20Electronics%20Type%20Designation%20System
|
The Joint Electronics Type Designation System (JETDS), which was previously known as the Joint Army-Navy Nomenclature System (AN System, or JAN) and the Joint Communications-Electronics Nomenclature System, is a method developed by the U.S. War Department during World War II for assigning an unclassified designator to electronic equipment. In 1957, the JETDS was formalized in MIL-STD-196.
Computer software and commercial unmodified electronics for which the manufacturer maintains design control are not covered.
Applicability
Electronic material, from a military point of view, generally includes those electronic devices employed in data processing, detection and tracking (underwater, sea, land-based, air and space), recognition and identification, communications, aids to navigation, weapons control and evaluation, flight control, and electronics countermeasures. Nomenclature is assigned to:
Electronic materiel of military design
Commercial electronic material that has been modified for military use and requires military identification and design control
Electronic materiel which is intended for use by other Federal agencies or other governments that participate in the nomenclature system.
This system is separate from the "M" designation used in the Army Nomenclature System (MIL-STD-1464A).
Organization
Items are given an Item Level, which describes their position in the hierarchy.
Basic Structure
The core of the JETDS system is the combination of a Type Designation with an Item Name to specify a particular item.
For example:
With the AN/PEQ-2A Infrared Illuminator, the "AN/PEQ-2A" is the Type Designation while the Item Name Code (INC) 26086 "Illuminator, Infrared" is the Item Name.
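As an illustrative sketch only, the structural parts of such a designator can be split apart as below, assuming the usual reading of the three indicator letters as installation, type of equipment, and purpose; the authoritative letter tables are in MIL-STD-196 and are not reproduced here, and variant markings such as "(V)" are ignored.

import re

# Split a designator such as "AN/PEQ-2A" into its structural parts.
DESIGNATOR = re.compile(r"^AN/([A-Z])([A-Z])([A-Z])-(\d+)([A-Z]*)$")

def parse_designator(text: str) -> dict:
    m = DESIGNATOR.match(text)
    if not m:
        raise ValueError(f"not a recognised AN/ designator: {text}")
    install, equip, purpose, model, suffix = m.groups()
    return {
        "installation": install,         # e.g. "P" in AN/PEQ-2A
        "equipment": equip,              # e.g. "E"
        "purpose": purpose,              # e.g. "Q"
        "model": int(model),             # e.g. 2
        "modification": suffix or None,  # e.g. "A"
    }

print(parse_designator("AN/PEQ-2A"))
print(parse_designator("AN/SPY-1"))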
Type Designation
The type designation is a unique series of letters and numbers which specifies an item. There are three basic forms of type designator used:
Type designators for definitive Systems, Subsystems, Centers, Central, and Sets (e.g. AN/SPY-1)
Type designators for definitive Groups
|
https://en.wikipedia.org/wiki/Chronospecies
|
A chronospecies is a species derived from a sequential development pattern that involves continual and uniform changes from an extinct ancestral form on an evolutionary scale. The sequence of alterations eventually produces a population that is physically, morphologically, and/or genetically distinct from the original ancestors. Throughout the change, there is only one species in the lineage at any point in time, as opposed to cases where divergent evolution produces contemporary species with a common ancestor. The related term paleospecies (or palaeospecies) indicates an extinct species only identified with fossil material. That identification relies on distinct similarities between the earlier fossil specimens and some proposed descendant although the exact relationship to the later species is not always defined. In particular, the range of variation within all the early fossil specimens does not exceed the observed range that exists in the later species.
A paleosubspecies (or palaeosubspecies) identifies an extinct subspecies that evolved into the currently-existing form. The connection with relatively-recent variations, usually from the Late Pleistocene, often relies on the additional information available in subfossil material. Most of the current species have changed in size and so adapted to the climatic changes during the last ice age (see Bergmann's Rule).
The further identification of fossil specimens as part of a "chronospecies" relies on additional similarities that more strongly indicate a specific relationship with a known species. For example, relatively recent specimens, hundreds of thousands to a few million years old with consistent variations (such as always smaller but with the same proportions) as a living species might represent the final step in a chronospecies. The possible identification of the immediate ancestor of the living taxon may also rely on stratigraphic information to establish the age of the specimens.
The concept of chronospec
|
https://en.wikipedia.org/wiki/Electrohydrodynamics
|
Electrohydrodynamics (EHD), also known as electro-fluid-dynamics (EFD) or electrokinetics, is the study of the dynamics of electrically charged fluids. It is the study of the motions of ionized particles or molecules and their interactions with electric fields and the surrounding fluid. The term may be considered to be synonymous with the rather elaborate term electrostrictive hydrodynamics (ESHD). ESHD covers the following types of particle and fluid transport mechanisms: electrophoresis, electrokinesis, dielectrophoresis, electro-osmosis, and electrorotation. In general, the phenomena relate to the direct conversion of electrical energy into kinetic energy, and vice versa.
In the first instance, shaped electrostatic fields (ESF's) create hydrostatic pressure (HSP, or motion) in dielectric media. When such media are fluids, a flow is produced. If the dielectric is a vacuum or a solid, no flow is produced. Such flow can be directed against the electrodes, generally to move the electrodes. In such case, the moving structure acts as an electric motor. Practical fields of interest of EHD are the common air ioniser, electrohydrodynamic thrusters and EHD cooling systems.
In the second instance, the converse takes place. A powered flow of medium within a shaped electrostatic field adds energy to the system which is picked up as a potential difference by electrodes. In such case, the structure acts as an electrical generator.
Electrokinesis
Electrokinesis is the particle or fluid transport produced by an electric field acting on a fluid having a net mobile charge. (See -kinesis for explanation and further uses of the -kinesis suffix.) Electrokinesis was first observed by Ferdinand Frederic Reuss in 1808, in the electrophoresis of clay particles. The effect was also noticed and publicized in the 1920s by Thomas Townsend Brown, who called it the Biefeld–Brown effect, although he seems to have misidentified it as an electric field acting on gravity. The flow rate in such a mec
|
https://en.wikipedia.org/wiki/IP%20tunnel
|
An IP tunnel is an Internet Protocol (IP) network communications channel between two networks. It is used to transport another network protocol by encapsulation of its packets.
IP tunnels are often used for connecting two disjoint IP networks that don't have a native routing path to each other, via an underlying routable protocol across an intermediate transport network. In conjunction with the IPsec protocol they may be used to create a virtual private network between two or more private networks across a public network such as the Internet. Another prominent use is to connect islands of IPv6 installations across the IPv4 Internet.
In IP tunnelling, every IP packet, including addressing information of its source and destination IP networks, is encapsulated within another packet format native to the transit network.
At the borders between the source network and the transit network, as well as the transit network and the destination network, gateways are used that establish the end-points of the IP tunnel across the transit network. Thus, the IP tunnel endpoints become native IP routers that establish a standard IP route between the source and destination networks. Packets traversing these end-points from the transit network are stripped from their transit frame format headers and trailers used in the tunnelling protocol and thus converted into native IP format and injected into the IP stack of the tunnel endpoints. In addition, any other protocol encapsulations used during transit, such as IPsec or Transport Layer Security, are removed.
IP in IP, sometimes called ipencap, is an example of IP encapsulation within IP and is described in RFC 2003. Other variants of the IP-in-IP variety are IPv6-in-IPv4 (6in4) and IPv4-in-IPv6 (4in6).
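A minimal sketch of RFC 2003-style IP-in-IP encapsulation, assuming the third-party scapy library is installed; the addresses are documentation-range examples, not real hosts.

# Build an IP-in-IP packet: the original datagram rides inside an outer IP header.
from scapy.all import IP, ICMP

inner = IP(src="10.0.0.1", dst="10.0.1.1") / ICMP()        # original datagram
outer = IP(src="192.0.2.1", dst="198.51.100.1", proto=4)   # protocol 4 = IP-in-IP
tunneled = outer / inner                                   # encapsulated packet

tunneled.show()   # inspect the nested headers; actually sending requires privileges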
IP tunneling often bypasses simple firewall rules transparently since the specific nature and addressing of the original datagrams are hidden. Content-control software is usually required to block IP tunnels.
History
The first spec
|
https://en.wikipedia.org/wiki/David%20Singmaster
|
David Breyer Singmaster (14 December 1938 – 13 February 2023) was an American-British mathematician who was emeritus professor of mathematics at London South Bank University, England. He had a huge personal collection of mechanical puzzles and books of brain teasers. He was most famous for being an early adopter and enthusiastic promoter of the Rubik's Cube. His Notes on Rubik's "Magic Cube", which he began compiling in 1979, provided the first mathematical analysis of the Cube as well as one of the first published solutions. The book contained his cube notation, which allowed the recording of Rubik's Cube moves and which quickly became the standard.
Singmaster was both a puzzle historian and a composer of puzzles, and many of his puzzles were published in newspapers and magazines. In combinatorial number theory, Singmaster's conjecture states that there is an upper bound on the number of times a number other than 1 can appear in Pascal's triangle.
Career
David Singmaster was a student at the California Institute of Technology in the late 1950s. His intention was to become a civil engineer, but he became interested in chemistry and then physics. However he was thrown out of college in his third year for "lack of academic ability". After a year working, he switched to the University of California, Berkeley. He only became really interested in mathematics in his final year when he took some courses in algebra and number theory. In the autumn semester, his number theory teacher Dick Lehmer posed a prize problem which Singmaster won. In his last semester, his algebra teacher posed a question the teacher didn't know the answer to and Singmaster solved it, eventually leading to two papers. He gained his PhD from Berkeley, in 1966. He taught at the American University of Beirut, and then lived for a while in Cyprus.
Singmaster moved to London in 1970. The "Polytechnic of the South Bank" had been created from a merger of institutions in 1970, and Singmaster becam
|
https://en.wikipedia.org/wiki/Energy%20accounting
|
Energy accounting is a system used to measure, analyze and report the energy consumption of different activities on a regular basis. This is done to improve energy efficiency and to monitor the environmental impact of energy consumption.
Energy management
Energy accounting is a system used in energy management systems to measure and analyze energy consumption to improve energy efficiency within an organization. Organisations such as Intel Corporation use these systems to track energy usage.
Various energy transformations are possible. An energy balance can be used to track energy through a system. This becomes a useful tool for determining resource use and environmental impacts. How much energy is needed at each point in a system is measured, as well as the form of that energy. An accounting system keeps track of energy in, energy out, non-useful energy versus work done, and transformations within a system. Non-useful work is often what is responsible for environmental problems.
Energy balance
Energy returned on energy invested (EROEI) is the ratio of energy delivered by an energy technology to the energy invested to set up the technology.
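A trivial illustration of the ratio, with made-up numbers rather than measured values:

# EROEI = energy delivered over the technology's life / energy invested in it.
energy_delivered_MJ = 500.0   # illustrative figure, not a measured value
energy_invested_MJ = 50.0     # illustrative figure, not a measured value

eroei = energy_delivered_MJ / energy_invested_MJ
print(f"EROEI = {eroei:.1f}")  # 10.0 -> ten units returned per unit invested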
See also
Anthropogenic metabolism
Energy and Environment
Energy management
Energy management software
Energy management system
Energy quality
Energy transformation
EROEI
Industrial metabolism
Social metabolism
Urban metabolism
References
External links
Accounting: Facility Energy Use
Energy accounting in the context of environmental accounting
Thermodynamics
Energy economics
Ecological economics
|
https://en.wikipedia.org/wiki/Rapid%20amplification%20of%20cDNA%20ends
|
Rapid amplification of cDNA ends (RACE) is a technique used in molecular biology to obtain the full length sequence of an RNA transcript found within a cell. RACE results in the production of a cDNA copy of the RNA sequence of interest, produced through reverse transcription, followed by PCR amplification of the cDNA copies (see RT-PCR). The amplified cDNA copies are then sequenced and, if long enough, should map to a unique genomic region. RACE is commonly followed up by cloning before sequencing of what was originally individual RNA molecules. A more high-throughput alternative which is useful for identification of novel transcript structures, is to sequence the RACE-products by next generation sequencing technologies.
Process
RACE can provide the sequence of an RNA transcript from a small known sequence within the transcript to the 5' end (5' RACE-PCR) or 3' end (3' RACE-PCR) of the RNA. This technique is sometimes called one-sided PCR or anchored PCR.
The first step in RACE is to use reverse transcription to produce a cDNA copy of a region of the RNA transcript. In this process, an unknown end portion of a transcript is copied using a known sequence from the center of the transcript. The copied region is bounded by the known sequence, at either the 5' or 3' end.
The protocols for 5' or 3' RACES differ slightly. 5' RACE-PCR begins using mRNA as a template for a first round of cDNA synthesis (or reverse transcription) reaction using an anti-sense (reverse) oligonucleotide primer that recognizes a known sequence in the middle of the gene of interest; the primer is called a gene specific primer (GSP). The primer binds to the mRNA, and the enzyme reverse transcriptase adds base pairs to the 3' end of the primer to generate a specific single-stranded cDNA product; this is the reverse complement of the mRNA. Following cDNA synthesis, the enzyme terminal deoxynucleotidyl transferase (TdT) is used to add a string of identical nucleotides, known as a homopolymeric t
|
https://en.wikipedia.org/wiki/Message%20passing
|
In computer science, message passing is a technique for invoking behavior (i.e., running a program) on a computer. The invoking program sends a message to a process (which may be an actor or object) and relies on that process and its supporting infrastructure to then select and run some appropriate code. Message passing differs from conventional programming where a process, subroutine, or function is directly invoked by name. Message passing is key to some models of concurrency and object-oriented programming.
Message passing is ubiquitous in modern computer software. It is used as a way for the objects that make up a program to work with each other and as a means for objects and systems running on different computers (e.g., the Internet) to interact. Message passing may be implemented by various mechanisms, including channels.
Overview
Message passing is a technique for invoking behavior (i.e., running a program) on a computer. In contrast to the traditional technique of calling a program by name, message passing uses an object model to distinguish the general function from the specific implementations. The invoking program sends a message and relies on the object to select and execute the appropriate code. The justifications for using an intermediate layer essentially fall into two categories: encapsulation and distribution.
Encapsulation is the idea that software objects should be able to invoke services on other objects without knowing or caring about how those services are implemented. Encapsulation can reduce the amount of coding logic and make systems more maintainable. E.g., rather than having IF-THEN statements that determine which subroutine or function to call, a developer can just send a message to the object and the object will select the appropriate code based on its type.
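A minimal Python sketch of this point: the sender does not test types with IF-THEN logic, it simply sends a draw message and each receiver supplies its own implementation.

class Circle:
    def draw(self) -> str:
        return "drawing a circle"

class Rectangle:
    def draw(self) -> str:
        return "drawing a rectangle"

def render(shapes) -> None:
    for shape in shapes:
        # The sender does not inspect the type; the receiver selects the code.
        print(shape.draw())

render([Circle(), Rectangle()])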
One of the first examples of how this can be used was in the domain of computer graphics. There are various complexities involved in manipulating graphic objects. For example, si
|
https://en.wikipedia.org/wiki/Dataflow%20architecture
|
Dataflow architecture is a dataflow-based computer architecture that directly contrasts the traditional von Neumann architecture or control flow architecture. Dataflow architectures have, in concept, no program counter: the executability and execution of instructions are determined solely by the availability of input arguments to the instructions, so the order of instruction execution may be hard to predict.
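A toy Python illustration of this firing rule, in which a node executes as soon as all of its input tokens are available and no program counter is involved:

import operator

# Each node: (operation, input names, output name). Names are arbitrary tokens.
nodes = [
    (operator.add, ("a", "b"), "s"),
    (operator.mul, ("s", "c"), "p"),
]
tokens = {"a": 2, "b": 3, "c": 4}   # initially available data tokens

fired = set()
while len(fired) < len(nodes):
    for i, (op, inputs, output) in enumerate(nodes):
        if i not in fired and all(name in tokens for name in inputs):
            tokens[output] = op(*(tokens[name] for name in inputs))
            fired.add(i)            # a node fires once its inputs have arrived

print(tokens["p"])  # (2 + 3) * 4 = 20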
Although no commercially successful general-purpose computer hardware has used a dataflow architecture, it has been successfully implemented in specialized hardware such as in digital signal processing, network routing, graphics processing, telemetry, and more recently in data warehousing, and artificial intelligence (as: polymorphic dataflow Convolution Engine, structure-driven, dataflow scheduling). It is also very relevant in many software architectures today including database engine designs and parallel computing frameworks.
Synchronous dataflow architectures are tuned to match the workload presented by real-time data path applications such as wire-speed packet forwarding. Dataflow architectures that are deterministic in nature enable programmers to manage complex tasks such as processor load balancing, synchronization and access to common resources.
Meanwhile, there is a clash of terminology, since the term dataflow is also used for a subarea of parallel programming: dataflow programming.
History
Hardware architectures for dataflow were a major topic in computer architecture research in the 1970s and early 1980s. Jack Dennis of MIT pioneered the field of static dataflow architectures, while the Manchester Dataflow Machine and MIT Tagged Token architecture were major projects in dynamic dataflow.
The research, however, never overcame the problems related to:
Efficiently broadcasting data tokens in a massively parallel system.
Efficiently dispatching instruction tokens in a massively parallel system.
Building content-addressable memory (CAM)
|
https://en.wikipedia.org/wiki/Genetic%20Savings%20%26%20Clone
|
Genetic Savings & Clone, Inc. was a company headquartered in Sausalito, California that offered commercial pet gene banking and cloning services, between 2004 and 2006.
History
The company was founded as a result of the efforts to clone Lou Hawthorne's favorite family dog, Missy. The Missyplicity project generated enough interest that Lou Hawthorne decided to build a company devoted to dog and cat cloning.
The company opened for business in February 2000, funded production of the first cloned cat, CC, in 2001, and launched its pet cloning service in February 2004, operating a "petbank", to which pet owners could send tissue samples for later use in cloning. The company delivered the world's first commercially cloned cat, Little Nicky, in December 2004. Little Nicky was sold to a Texas woman for a reported US$50,000. He is a genetic twin of "Nicky," a 17-year-old Maine Coon cat that had been kept as a pet. Musician Liam Lynch's cat was cloned after its death, presumably making him the only celebrity to own a cloned pet.
As well as their success in cloning cats, the company also made significant advances in dog cloning research, although the technology was not mature enough to sustain the business. The company closed in 2006. Letters to this effect were sent out to clients at the end of September 2006, informing them of this decision and offering to transfer any genetic material to another facility.
Controversy
The company spurred widespread debate regarding the ethics and morality of pet cloning especially in light of the fact that animals are euthanized by their owners every day. Though the topic lost currency with the closure of the company, divergent arguments about these issues can still be found on some web sites.
External links
Last version of the homepage before the company closed (At the Internet Archive)
Last version of Defend Pet Cloning (At the Internet Archive)
References
Cloning
Companies based in Marin County, California
Biotechnology compan
|
https://en.wikipedia.org/wiki/AMS%20Euler
|
AMS Euler is an upright cursive typeface, commissioned by the American Mathematical Society (AMS) and designed and created by Hermann Zapf with the assistance of Donald Knuth and his Stanford graduate students. It tries to emulate a mathematician's style of handwriting mathematical entities on a blackboard, which is upright rather than italic. It blends very well with other typefaces made by Hermann Zapf, such as Palatino, Aldus and Melior, but very badly with the default TeX font Computer Modern. All the alphabets were implemented with the computer-assisted design system Metafont developed by Knuth. Zapf designed and drew the Euler alphabets in 1980–81 and provided critique and advice on digital proofs in 1983 and later. The typeface family is copyrighted by the American Mathematical Society, 1983. Euler Metafont development was done by Stanford computer science and/or digital typography students: first Scott Kim, then Carol Twombly and Daniel Mills, and finally David Siegel, all assisted by John Hobby. Siegel finished the Metafont Euler digitization project as his M.S. thesis in 1985.
The AMS Euler typeface is named after Leonhard Euler.
First implemented in METAFONT, AMS Euler was first used in the book Concrete Mathematics, which was co-authored by Knuth and dedicated to Euler. This volume also saw the debut of Knuth's Concrete Roman font, designed to complement AMS Euler. The Euler Metafont format fonts were converted to PostScript Type 1 font format by the efforts of several people, including Berthold Horn at Y&Y, Barry Smith at Bluesky Research, and Henry Pinkham and Ian Morrison at Projective Solutions. It is now also available in TrueType format.
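One common way to select the Euler alphabets for mathematics in a LaTeX document is sketched below; it assumes the eulervm (Euler virtual math fonts) package is present in the TeX distribution and pairs it with Palatino for the text face.

% Minimal sketch: Euler for math, Palatino for text.
\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage{palatino}   % text face by Zapf that blends well with Euler
\usepackage{eulervm}    % upright Euler alphabets for math mode
\begin{document}
Euler's identity, $\mathrm{e}^{i\pi} + 1 = 0$, set in AMS Euler.
\end{document}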
Reshaping Euler
In 2009, AMS released version 3.0 of AMS fonts, in which Hermann Zapf reshaped many of the Euler glyphs, with implementation and assistance from Hans Hagen, Taco Hoekwater, and Volker RW Schaa.
The updated version 3.0 was presented to Donald Knuth on his birthday, January 10, 2008.
These updates were
|
https://en.wikipedia.org/wiki/Sun%20RPC
|
Open Network Computing (ONC) Remote Procedure Call (RPC), commonly known as Sun RPC is a remote procedure call system. ONC was originally developed by Sun Microsystems in the 1980s as part of their Network File System project.
ONC is based on calling conventions used in Unix and the C programming language. It serializes data using the External Data Representation (XDR), which has also found some use to encode and decode data in files that are to be accessed on more than one platform. ONC then delivers the XDR payload using either UDP or TCP. Access to RPC services on a machine is provided via a port mapper that listens for queries on a well-known port (number 111) over UDP and TCP.
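A small sketch of XDR serialization using the xdrlib module from the Python standard library (deprecated in Python 3.11 and removed in 3.13, so an older interpreter or a backport is assumed):

import xdrlib

packer = xdrlib.Packer()
packer.pack_int(42)                 # 4-byte big-endian integer
packer.pack_string(b"hello world")  # length-prefixed, padded to a 4-byte boundary
wire_bytes = packer.get_buffer()    # the kind of payload an ONC RPC call carries

unpacker = xdrlib.Unpacker(wire_bytes)
print(unpacker.unpack_int())        # 42
print(unpacker.unpack_string())     # b'hello world'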
ONC RPC was described in RFC 1831, published in 1995. RFC 5531, published in 2009, is the current version. Authentication mechanisms used by ONC RPC are described in RFC 2695, RFC 2203, and RFC 2623.
Implementations of ONC RPC exist in most Unix-like systems. Microsoft supplies an implementation for Windows in their Microsoft Windows Services for UNIX product; in addition, a number of third-party implementations of ONC RPC for Windows exist, including versions for C/C++, Java, and .NET (see external links).
In 2009, Sun relicensed the ONC RPC code under the standard 3-clause BSD license; the relicensing was reconfirmed by Oracle Corporation in 2010 following confusion about its scope.
ONC is considered "lean and mean", but has limited appeal as a generalized RPC system for WANs or heterogeneous environments. Systems such as DCE, CORBA and SOAP are generally used in this wider role.
See also
XDR - The grammar defined in RFC 1831 is a small extension of the XDR grammar defined in RFC 4506
DCE
XML-RPC
References
Notes
External links
RFC 1050 - Specifies version 1 of ONC RPC
RFC 5531 - Specifies version 2 of ONC RPC
Remote Procedure Calls (RPC) — A tutorial on ONC RPC by Dr Dave Marshall of Cardiff University
Introduction to RPC Programming — A developer's introduction to RPC
|
https://en.wikipedia.org/wiki/Steady%20state
|
In systems theory, a system or a process is in a steady state if the variables (called state variables) which define the behavior of the system or the process are unchanging in time. In continuous time, this means that for those properties p of the system, the partial derivative with respect to time is zero and remains so:
∂p/∂t = 0 for all present and future times t.
In discrete time, it means that the first difference of each property is zero and remains so:
p(t) − p(t−1) = 0 for all present and future times t.
The concept of a steady state has relevance in many fields, in particular thermodynamics, economics, and engineering. If a system is in a steady state, then the recently observed behavior of the system will continue into the future. In stochastic systems, the probabilities that various states will be repeated will remain constant. See for example Linear difference equation#Conversion to homogeneous form for the derivation of the steady state.
In many systems, a steady state is not achieved until some time after the system is started or initiated. This initial situation is often identified as a transient state, start-up or warm-up period. For example, while the flow of fluid through a tube or electricity through a network could be in a steady state because there is a constant flow of fluid or electricity, a tank or capacitor being drained or filled with fluid is a system in transient state, because its volume of fluid changes with time.
Often, a steady state is approached asymptotically. An unstable system is one that diverges from the steady state. See for example Linear difference equation#Stability.
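A small numerical illustration: the first-order difference equation x(t+1) = a·x(t) + b with |a| < 1 starts in a transient and approaches the steady state b/(1 − a) asymptotically.

# Iterate x <- a*x + b and watch it converge towards b / (1 - a).
a, b = 0.5, 10.0
steady_state = b / (1 - a)   # 20.0

x = 0.0                      # start-up (transient) value
for t in range(10):
    x = a * x + b
print(x, steady_state)       # after 10 steps x is already very close to 20.0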
In chemistry, a steady state is a more general situation than dynamic equilibrium. While a dynamic equilibrium occurs when two or more reversible processes occur at the same rate, and such a system can be said to be in a steady state, a system that is in a steady state may not necessarily be in a state of dynamic equilibrium, because some of the processes involved are not reversible.
Applications
Economics
A steady state economy is an economy (es
|
https://en.wikipedia.org/wiki/Mohr%27s%20circle
|
Mohr's circle is a two-dimensional graphical representation of the transformation law for the Cauchy stress tensor.
Mohr's circle is often used in calculations relating to mechanical engineering for materials' strength, geotechnical engineering for strength of soils, and structural engineering for strength of built structures. It is also used for calculating stresses in many planes by reducing them to vertical and horizontal components. These are called principal planes in which principal stresses are calculated; Mohr's circle can also be used to find the principal planes and the principal stresses in a graphical representation, and is one of the easiest ways to do so.
After performing a stress analysis on a material body assumed as a continuum, the components of the Cauchy stress tensor at a particular material point are known with respect to a coordinate system. The Mohr circle is then used to determine graphically the stress components acting on a rotated coordinate system, i.e., acting on a differently oriented plane passing through that point.
The abscissa and ordinate (σ_n, τ_n) of each point on the circle are the magnitudes of the normal stress and shear stress components, respectively, acting on the rotated coordinate system. In other words, the circle is the locus of points that represent the state of stress on individual planes at all their orientations, where the axes represent the principal axes of the stress element.
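For plane stress, the circle's centre, radius and principal values follow directly from the stress components; the sketch below, with illustrative input values, assumes sigma_x, sigma_y and tau_xy are known at the point.

import math

def mohr_circle(sigma_x: float, sigma_y: float, tau_xy: float):
    center = (sigma_x + sigma_y) / 2.0                       # average normal stress
    radius = math.hypot((sigma_x - sigma_y) / 2.0, tau_xy)   # maximum in-plane shear stress
    sigma_1 = center + radius                                # major principal stress
    sigma_2 = center - radius                                # minor principal stress
    # orientation of the principal planes relative to the x-axis
    theta_p = 0.5 * math.atan2(2.0 * tau_xy, sigma_x - sigma_y)
    return sigma_1, sigma_2, radius, math.degrees(theta_p)

print(mohr_circle(sigma_x=80.0, sigma_y=20.0, tau_xy=40.0))  # (100.0, 0.0, 50.0, ~26.6)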
19th-century German engineer Karl Culmann was the first to conceive a graphical representation for stresses while considering longitudinal and vertical stresses in horizontal beams during bending. His work inspired fellow German engineer Christian Otto Mohr (the circle's namesake), who extended it to both two- and three-dimensional stresses and developed a failure criterion based on the stress circle.
Alternative graphical methods for the representation of the stress state at a point include the Lamé's stress ellipsoid and Cauchy's stress q
|
https://en.wikipedia.org/wiki/Array%20processing
|
Array processing is a wide area of research in the field of signal processing that extends from the simplest form of 1 dimensional line arrays to 2 and 3 dimensional array geometries. Array structure can be defined as a set of sensors that are spatially separated, e.g. radio antenna and seismic arrays. The sensors used for a specific problem may vary widely, for example microphones, accelerometers and telescopes. However, many similarities exist, the most fundamental of which may be an assumption of wave propagation. Wave propagation means there is a systemic relationship between the signal received on spatially separated sensors. By creating a physical model of the wave propagation, or in machine learning applications a training data set, the relationships between the signals received on spatially separated sensors can be leveraged for many applications.
Some common problems that are solved with array processing techniques are:
determine number and locations of energy-radiating sources
enhance the signal-to-noise ratio (SNR) or signal-to-interference-plus-noise ratio (SINR)
track moving sources
Array processing metrics are often assessed in noisy environments. The noise model may be either spatially incoherent noise or interfering signals following the same propagation physics. Estimation theory is an important and basic part of the signal processing field; it deals with estimation problems in which the values of several parameters of the system must be estimated from measured or empirical data that has a random component. As the number of applications increases, estimating temporal and spatial parameters becomes more important. Array processing emerged in the last few decades as an active area centered on the ability to use and combine data from different sensors (antennas) in order to deal with specific estimation tasks (spatial and temporal processing). In addition to the information that can be extracted from the collecte
|
https://en.wikipedia.org/wiki/Beamforming
|
Beamforming or spatial filtering is a signal processing technique used in sensor arrays for directional signal transmission or reception. This is achieved by combining elements in an antenna array in such a way that signals at particular angles experience constructive interference while others experience destructive interference. Beamforming can be used at both the transmitting and receiving ends in order to achieve spatial selectivity. The improvement compared with omnidirectional reception/transmission is known as the directivity of the array.
Beamforming can be used for radio or sound waves. It has found numerous applications in radar, sonar, seismology, wireless communications, radio astronomy, acoustics and biomedicine. Adaptive beamforming is used to detect and estimate the signal of interest at the output of a sensor array by means of optimal (e.g. least-squares) spatial filtering and interference rejection.
Techniques
To change the directionality of the array when transmitting, a beamformer controls the phase and relative amplitude of the signal at each transmitter, in order to create a pattern of constructive and destructive interference in the wavefront. When receiving, information from different sensors is combined in a way where the expected pattern of radiation is preferentially observed.
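A minimal narrowband delay-and-sum sketch for a uniform linear array, assuming NumPy, half-wavelength element spacing and angles measured from broadside; the signal and noise are simulated for the example.

import numpy as np

def steering_vector(n_elements: int, angle_rad: float, spacing_wavelengths: float = 0.5):
    k = np.arange(n_elements)
    # relative phase of a plane wave arriving from angle_rad at each element
    return np.exp(-2j * np.pi * spacing_wavelengths * k * np.sin(angle_rad))

def delay_and_sum(snapshots: np.ndarray, angle_rad: float) -> np.ndarray:
    """snapshots: (n_elements, n_samples) complex baseband array data."""
    w = steering_vector(snapshots.shape[0], angle_rad) / snapshots.shape[0]
    return w.conj() @ snapshots     # weighted sum aligns the steered direction

# A wave from 20 degrees is summed coherently when the beam is steered to 20 degrees:
rng = np.random.default_rng(0)
signal = np.exp(1j * 2 * np.pi * 0.1 * np.arange(200))
data = np.outer(steering_vector(8, np.deg2rad(20)), signal)
data = data + 0.1 * rng.standard_normal(data.shape)
output = delay_and_sum(data, np.deg2rad(20))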
For example, in sonar, to send a sharp pulse of underwater sound towards a ship in the distance, simply simultaneously transmitting that sharp pulse from every sonar projector in an array fails because the ship will first hear the pulse from the speaker that happens to be nearest the ship, then later pulses from speakers that happen to be further from the ship. The beamforming technique involves sending the pulse from each projector at slightly different times (the projector closest to the ship last), so that every pulse hits the ship at exactly the same time, producing the effect of a single strong pulse from a single powerful projector. The same technique can be c
|
https://en.wikipedia.org/wiki/Quantum%20wire
|
In mesoscopic physics, a quantum wire is an electrically conducting wire in which quantum effects influence the transport properties. Usually such effects appear in the dimension of nanometers, so they are also referred to as nanowires.
Quantum effects
If the diameter of a wire is sufficiently small, electrons will experience quantum confinement in the transverse direction. As a result, their transverse energy will be limited to a series of discrete values. One consequence of this quantization is that the classical formula for calculating the electrical resistance of a wire,
R = ρℓ/A,
is not valid for quantum wires (where ρ is the material's resistivity, ℓ is the length, and A is the cross-sectional area of the wire).
Instead, an exact calculation of the transverse energies of the confined electrons has to be performed to calculate a wire's resistance. Following from the quantization of electron energy, the electrical conductance (the inverse of the resistance) is found to be quantized in multiples of 2e²/h, where e is the electron charge and h is the Planck constant. The factor of two arises from spin degeneracy. A single ballistic quantum channel (i.e. with no internal scattering) has a conductance equal to this quantum of conductance. The conductance is lower than this value in the presence of internal scattering.
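The numerical value of this conductance quantum can be checked directly from CODATA constants (assuming SciPy is available):

# Conductance quantum 2e^2/h and the corresponding single-channel resistance.
from scipy.constants import e, h

g0 = 2 * e**2 / h
print(f"2e^2/h = {g0:.4e} S")                           # about 7.748e-05 siemens
print(f"resistance of one channel = {1 / g0:.0f} ohm")  # about 12906 ohm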
The importance of the quantization is inversely proportional to the diameter of the nanowire for a given material. From material to material, it is dependent on the electronic properties, especially on the effective mass of the electrons. Physically, this means that it will depend on how conduction electrons interact with the atoms within a given material. In practice, semiconductors can show clear conductance quantization for large wire transverse dimensions (~100 nm) because the electronic modes due to confinement are spatially extended. As a result, their Fermi wavelengths are large and thus they have low energy separations. This means that they can only be resolv
|
https://en.wikipedia.org/wiki/Rindler%20coordinates
|
Rindler coordinates are a coordinate system used in the context of special relativity to describe the hyperbolic acceleration of a uniformly accelerating reference frame in flat spacetime. In relativistic physics the coordinates of a hyperbolically accelerated reference frame constitute an important and useful coordinate chart representing part of flat Minkowski spacetime. In special relativity, a uniformly accelerating particle undergoes hyperbolic motion, for which a uniformly accelerating frame of reference in which it is at rest can be chosen as its proper reference frame. The phenomena in this hyperbolically accelerated frame can be compared to effects arising in a homogeneous gravitational field. For general overview of accelerations in flat spacetime, see Acceleration (special relativity) and Proper reference frame (flat spacetime).
In this article, the speed of light is defined by c = 1, the inertial coordinates are (X, T), and the hyperbolic coordinates are (x, t). These hyperbolic coordinates can be separated into two main variants depending on the accelerated observer's position: If the observer is located at time T = 0 at position X = 1/α (with α as the constant proper acceleration measured by a comoving accelerometer), then the hyperbolic coordinates are often called Rindler coordinates with the corresponding Rindler metric. If the observer is located at time T = 0 at position X = 0, then the hyperbolic coordinates are sometimes called Møller coordinates or Kottler–Møller coordinates with the corresponding Kottler–Møller metric. An alternative chart often related to observers in hyperbolic motion is obtained using Radar coordinates which are sometimes called Lass coordinates. Both the Kottler–Møller coordinates as well as Lass coordinates are denoted as Rindler coordinates as well.
Regarding the history, such coordinates were introduced soon after the advent of special relativity, when they were studied (fully or partially) alongside the concept of hyperbolic motion: In relation to flat Mi
|
https://en.wikipedia.org/wiki/UTF-EBCDIC
|
UTF-EBCDIC is a character encoding capable of encoding all 1,112,064 valid character code points in Unicode using one to five one-byte (8-bit) code units (in contrast to a maximum of four for UTF-8). It is meant to be EBCDIC-friendly, so that legacy EBCDIC applications on mainframes may process the characters without much difficulty. Its advantages for existing EBCDIC-based systems are similar to UTF-8's advantages for existing ASCII-based systems. Details on UTF-EBCDIC are defined in Unicode Technical Report #16.
To produce the UTF-EBCDIC encoded version of a series of Unicode code points, an encoding based on UTF-8 (known in the specification as UTF-8-Mod) is applied first (creating what the specification calls an I8 sequence). The main difference between this encoding and UTF-8 is that it allows Unicode code points U+0080 through U+009F (the C1 control codes) to be represented as a single byte and therefore later mapped to corresponding EBCDIC control codes. In order to achieve this, UTF-8-Mod uses 101XXXXX instead of 10XXXXXX as the format for trailing bytes in a multi-byte sequence. As this can only hold 5 bits rather than 6, the UTF-8-Mod encoding of code points above U+03FF is larger than the UTF-8 encoding.
The UTF-8-Mod transformation leaves the data in an ASCII-based format (for example, U+0041 "A" is still encoded as 01000001), so each byte is fed through a reversible (one-to-one) lookup table to produce the final UTF-EBCDIC encoding. For example, 01000001 in this table maps to 11000001; thus the UTF-EBCDIC encoding of U+0041 (Unicode's "A") is 0xC1 (EBCDIC's "A").
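The final byte-mapping step can be illustrated (this is not a UTF-EBCDIC implementation) with Python's built-in single-byte EBCDIC codec cp037:

# ASCII "A" (0x41) corresponds to EBCDIC "A" (0xC1), as in the example above.
ascii_byte = "A".encode("ascii")    # b'\x41'
ebcdic_byte = "A".encode("cp037")   # b'\xc1'
print(ascii_byte.hex(), ebcdic_byte.hex())   # 41 c1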
This encoding form is rarely used, even on the EBCDIC-based mainframes for which it was designed. IBM EBCDIC-based mainframe operating systems, such as z/OS, usually use UTF-16 for complete Unicode support. For example, IBM Db2, COBOL, PL/I, Java and the IBM XML toolkit support UTF-16 on IBM mainframes.
Codepage layout
There are 160 characters with single-byte encodings in UTF-EBCDIC (com
|
https://en.wikipedia.org/wiki/Data%20broker
|
A data broker is an individual or company that specializes in collecting personal data (such as income, ethnicity, political beliefs, or geolocation data) or data about companies, mostly from public records but sometimes sourced privately, and selling or licensing such information to third parties for a variety of uses. Sources, usually Internet-based since the 1990s, may include census and electoral roll records, social networking sites, court reports and purchase histories. The information from data brokers may be used in background checks by employers and housing providers.
There are varying regulations around the world limiting the collection of information on individuals; privacy laws vary. In the United States there is no federal regulation protecting consumers from data brokers, although some states have begun enacting laws individually. In the European Union, GDPR serves to regulate data brokers' operations. Some data brokers report having large amounts of population data or "data attributes". Acxiom purports to have data from 2.5 billion different people.
Overview
Information broker is sometimes abbreviated to IB, and other terms used for information brokers include data brokers, independent information specialists, information or data agents, data providers, data suppliers, information resellers, data vendors, syndicated data brokers, or information product companies. Information consultants, freelance librarians, and information specialists are also sometimes termed information brokers.
Credit scores were first used in the 1950s, and information brokering emerged as a career for individuals during that decade. However the business of information brokering did not become widely known or specifically regulated until the 1990s. During the 1970s, "information brokers" often had a library science degree; however, towards the end of the 20th century, people with degrees in science, law, business, medicine, or other disciplines entered the profession, and
|
https://en.wikipedia.org/wiki/Data%20Discman
|
The Data Discman is an electronic book player introduced to the Western market in late 1991 or early 1992 by Sony Corporation. It was marketed in the United States to college students and international travelers, but had little success outside Japan. The Discman product name had originally been applied to Sony's range of portable CD players such as the Sony Discman D-50, first released in 1984.
The Data Discman was designed to allow quick access to electronic reference information on a pre-recorded disc. Searching terms were entered using a QWERTY-style keyboard and utilized the "Yes" and "No" keys.
A typical Data Discman model has a small, low-resolution grayscale LCD (256x200 early on; later models would have up to 320x240 and in colour), a CD drive unit (either Mini CD or full size), and a low-power computer. Early versions of the device were incapable of playing audio CDs. Software was prerecorded and usually featured encyclopedias, foreign language dictionaries and novels. It was typically created using the Sony Electronic Book Authoring System (SEBAS).
A DD-1EX Data Discman is in the permanent collection of the Victoria and Albert Museum and is currently displayed in the V&A's 20th Century Gallery. This early model did not include the ability to play sound.
An updated model, the DD-10EX, was released in 1992 or 1993. The accompanying manual gives a copyright date of 1992. Unlike the DD-1EX, the DD-10EX also had the ability to play audio files. The British version came with a disc containing the Thomson Electronic Directory for April 1992, plus another containing the Pocket Interpreter 5-language conversation book for travelers. A DD-10EX was included in an exhibition entitled The Book and Beyond: Electronic Publishing and the Art of the Book, held at the Victoria and Albert Museum, London, from April to October 1995. The exhibition also included a CD-ROM designed to be played on the Data Discman, entitled The Library of the Future and published in 1993.
The
|
https://en.wikipedia.org/wiki/Inhibitor%20protein
|
The inhibitor protein (IP) is situated in the mitochondrial matrix and protects the cell against rapid ATP hydrolysis during momentary ischaemia. In the absence of oxygen, the pH of the matrix drops. This causes IP to become protonated and change its conformation to one that can bind to the F1Fo ATP synthase and stop it, thereby preventing it from running in reverse and hydrolyzing ATP instead of making it. When oxygen is reintroduced into the system, the pH rises and IP is deprotonated. IP then dissociates from the F1Fo ATP synthase and allows it to resume ATP synthesis.
Cell biology
|
https://en.wikipedia.org/wiki/Document%20classification
|
Document classification or document categorization is a problem in library science, information science and computer science. The task is to assign a document to one or more classes or categories. This may be done "manually" (or "intellectually") or algorithmically. The intellectual classification of documents has mostly been the province of library science, while the algorithmic classification of documents is mainly in information science and computer science. The problems are overlapping, however, and there is therefore interdisciplinary research on document classification.
The documents to be classified may be texts, images, music, etc. Each kind of document possesses its special classification problems. When not otherwise specified, text classification is implied.
Documents may be classified according to their subjects or according to other attributes (such as document type, author, printing year etc.). In the rest of this article only subject classification is considered. There are two main philosophies of subject classification of documents: the content-based approach and the request-based approach.
"Content-based" versus "request-based" classification
Content-based classification is classification in which the weight given to particular subjects in a document determines the class to which the document is assigned. It is, for example, a common rule for classification in libraries that at least 20% of the content of a book should be about the class to which the book is assigned. In automatic classification it could be the number of times given words appear in a document.
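A tiny content-based classifier in this spirit, where word counts determine the class, might look as follows; it assumes scikit-learn is installed, and the training texts and labels are invented for the example.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "stars galaxies telescopes and orbital mechanics",
    "supernova surveys and stellar spectra",
    "symphonies sonatas and string quartets",
    "opera scores and orchestral conducting",
]
train_labels = ["astronomy", "astronomy", "music", "music"]

# Word counts feed a naive Bayes model that assigns the class.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)
print(model.predict(["a catalogue of galaxies observed with a new telescope"]))  # expected: ['astronomy']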
Request-oriented classification (or -indexing) is classification in which the anticipated request from users is influencing how documents are being classified. The classifier asks themself: “Under which descriptors should this entity be found?” and “think of all the possible queries and decide for which ones the entity at hand is relevant” (Soergel, 1985, p. 230).
Request-oriented classi
|
https://en.wikipedia.org/wiki/Federated%20identity
|
A federated identity in information technology is the means of linking a person's electronic identity and attributes, stored across multiple distinct identity management systems.
Federated identity is related to single sign-on (SSO), in which a user's single authentication ticket, or token, is trusted across multiple IT systems or even organizations. SSO is a subset of federated identity management, as it relates only to authentication and is understood at the level of technical interoperability; it would not be possible without some sort of federation.
Management
In information technology (IT), federated identity management (FIdM) amounts to having a common set of policies, practices and protocols in place to manage the identity and trust into IT users and devices across organizations.
Single sign-on (SSO) systems allow a single user authentication process across multiple IT systems or even organizations. SSO is a subset of federated identity management, as it relates only to authentication and technical interoperability.
Centralized identity management solutions were created to help deal with user and data security where the user and the systems they accessed were within the same network – or at least the same "domain of control". Increasingly however, users are accessing external systems which are fundamentally outside their domain of control, and external users are accessing internal systems. The increasingly common separation of user from the systems requiring access is an inevitable by-product of the decentralization brought about by the integration of the Internet into every aspect of both personal and business life. Evolving identity management challenges, and especially the challenges associated with cross-company, cross-domain access, have given rise to a new approach to identity management, known now as "federated identity management".
FIdM, or the "federation" of identity, describes the technologies, standards and use-cases which serve to enab
|
https://en.wikipedia.org/wiki/Gnomonic%20projection
|
A gnomonic projection, also known as a central projection or rectilinear projection, is a perspective projection of a sphere, with center of projection at the sphere's center, onto any plane not passing through the center, most commonly a tangent plane. Under gnomonic projection every great circle on the sphere is projected to a straight line in the plane (a great circle is a geodesic on the sphere, the shortest path between any two points, analogous to a straight line on the plane). More generally, a gnomonic projection can be taken of any n-dimensional hypersphere onto a hyperplane.
The projection is the n-dimensional generalization of the trigonometric tangent, which maps from the circle to a straight line, and as with the tangent, every pair of antipodal points on the sphere projects to a single point in the plane, while the points on the plane through the sphere's center and parallel to the image plane project to points at infinity; often the projection is considered as a one-to-one correspondence between points in the hemisphere and points in the plane, in which case any finite part of the image plane represents a portion of the hemisphere.
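A short sketch of the central projection itself, assuming NumPy: a unit vector p is scaled along its ray from the centre until it meets the plane tangent at n, and 2-D coordinates are read off in an orthonormal basis of that plane.

import numpy as np

def gnomonic(p: np.ndarray, n: np.ndarray, e1: np.ndarray, e2: np.ndarray):
    """p, n are unit vectors; e1, e2 are an orthonormal basis of the tangent plane at n."""
    d = p @ n
    if d <= 0:
        raise ValueError("point is on the far hemisphere (projects to infinity)")
    q = p / d                 # intersection of the ray through p with the plane x·n = 1
    return q @ e1, q @ e2

n  = np.array([0.0, 0.0, 1.0])            # tangent point (the "north pole")
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
p  = np.array([np.sin(0.3), 0.0, np.cos(0.3)])   # 0.3 rad away from the tangent point
print(gnomonic(p, n, e1, e2))             # (tan(0.3), 0.0), as the tangent analogy suggests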
The gnomonic projection is azimuthal (radially symmetric). No shape distortion occurs at the center of the projected image, but distortion increases rapidly away from it.
The gnomonic projection originated in astronomy for constructing sundials and charting the celestial sphere. It is commonly used as a geographic map projection, and can be convenient in navigation because great-circle courses are plotted as straight lines. Rectilinear photographic lenses make a perspective projection of the world onto an image plane; this can be thought of as a gnomonic projection of the image sphere (an abstract sphere indicating the direction of each ray passing through a camera modeled as a pinhole). The gnomonic projection is used in crystallography for analyzing the orientations of lines and planes of crystal structures. It is used in
|
https://en.wikipedia.org/wiki/Dew%20pond
|
A dew pond is an artificial pond usually sited on the top of a hill, intended for watering livestock. Dew ponds are used in areas where a natural supply of surface water may not be readily available. The name dew pond (sometimes cloud pond or mist pond) is first found in the Journal of the Royal Agricultural Society in 1865. Despite the name, their primary source of water is believed to be rainfall rather than dew or mist.
Construction
They are usually shallow, saucer-shaped and lined with puddled clay, chalk or marl on an insulating straw layer over a bottom layer of chalk or lime. To deter earthworms from their natural tendency of burrowing upwards, which in a short while would make the clay lining porous, a layer of soot would be incorporated or lime mixed with the clay. The clay is usually covered with straw to prevent cracking by the sun and a final layer of chalk rubble or broken stone to protect the lining from the hoofs of sheep or cattle. To retain more of the rainfall, the clay layer could be extended across the catchment area of the pond. If the pond's temperature is kept low, evaporation (a major water loss) may be significantly reduced, thus maintaining the collected rainwater. According to researcher Edward Martin, this may be attained by building the pond in a hollow, where cool air is likely to gather, or by keeping the surrounding grass long to enhance heat radiation. As the water level in the basin falls, a well of cool, moist air tends to form over the surface, restricting evaporation.
A method of constructing the base layer using chalk puddle was described in The Field 14 December 1907.
A Sussex farmer born in 1850 tells how he and his forefathers made dew ponds:
The initial supply of water after construction has to be provided by the builders, using artificial means. A preferred method was to arrange to finish the excavation in winter, so that any fallen snow could be collected and heaped into the centre of the pond to await melting.
Hist
|
https://en.wikipedia.org/wiki/Toy%20problem
|
In scientific disciplines, a toy problem or a puzzlelike problem is a problem that is not of immediate scientific interest, yet is used as an expository device to illustrate a trait that may be shared by other, more complicated, instances of the problem, or as a way to explain a particular, more general, problem solving technique. A toy problem is useful to test and demonstrate methodologies. Researchers can use toy problems to compare the performance of different algorithms. They are also useful in game design.
For instance, while engineering a large system, the large problem is often broken down into many smaller toy problems which have been well understood in detail. Often these problems distill a few important aspects of complicated problems so that they can be studied in isolation. Toy problems are thus often very useful in providing intuition about specific phenomena in more complicated problems.
As an example, in the field of artificial intelligence, classical puzzles, games and problems are often used as toy problems. These include sliding-block puzzles, N-Queens problem, missionaries and cannibals problem, tic-tac-toe, chess, Tower of Hanoi and others.
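One of the classical examples above, the Tower of Hanoi, solved with the standard recursive strategy:

def hanoi(n: int, source: str, spare: str, target: str) -> None:
    """Move n disks from source to target, printing each move."""
    if n == 0:
        return
    hanoi(n - 1, source, target, spare)   # clear the way for the largest disk
    print(f"move disk {n}: {source} -> {target}")
    hanoi(n - 1, spare, source, target)   # stack the rest back on top of it

hanoi(3, "A", "B", "C")   # 2**3 - 1 = 7 moves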
See also
Blocks world
Firing squad synchronization problem
Monkey and banana problem
Secretary problem
References
External links
Information science
Mathematics education
Recreational mathematics
|
https://en.wikipedia.org/wiki/Source%20Code%20Control%20System
|
Source Code Control System (SCCS) is a version control system designed to track changes in source code and other text files during the development of a piece of software. This allows the user to retrieve any of the previous versions of the original source code and the changes which are stored. It was originally developed at Bell Labs beginning in late 1972 by Marc Rochkind for an IBM System/370 computer running OS/360.
A characteristic feature of SCCS is the sccsid string that is embedded into source code, and automatically updated by SCCS for each revision. This example illustrates its use in the C programming language:
static char sccsid[] = "@(#)ls.c 8.1 (Berkeley) 6/11/93";
This string contains the file name, date, and can also contain a comment. After compilation, the string can be found in binary and object files by looking for the pattern @(#) and can be used to determine which source code files were used during compilation. The what command is available to automate this search for version strings.
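A rough Python approximation of that search (not the real what utility, whose exact terminator rules differ slightly):

import re
import sys

# Collect text following the "@(#)" marker, stopping at bytes what treats as terminators.
MARKER = re.compile(rb"@\(#\)([^\x00\"\n>\\]*)")

def what(path: str) -> list[str]:
    with open(path, "rb") as f:
        data = f.read()
    return [m.group(1).decode("ascii", "replace").strip()
            for m in MARKER.finditer(data)]

if __name__ == "__main__":
    for line in what(sys.argv[1]):
        print(line)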
History
In 1972, Marc Rochkind developed SCCS in SNOBOL4 at Bell Labs for an IBM System/370 computer running OS/360 MVT. He rewrote SCCS in the C programming language for use under UNIX, then running on a PDP-11, in 1973.
The first publicly released version was SCCS version 4 from February 18, 1977. It was available with the Programmer's Workbench (PWB) edition of the operating system. Release 4 of SCCS was the first version that used a text-based history file format; earlier versions used binary history file formats. Release 4 was no longer written or maintained by Marc Rochkind. Subsequently, SCCS was included in AT&T's commercial System III and System V distributions. It was not licensed with 32V, the ancestor to BSD. The SCCS command set is now part of the Single UNIX Specification.
SCCS was the dominant version control system for Unix until later version control systems, notably the RCS and later CVS, gained more widespread adoption. Today, th
|
https://en.wikipedia.org/wiki/NZB
|
NZB is an XML-based file format for retrieving posts from NNTP (Usenet) servers. The format was conceived by the developers of the Newzbin.com Usenet Index. NZB is effective when used with search-capable websites, which create NZB files describing exactly what needs to be downloaded. Because complete newsgroup headers do not have to be downloaded first, the NZB method is quicker and more bandwidth-efficient than traditional methods.
Each Usenet message has a unique identifier called the "Message-ID". When a large file is posted to a Usenet newsgroup, it is usually divided into multiple messages (called segments or parts) each having its own Message-ID. An NZB-capable Usenet client will read all needed Message-IDs from the NZB file, download them and decode the messages back into a binary file (usually using yEnc or Uuencode).
File format example
The following is an example of an NZB 1.1 file.
<?xml version="1.0" encoding="iso-8859-1" ?>
<!DOCTYPE nzb PUBLIC "-//newzBin//DTD NZB 1.1//EN" "http://www.newzbin.com/DTD/nzb/nzb-1.1.dtd">
<nzb xmlns="http://www.newzbin.com/DTD/2003/nzb">
<head>
<meta type="title">Your File!</meta>
<meta type="tag">Example</meta>
</head>
<file poster="Joe Bloggs &lt;bloggs@nowhere.example&gt;" date="1071674882" subject="Here's your file! abc-mr2a.r01 (1/2)">
<groups>
<group>alt.binaries.newzbin</group>
<group>alt.binaries.mojo</group>
</groups>
<segments>
<segment bytes="102394" number="1">123456789abcdef@news.newzbin.com</segment>
<segment bytes="4501" number="2">987654321fedbca@news.newzbin.com</segment>
</segments>
</file>
</nzb>
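An NZB-capable client only needs the newsgroups and the segment Message-IDs from such a file. A minimal Python sketch of reading them with the standard xml.etree module (the file name and printed layout are illustrative assumptions):
import xml.etree.ElementTree as ET

NS = '{http://www.newzbin.com/DTD/2003/nzb}'

def read_nzb(path):
    root = ET.parse(path).getroot()
    for f in root.iter(NS + 'file'):
        groups = [g.text for g in f.iter(NS + 'group')]
        # Each <segment> carries one Message-ID; the client fetches these from the
        # listed groups, decodes them (yEnc or uuencode) and reassembles the binary.
        segments = sorted((int(s.get('number')), s.text) for s in f.iter(NS + 'segment'))
        yield f.get('subject'), groups, [msg_id for _, msg_id in segments]

for subject, groups, ids in read_nzb('example.nzb'):
    print(subject, groups, ids)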
See also
Comparison of Usenet newsreaders
References
External links
How to use Usenet NZB Files
NZB file specification
NZB sites directory
Usenet
XML
Computer file formats
|
https://en.wikipedia.org/wiki/T-2%20mycotoxin
|
T-2 mycotoxin is a trichothecene mycotoxin. It is a naturally occurring mold byproduct of Fusarium spp. fungus which is toxic to humans and animals. The clinical condition it causes is alimentary toxic aleukia and a host of symptoms related to organs as diverse as the skin, airway, and stomach. Ingestion may come from consumption of moldy whole grains. T-2 can be absorbed through human skin. Although no significant systemic effects are expected after dermal contact in normal agricultural or residential environments, local skin effects can not be excluded. Hence, skin contact with T-2 should be limited.
History
Alimentary toxic aleukia (ATA), a disease which is caused by trichothecenes like T-2 mycotoxin, killed many thousands of USSR citizens in the Orenburg District in the 1940s. It was reported that the mortality rate was 10% of the entire population in that area. During the 1970s it was proposed that the consumption of contaminated food was the cause of this mass poisoning. Because of World War II, harvesting of grains was delayed and food was scarce in Russia. This resulted in the consumption of grain that was contaminated with Fusarium molds, which produce T-2 mycotoxin.
In 1981, the United States Secretary of State Alexander Haig and his successor George P. Shultz accused the Soviet Union of using T-2 mycotoxin as a chemical weapon known as "yellow rain" in Laos (1975–81), Kampuchea (1979–81), and Afghanistan (1979–81), where it allegedly caused thousands of casualties. Although several US chemical weapons experts claim to have identified "yellow rain" samples from Laos as trichothecenes, other experts believe that this exposure was due to naturally occurring T-2 mycotoxin in contaminated foods. Another alternative theory was developed by Harvard biologist Matthew Meselson, who proposed that the "yellow rain" found in Southeast Asia originated from the excrement of jungle bees. The first indication for this theory came from finding high levels of pollen in
|
https://en.wikipedia.org/wiki/Emergent%20evolution
|
Emergent evolution is the hypothesis that, in the course of evolution, some entirely new properties, such as mind and consciousness, appear at certain critical points, usually because of an unpredictable rearrangement of the already existing entities. The term was originated by the psychologist C. Lloyd Morgan in 1922 in his Gifford Lectures at St. Andrews, which would later be published as the 1923 book Emergent Evolution.
The hypothesis has been widely criticized for providing no mechanism to how entirely new properties emerge, and for its historical roots in teleology. Historically, emergent evolution has been described as an alternative to materialism and vitalism.
Emergent evolution is distinct from the hypothesis of Emergent Evolutionary Potential (EEP) which was introduced in 2019 by Gene Levinson. In EEP, the scientific mechanism of Darwinian natural selection tends to preserve new, more complex entities that arise from interactions between previously existing entities, when those interactions prove useful, by trial-and error, in the struggle for existence. Biological organization arising via EEP is complementary to organization arising via gradual accumulation of incremental variation.
Historical context
The term emergent was first used to describe the concept by George Lewes in volume two of his 1875 book Problems of Life and Mind (p. 412). Henri Bergson covered similar themes in his popular 1907 book Creative Evolution on the Élan vital. Emergence was further developed by Samuel Alexander in his Gifford Lectures at Glasgow during 1916–18 and published as Space, Time, and Deity (1920). The related term emergent evolution was coined by C. Lloyd Morgan in his own Gifford lectures of 1921–22 at St. Andrews and published as Emergent Evolution (1923). In an appendix to a lecture in his book, Morgan acknowledged the contributions of Roy Wood Sellars's Evolutionary Naturalism (1922).
Origins
Response to Darwin's Origin of Species
Charles Darwin and Alfr
|
https://en.wikipedia.org/wiki/Cydrome
|
Cydrome (1984−1988) was a computer company established in San Jose of the Silicon Valley region in California. Its mission was to develop a numeric processor. The founders were David Yen, Wei Yen, Ross Towle, Arun Kumar, and Bob Rau (the chief architect).
History
The company was originally named "Axiom Systems". However, another company in San Diego called "Axiom" had been founded earlier. Axiom Systems called its architecture "SPARC". It sold the rights to the name (but not the architecture) to Sun Microsystems and used the money to hire NameLab to come up with a new company name. They came up with "Cydrome" from "cyber" (computer) and "drome" (racecourse).
Cydrome moved from an office in San Jose to a business park in Milpitas on President's Day 1985. This site was used to host meetings of the Bay Area ACM chapter's Special Interest Group in Large Scale Systems (SIGBIG), in contrast to the SIGSMALL of the time, which covered the microcomputers now called "PCs", and to the present-day national SIGHPC.
Late in its history, Cydrome received an investment from Prime Computer and OEMed the Cydra-5 through Prime. The system sold by Cydrome had white skins. The skins for the Prime OEM system were black. In the summer of 1988 Prime was set to acquire Cydrome. At the last minute the board of Prime decided not to go through with the deal. That sealed the fate of Cydrome.
The company closed after roughly 4 years of operation in 1988. Many of the ideas in Cydrome were carried on in the Itanium architecture.
Product
In order to improve performance in a new instruction set architecture, the Cydrome processors were based on a very long instruction word (VLIW) containing instructions from parallel operations. Software pipelining in a custom Fortran compiler generated code that would run efficiently.
The numeric processor used a 256 bit-wide instruction word with seven "fields". In most cases the compiler would find instructions that could run in parallel and place them together in a single word
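The idea can be illustrated with a toy scheduler that packs mutually independent operations into fixed-width words; this is only a sketch of the general VLIW concept, not Cydrome's actual compiler algorithm:
SLOTS = 7   # the Cydra-5 instruction word had seven operation fields

def pack(ops, deps):
    # ops: operation names in program order; deps: op -> set of ops it depends on.
    words, issued, remaining = [], set(), list(ops)
    while remaining:
        word = []
        for op in list(remaining):
            if len(word) == SLOTS:
                break
            # An operation may issue only after everything it depends on has been
            # issued in an earlier word.
            if deps.get(op, set()) <= issued:
                word.append(op)
                remaining.remove(op)
        issued |= set(word)
        words.append(word)
    return words

print(pack(['a', 'b', 'c', 'd'], {'c': {'a', 'b'}, 'd': {'c'}}))   # [['a', 'b'], ['c'], ['d']]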
|
https://en.wikipedia.org/wiki/Wei%20Yen
|
Wei Yen () is a Taiwanese-American technologist and serial entrepreneur. He has been involved with several companies, including most recently as Chairman and Founder of AiLive.
Yen received his Ph.D. in Electrical Engineering in Operating Systems and Artificial Intelligence from Purdue University. Yen and his brother David Yen along with King-sun Fu published the paper "Data Coherence Problem in a Multicache System" that describes a practical cache coherence protocol.
Career
Yen served as the Director of Software Engineering for Cydrome, where he worked with his brother David, who served as the Director of Hardware Engineering. They were the major contributors to the Cydra-5 mini-supercomputer. The system was a combination of a VLIW ECL-based processor used for scientific applications and a multi-processor system designed for a bus architecture based on their Cache Coherence protocol.
Yen served as Senior Vice President of Silicon Graphics from 1988 to 1996, where he led development on OpenGL and also served as President of subsidiary MIPS Technologies. In 1996, he left SGI and founded TVsoft, a maker of interactive software for television set-top devices. The company was renamed Navio and later merged with Oracle's Network Computer. Subsequently, the company went public as Liberate Technologies in July 1999. Its public offering reached a $12 billion valuation in early 2000 with a revenue run rate of $25 million.
In parallel, Yen founded a company called ArtX, staffed with former SGI graphics engineers. ArtX received the contract to deliver the GameCube's Flipper graphics chip. The company was acquired by ATI in February 2000 for $400 million. This led to ATI's greatly improved R300 graphics chip family. Yen later joined ATI's board of directors.
With Nintendo, Yen cofounded another company iQue, the manufacturing and distributing arm of Nintendo for mainland China.
Yen founded iGware, a company that offers cloud computing services to its customers, including N
|
https://en.wikipedia.org/wiki/Random%20password%20generator
|
A random password generator is a software program or hardware device that takes input from a random or pseudo-random number generator and automatically generates a password. Random passwords can be generated manually, using simple sources of randomness such as dice or coins, or they can be generated using a computer.
While there are many examples of "random" password generator programs available on the Internet, generating randomness can be tricky, and many programs do not generate random characters in a way that ensures strong security. A common recommendation is to use open source security tools where possible, since they allow independent checks on the quality of the methods used. Simply generating a password at random does not ensure the password is a strong password, because it is possible, although highly unlikely, to generate an easily guessed or cracked password. In fact, there is no need at all for a password to have been produced by a perfectly random process: it just needs to be sufficiently difficult to guess.
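By contrast, a generator built on a cryptographically secure source of randomness can be very short; the following Python sketch uses the standard secrets module, with an alphabet and length chosen purely for illustration:
import secrets
import string

def generate_password(length=16, alphabet=string.ascii_letters + string.digits):
    # secrets.choice draws from the operating system's CSPRNG rather than rand().
    return ''.join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())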
A password generator can be part of a password manager. When a password policy enforces complex rules, it can be easier to use a password generator based on that set of rules than to manually create passwords.
Long strings of random characters are difficult for most people to memorize. Mnemonic hashes, which reversibly convert random strings into more memorable passwords, can substantially improve the ease of memorization. As the hash can be processed by a computer to recover the original 60-bit string, it has at least as much information content as the original string. Similar techniques are used in memory sport.
The naive approach
Here are two code samples that a programmer who is not familiar with the limitations of the random number generators in standard programming libraries might implement:
C
#include <time.h>
#include <stdio.h>
#include <stdlib.h>
int
main(void)
{
    /* Length of the password */
    unsigned short int length = 8;
    /* Seed the standard (non-cryptographic) generator with the current time */
    srand((unsigned int) time(NULL));
    /* Emit `length` pseudo-random printable ASCII characters */
    for (int i = 0; i < length; i++)
        putchar('!' + rand() % ('~' - '!' + 1));
    putchar('\n');
    return 0;
}
|
https://en.wikipedia.org/wiki/Butterfly%20theorem
|
The butterfly theorem is a classical result in Euclidean geometry, which can be stated as follows:
Let M be the midpoint of a chord PQ of a circle, through which two other chords AB and CD are drawn; AD and BC intersect chord PQ at X and Y correspondingly. Then M is the midpoint of XY.
Proof
A formal proof of the theorem is as follows:
Let the perpendiculars XX′ and XX″ be dropped from the point X on the straight lines AM and DM respectively. Similarly, let YY′ and YY″ be dropped from the point Y perpendicular to the straight lines BM and CM respectively.
Since
△MXX′ is similar to △MYY′, so XX′/YY′ = MX/MY,
△MXX″ is similar to △MYY″, so XX″/YY″ = MX/MY,
△AXX′ is similar to △CYY″, so XX′/YY″ = AX/CY,
△DXX″ is similar to △BYY′, so XX″/YY′ = DX/BY.
From the preceding equations and the intersecting chords theorem, it can be seen that
(MX/MY)² = (XX′·XX″)/(YY′·YY″) = (AX·DX)/(CY·BY) = (PX·QX)/(PY·QY),
since AX·DX = PX·QX and CY·BY = PY·QY.
So
(MX/MY)² = ((PM − MX)(MQ + MX))/((PM + MY)(QM − MY)) = (PM² − MX²)/(PM² − MY²).
Cross-multiplying in the latter equation,
MX²·PM² − MX²·MY² = MY²·PM² − MY²·MX².
Cancelling the common term
−MX²·MY²
from both sides of the resulting equation yields
MX²·PM² = MY²·PM²,
hence MX = MY, since MX, MY, and PM are all positive, real numbers.
Thus, M is the midpoint of XY.
Other proofs exist, including one using projective geometry.
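The theorem is also easy to check numerically; in the following Python sketch the circle is the unit circle, and the chord height and chord directions are arbitrary choices made for illustration:
import numpy as np

def chord_through(m, theta):
    # Intersections of the unit circle with the line through m in direction theta.
    d = np.array([np.cos(theta), np.sin(theta)])
    b, c = 2 * m.dot(d), m.dot(m) - 1          # solve |m + t*d|^2 = 1 for t
    t1 = (-b + np.sqrt(b * b - 4 * c)) / 2
    t2 = (-b - np.sqrt(b * b - 4 * c)) / 2
    return m + t1 * d, m + t2 * d

def x_at_height(p, q, y):
    # x-coordinate where the line through p and q crosses the horizontal line at height y.
    t = (y - p[1]) / (q[1] - p[1])
    return p[0] + t * (q[0] - p[0])

M = np.array([0.0, 0.4])        # midpoint of the horizontal chord PQ of the unit circle
A, B = chord_through(M, 1.1)    # chord AB through M
C, D = chord_through(M, 2.3)    # chord CD through M
X = x_at_height(A, D, M[1])     # AD meets PQ at X
Y = x_at_height(B, C, M[1])     # BC meets PQ at Y
print(X, Y, X + Y)              # X + Y is (numerically) zero, so M is the midpoint of XY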
History
Proving the butterfly theorem was posed as a problem by William Wallace in The Gentlemen's Mathematical Companion (1803). Three solutions were published in 1804, and in 1805 Sir William Herschel posed the question again in a letter to Wallace. Rev. Thomas Scurr asked the same question again in 1814 in the Gentlemen's Diary or Mathematical Repository.
References
External links
The Butterfly Theorem at cut-the-knot
A Better Butterfly Theorem at cut-the-knot
Proof of Butterfly Theorem at PlanetMath
The Butterfly Theorem by Jay Warendorff, the Wolfram Demonstrations Project.
Euclidean plane geometry
Theorems about circles
Articles containing proofs
|
https://en.wikipedia.org/wiki/Neanderthal%20extinction
|
Neanderthals became extinct around 40,000 years ago. Hypotheses on the causes of the extinction include violence, transmission of diseases from modern humans which Neanderthals had no immunity to, competitive replacement, extinction by interbreeding with early modern human populations, natural catastrophes, climate change and inbreeding depression. It is likely that multiple factors caused the demise of an already low population.
Possible coexistence before extinction
In research published in Nature in 2014, an analysis of radiocarbon dates from forty Neanderthal sites from Spain to Russia found that the Neanderthals disappeared in Europe between 41,000 and 39,000 years ago with 95% probability. The study also found with the same probability that modern humans and Neanderthals overlapped in Europe for between 2,600 and 5,400 years. Modern humans reached Europe between 45,000 and 43,000 years ago. Improved radiocarbon dating published in 2015 indicates that Neanderthals disappeared around 40,000 years ago, which overturns older carbon dating which indicated that Neanderthals may have lived as recently as 24,000 years ago, including in refugia on the south coast of the Iberian peninsula such as Gorham's Cave.
Zilhão et al. (2017) argue for pushing this date forward by some 3,000 years, to 37,000 years ago.
Inter-stratification of Neanderthal and modern human remains has been suggested, but is disputed. Stone tools that have been proposed to be linked to Neanderthals have been found at Byzovya in the polar Urals, and dated to 31,000 to 34,000 years ago, but this link is also disputed.
Possible cause of extinction
Violence
Kwang Hyun Ho discusses the possibility that Neanderthal extinction was either precipitated or hastened by violent conflict with Homo sapiens. Violence in early hunter-gatherer societies usually occurred as a result of resource competition following natural disasters. It is therefore plausible to suggest that violence, including primitive wa
|
https://en.wikipedia.org/wiki/CA%20Anti-Spyware
|
CA Anti-Spyware is a spyware detection program distributed by CA, Inc. Until 2007, it was known as PestPatrol.
This product is now offered by Total Defense, Inc. and has been named Total Defense Anti-Virus.
History
PestPatrol, Inc. was a Carlisle, PA based software company founded by Dr. David Stang and Robert Bales, which developed PestPatrol and released its first version in 2000. Originally called SaferSite, the company changed its name in 2002 to better reflect the focus of the company.
PestPatrol was an anti-malware product, designed to protect a computer system against threats such as adware, spyware and viruses. It performed automated scans of a system's hard disks, Windows registry and other crucial system areas, and enabled manual scans for specific threats, selected from a very long list of known malicious software. Among its unique features were CookiePatrol, which purges spyware cookies, and KeyPatrol, which detects keyloggers. Unlike most anti-spyware programs designed for home use on a single desktop, PestPatrol also provided a solution for the network environments found in enterprises. Among the features that made it appealing for enterprise security administrators was the ability to manage networked desktops remotely.
Early versions of the product were criticized for the poor user interface, described alternatively as something that "looks like an application that was ported from OS/2, with unclear buttons" or a "clunky, text-based UI", but the reviewers praised its malware detection and removal capabilities, stating "PestPatrol is the most effective anti-spyware system - short of a switch to Linux - that we've ever used".
It was described by InfoWorld as "one of the most established brands in anti-spyware", and in 2002, it was selected as "Security product of year" by Network World, which cited its ability to detect and remove more than 60,000 types of malware, and its defenses against Remote Administration Tools (RATs).
Billing itself as t
|
https://en.wikipedia.org/wiki/NexTView
|
NexTView was an electronic program guide for the analog domain, introduced in 1995 and based on Level 2.5 teletext / Hi-Text.
It carried TV programme listings for all of the major networks in Germany, Austria, France and Switzerland. The transmission protocol was based on teletext, but used a compact binary format instead of preformatted text pages. The advantage compared to paper-based TV magazines was that the user had an immediate overview of the current and next programmes, and was able to search through the programme database, filtering results by categories.
The nxtvepg software enabled nexTView to be viewed using a personal computer.
Some TV manufacturers that implemented this solution were: Grundig, Loewe, Metz, Philips, Sony, Thomson, and Quelle Universum.
From 1997 to October 2013, NexTView was broadcast on Swiss Television channels and on French-language channels whose teletext services were managed from Swiss Television (SwissText) (TV5, M6, Canal+).
See also
Guide Plus
AV.link
References
Television technology
Multimedia
Teletext
|
https://en.wikipedia.org/wiki/Email%20filtering
|
Email filtering is the processing of email to organize it according to specified criteria. The term can apply to the intervention of human intelligence, but most often refers to the automatic processing of messages at an SMTP server, possibly applying anti-spam techniques. Filtering can be applied to incoming emails as well as to outgoing ones.
Depending on the calling environment, email filtering software can reject an item at the initial SMTP connection stage or pass it through unchanged for delivery to the user's mailbox. It is also possible to redirect the message for delivery elsewhere, quarantine it for further checking, modify it or 'tag' it in any other way.
Motivation
Common uses for mail filters include organizing incoming email and removal of spam and computer viruses. Mailbox providers filter outgoing email to promptly react to spam surges that may result from compromised accounts. A less common use is to inspect outgoing email at some companies to ensure that employees comply with appropriate policies and laws. Users might also employ a mail filter to prioritize messages, and to sort them into folders based on subject matter or other criteria.
Methods
Mailbox providers can also install mail filters in their mail transfer agents as a service to all of their customers. Anti-virus, anti-spam, URL filtering, and authentication-based rejections are common filter types.
Corporations often use filters to protect their employees and their information technology assets. A catch-all filter will "catch all" of the emails addressed to the domain that do not match any existing mailbox on the mail server; this can help avoid losing emails due to misspelling.
Users may be able to install separate programs (see links below), or configure filtering as part of their email program (email client). In email programs, users can make personal, "manual" filters that then automatically filter mail according to the chosen criteria.
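A toy Python illustration of such a criteria-based personal filter, with rules and folder names invented for the example:
RULES = [
    ('invoice', 'Billing'),
    ('newsletter', 'Newsletters'),
]

def choose_folder(message):
    # Route a message to the first folder whose keyword appears in the subject.
    subject = message.get('subject', '').lower()
    for keyword, folder in RULES:
        if keyword in subject:
            return folder
    return 'Inbox'

print(choose_folder({'subject': 'Your March invoice'}))   # Billing
print(choose_folder({'subject': 'Weekend plans'}))        # Inbox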
Inbound and outbound filtering
Mail filters can operate o
|
https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe%20pattern
|
In software architecture, publish–subscribe is a messaging pattern where publishers categorize messages into classes that are received by subscribers. This contrasts with the typical messaging pattern model in which publishers send messages directly to subscribers.
Similarly, subscribers express interest in one or more classes and only receive messages that are of interest, without knowledge of which publishers, if any, there are.
Publish–subscribe is a sibling of the message queue paradigm, and is typically one part of a larger message-oriented middleware system. Most messaging systems support both the pub/sub and message queue models in their API; e.g., Java Message Service (JMS).
This pattern provides greater network scalability and a more dynamic network topology, with a resulting decreased flexibility to modify the publisher and the structure of the published data.
Message filtering
In the publish-subscribe model, subscribers typically receive only a subset of the total messages published. The process of selecting messages for reception and processing is called filtering. There are two common forms of filtering: topic-based and content-based.
In a topic-based system, messages are published to "topics" or named logical channels. Subscribers in a topic-based system will receive all messages published to the topics to which they subscribe. The publisher is responsible for defining the topics to which subscribers can subscribe.
In a content-based system, messages are only delivered to a subscriber if the attributes or content of those messages matches constraints defined by the subscriber. The subscriber is responsible for classifying the messages.
Some systems support a hybrid of the two; publishers post messages to a topic while subscribers register content-based subscriptions to one or more topics.
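A minimal sketch of topic-based filtering in Python; the broker keeps a mapping from topic to subscriber callbacks, and the class and method names are illustrative rather than any standard API:
from collections import defaultdict

class Broker:
    def __init__(self):
        self.topics = defaultdict(list)

    def subscribe(self, topic, callback):
        self.topics[topic].append(callback)

    def publish(self, topic, message):
        # The publisher never learns who, if anyone, receives the message.
        for callback in self.topics[topic]:
            callback(message)

broker = Broker()
broker.subscribe('weather', lambda m: print('weather subscriber got:', m))
broker.publish('weather', 'storm warning')   # delivered to the one subscriber
broker.publish('sports', 'match result')     # no subscribers, silently dropped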
Topologies
In many publish-subscribe systems, publishers post messages to an intermediary message broker or event bus, and subscribers register subscription
|
https://en.wikipedia.org/wiki/Failure%20rate
|
Failure rate is the frequency with which an engineered system or component fails, expressed in failures per unit of time. It is usually denoted by the Greek letter λ (lambda) and is often used in reliability engineering.
The failure rate of a system usually depends on time, with the rate varying over the life cycle of the system. For example, an automobile's failure rate in its fifth year of service may be many times greater than its failure rate during its first year of service. One does not expect to replace an exhaust pipe, overhaul the brakes, or have major transmission problems in a new vehicle.
In practice, the mean time between failures (MTBF, 1/λ) is often reported instead of the failure rate. This is valid and useful if the failure rate may be assumed constant – often used for complex units / systems, electronics – and is a general agreement in some reliability standards (Military and Aerospace). It does in this case only relate to the flat region of the bathtub curve, which is also called the "useful life period". Because of this, it is incorrect to extrapolate MTBF to give an estimate of the service lifetime of a component, which will typically be much less than suggested by the MTBF due to the much higher failure rates in the "end-of-life wearout" part of the "bathtub curve".
The reason for the preferred use for MTBF numbers is that the use of large positive numbers (such as 2000 hours) is more intuitive and easier to remember than very small numbers (such as 0.0005 per hour).
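The two figures above are simply reciprocals of one another, as the short calculation below illustrates (a constant failure rate is assumed, as in the flat region of the bathtub curve):
failure_rate = 0.0005        # failures per hour
mtbf = 1 / failure_rate      # mean time between failures, in hours
print(mtbf)                  # 2000.0
print(1 / mtbf)              # 0.0005, the failure rate again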
The MTBF is an important system parameter in systems where failure rate needs to be managed, in particular for safety systems. The MTBF appears frequently in the engineering design requirements, and governs frequency of required system maintenance and inspections. In special processes called renewal processes, where the time to recover from failure can be neglected and the likelihood of failure remains constant with respect to time, the failure rate is simply the multiplica
|
https://en.wikipedia.org/wiki/Zoo%20hypothesis
|
The zoo hypothesis speculates on the assumed behavior and existence of technologically advanced extraterrestrial life and the reasons they refrain from contacting Earth. It is one of many theoretical explanations for the Fermi paradox. The hypothesis states that alien life intentionally avoids communication with Earth to allow for natural evolution and sociocultural development, and avoiding interplanetary contamination, similar to people observing animals at a zoo. The hypothesis seeks to explain the apparent absence of extraterrestrial life despite its generally accepted plausibility and hence the reasonable expectation of its existence. A variant on the zoo hypothesis suggested by the former MIT Haystack Observatory scientist John Allen Ball is the "laboratory" hypothesis, in which humanity is being subjected to experiments, with Earth serving as a giant laboratory.
Aliens might, for example, choose to allow contact once the human species has passed certain technological, political, and/or ethical standards. Alternatively, aliens may withhold contact until humans force contact upon them, possibly by sending a spacecraft to an alien-inhabited planet. In this regard, reluctance to initiate contact could reflect a sensible desire to minimize risk. An alien society with advanced remote-sensing technologies may conclude that direct contact with neighbors confers added risks to itself without an added benefit. In the related laboratory hypothesis, the zoo hypothesis is extended such that the 'zoo keepers' are subjecting humanity to experiments, a hypothesis which Ball describes as "morbid" and "grotesque", overlooking the possibility that such experiments may be altruistic, i.e., designed to accelerate the pace of civilization to overcome a tendency for intelligent life to destroy itself, until a species is sufficiently developed to establish contact, as in the zoo hypothesis.
Assumptions
The zoo hypothesis assumes, first, that whenever the conditions are such that l
|
https://en.wikipedia.org/wiki/Cross%20section%20%28geometry%29
|
In geometry and science, a cross section is the non-empty intersection of a solid body in three-dimensional space with a plane, or the analog in higher-dimensional spaces. Cutting an object into slices creates many parallel cross-sections. The boundary of a cross-section in three-dimensional space that is parallel to two of the axes, that is, parallel to the plane determined by these axes, is sometimes referred to as a contour line; for example, if a plane cuts through mountains of a raised-relief map parallel to the ground, the result is a contour line in two-dimensional space showing points on the surface of the mountains of equal elevation.
In technical drawing a cross-section, being a projection of an object onto a plane that intersects it, is a common tool used to depict the internal arrangement of a 3-dimensional object in two dimensions. It is traditionally crosshatched with the style of crosshatching often indicating the types of materials being used.
With computed axial tomography, computers can construct cross-sections from x-ray data.
Definition
If a plane intersects a solid (a 3-dimensional object), then the region common to the plane and the solid is called a cross-section of the solid. A plane containing a cross-section of the solid may be referred to as a cutting plane.
The shape of the cross-section of a solid may depend upon the orientation of the cutting plane to the solid. For instance, while all the cross-sections of a ball are disks, the cross-sections of a cube depend on how the cutting plane is related to the cube. If the cutting plane is perpendicular to a line joining the centers of two opposite faces of the cube, the cross-section will be a square; however, if the cutting plane is perpendicular to a diagonal of the cube joining opposite vertices, the cross-section can be either a point, a triangle or a hexagon.
Plane sections
A related concept is that of a plane section, which is the curve of intersection of a plane with a surface. Th
|
https://en.wikipedia.org/wiki/Space%20diagonal
|
In geometry, a space diagonal (also interior diagonal or body diagonal) of a polyhedron is a line connecting two vertices that are not on the same face. Space diagonals contrast with face diagonals, which connect vertices on the same face (but not on the same edge) as each other.
For example, a pyramid has no space diagonals, while a cube or more generally a parallelepiped has four space diagonals.
Axial diagonal
An axial diagonal is a space diagonal that passes through the center of a polyhedron.
For example, in a cube with edge length a, all four space diagonals are axial diagonals, of common length a√3. More generally, a cuboid with edge lengths a, b, and c has all four space diagonals axial, with common length √(a² + b² + c²).
A regular octahedron has 3 axial diagonals, of length a√2, with edge length a.
A regular icosahedron has 6 axial diagonals of length a√(φ + 2), where φ is the golden ratio (1 + √5)/2.
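A short Python check of the cuboid formula above, with arbitrary edge lengths:
import math

def space_diagonal(a, b, c):
    # Length of the axial space diagonal of an a x b x c cuboid.
    return math.sqrt(a**2 + b**2 + c**2)

print(space_diagonal(1, 1, 1))    # cube: sqrt(3), about 1.732
print(space_diagonal(3, 4, 12))   # 13.0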
Space diagonals of magic cubes
A magic square is an arrangement of numbers in a square grid so that the sum of the numbers along every row, column, and diagonal is the same. Similarly, one may define a magic cube to be an arrangement of numbers in a cubical grid so that the sum of the numbers on the four space diagonals must be the same as the sum of the numbers in each row, each column, and each pillar.
See also
Distance
Face diagonal
Magic cube classes
Hypotenuse
Spacetime interval
References
John R. Hendricks, "The Pan-3-Agonal Magic Cube", Journal of Recreational Mathematics 5:1, 1972, pp. 51–54. First published mention of pan-3-agonals.
Hendricks, J. R., Magic Squares to Tesseracts by Computer, 1998, ISBN 0-9684700-0-9, page 49.
Heinz & Hendricks, Magic Square Lexicon: Illustrated, 2000, ISBN 0-9687985-0-0, pages 99, 165.
Guy, R. K., Unsolved Problems in Number Theory, 2nd ed., New York: Springer-Verlag, 1994, p. 173.
External links
de Winkel Magic Encyclopedia
Heinz - Basic cube parts
John Hendricks Hypercubes
Magic squares
Elementary geometry
|
https://en.wikipedia.org/wiki/The%20Adventure%20of%20the%20Dancing%20Men
|
The Adventure of the Dancing Men is a Sherlock Holmes story written by Sir Arthur Conan Doyle as one of 13 stories in the cycle published as The Return of Sherlock Holmes in 1905. It was first published in The Strand Magazine in the United Kingdom in December 1903, and in Collier's in the United States on 5 December 1903.
Doyle ranked "The Adventure of the Dancing Men" third in his list of his twelve favorite Holmes stories. This is one of only two Sherlock Holmes short stories where Holmes's client dies after seeking his help. Holmes's solution to the riddle of the dancing men rests on reasoning that closely resembles that of Legrand in Poe's "The Gold Bug."
The original title was "The Dancing Men," when it was published as a short story in The Strand Magazine in December 1903.
Plot
The story begins when Hilton Cubitt of Ridling Thorpe Manor in Norfolk visits Sherlock Holmes and gives him a piece of paper with a mysterious sequence of stick figures.
Cubitt explains to Holmes and Dr. Watson that he has recently married an American woman named Elsie Patrick. Before the wedding, she had asked her husband-to-be never to ask about her past, as she had had some "very disagreeable associations" in her life, although she said that there was nothing that she was personally ashamed of. Their marriage had been a happy one until the messages began to arrive, first mailed from the United States and then appearing in the garden.
The messages had made Elsie very afraid but she did not explain the reasons for her fear, and Cubitt insisted on honoring his promise not to ask about Elsie's life in the United States. Holmes examines all of the occurrences of the dancing figures, and they provide him with an important clue—he realizes that they form a substitution cipher and cracks the code by frequency analysis. The last of the messages causes Holmes to fear that the Cubitts are in immediate danger.
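Frequency analysis of this kind is easy to sketch in code; the ciphertext below is an arbitrary stand-in rather than the story's dancing-men figures:
from collections import Counter

ciphertext = "GSV JFRXP YILDM ULC QFNKH LEVI GSV OZAB WLT"
counts = Counter(c for c in ciphertext if c.isalpha())

# In a longer English text the most frequent symbol usually stands for E,
# followed by T, A, O, ...; Holmes likewise guesses that the most common
# dancing man represents E.
for symbol, n in counts.most_common(5):
    print(symbol, n)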
Holmes rushes to Riding Thorpe Manor and finds Cubitt dead of a bull
|
https://en.wikipedia.org/wiki/Tilth
|
Tilth is a physical condition of soil, especially in relation to its suitability for planting or growing a crop. Factors that determine tilth include the formation and stability of aggregated soil particles, moisture content, degree of aeration, soil biota, rate of water infiltration and drainage. Tilth can change rapidly, depending on environmental factors such as changes in moisture, tillage and soil amendments. The objective of tillage (mechanical manipulation of the soil) is to improve tilth, thereby increasing crop production; in the long term, however, conventional tillage, especially plowing, often has the opposite effect, causing the soil carbon sponge to oxidize, break down and become compacted.
Soil with good tilth is spongy with large pore spaces for air infiltration and water movement. Roots only grow where the soil tilth allows for adequate levels of soil oxygen. Such soil also holds a reasonable supply of water and nutrients.
Tillage, organic matter amendments, fertilization and irrigation can each improve tilth, but when used excessively, can have the opposite effect. Crop rotation and cover crops can rebuild the soil carbon sponge and positively impact tilth. A combined approach can produce the greatest improvement.
Aggregation
Good tilth shares a balanced relation between soil-aggregate tensile strength and friability, in which it has a stable mixture of aggregate soil particles that can be readily broken up by shallow non-abrasive tilling. A high tensile strength will result in large cemented clods of compacted soil with low friability. Proper management of agricultural soils can positively impact soil aggregation and improve tilth quality.
Aggregation is positively associated with tilth. With finer-textured soils, aggregates may in turn be made up of smaller aggregates. Aggregation implies substantial pores between individual aggregates.
Aggregation is important in the subsoil, the layer below tillage. Such aggregates involve larger (2- to 6
|
https://en.wikipedia.org/wiki/Privacy%20policy
|
A privacy policy is a statement or legal document (in privacy law) that discloses some or all of the ways a party gathers, uses, discloses, and manages a customer or client's data. Personal information can be anything that can be used to identify an individual, not limited to the person's name, address, date of birth, marital status, contact information, ID issue, and expiry date, financial records, credit information, medical history, where one travels, and intentions to acquire goods and services. In the case of a business, it is often a statement that declares a party's policy on how it collects, stores, and releases personal information it collects. It informs the client what specific information is collected, and whether it is kept confidential, shared with partners, or sold to other firms or enterprises. Privacy policies typically represent a broader, more generalized treatment, as opposed to data use statements, which tend to be more detailed and specific.
The exact contents of a certain privacy policy will depend upon the applicable law and may need to address requirements across geographical boundaries and legal jurisdictions. Most countries have own legislation and guidelines of who is covered, what information can be collected, and what it can be used for. In general, data protection laws in Europe cover the private sector, as well as the public sector. Their privacy laws apply not only to government operations but also to private enterprises and commercial transactions.
California Business and Professions Code, Internet Privacy Requirements (CalOPPA) mandate that websites collecting Personally Identifiable Information (PII) from California residents must conspicuously post their privacy policy. (See also Online Privacy Protection Act)
History
In 1968, the Council of Europe began to study the effects of technology on human rights, recognizing the new threats posed by computer technology that could link and transmit in ways not widely available before.
|
https://en.wikipedia.org/wiki/Contact%20list
|
A contact list is a collection of screen names. It is a commonplace feature of instant messaging, Email clients, online games and mobile phones. It has various trademarked and proprietary names in different contexts.
Contact list windows show screen names that represent other people. To communicate with someone on the list, the user can select a name and act upon it, for example to open a new e-mail editing session, instant message, or telephone call. In some programs, if one user's contact list shows a second user, the second user's list will also show the first. Contact lists for mobile operating systems are often shared among several mobile apps.
Some text message clients allow users to change their display name at will, while others only allow them to reformat their screen name (adding or removing spaces and capitalizing letters). Generally, this makes no difference other than how the name is displayed.
With most programs, the contact list can be minimized to keep it from getting in the way, and is accessed again by selecting its icon.
The style of the contact list is different with the different programs, but all contact lists have similar capabilities.
Such lists may be used to form social networks with more specific purposes. The list is not the network: to become a network, a list requires some additional information such as the status or category of the contact. Given this, contact networks for various purposes can be generated from the list. Salespeople have long maintained contact networks using a variety of means of contact including phone logs and notebooks. They do not confuse their list with their network, nor would they confuse a "sales contact" with a "friend" or person they had already worked with.
See also
Address Book
Contact manager
vCard
Instant messaging
Mobile phones
Text messaging
|
https://en.wikipedia.org/wiki/Gateway%20%28telecommunications%29
|
A gateway is a piece of networking hardware or software used in telecommunications networks that allows data to flow from one discrete network to another. Gateways are distinct from routers or switches in that they communicate using more than one protocol to connect multiple networks and can operate at any of the seven layers of the open systems interconnection model (OSI).
The term gateway can also loosely refer to a computer or computer program configured to perform the tasks of a gateway, such as a default gateway or router, and in the case of HTTP, gateway is also often used as a synonym for reverse proxy. It can also refer to a device installed in homes that combines router and modem functionality into one device, used by ISPs, also called a residential gateway.
Network gateway
A network gateway provides a connection between networks and contains devices, such as protocol translators, impedance matchers, rate converters, fault isolators, or signal translators. A network gateway requires the establishment of mutually acceptable administrative procedures between the networks using the gateway. Network gateways, known as protocol translation gateways or mapping gateways, can perform protocol conversions to connect networks with different network protocol technologies. For example, a network gateway connects an office or home intranet to the Internet. If an office or home computer user wants to load a web page, at least two network gateways are accessed—one to get from the office or home network to the Internet and one to get from the Internet to the computer that serves the web page.
On an Internet Protocol (IP) network, IP packets with a destination outside a given subnetwork are sent to the network gateway. For example, if a private network has a base IPv4 address of 192.168.1.0 and has a subnet mask of 255.255.255.0, then any data addressed to an IP address outside of 192.168.1.0–192.168.1.255 is sent to the network gateway. IPv6 networks work in a similar w
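The routing decision described above can be sketched with Python's standard ipaddress module; the subnet matches the example, while the gateway address is an arbitrary illustrative choice:
import ipaddress

subnet = ipaddress.ip_network('192.168.1.0/24')   # base 192.168.1.0, mask 255.255.255.0

def next_hop(destination, gateway='192.168.1.1'):
    # Deliver locally if the destination is inside the subnet, else hand it to the gateway.
    if ipaddress.ip_address(destination) in subnet:
        return 'local delivery'
    return 'forward to gateway ' + gateway

print(next_hop('192.168.1.37'))    # local delivery
print(next_hop('93.184.216.34'))   # forward to gateway 192.168.1.1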
|
https://en.wikipedia.org/wiki/Corecursion
|
In computer science, corecursion is a type of operation that is dual to recursion. Whereas recursion works analytically, starting on data further from a base case and breaking it down into smaller data and repeating until one reaches a base case, corecursion works synthetically, starting from a base case and building it up, iteratively producing data further removed from a base case. Put simply, corecursive algorithms use the data that they themselves produce, bit by bit, as it becomes available and is needed, to produce further bits of data. A similar but distinct concept is generative recursion, which may lack a definite "direction" inherent in corecursion and recursion.
Where recursion allows programs to operate on arbitrarily complex data, so long as they can be reduced to simple data (base cases), corecursion allows programs to produce arbitrarily complex and potentially infinite data structures, such as streams, so long as it can be produced from simple data (base cases) in a sequence of finite steps. Where recursion may not terminate, never reaching a base state, corecursion starts from a base state, and thus produces subsequent steps deterministically, though it may proceed indefinitely (and thus not terminate under strict evaluation), or it may consume more than it produces and thus become non-productive. Many functions that are traditionally analyzed as recursive can alternatively, and arguably more naturally, be interpreted as corecursive functions that are terminated at a given stage, for example recurrence relations such as the factorial.
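The factorial example can be written corecursively as an unbounded stream; the Python generator below is an illustrative sketch, relying on the lazy, demand-driven evaluation discussed next:
from itertools import islice

def factorials():
    # Corecursively produce 0!, 1!, 2!, ... starting from the base case.
    n, fact = 0, 1
    while True:
        yield fact       # emit the current value...
        n += 1
        fact *= n        # ...and build the next one from it

print(list(islice(factorials(), 8)))   # [1, 1, 2, 6, 24, 120, 720, 5040]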
Corecursion can produce both finite and infinite data structures as results, and may employ self-referential data structures. Corecursion is often used in conjunction with lazy evaluation, to produce only a finite subset of a potentially infinite structure (rather than trying to produce an entire infinite structure at once). Corecursion is a particularly important concept in functional programming, where corecursion an
|
https://en.wikipedia.org/wiki/Delegated%20Path%20Validation
|
Delegated Path Validation (DPV) is a method for offloading to a trusted server the work involved in validating a public key certificate.
Combining certificate information supplied by the DPV client with certificate path and revocation status information obtained by itself, a DPV server is able to apply complex validation policies that are prohibitive for each client to perform.
The requirements for DPV are described in RFC 3379.
See also
Delegated Path Discovery
Cryptographic protocols
|
https://en.wikipedia.org/wiki/Delegated%20Path%20Discovery
|
Delegated Path Discovery (DPD) is a method for querying a trusted server for information about a public key certificate.
DPD allows clients to obtain collated certificate information from a trusted DPD server. This information may then be used by the client to validate the subject certificate.
The requirements for DPD are described in RFC 3379.
See also
Delegated Path Validation
SCVP
Cryptographic protocols
|
https://en.wikipedia.org/wiki/Ogo%20%28handheld%20device%29
|
Ogo is a handheld electronic device which allows the user to communicate via instant messaging services, email, MMS and SMS text messages. The device works through GSM cellular networks and allows unlimited usage for a flat monthly fee. It supports AOL Instant Messenger, Yahoo! Messenger, and MSN Messenger. It was released in 2004.
Overview
Ogo uses the IXI-Connect OS. It features a clamshell design with a 12-bit depth color screen on the top half and a full QWERTY keyboard on the lower half. Navigation through the menus is accomplished primarily through the use of a directional pad located on the lower right hand of the device and alternately through buttons that directly access each of the devices features.
The Ogo is part of a family of devices produced by its overseas manufacturer, IXI, which showcase the "personal mobile gateway" concept, wherein the Ogo acts as a wireless gateway for other Bluetooth-enabled devices to access the Internet. Other devices in the family include pens and cameras. With support for AOL Instant Messenger, Yahoo! Messenger, and MSN Messenger, it is well equipped for instant messaging. The Ogo also supports POP3 e-mail, but little else. The Ogo is not a phone and has no voice features.
AT&T deliberately omitted the wireless gateway capabilities of the Ogo in all domestic advertising, possibly in a bid to keep the device from being used as a flat-rate wireless modem.
After the acquisition of AT&T Wireless by Cingular, the Ogo was no longer offered. Cingular discontinued its Ogo service on October 10, 2006.
The device is also marketed in Germany by 1&1, where the OGO is called a Pocket Web. The OGO can browse the web, send e-mail, sync with Outlook, and use instant messaging like the US-based OGO, but cannot play MP3s. It is also available in Austria through A1 and in Switzerland through the carrier Swisscom.
Technical data
Size
11.5 cm × 7.5 cm × 2.5 cm
Weight
162 g
Display
240×160 Pixel = 1
|
https://en.wikipedia.org/wiki/Oversampling
|
In signal processing, oversampling is the process of sampling a signal at a sampling frequency significantly higher than the Nyquist rate. Theoretically, a bandwidth-limited signal can be perfectly reconstructed if sampled at the Nyquist rate or above it. The Nyquist rate is defined as twice the bandwidth of the signal. Oversampling is capable of improving resolution and signal-to-noise ratio, and can be helpful in avoiding aliasing and phase distortion by relaxing anti-aliasing filter performance requirements.
A signal is said to be oversampled by a factor of N if it is sampled at N times the Nyquist rate.
Motivation
There are three main reasons for performing oversampling: to improve anti-aliasing performance, to increase resolution and to reduce noise.
Anti-aliasing
Oversampling can make it easier to realize analog anti-aliasing filters. Without oversampling, it is very difficult to implement filters with the sharp cutoff necessary to maximize use of the available bandwidth without exceeding the Nyquist limit. By increasing the bandwidth of the sampling system, design constraints for the anti-aliasing filter may be relaxed. Once sampled, the signal can be digitally filtered and downsampled to the desired sampling frequency. In modern integrated circuit technology, the digital filter associated with this downsampling is easier to implement than a comparable analog filter required by a non-oversampled system.
Resolution
In practice, oversampling is implemented in order to reduce cost and improve performance of an analog-to-digital converter (ADC) or digital-to-analog converter (DAC). When oversampling by a factor of N, the dynamic range also increases a factor of N because there are N times as many possible values for the sum. However, the signal-to-noise ratio (SNR) increases by √N, because summing up uncorrelated noise increases its amplitude by √N, while summing up a coherent signal increases its average by N. As a result, the SNR increases by √N.
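The √N gain is easy to reproduce numerically; in the following sketch a constant signal in white Gaussian noise is averaged over N samples, with all parameters chosen arbitrarily:
import numpy as np

rng = np.random.default_rng(0)
N = 16                                    # oversampling factor
samples = 1.0 + rng.normal(0.0, 0.5, size=(100_000, N))

single = samples[:, 0]                    # no oversampling
averaged = samples.mean(axis=1)           # average N samples per output value

snr = lambda x: x.mean() / x.std()
print(snr(single), snr(averaged), snr(averaged) / snr(single))   # ratio close to sqrt(16) = 4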
For instance,
|
https://en.wikipedia.org/wiki/Peter%20Borwein
|
Peter Benjamin Borwein (May 10, 1953 – August 23, 2020) was a Canadian mathematician and a professor at Simon Fraser University. Born in St. Andrews, Scotland, he is known as a co-author of the paper which presented the Bailey–Borwein–Plouffe algorithm (discovered by Simon Plouffe) for computing π.
First interest in mathematics
Borwein was born into a Jewish family. He became interested in number theory and classical analysis during his second year of university. He had not previously been interested in math, although his father was the head of the University of Western Ontario's mathematics department and his mother was associate dean of medicine there. Borwein and his two siblings majored in mathematics.
Academic career
After completing a Bachelor of Science in Honours Math at the University of Western Ontario in 1974, he went on to complete an MSc and Ph.D. at the University of British Columbia. He joined the Department of Mathematics at Dalhousie University. While he was there, he, his brother Jonathan Borwein and David H. Bailey of NASA wrote the 1989 paper that outlined and popularized a proof for computing one billion digits of π. The authors won the 1993 Chauvenet Prize and Merten M. Hasse Prize for this paper.
In 1993, he moved to Simon Fraser University, joining his brother Jonathan in establishing the Centre for Experimental and Constructive Mathematics (CECM) where he developed the Inverse Symbolic Calculator.
Research
In 1995, the Borweins collaborated with Yasumasa Kanada of the University of Tokyo to compute π to more than four billion digits.
Borwein has developed an algorithm that applies Chebyshev polynomials to the Dirichlet eta function to produce a very rapidly convergent series suitable for high precision numerical calculations, which he published on the occasion of the awarding of an honorary doctorate to his brother, Jonathan.
Peter Borwein also collaborated with NASA's David Bailey and the Université du Québec's Simon Plouffe to calc
|
https://en.wikipedia.org/wiki/Horner%27s%20syndrome
|
Horner's syndrome, also known as oculosympathetic paresis, is a combination of symptoms that arises when a group of nerves known as the sympathetic trunk is damaged. The signs and symptoms occur on the same side (ipsilateral) as it is a lesion of the sympathetic trunk. It is characterized by miosis (a constricted pupil), partial ptosis (a weak, droopy eyelid), apparent anhidrosis (decreased sweating), with apparent enophthalmos (inset eyeball).
The nerves of the sympathetic trunk arise from the spinal cord in the chest, and from there ascend to the neck and face. The nerves are part of the sympathetic nervous system, a division of the autonomic (or involuntary) nervous system. Once the syndrome has been recognized, medical imaging and response to particular eye drops may be required to identify the location of the problem and the underlying cause.
Signs and symptoms
Signs that are found in people with Horner's syndrome on the affected side of the face include the following:
ptosis (drooping of the upper eyelid)
anhidrosis (decreased sweating)
miosis (constriction of the pupil)
enophthalmos (sinking of the eyeball into the face)
inability to completely close or open the eyelid
facial flushing
headaches
loss of ciliospinal reflex
bloodshot conjunctiva, depending on the site of lesion.
unilateral straight hair (in congenital Horner's syndrome); the hair on the affected side may be straight in some cases.
heterochromia iridum (in congenital Horner's syndrome)
Interruption of sympathetic pathways leads to several implications. It inactivates the dilator muscle and thereby produces miosis. It inactivates the superior tarsal muscle which produces ptosis. It reduces sweat secretion in the face. Patients may have apparent enophthalmos (affected eye looks to be slightly sunken in) but this is not always the case. The ptosis from inactivation of the superior tarsal muscle causes the eye to appear sunken in, but when actually measured, enophthalmos is not present.
|
https://en.wikipedia.org/wiki/HAL%20SPARC64
|
SPARC64 is a microprocessor developed by HAL Computer Systems and fabricated by Fujitsu. It implements the SPARC V9 instruction set architecture (ISA), the first microprocessor to do so. SPARC64 was HAL's first microprocessor and was the first in the SPARC64 brand. It operates at 101 and 118 MHz. The SPARC64 was used exclusively by Fujitsu in their systems; the first systems, the Fujitsu HALstation Model 330 and Model 350 workstations, were formally announced in September 1995 and were introduced in October 1995, two years late. It was succeeded by the SPARC64 II (previously known as the SPARC64+) in 1996.
Description
The SPARC64 is a superscalar microprocessor that issues four instructions per cycle and executes them out of order. It is a multichip design, consisting of seven dies: a CPU die, MMU die, four CACHE dies and a CLOCK die.
CPU die
The CPU die contains the majority of the logic, all of the execution units and a level 0 (L0) instruction cache. The execution units consist of two integer units, address units, floating-point units (FPUs), and memory units. The FPU hardware consists of a fused multiply add (FMA) unit and a divide unit. However, the FMA instructions are truly fused (that is, with a single rounding) only as of the SPARC64 VI. The FMA unit is pipelined and has a four-cycle latency and a one-cycle throughput. The divide unit is not pipelined and has significantly longer latencies. The L0 instruction cache has a capacity of 4 KB, is direct-mapped and has a one-cycle latency.
The CPU die is connected to the CACHE and MMU dies by ten 64-bit buses. Four address buses carrying virtual addresses lead out to each cache die. Two data buses write data from the register file to the two CACHE dies that implement the data cache. Four buses, one from each CACHE die, deliver data or instructions to the CPU.
The CPU die contained 2.7 million transistors, has dimensions of 17.53 mm by 16.92 mm for an area of 297 mm2 and has 817 signal bumps and 1,695 power bumps.
MMU die
|
https://en.wikipedia.org/wiki/Diskless%20node
|
A diskless node (or diskless workstation) is a workstation or personal computer without disk drives, which employs network booting to load its operating system from a server. (A computer may also be said to act as a diskless node, if its disks are unused and network booting is used.)
Diskless nodes (or computers acting as such) are sometimes known as network computers or hybrid clients. Hybrid client may either just mean diskless node, or it may be used in a more particular sense to mean a diskless node which runs some, but not all, applications remotely, as in the thin client computing architecture.
Advantages of diskless nodes can include lower production cost, lower running costs, quieter operation, and manageability advantages (for example, centrally managed software installation).
In many universities and in some large organizations, PCs are used in a similar configuration, with some or all applications stored remotely but executed locally—again, for manageability reasons. However, these are not diskless nodes if they still boot from a local hard drive.
Distinction between diskless nodes and centralized computing
Diskless nodes process data, thus using their own CPU and RAM to run software, but do not store data persistently—that task is handed off to a server. This is distinct from thin clients, in which all significant processing happens remotely, on the server—the only software that runs on a thin client is the "thin" (i.e. relatively small and simple) client software, which handles simple input/output tasks to communicate with the user, such as drawing a dialog box on the display or waiting for user input.
A collective term encompassing both thin client computing, and its technological predecessor, text terminals (which are text-only), is centralized computing. Thin clients and text terminals can both require powerful central processing facilities in the servers, in order to perform all significant processing tasks for all of the clients.
Diskless no
|
https://en.wikipedia.org/wiki/Angle%20of%20parallelism
|
In hyperbolic geometry, the angle of parallelism Π(a) is the angle at the non-right angle vertex of a right hyperbolic triangle having two asymptotic parallel sides. The angle depends on the segment length a between the right angle and the vertex of the angle of parallelism.
Given a point not on a line, drop a perpendicular to the line from the point. Let a be the length of this perpendicular segment, and φ be the least angle such that the line drawn through the point at angle φ does not intersect the given line. Since two sides are asymptotically parallel,
Π(a) = φ.
There are five equivalent expressions that relate φ and a:
sin φ = sech a = 1/cosh a,
cos φ = tanh a,
tan φ = csch a = 1/sinh a,
tan(φ/2) = e^(−a),
φ = π/2 − gd(a),
where sinh, cosh, tanh, sech and csch are hyperbolic functions and gd is the Gudermannian function.
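These identities are easy to verify numerically; the following Python sketch checks them for one arbitrary value of a:
import math

a = 1.25                                 # an arbitrary distance
phi = 2 * math.atan(math.exp(-a))        # angle of parallelism from tan(phi/2) = exp(-a)

print(math.isclose(math.cos(phi), math.tanh(a)))         # cos(phi) = tanh a
print(math.isclose(math.sin(phi), 1 / math.cosh(a)))     # sin(phi) = sech a
print(math.isclose(math.tan(phi), 1 / math.sinh(a)))     # tan(phi) = csch a
gd = math.atan(math.sinh(a))                             # Gudermannian function gd(a)
print(math.isclose(phi, math.pi / 2 - gd))               # phi = pi/2 - gd(a)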
Construction
János Bolyai discovered a construction which gives the asymptotic parallel s to a line r passing through a point A not on r. Drop a perpendicular from A onto B on r. Choose any point C on r different from B. Erect a perpendicular t to r at C. Drop a perpendicular from A onto D on t. Then length DA is longer than CB, but shorter than CA. Draw a circle around C with radius equal to DA. It will intersect the segment AB at a point E. Then the angle BEC is independent of the length BC, depending only on AB; it is the angle of parallelism. Construct s through A at angle BEC from AB.
See Trigonometry of right triangles for the formulas used here.
History
The angle of parallelism was developed in 1840 in the German publication "Geometrische Untersuchungen zur Theorie der Parallellinien" by Nikolai Lobachevsky.
This publication became widely known in English after the Texas professor G. B. Halsted produced a translation in 1891. (Geometrical Researches on the Theory of Parallels)
The following passages define this pivotal concept in hyperbolic geometry:
The angle HAD between the parallel HA and the perpendicular AD is called the parallel angle (angle of parallelism) which we will here designate by Π(p) for AD = p.
Demonstration
In the Po
|
https://en.wikipedia.org/wiki/Artin%E2%80%93Mazur%20zeta%20function
|
In mathematics, the Artin–Mazur zeta function, named after Michael Artin and Barry Mazur, is a function that is used for studying the iterated functions that occur in dynamical systems and fractals.
It is defined from a given function f as the formal power series

ζ_f(z) = exp( Σ_{n=1}^∞ card(Fix(f^n)) · z^n / n ),

where Fix(f^n) is the set of fixed points of the n-th iterate f^n of the function f, and card(Fix(f^n)) is the number of fixed points (i.e. the cardinality of that set).
Note that the zeta function is defined only if the set of fixed points is finite for each n. This definition is formal in that the series does not always have a positive radius of convergence.
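As a concrete illustration (ours, not from the article), the sketch below computes a truncated expansion of ζ_f(z) for the doubling map x ↦ 2x mod 5 on a five-element set, using SymPy for the exponential of the formal series; the helper names are ours.

```python
# Truncated Artin-Mazur zeta series for f(x) = 2x mod 5 on {0, 1, 2, 3, 4}.
from sympy import Rational, exp, series, symbols

z = symbols('z')
X = range(5)
f = lambda x: (2 * x) % 5

def fixed_point_count(n):
    """Number of points x in X with f^n(x) = x."""
    count = 0
    for x in X:
        y = x
        for _ in range(n):
            y = f(y)
        if y == x:
            count += 1
    return count

N = 6  # truncation order
log_zeta = sum(Rational(fixed_point_count(n), n) * z**n for n in range(1, N))
print(series(exp(log_zeta), z, 0, N))
```

For this map, f^n fixes only 0 unless 4 divides n, in which case all five points are fixed, so the truncated series agrees with 1/((1 − z)(1 − z⁴)).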
The Artin–Mazur zeta function is invariant under topological conjugation.
The Milnor–Thurston theorem states that the Artin–Mazur zeta function of an interval map f is the inverse of the kneading determinant of f.
Analogues
The Artin–Mazur zeta function is formally similar to the local zeta function, when a diffeomorphism on a compact manifold replaces the Frobenius mapping for an algebraic variety over a finite field.
The Ihara zeta function of a graph can be interpreted as an example of the Artin–Mazur zeta function.
See also
Lefschetz number
Lefschetz zeta-function
References
Zeta and L-functions
Dynamical systems
Fixed points (mathematics)
|
https://en.wikipedia.org/wiki/Ihara%20zeta%20function
|
In mathematics, the Ihara zeta function is a zeta function associated with a finite graph. It closely resembles the Selberg zeta function, and is used to relate closed walks to the spectrum of the adjacency matrix. The Ihara zeta function was first defined by Yasutaka Ihara in the 1960s in the context of discrete subgroups of the two-by-two p-adic special linear group. Jean-Pierre Serre suggested in his book Trees that Ihara's original definition can be reinterpreted graph-theoretically. It was Toshikazu Sunada who put this suggestion into practice in 1985. As observed by Sunada, a regular graph is a Ramanujan graph if and only if its Ihara zeta function satisfies an analogue of the Riemann hypothesis.
Definition
The Ihara zeta function is defined as the analytic continuation of the infinite product

ζ_G(u) = ∏_p ( 1 − u^{L(p)} )^{−1},

where L(p) is the length of p.
The product in the definition is taken over all prime closed geodesics p of the graph G, where geodesics which differ by a cyclic rotation are considered equal. A closed geodesic p on G (known in graph theory as a "reduced closed walk"; it is not a graph geodesic) is a finite sequence of vertices p = (v_0, ..., v_{L(p)−1}) such that consecutive vertices are adjacent and v_i ≠ v_{i+2}, with indices taken modulo L(p).
The integer L(p) is the length of p. The closed geodesic p is prime if it cannot be obtained by repeating a closed geodesic m times, for an integer m > 1.
This graph-theoretic formulation is due to Sunada.
Ihara's formula
Ihara (and Sunada in the graph-theoretic setting) showed that for regular graphs the zeta function is a rational function.
If G is a k-regular graph with adjacency matrix A, then

ζ_G(u)^{−1} = (1 − u²)^{r−1} det(I − Au + (k − 1)u²I),

where r is the circuit rank of G. If G is connected and has n vertices, then r − 1 = n(k − 2)/2.
The Ihara zeta function is in fact always the reciprocal of a graph polynomial:

ζ_G(u) = 1 / det(I − Tu),

where T is Ki-ichiro Hashimoto's edge adjacency operator. Hyman Bass gave a determinant formula involving the adjacency operator.
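As a small illustration (ours, not from the article), the sketch below evaluates the right-hand side of Ihara's formula for the complete graph K4, which is 3-regular with 4 vertices, 6 edges, and circuit rank r = 3, using SymPy.

```python
# Reciprocal of the Ihara zeta function of K4 via Ihara's formula.
import sympy as sp

u = sp.symbols('u')
A = sp.Matrix(4, 4, lambda i, j: 0 if i == j else 1)   # adjacency matrix of K4
k = 3                                                  # K4 is 3-regular
n_vertices, n_edges = 4, 6
r = n_edges - n_vertices + 1                           # circuit rank

I = sp.eye(4)
zeta_inverse = (1 - u**2)**(r - 1) * (I - A * u + (k - 1) * u**2 * I).det()
print(sp.factor(zeta_inverse))
```

The determinant factors over the eigenvalues of A (3 once and −1 three times) as (1 − 3u + 2u²)(1 + u + 2u²)³, so the printed polynomial is (1 − u²)²(1 − u)(1 − 2u)(1 + u + 2u²)³ up to the ordering of factors chosen by SymPy.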
Applications
The Ihara zeta function plays an important role in the study of free groups, spectral graph theory, and dynamical systems, especially symbolic dynamics, where the Ihar
|
https://en.wikipedia.org/wiki/Mr.%20Do%21
|
is a 1982 maze game developed by Universal. It is the first arcade video game to be released as a conversion kit for other arcade machines; Taito published the conversion kit in Japan. The game was inspired by Namco's Dig Dug released earlier in 1982. Mr. Do! was a commercial success in Japan and North America, selling 30,000 arcade units in the US, and it was followed by several arcade sequels.
Gameplay
The object of Mr. Do! is to score as many points as possible by digging tunnels through the ground and collecting cherries. The title character, Mr. Do (a circus clown—except for the original Japanese version of the game, in which he is a snowman), is constantly chased by red dinosaur-like monsters called creeps, and the player loses a life if Mr. Do is caught by one. The game ends when the last life is lost.
Cherries are distributed throughout the level in groups of eight, and collecting all the cherries in one group without a pause awards bonus points. A level is complete either when all cherries are removed, all creeps are destroyed, "EXTRA" is spelled, or a diamond is found.
Mr. Do can defeat creeps by hitting them with his bouncing "power ball" or by dropping large apples on them. While the power ball is bouncing toward a creep, Mr. Do is defenseless. If the ball bounces into an area where there are no creeps to hit (such as behind a fallen apple), Mr. Do cannot use it again until he has retrieved it. When the power ball hits a creep, it then reforms in Mr. Do's hands after a delay that increases with each use.
Mr. Do or the creeps can push an apple off the edge of a vertical tunnel and crush one or more creeps. If an apple falls more than its own height, it breaks and disappears. Mr. Do can also be crushed by a falling apple, causing a loss of life.
Occasionally, the creeps transform briefly into more powerful multicolored monsters that can tunnel through the ground. If one of these digs through a cherry, it leaves fewer cherries for Mr. Do to collect. W
|
https://en.wikipedia.org/wiki/Hyphomicrobiales
|
The Hyphomicrobiales (synonym Rhizobiales) are an order of Gram-negative Alphaproteobacteria.
The rhizobia, which fix nitrogen and are symbiotic with plant roots, appear in several different families. The four families Nitrobacteraceae, Hyphomicrobiaceae, Phyllobacteriaceae, and Rhizobiaceae contain at least several genera of nitrogen-fixing, legume-nodulating, microsymbiotic bacteria. Examples are the genera Bradyrhizobium and Rhizobium. Species of the Methylocystaceae are methanotrophs; they use methanol (CH3OH) or methane (CH4) as their sole energy and carbon sources. Other important genera are the human pathogens Bartonella and Brucella, as well as Agrobacterium (useful in genetic engineering).
Taxonomy
Accepted families
Aestuariivirgaceae Li et al. 2019
Afifellaceae Hördt et al. 2020
Ahrensiaceae Hördt et al. 2020
Alsobacteraceae Sun et al. 2018
Amorphaceae Hördt et al. 2020
Ancalomicrobiaceae Dahal et al. 2018
Aurantimonadaceae Hördt et al. 2020
Bartonellaceae Gieszczykiewicz 1939 (Approved Lists 1980)
Beijerinckiaceae Garrity et al. 2006
Blastochloridaceae Hördt et al. 2020
Boseaceae Hördt et al. 2020
Breoghaniaceae Hördt et al. 2020
Brucellaceae Breed et al. 1957 (Approved Lists 1980)
Chelatococcaceae Dedysh et al. 2016
Cohaesibacteraceae Hwang and Cho 2008
Devosiaceae Hördt et al. 2020
Hyphomicrobiaceae Babudieri 1950 (Approved Lists 1980)
Kaistiaceae Hördt et al. 2020
Lichenibacteriaceae Pankratov et al. 2020
Lichenihabitantaceae Noh et al. 2019
Methylobacteriaceae Garrity et al. 2006
Methylocystaceae Bowman 2006
Nitrobacteraceae corrig. Buchanan 1917 (Approved Lists 1980)
Notoacmeibacteraceae Huang et al. 2017
Parvibaculaceae Hördt et al. 2020
Phreatobacteraceae Hördt et al. 2020
Phyllobacteriaceae Mergaert and Swings 2006
Pleomorphomonadaceae Hördt et al. 2020
Pseudoxanthobacteraceae Hördt et al. 2020
Rhabdaerophilaceae Ming et al. 2020
Rhizobiaceae Conn 1938 (Approved Lists 1980)
Rhodobiaceae Garrity et al. 2006
R
|
https://en.wikipedia.org/wiki/Memory%20management%20controller%20%28Nintendo%29
|
Multi-memory controllers or memory management controllers (MMC) are different kinds of special chips designed by various video game developers for use in Nintendo Entertainment System (NES) cartridges. These chips extend the capabilities of the original console and make it possible to create NES games with features the original console cannot offer alone. The basic NES hardware supports only 40KB of ROM in total (up to 32KB PRG and 8KB CHR), so only a single set of tile and sprite tables is possible. This limit was rapidly reached within the Famicom's first two years on the market, and game developers began requesting a way to expand the console's capabilities.
In the emulation community these chips are also known as mappers.
List of MMC chips
CNROM
Manufacturer: Nintendo
Games: Gradius, Ghostbusters, Gyruss, Arkanoid
CNROM is the earliest banking hardware introduced on the Famicom, appearing in early 1986. It consists of a single 7400 series discrete logic chip. CNROM supports a single fixed PRG bank and up to eight CHR banks for 96KB total ROM. Some third party variations supported additional capabilities. Many CNROM games store the game level data in the CHR ROM and blank the screen while reading it.
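A minimal emulator-style sketch (ours; register width, bus conflicts, and mirroring details are omitted) of the bank switching just described: the 32KB PRG ROM is fixed, and a CPU write anywhere in $8000–$FFFF latches which 8KB CHR bank the PPU reads its pattern data from.

```python
CHR_BANK_SIZE = 8 * 1024

class CNROM:
    def __init__(self, prg_rom, chr_rom):
        self.prg_rom = prg_rom      # fixed 32KB program ROM
        self.chr_rom = chr_rom      # up to 8 x 8KB character ROM banks
        self.chr_bank = 0           # currently selected CHR bank

    def cpu_write(self, addr, value):
        # Writes into the ROM area act as the bank-select register.
        if 0x8000 <= addr <= 0xFFFF:
            n_banks = len(self.chr_rom) // CHR_BANK_SIZE
            self.chr_bank = value % n_banks

    def ppu_read(self, addr):
        # Pattern-table reads ($0000-$1FFF) come from the selected CHR bank.
        return self.chr_rom[self.chr_bank * CHR_BANK_SIZE + (addr & 0x1FFF)]
```

Because the whole 8KB pattern area is swapped at once, a game that keeps level data in CHR ROM typically blanks the screen while switching banks to read it, as noted above.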
UNROM
Manufacturer: Nintendo
Games: Pro Wrestling, Ikari Warriors, Mega Man, Contra, Castlevania
Early NES mappers are composed of 7400 series discrete logic chips. UNROM appeared in late 1986. It supports a single fixed 16KB PRG bank, the rest of the PRG being switchable. Instead of a dedicated ROM chip to hold graphics data (called CHR by Nintendo), games using UNROM store graphics data in the program ROM and copy it to RAM on the cartridge at run time.
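For comparison, here is a similarly simplified sketch (ours) of UNROM-style banking: a switchable 16KB PRG window at $8000–$BFFF, the fixed last bank at $C000–$FFFF, and 8KB of CHR RAM that the program fills with tile data itself.

```python
PRG_BANK_SIZE = 16 * 1024

class UNROM:
    def __init__(self, prg_rom):
        self.prg_rom = prg_rom
        self.prg_bank = 0                    # switchable bank mapped at $8000
        self.chr_ram = bytearray(8 * 1024)   # CHR RAM, written by the game at run time

    def cpu_write(self, addr, value):
        if 0x8000 <= addr <= 0xFFFF:         # bank-select register
            self.prg_bank = value % (len(self.prg_rom) // PRG_BANK_SIZE)

    def cpu_read(self, addr):
        if 0x8000 <= addr <= 0xBFFF:         # switchable window
            return self.prg_rom[self.prg_bank * PRG_BANK_SIZE + (addr - 0x8000)]
        if 0xC000 <= addr <= 0xFFFF:         # fixed last 16KB bank
            return self.prg_rom[-PRG_BANK_SIZE + (addr - 0xC000)]

    def ppu_write(self, addr, value):
        self.chr_ram[addr & 0x1FFF] = value  # the game copies tiles here itself
```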
MMC1
Manufacturer: Nintendo
Games: The Legend of Zelda, Mega Man 2, Metroid, Godzilla: Monster of Monsters, Teenage Mutant Ninja Turtles, and more.
The MMC1 is Nintendo's first custom MMC integrated circuit to incorporate support for saved games and multi-directional scrolling configurations.
The chip comes in
|
https://en.wikipedia.org/wiki/248%20%28number%29
|
248 (two hundred [and] forty-eight) is the natural number following 247 and preceding 249.
Additionally, 248 is:
a nontotient.
a refactorable number.
an untouchable number.
palindromic in bases 13 (161₁₃), 30 (88₃₀), 61 (44₆₁) and 123 (22₁₂₃).
a Harshad number in bases 3, 4, 6, 7, 9, 11, 13 (and 18 other bases).
part of the 43-aliquot tree. The aliquot sequence starting at 248 is: 248, 232, 218, 112, 136, 134, 70, 74, 40, 50, 43, 1, 0 (see the quick check after this list).
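The refactorable property and the aliquot sequence are easy to verify with a few lines of Python (ours, not part of the article):

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

n = 248
assert n % len(divisors(n)) == 0   # refactorable: 248 has 8 divisors and 8 | 248

seq = [n]
while seq[-1] > 0:
    seq.append(sum(divisors(seq[-1])) - seq[-1])   # sum of proper divisors
print(seq)   # [248, 232, 218, 112, 136, 134, 70, 74, 40, 50, 43, 1, 0]
```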
The exceptional Lie group E8 has dimension 248.
References
Integers
|
https://en.wikipedia.org/wiki/Model-based%20testing
|
Model-based testing is an application of model-based design for designing and optionally also executing artifacts to perform software testing or system testing. Models can be used to represent the desired behavior of a system under test (SUT), or to represent testing strategies and a test environment.
A model describing a SUT is usually an abstract, partial presentation of the SUT's desired behavior.
Test cases derived from such a model are functional tests on the same level of abstraction as the model.
These test cases are collectively known as an abstract test suite.
An abstract test suite cannot be directly executed against an SUT because the suite is on the wrong level of abstraction.
An executable test suite needs to be derived from a corresponding abstract test suite.
The executable test suite can communicate directly with the system under test.
This is achieved by mapping the abstract test cases to concrete test cases suitable for execution. In some model-based testing environments, models contain enough information to generate executable test suites directly. In others, elements in the abstract test suite must be mapped to specific statements or method calls in the software to create a concrete test suite. This is called solving the "mapping problem".
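A minimal Python sketch (all names are illustrative and not taken from any particular tool) of this mapping step: an abstract test case produced from a model is run through an adapter that translates each abstract action into a concrete call on the SUT.

```python
# An abstract test case as it might be generated from a behavioral model.
abstract_test_case = ["login", "add_item", "checkout", "logout"]

class SutAdapter:
    """Solves the mapping problem: abstract actions -> concrete SUT calls."""
    def __init__(self, client):
        self.client = client

    def execute(self, step):
        mapping = {
            "login":    lambda: self.client.post("/session", user="test"),
            "add_item": lambda: self.client.post("/cart", item_id=1),
            "checkout": lambda: self.client.post("/order"),
            "logout":   lambda: self.client.delete("/session"),
        }
        return mapping[step]()

class FakeClient:
    """Stand-in for a real HTTP client talking to the system under test."""
    def post(self, path, **params):
        print("POST", path, params)
    def delete(self, path):
        print("DELETE", path)

def run(abstract_suite, adapter):
    for abstract_test in abstract_suite:
        for step in abstract_test:      # each abstract step becomes a concrete call
            adapter.execute(step)

run([abstract_test_case], SutAdapter(FakeClient()))
```

The same abstract suite can be reused against different SUT versions or interfaces by swapping the adapter, which is one practical reason for keeping the two levels of abstraction separate.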
In the case of online testing (see below), abstract test suites exist only conceptually but not as explicit artifacts.
Tests can be derived from models in different ways. Because testing is usually experimental and based on heuristics, there is no known single best approach for test derivation. It is common to consolidate all test derivation related parameters into a package that is often known as "test requirements", "test purpose" or even "use case(s)". This package can contain information about those parts of a model that should be focused on, or the conditions for finishing testing (test stopping criteria).
Because test suites are derived from models
|