https://en.wikipedia.org/wiki/Charge%20pump
|
A charge pump is a kind of DC-to-DC converter that uses capacitors for energetic charge storage to raise or lower voltage. Charge-pump circuits are capable of high efficiencies, sometimes as high as 90–95%, while being electrically simple circuits.
Description
Charge pumps use some form of switching device to control the connection of a supply voltage across a load through a capacitor. The cycle has two stages: in the first stage, a capacitor is connected across the supply and charged to the supply voltage. In the second stage, the circuit is reconfigured so that the capacitor is in series with the supply and the load. Ignoring losses, this doubles the voltage across the load, which sees the sum of the original supply and capacitor voltages. The pulsing nature of the higher-voltage switched output is often smoothed by the use of an output capacitor.
An external or secondary circuit drives the switching, typically at tens of kilohertz up to several megahertz. The high frequency minimizes the amount of capacitance required, as less charge needs to be stored and dumped in a shorter cycle.
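The behaviour described above can be sketched numerically. The following toy model steps an ideal two-phase doubler through its charge and pump phases and shows the output settling near twice the supply voltage; the component values, switch resistance and fixed-step integration are illustrative assumptions, not taken from any datasheet:

VIN = 5.0          # supply voltage [V]
CFLY = 1e-6        # flying capacitor [F]
COUT = 10e-6       # output (smoothing) capacitor [F]
RSW = 1.0          # on-resistance of each switch path [ohm]
RLOAD = 10e3       # load resistance [ohm]
FSW = 100e3        # switching frequency [Hz]
DT = 1e-8          # integration time step [s]

v_fly, v_out, t = 0.0, 0.0, 0.0
while t < 2e-3:                                   # simulate 2 ms
    charging = (int(t * FSW * 2) % 2 == 0)        # alternate phases every half period
    if charging:
        # Stage 1: flying capacitor across the supply; output capacitor feeds the load.
        i_fly = (VIN - v_fly) / RSW
        i_out = -v_out / RLOAD
    else:
        # Stage 2: supply and flying capacitor stacked in series drive the output capacitor.
        i_pump = (VIN + v_fly - v_out) / RSW
        i_fly = -i_pump
        i_out = i_pump - v_out / RLOAD
    v_fly += i_fly * DT / CFLY
    v_out += i_out * DT / COUT
    t += DT

print(f"output after 2 ms: {v_out:.2f} V (ideal doubler: {2 * VIN:.1f} V)")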
Charge pumps can double voltages, triple voltages, halve voltages, invert voltages, fractionally multiply or scale voltages, and generate arbitrary voltages by quickly alternating between modes, depending on the controller and circuit topology.
They are commonly used in low-power electronics (such as mobile phones) to raise and lower voltages for different parts of the circuitry - minimizing power consumption by controlling supply voltages carefully.
Terminology for PLL
The term charge pump is also commonly used in phase-locked loop (PLL) circuits even though there is no pumping action involved unlike in the circuit discussed above. A PLL charge pump is merely a bipolar switched current source. This means that it can output positive and negative current pulses into the loop filter of the PLL. It cannot produce higher or lower voltages than its power and ground supply levels.
Applica
|
https://en.wikipedia.org/wiki/British%20Plant%20Communities
|
British Plant Communities is a five-volume work, edited by John S. Rodwell and published by Cambridge University Press, which describes the plant communities which comprise the British National Vegetation Classification.
Its coverage includes all native vegetation communities and some artificial ones of Great Britain, excluding Northern Ireland. The series is a major contribution to plant conservation in Great Britain, and, as such, covers material appropriate for professionals and amateurs interested in the conservation of native plant communities. Each book begins with an introduction to the techniques used to survey the particular vegetations within its scope, discussing sampling, the type of data collected, organization of the data, and analysing the data. Each community is discussed with an overall emphasis on its ecology, so that users can consider the relationships of various plant communities to each other as a function of climatic or soil conditions, for example.
The five volumes are:
British Plant Communities Volume 1 – Woodlands and Scrub
This volume was first published in 1991 in hardback and in 1998 in paperback
British Plant Communities Volume 2 – Mires and Heaths
This volume was first published in 1991 in hardback and in 1998 in paperback
British Plant Communities Volume 3 – Grasslands and Montane Communities
This volume was first published in 1992 in hardback and in 1998 in paperback
British Plant Communities Volume 4 – Aquatic Communities, Swamps and Tall-herb Fens
This volume was first published in 1995 in hardback and in 1998 in paperback
British Plant Communities Volume 5 – Maritime Communities and Vegetation of Open Habitats
This volume was first published in 2000 in both hardback and paperback
Errors
The following is a list of errors found in the published books:
In Volume 1, on pages 38–39, the branches leading from couplets 22 and 23 should read W12, not W14
In volume 3, on p
|
https://en.wikipedia.org/wiki/Cross-presentation
|
Cross-presentation is the ability of certain professional antigen-presenting cells (mostly dendritic cells) to take up, process and present extracellular antigens with MHC class I molecules to CD8 T cells (cytotoxic T cells). Cross-priming, the result of this process, describes the stimulation of naive cytotoxic CD8+ T cells into activated cytotoxic CD8+ T cells. This process is necessary for immunity against most tumors and against viruses that infect dendritic cells and sabotage their presentation of virus antigens. Cross presentation is also required for the induction of cytotoxic immunity by vaccination with protein antigens, for example, tumour vaccination.
Cross-presentation is of particular importance, because it permits the presentation of exogenous antigens, which are normally presented by MHC II on the surface of dendritic cells, to also be presented through the MHC I pathway. The MHC I pathway is normally used to present endogenous antigens, such as those produced by a virus that has infected a particular cell. However, cross-presenting cells are able to use the MHC I pathway while remaining uninfected themselves, still triggering an adaptive immune response of activated cytotoxic CD8+ T cells against infected peripheral tissue cells.
History
The first evidence of cross-presentation was reported in 1976 by Michael J. Bevan after injection of grafted cells carrying foreign minor histocompatibility antigens. This resulted in a CD8+ T cell response, induced by antigen-presenting cells of the recipient, against the foreign cells. Because of this, Bevan inferred that these antigen-presenting cells must have engulfed and cross-presented the foreign cells to host cytotoxic CD8+ cells, thus triggering an adaptive immune response against the grafted tissue. This observation was termed "cross-priming".
Later, there had been much controversy about cross-presentation, which now is believed to have been due to particularities and limitations of some experimental systems used.
Cross
|
https://en.wikipedia.org/wiki/Evert%20Willem%20Beth
|
Evert Willem Beth (7 July 1908 – 12 April 1964) was a Dutch philosopher and logician, whose work principally concerned the foundations of mathematics. He was a member of the Significs Group.
Biography
Beth was born in Almelo, a small town in the eastern Netherlands. His father had studied mathematics and physics at the University of Amsterdam, where he had been awarded a PhD. Evert Beth studied the same subjects at Utrecht University, but then also studied philosophy and psychology. His 1935 PhD was in philosophy.
In 1946, he became professor of logic and the foundations of mathematics in Amsterdam. Apart from two brief interruptions – a stint in 1951 as a research assistant to Alfred Tarski, and in 1957 as a visiting professor at Johns Hopkins University – he held the post in Amsterdam continuously until his death in 1964. His was the first academic post in his country in logic and the foundations of mathematics, and during this time he contributed actively to international cooperation in establishing logic as an academic discipline.
In 1953 he became a member of the Royal Netherlands Academy of Arts and Sciences.
He died in Amsterdam.
Contributions to logic
Beth definability theorem
The Beth definability theorem states that for first-order logic a property (or function or constant) is implicitly definable if and only if it is explicitly definable. Further explanation is provided under Beth definability.
Semantic tableaux
Beth's most famous contribution to formal logic is semantic tableaux, which provide a decision procedure for propositional logic and a proof procedure for first-order logic. It is a semantic method, like Wittgenstein's truth tables or J. Alan Robinson's resolution, as opposed to the proof of theorems in a formal system, such as the axiomatic systems employed by Frege, Russell and Whitehead, and Hilbert, or even Gentzen's natural deduction. Semantic tableaux are an effective decision procedure for propositional logic, whereas they are only semi-effective for first
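For the propositional case, the method is short enough to sketch in code. The following minimal satisfiability checker (the tuple-based formula representation and the rule ordering are choices made for this sketch, not Beth's notation) expands a branch with the non-branching rules, splits it on the branching rules, and closes a branch when an atom appears with both signs; a formula is valid exactly when the tableau for its negation closes everywhere:

def is_literal(f):
    return f[0] == 'atom' or (f[0] == 'not' and f[1][0] == 'atom')

def is_satisfiable(branch):
    """True if the set of formulas 'branch' has an open (non-closing) tableau branch."""
    branch = list(branch)
    compound = next((f for f in branch if not is_literal(f)), None)
    if compound is None:
        # Fully expanded branch: open iff no atom occurs both plainly and negated.
        pos = {f[1] for f in branch if f[0] == 'atom'}
        neg = {f[1][1] for f in branch if f[0] == 'not'}
        return not (pos & neg)
    rest = [f for f in branch if f is not compound]
    op = compound[0]
    if op == 'and':                                   # alpha rule: extend the branch
        return is_satisfiable(rest + [compound[1], compound[2]])
    if op == 'or':                                    # beta rule: split the branch
        return (is_satisfiable(rest + [compound[1]]) or
                is_satisfiable(rest + [compound[2]]))
    if op == 'implies':                               # A -> B is treated as (not A) or B
        return is_satisfiable(rest + [('or', ('not', compound[1]), compound[2])])
    g = compound[1]                                   # op == 'not': push the negation inward
    if g[0] == 'not':
        return is_satisfiable(rest + [g[1]])
    if g[0] == 'and':
        return is_satisfiable(rest + [('or', ('not', g[1]), ('not', g[2]))])
    if g[0] == 'or':
        return is_satisfiable(rest + [('and', ('not', g[1]), ('not', g[2]))])
    return is_satisfiable(rest + [('and', g[1], ('not', g[2]))])   # case: not (A -> B)

def is_valid(f):
    """A formula is valid iff the tableau for its negation closes on every branch."""
    return not is_satisfiable([('not', f)])

p, q = ('atom', 'p'), ('atom', 'q')
print(is_valid(('implies', p, ('implies', q, p))))    # True: a tautology
print(is_valid(('implies', p, q)))                    # False: not valid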
|
https://en.wikipedia.org/wiki/Ponderomotive%20force
|
In physics, a ponderomotive force is a nonlinear force that a charged particle experiences in an inhomogeneous oscillating electromagnetic field. It causes the particle to move towards the area of the weaker field strength, rather than oscillating around an initial point as happens in a homogeneous field. This occurs because the particle sees a greater magnitude of force during the half of the oscillation period while it is in the area with the stronger field. The net force during its period in the weaker area in the second half of the oscillation does not offset the net force of the first half, and so over a complete cycle this makes the particle move towards the area of lesser force.
The ponderomotive force Fp is expressed by
$\mathbf{F}_{\mathrm{p}} = -\frac{e^2}{4 m \omega^2} \nabla\!\left(E^2\right),$
which has units of newtons (in SI units) and where e is the electrical charge of the particle, m is its mass, ω is the angular frequency of oscillation of the field, and E is the amplitude of the electric field. At low enough amplitudes the magnetic field exerts very little force.
This equation means that a charged particle in an inhomogeneous oscillating field not only oscillates at the frequency ω of the field, but is also accelerated by Fp toward the weak field direction. This is a rare case in which the direction of the force does not depend on whether the particle is positively or negatively charged.
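This drift can be checked with a short numerical experiment. The sketch below (dimensionless parameters and a simple symplectic-Euler integrator, chosen purely for illustration) integrates the equation of motion for a field whose amplitude grows linearly with x and shows the cycle-averaged position drifting toward the weak-field side, consistent with the formula above:

import numpy as np

e, m = 1.0, 1.0            # charge and mass (arbitrary units)
w = 50.0                   # angular frequency of the field
E0, L = 1.0, 10.0          # amplitude scale and gradient length scale

def E_amp(x):              # field amplitude, stronger for larger x
    return E0 * (1.0 + x / L)

dt = 2 * np.pi / w / 200   # 200 steps per oscillation period
x, v = 0.0, 0.0
positions = []
for step in range(200 * 200):                 # 200 field periods
    t = step * dt
    a = e * E_amp(x) * np.cos(w * t) / m
    v += a * dt                               # semi-implicit (symplectic) Euler step
    x += v * dt
    positions.append(x)

print("cycle-averaged position at the end:", np.mean(positions[-200:]))
print("predicted ponderomotive acceleration at x=0:",
      -e**2 * 2 * E0**2 / L / (4 * m**2 * w**2))
# Both are negative: the particle drifts toward the weaker field, as stated above.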
Etymology
The term ponderomotive comes from the Latin ponder- (meaning weight) and the English motive (having to do with motion).
Derivation
The derivation of the ponderomotive force expression proceeds as follows.
Consider a particle under the action of a non-uniform electric field oscillating at frequency ω in the x-direction. The equation of motion is given by
$m \ddot{x} = e E(x) \cos(\omega t),$
neglecting the effect of the associated oscillating magnetic field.
If the length scale of variation of $E(x)$ is large enough, then the particle trajectory can be divided into a slow time motion and a fast time motion:
$x = x_0 + x_1,$
where $x_0$ is the slow drift motion and $x_1$ represents the fast oscillation
|
https://en.wikipedia.org/wiki/Prodikeys
|
Prodikeys is a music and computer keyboard combination created by the Singaporean audio company Creative Technology. So far there have been three different versions of Prodikeys: Creative Prodikeys, Creative Prodikeys DM and Creative Prodikeys PC-MIDI. It has 37 mini-sized music keys under a detachable palm cover and comes with Prodikeys software. The MIDI keyboard can also be used as a MIDI controller for third-party MIDI software. It is compatible with Windows XP, 2000, and Linux, but is incompatible with Windows Vista, 7, 8, and Mac OS X.
Included Software:
EasyNotes
Lets users learn to play song melodies on their own, from the included song library or from downloaded MIDI files of favourite pop tunes from the Internet. EasyNotes supports the SEQ and MIDI music formats.
FunMix
Lets users create and record their own music with pre-arranged mixes and easily personalize their own ring tones or video soundtracks.
HotKeys Manager
Lets users customize the keyboard's hotkey functions for easy access to the software suite.
Mini Keyboard
Lets users explore more than a hundred different instrument sounds, including piano, flute, guitar and drums.
Prodikeys Launcher
Used to launch the software and the Product Tutorial for an interactive demo.
Prodikeys DM
The Prodikeys DM does not use USB, but rather has one single Mini-DIN connector for the PS/2 port and is therefore detected as a regular typing keyboard. The included Windows software communicates with the keyboard driver in order to send and receive MIDI data over the PS/2 line. This protocol has been partly reverse-engineered, making it possible to use the Prodikeys DM on a regular USB port using an Arduino microcontroller as an adaptor.
References
Singaporean brands
Keyboard instruments
Computer keyboards
|
https://en.wikipedia.org/wiki/MediaPortal
|
MediaPortal is an open-source media player and digital video recorder software project, often considered an alternative to Windows Media Center. It provides a 10-foot user interface for performing typical PVR/TiVo functionality, including playing, pausing, and recording live TV; playing DVDs, videos, and music; viewing pictures; and other functions. Plugins allow it to perform additional tasks, such as watching online video, listening to music from online services such as Last.fm, and launching other applications such as games. It interfaces with the hardware commonly found in HTPCs, such as TV tuners, infrared receivers, and LCD displays.
The MediaPortal source code was initially forked from XBMC (now Kodi), though it has been almost completely re-written since then. MediaPortal is designed specifically for Microsoft Windows, unlike most other open-source media center programs such as MythTV and Kodi, which are usually cross-platform.
Features
DirectX GUI
Video Hardware Acceleration
VMR / EVR on Windows Vista / 7
TV / Radio (DVB-S, DVB-S2, DVB-T, DVB-C, analog television), Common Interface, DVB radio, DVB EPG, Teletext, etc.
IPTV
Recording, pause and time shifting of TV and Radio broadcasts
Music player
Video/DVD player
Picture player
Internet Streams
Integrated Weather Forecasts
Built-in RSS reader
Metadata web scraping from TheTVDB and The Movie Database
Plug ins
Skins
Graphical User Interfaces
Control
MediaPortal can be controlled by any input device that is supported by the Windows operating system.
PC Remote
Keyboard / Mouse
Gamepad
Kinect
Wii Remote
Android / iOS/ WebOS / S60 handset devices
Television
MediaPortal uses its own TV-Server to allow setting up one central server with one or more TV cards. All TV related tasks are handled by the server and streamed over the network to one or more clients. Clients can then install the MediaPortal Client software and use the TV-Server to watch live or recorded TV, schedule recordings, view
|
https://en.wikipedia.org/wiki/Luby%20transform%20code
|
In computer science, Luby transform codes (LT codes) are the first class of practical fountain codes that are near-optimal erasure correcting codes. They were invented by Michael Luby in 1998 and published in 2002. Like some other fountain codes, LT codes depend on sparse bipartite graphs to trade reception overhead for encoding and decoding speed. The distinguishing characteristic of LT codes is in employing a particularly simple algorithm based on the exclusive or operation (XOR) to encode and decode the message.
LT codes are rateless because the encoding algorithm can in principle produce an infinite number of message packets (i.e., the percentage of packets that must be received to decode the message can be arbitrarily small). They are erasure correcting codes because they can be used to transmit digital data reliably on an erasure channel.
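A toy version of the XOR-based encode/decode loop is sketched below. It uses the ideal soliton degree distribution and tiny byte-string blocks purely for illustration (a practical LT implementation would use the robust soliton distribution and proper packet framing), and the decoder is the usual peeling process: recover any degree-one packet, then XOR the recovered block out of every packet that references it.

import random

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def ideal_soliton(k):
    # P(d=1) = 1/k and P(d=i) = 1/(i*(i-1)) for i = 2..k
    return [1.0 / k] + [1.0 / (i * (i - 1)) for i in range(2, k + 1)]

def encode(blocks, n_packets, rng):
    k = len(blocks)
    weights = ideal_soliton(k)
    packets = []
    for _ in range(n_packets):
        d = rng.choices(range(1, k + 1), weights=weights)[0]
        idx = rng.sample(range(k), d)
        payload = blocks[idx[0]]
        for i in idx[1:]:
            payload = xor(payload, blocks[i])
        packets.append((set(idx), payload))
    return packets

def decode(packets, k):
    recovered = [None] * k
    packets = [(set(idx), payload) for idx, payload in packets]
    progress = True
    while progress:
        progress = False
        for idx, payload in packets:
            if len(idx) == 1:                            # a degree-one packet
                i = next(iter(idx))
                if recovered[i] is None:
                    recovered[i] = payload
                    progress = True
        for j, (idx, payload) in enumerate(packets):     # peel recovered blocks out
            for i in list(idx):
                if len(idx) > 1 and recovered[i] is not None:
                    idx.discard(i)
                    payload = xor(payload, recovered[i])
            packets[j] = (idx, payload)
    return recovered

rng = random.Random(1)
blocks = [bytes([i]) * 4 for i in range(8)]              # 8 source blocks of 4 bytes
received = encode(blocks, 20, rng)                       # generous overhead for a toy k
print("recovered everything:", decode(received, len(blocks)) == blocks)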
The next generation beyond LT codes are Raptor codes (see for example IETF RFC 5053 or IETF RFC 6330), which have linear time encoding and decoding. Raptor codes are fundamentally based on LT codes, i.e., encoding for Raptor codes uses two encoding stages, where the second stage is LT encoding. Similarly, decoding with Raptor codes primarily relies upon LT decoding, but LT decoding is intermixed with more advanced decoding techniques. The RaptorQ code specified in IETF RFC 6330, which is the most advanced fountain code, has vastly superior decoding probabilities and performance compared to using only an LT code.
Why use an LT code?
The traditional scheme for transferring data across an erasure channel depends on continuous two-way communication.
The sender encodes and sends a packet of information.
The receiver attempts to decode the received packet. If it can be decoded, the receiver sends an acknowledgment back to the transmitter. Otherwise, the receiver asks the transmitter to send the packet again.
This two-way process continues until all the packets in the message have been transferred successfully.
Certain networks,
|
https://en.wikipedia.org/wiki/Nick%20translation
|
Nick translation (or head translation), developed in 1977 by Peter Rigby and Paul Berg, is a tagging technique in molecular biology in which DNA Polymerase I is used to replace some of the nucleotides of a DNA sequence with their labeled analogues, creating a tagged DNA sequence which can be used as a probe in fluorescent in situ hybridization (FISH) or blotting techniques. It can also be used for radiolabeling.
This process is called nick translation because the DNA to be processed is treated with DNAase to produce single-stranded "nicks". This is followed by replacement in nicked sites by DNA polymerase I, which elongates the 3' hydroxyl terminus, removing nucleotides by 5'-3' exonuclease activity, replacing them with dNTPs. To radioactively label a DNA fragment for use as a probe in blotting procedures, one of the incorporated nucleotides provided in the reaction is radiolabeled in the alpha phosphate position. Similarly, a fluorophore can be attached instead for fluorescent labelling, or an antigen for immunodetection. When DNA polymerase I eventually detaches from the DNA, it leaves another nick in the phosphate backbone. The nick has "translated" some distance depending on the processivity of the polymerase. This nick could be sealed by DNA ligase, or its 3' hydroxyl group could serve as the template for further DNA polymerase I activity. Proprietary enzyme mixes are available commercially to perform all steps in the procedure in a single incubation.
Nick translation could cause double-stranded DNA breaks, if DNA polymerase I encounters another nick on the opposite strand, resulting in two shorter fragments. This does not influence the performance of the labelled probe in in-situ hybridization.
References
Biochemistry detection methods
Genetics techniques
Laboratory techniques
Molecular biology techniques
|
https://en.wikipedia.org/wiki/Beijing%E2%80%93Tianjin%20intercity%20railway
|
The Beijing–Tianjin intercity railway is a Chinese high-speed railway that runs between Beijing and Tianjin. The line, designed for passenger traffic only, was built to accommodate trains traveling at a maximum speed of 350 km/h, and it has carried CRH high-speed trains running at speeds of up to 350 km/h since August 2018.
When the line opened on August 1, 2008, it set the record for the fastest conventional train service in the world by top speed, and reduced travel time between the two largest cities in northern China from 70 to 30 minutes. A second phase of construction extended the line from Tianjin to Yujiapu railway station in the Binhai New Area; this extension opened on September 20, 2015.
The line is projected to approach operating capacity in the first half of 2016. Anticipating this, a second parallel line, the Beijing–Binhai intercity railway, commenced construction on December 29, 2015. It will run from Beijing Sub-Center railway station to Binhai railway station via Baodi and Tianjin Binhai International Airport, along a new route to the northeast of the Beijing–Tianjin ICR.
Route and stations
Beijing to Tianjin
From Beijing South railway station, the line runs in a southeasterly direction, following the Beijing–Tianjin–Tanggu Expressway to Tianjin. It has three intermediate stations at Yizhuang (reserved station), Yongle (reserved station) and Wuqing. The service reaches its peak speed on the stretch between the two cities.
As an intercity line, it will provide train service only between the two metropolitan areas, unlike the Beijing–Shanghai High-Speed Railway which will continue beyond Shanghai.
The Beijing–Tianjin intercity railway is built mostly on viaducts, with the final stretch on an embankment. The elevated track ensures level tracks over uneven terrain and eliminates the trains having to slow down to safely navigate through at-grade road crossings.
Extension to Binhai New Area
Sometimes known as the Tianjin–Binhai intercity
|
https://en.wikipedia.org/wiki/Xinetd
|
In computer networking, xinetd (Extended Internet Service Daemon) is an open-source super-server daemon which runs on many Unix-like systems, and manages Internet-based connectivity.
It offers a more secure alternative to the older inetd ("the Internet daemon"), which most modern Linux distributions have deprecated.
Description
xinetd listens for incoming requests over a network and launches the appropriate service for that request. Requests are made using port numbers as identifiers and xinetd usually launches another daemon to handle the request. It can be used to start services with both privileged and non-privileged port numbers.
xinetd features access control mechanisms such as TCP Wrapper ACLs, extensive logging capabilities, and the ability to make services available based on time. It can place limits on the number of servers that the system can start, and has deployable defense mechanisms to protect against port scanners, among other things.
On some implementations of Mac OS X, this daemon starts and maintains various Internet-related services, including FTP and telnet. As an extended form of inetd, it offers enhanced security. It replaced inetd in Mac OS X v10.3, and subsequently launchd replaced it in Mac OS X v10.4. However, Apple has retained inetd for compatibility purposes.
Configuration
Configuration of xinetd resides in the default configuration file /etc/xinetd.conf, and configuration of the services it supports resides in configuration files stored in the /etc/xinetd.d directory. The configuration for each service usually includes a switch to control whether xinetd should enable or disable the service.
An example configuration file for the RFC 868 time server:
# default: off
# description: An RFC 868 time server. This protocol provides a
# site-independent, machine readable date and time. The Time service sends back
# to the originating source the time in seconds since midnight on January first
# 1900.
# This is the tcp version.
service
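A complete definition of the built-in TCP time service typically looks something like the following; the exact attribute set and defaults vary between distributions, so this should be read as an illustrative sketch rather than canonical file contents:

service time
{
        type            = INTERNAL
        id              = time-stream
        socket_type     = stream
        protocol        = tcp
        user            = root
        wait            = no
        disable         = yes
}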
|
https://en.wikipedia.org/wiki/Small%20subgroup%20confinement%20attack
|
In cryptography, a subgroup confinement attack, or small subgroup confinement attack, on a cryptographic method that operates in a large finite group is where an attacker attempts to compromise the method by forcing a key to be confined to an unexpectedly small subgroup of the desired group.
Several methods have been found to be vulnerable to subgroup confinement attack, including some forms or applications of Diffie–Hellman key exchange and DH-EKE.
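The mechanics are easy to see with deliberately tiny numbers. In the sketch below (the prime, generator and protocol framing are toy choices for illustration only), p − 1 has 3 as a small factor, so an attacker who substitutes an element of the order-3 subgroup for one party's Diffie–Hellman public value confines the computed shared secret to just three possible values, which can then be brute-forced immediately:

import random

p = 103                       # toy prime; p - 1 = 102 = 2 * 3 * 17
g = 5                         # a generator of the multiplicative group mod p

b = random.randrange(2, p - 1)          # honest party's secret exponent

# The attacker replaces the other party's public value with an element of order 3.
h = pow(g, (p - 1) // 3, p)             # h generates the subgroup {1, h, h^2}
assert pow(h, 3, p) == 1 and h != 1

shared = pow(h, b, p)                   # what the honest party believes is the shared secret

small_subgroup = {pow(h, i, p) for i in range(3)}
print("shared secret is confined to:", sorted(small_subgroup))
print("attacker can recover it by trying all of them:", shared in small_subgroup)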
References
Cryptographic attacks
Finite groups
|
https://en.wikipedia.org/wiki/Custom%20hardware%20attack
|
In cryptography, a custom hardware attack uses specifically designed application-specific integrated circuits (ASIC) to decipher encrypted messages.
Mounting a cryptographic brute force attack requires a large number of similar computations: typically trying one key, checking if the resulting decryption gives a meaningful answer, and then trying the next key if it does not. Computers can perform these calculations at a rate of millions per second, and thousands of computers can be harnessed together in a distributed computing network. But the number of computations required on average grows exponentially with the size of the key, and for many problems standard computers are not fast enough. On the other hand, many cryptographic algorithms lend themselves to fast implementation in hardware, i.e. networks of logic circuits, also known as gates. Integrated circuits (ICs) are constructed of these gates and often can execute cryptographic algorithms hundreds of times faster than a general purpose computer.
Each IC can contain large numbers of gates (hundreds of millions in 2005). Thus, the same decryption circuit, or cell, can be replicated thousands of times on one IC. The communications requirements for these ICs are very simple. Each must be initially loaded with a starting point in the key space and, in some situations, with a comparison test value (see known plaintext attack). Output consists of a signal that the IC has found an answer and the successful key.
Since ICs lend themselves to mass production, thousands or even millions of ICs can be applied to a single problem. The ICs themselves can be mounted in printed circuit boards. A standard board design can be used for different problems since the communication requirements for the chips are the same. Wafer-scale integration is another possibility. The primary limitations on this method are the cost of chip design, IC fabrication, floor space, electric power and thermal dissipation.
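The partitioning described above (load each cell with a starting point in the key space plus a known plaintext/ciphertext test value, and have it signal when it finds the key) can be mimicked in software. The sketch below uses a deliberately weak toy cipher, a 20-bit key, and Python worker processes standing in for hardware cells; all names and parameters are illustrative:

from multiprocessing import Pool

KEY_BITS = 20

def toy_encrypt(key, plaintext):
    """Repeating-XOR 'cipher' used only so the brute-force loop has something to test."""
    kb = key.to_bytes(3, "big")
    return bytes(p ^ kb[i % 3] for i, p in enumerate(plaintext))

KNOWN_PLAINTEXT = b"attack at dawn"
TRUE_KEY = 0x5A1C7
KNOWN_CIPHERTEXT = toy_encrypt(TRUE_KEY, KNOWN_PLAINTEXT)

def search_slice(bounds):
    """One 'cell': exhaustively try a contiguous slice of the key space."""
    start, stop = bounds
    for key in range(start, stop):
        if toy_encrypt(key, KNOWN_PLAINTEXT) == KNOWN_CIPHERTEXT:
            return key                       # signal that this cell found the answer
    return None

if __name__ == "__main__":
    n_workers = 8
    total = 1 << KEY_BITS
    step = total // n_workers
    slices = [(i * step, (i + 1) * step) for i in range(n_workers)]
    with Pool(n_workers) as pool:
        hits = [k for k in pool.map(search_slice, slices) if k is not None]
    print(f"recovered key: {hits[0]:#x}")    # matches TRUE_KEY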
History
The earliest c
|
https://en.wikipedia.org/wiki/Chen%E2%80%93Ho%20encoding
|
Chen–Ho encoding is a memory-efficient alternate system of binary encoding for decimal digits.
The traditional system of binary encoding for decimal digits, known as binary-coded decimal (BCD), uses four bits to encode each digit, resulting in significant wastage of binary data bandwidth (since four bits can store 16 states and are being used to store only 10), even when using packed BCD.
The encoding reduces the storage requirements of two decimal digits (100 states) from 8 to 7 bits, and those of three decimal digits (1000 states) from 12 to 10 bits using only simple Boolean transformations avoiding any complex arithmetic operations like a base conversion.
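To make the space saving concrete, the sketch below packs any three decimal digits into 10 bits with a Huffman-like prefix code that treats digits 0–7 as "small" (three significant bits) and 8–9 as "large" (one significant bit). The case layout is an illustrative choice, not the actual bit assignment of Chen–Ho encoding, and it uses branching rather than the pure Boolean gate transformations of the real scheme; it only demonstrates that all 1000 combinations fit.

def encode(a, b, c):
    """Pack three decimal digits (0-9 each) into a 10-character bit string."""
    digits = (a, b, c)
    large = [d >= 8 for d in digits]
    n_large = sum(large)
    if n_large == 0:                                  # 512 combinations
        return '0' + f'{a:03b}{b:03b}{c:03b}'
    if n_large == 1:                                  # 384 combinations
        pos = large.index(True)
        small = [d for d in digits if d < 8]
        return ('100', '101', '110')[pos] + f'{digits[pos] & 1:01b}' + \
               f'{small[0]:03b}{small[1]:03b}'
    if n_large == 2:                                  # 96 combinations
        pos = large.index(False)                      # position of the lone small digit
        big = [d for d in digits if d >= 8]
        return ('11100', '11101', '11110')[pos] + f'{digits[pos]:03b}' + \
               f'{big[0] & 1:01b}{big[1] & 1:01b}'
    return '1111100' + ''.join(f'{d & 1:01b}' for d in digits)   # 8 combinations

def decode(bits):
    """Inverse of encode."""
    if bits[0] == '0':
        return tuple(int(bits[i:i + 3], 2) for i in (1, 4, 7))
    if bits[:3] in ('100', '101', '110'):
        pos = ('100', '101', '110').index(bits[:3])
        big = 8 + int(bits[3])
        small = [int(bits[4:7], 2), int(bits[7:10], 2)]
        return tuple(small[:pos] + [big] + small[pos:])
    if bits[:5] in ('11100', '11101', '11110'):
        pos = ('11100', '11101', '11110').index(bits[:5])
        small = int(bits[5:8], 2)
        big = [8 + int(bits[8]), 8 + int(bits[9])]
        return tuple(big[:pos] + [small] + big[pos:])
    return tuple(8 + int(b) for b in bits[7:10])

# Every one of the 1000 digit triples round-trips through 10 bits.
assert all(decode(encode(a, b, c)) == (a, b, c)
           for a in range(10) for b in range(10) for c in range(10))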
History
In what appears to have been a multiple discovery, some of the concepts behind what later became known as Chen–Ho encoding were independently developed by Theodore M. Hertz in 1969 and by Tien Chi Chen (1928–) in 1971.
Hertz of Rockwell filed a patent for his encoding in 1969, which was granted in 1971.
Chen first discussed his ideas with Irving Tze Ho (1921–2003) in 1971. Chen and Ho were both working for IBM at the time, albeit in different locations. Chen also consulted with Frank Chin Tung to verify the results of his theories independently. IBM filed a patent in their name in 1973, which was granted in 1974. At least by 1973, Hertz's earlier work must have been known to them, as the patent cites his patent as prior art.
With input from Joseph D. Rutledge and John C. McPherson, the final version of the Chen–Ho encoding was circulated inside IBM in 1974 and published in 1975 in the journal Communications of the ACM. This version included several refinements, primarily related to the application of the encoding system. It constitutes a Huffman-like prefix code.
The encoding was referred to as Chen and Ho's scheme in 1975 and Chen's encoding in 1982, and it has been known as Chen–Ho encoding or the Chen–Ho algorithm since 2000. After having filed a patent for it in 2001, Michael F. Cowlishaw published a
|
https://en.wikipedia.org/wiki/Power-system%20automation
|
Power-system automation is the act of automatically controlling the power system via instrumentation and control devices. Substation automation refers to using data from Intelligent electronic devices (IED), control and automation capabilities within the substation, and control commands from remote users to control power-system devices.
Since full substation automation relies on substation integration, the terms are often used interchangeably. Power-system automation includes processes associated with generation and delivery of power. Monitoring and control of power delivery systems in the substation and on the pole reduce the occurrence of outages and shorten the duration of outages that do occur. The IEDs, communications protocols, and communications methods work together as a system to perform power-system automation.
The term “power system” describes the collection of devices that make up the physical systems that generate, transmit, and distribute power. The term “instrumentation and control (I&C) system” refers to the collection of devices that monitor, control, and protect the power system. Many power-system automation functions are monitored by SCADA.
Automation tasks
Power-system automation is composed of several tasks.
Data acquisition Data acquisition refers to acquiring, or collecting, data. This data is collected in the form of measured analog current or voltage values or the open or closed status of contact points. Acquired data can be used locally within the device collecting it, sent to another device in a substation, or sent from the substation to one or several databases for use by operators, engineers, planners, and administration.
Supervision Computer processes and personnel supervise, or monitor, the conditions and status of the power system using this acquired data. Operators and engineers monitor the information remotely on computer displays and graphical wall displays or locally, at the device, on front-panel displays and laptop computers.
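A minimal software sketch of the acquisition/supervision split described above is shown below; the device names, measurement points, and limits are invented for illustration, and a real system would acquire the data over a substation protocol such as DNP3 or IEC 61850 rather than call a local stub:

import random, time

def read_ied(name):
    """Stand-in for a protocol read from one IED: analog values and contact statuses."""
    return {
        "bus_voltage_kv": 110 + random.uniform(-8, 8),
        "feeder_current_a": 400 + random.uniform(-150, 150),
        "breaker_closed": random.random() > 0.05,
    }

LIMITS = {"bus_voltage_kv": (104, 116), "feeder_current_a": (0, 520)}

def supervise(name, points):
    """Supervision: compare acquired data against limits and raise alarms."""
    for point, value in points.items():
        if point in LIMITS:
            lo, hi = LIMITS[point]
            if not lo <= value <= hi:
                print(f"ALARM {name}.{point} = {value:.1f} outside [{lo}, {hi}]")
        elif point == "breaker_closed" and not value:
            print(f"ALARM {name}: breaker reported open")

for scan in range(3):                         # three acquisition scans, one per second
    for ied in ("substation_a/ied1", "substation_a/ied2"):
        supervise(ied, read_ied(ied))
    time.sleep(1)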
Control
|
https://en.wikipedia.org/wiki/Reduced%20product
|
In model theory, a branch of mathematical logic, and in algebra, the reduced product is a construction that generalizes both direct product and ultraproduct.
Let $\{S_i \mid i \in I\}$ be a nonempty family of structures of the same signature σ indexed by a set I, and let U be a proper filter on I. The domain of the reduced product is the quotient of the Cartesian product
$\prod_{i \in I} S_i$
by a certain equivalence relation ~: two elements $(a_i)$ and $(b_i)$ of the Cartesian product are equivalent if
$\{\, i \in I : a_i = b_i \,\} \in U.$
If U only contains I as an element, the equivalence relation is trivial, and the reduced product is just the direct product. If U is an ultrafilter, the reduced product is an ultraproduct.
Operations from σ are interpreted on the reduced product by applying the operation pointwise. Relations are interpreted by
$R\big((a^1_i)/{\sim}, \dots, (a^n_i)/{\sim}\big) \iff \{\, i \in I : R^{S_i}(a^1_i, \dots, a^n_i) \,\} \in U.$
For example, if each structure is a vector space, then the reduced product is a vector space with addition defined as (a + b)i = ai + bi and multiplication by a scalar c as (ca)i = c ai.
References
Model theory
|
https://en.wikipedia.org/wiki/Ipchains
|
Linux IP Firewalling Chains, normally called ipchains, is free software to control the packet filter or firewall capabilities in the 2.2 series of Linux kernels. It superseded ipfirewall (managed by ipfwadm command), but was replaced by iptables in the 2.4 series. Unlike iptables, ipchains is stateless.
It is a rewrite of Linux's previous IPv4 firewall, ipfirewall. This newer ipchains was required to manage the packet filter in Linux kernels starting with version 2.1.102 (which was a 2.2 development release). Patches are also available to add ipchains to 2.0 and earlier 2.1 series kernels. Improvements include larger maxima for packet counting, filtering for fragmented packets and a wider range of protocols, and the ability to match packets based on the inverse of a rule.
The ipchains suite also included some shell scripts for easier maintenance and to emulate the behavior of the old ipfwadm command.
The ipchains software was superseded by the iptables system in Linux kernel 2.4 and above, which was in turn superseded by the nftables system in 2014.
References
External links
IPChains HOWTO: on TLDP and on FAQs.org
Discontinued software
Firewall software
Free network-related software
Free security software
Free software programmed in C
Linux kernel features
Linux security software
|
https://en.wikipedia.org/wiki/Loop%20space
|
In topology, a branch of mathematics, the loop space ΩX of a pointed topological space X is the space of (based) loops in X, i.e. continuous pointed maps from the pointed circle S1 to X, equipped with the compact-open topology. Two loops can be multiplied by concatenation. With this operation, the loop space is an A∞-space. That is, the multiplication is homotopy-coherently associative.
The set of path components of ΩX, i.e. the set of based-homotopy equivalence classes of based loops in X, is a group, the fundamental group π1(X).
The iterated loop spaces of X are formed by applying Ω a number of times.
There is an analogous construction for topological spaces without basepoint. The free loop space of a topological space X is the space of maps from the circle S1 to X with the compact-open topology. The free loop space of X is often denoted by $LX$.
As a functor, the free loop space construction is right adjoint to cartesian product with the circle, while the loop space construction is right adjoint to the reduced suspension. This adjunction accounts for much of the importance of loop spaces in stable homotopy theory. (A related phenomenon in computer science is currying, where the cartesian product is adjoint to the hom functor.) Informally this is referred to as Eckmann–Hilton duality.
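In symbols, and writing $LX$ for the free loop space, $\Omega X$ for the based loop space, and $\mathrm{Map}$ and $\mathrm{Map}_*$ for the unbased and based mapping spaces (notation chosen for this sketch, valid for suitably nice, e.g. compactly generated, spaces), the two adjunctions read
$\mathrm{Map}(X \times S^1, Y) \;\cong\; \mathrm{Map}(X, LY), \qquad \mathrm{Map}_*(\Sigma X, Y) \;\cong\; \mathrm{Map}_*(X, \Omega Y),$
where $\Sigma X$ denotes the reduced suspension.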
Eckmann–Hilton duality
The loop space is dual to the suspension of the same space; this duality is sometimes called Eckmann–Hilton duality. The basic observation is that
$[\Sigma A, B] \cong [A, \Omega B],$
where $[A, B]$ is the set of homotopy classes of maps $A \to B$,
and $\Sigma A$ is the suspension of A, and $\cong$ denotes the natural homeomorphism. This homeomorphism is essentially that of currying, modulo the quotients needed to convert the products to reduced products.
In general, $[A, B]$ does not have a group structure for arbitrary spaces $A$ and $B$. However, it can be shown that $[\Sigma A, B]$ and $[A, \Omega B]$ do have natural group structures when $A$ and $B$ are pointed, and the aforementioned isomorphism is of those groups. Thus, setting $A = S^{k-1}$ (the sphere) gives the relationship
$\pi_k(B) \cong \pi_{k-1}(\Omega B).$
|
https://en.wikipedia.org/wiki/Sylvester%20matrix
|
In mathematics, a Sylvester matrix is a matrix associated to two univariate polynomials with coefficients in a field or a commutative ring. The entries of the Sylvester matrix of two polynomials are coefficients of the polynomials. The determinant of the Sylvester matrix of two polynomials is their resultant, which is zero when the two polynomials have a common root (in case of coefficients in a field) or a non-constant common divisor (in case of coefficients in an integral domain).
Sylvester matrices are named after James Joseph Sylvester.
Definition
Formally, let p and q be two nonzero polynomials, respectively of degree m and n. Thus:
$p(x) = p_0 + p_1 x + p_2 x^2 + \cdots + p_m x^m, \qquad q(x) = q_0 + q_1 x + q_2 x^2 + \cdots + q_n x^n.$
The Sylvester matrix associated to p and q is then the $(n+m) \times (n+m)$ matrix constructed as follows:
if n > 0, the first row is:
$\begin{pmatrix} p_m & p_{m-1} & \cdots & p_1 & p_0 & 0 & \cdots & 0 \end{pmatrix}.$
the second row is the first row, shifted one column to the right; the first element of the row is zero.
the following n − 2 rows are obtained the same way, shifting the coefficients one column to the right each time and setting the other entries in the row to be 0.
if m > 0 the (n + 1)th row is:
$\begin{pmatrix} q_n & q_{n-1} & \cdots & q_1 & q_0 & 0 & \cdots & 0 \end{pmatrix}.$
the following rows are obtained the same way as before.
Thus, if m = 4 and n = 3, the matrix is:
$S_{p,q} = \begin{pmatrix}
p_4 & p_3 & p_2 & p_1 & p_0 & 0 & 0 \\
0 & p_4 & p_3 & p_2 & p_1 & p_0 & 0 \\
0 & 0 & p_4 & p_3 & p_2 & p_1 & p_0 \\
q_3 & q_2 & q_1 & q_0 & 0 & 0 & 0 \\
0 & q_3 & q_2 & q_1 & q_0 & 0 & 0 \\
0 & 0 & q_3 & q_2 & q_1 & q_0 & 0 \\
0 & 0 & 0 & q_3 & q_2 & q_1 & q_0
\end{pmatrix}.$
If one of the degrees is zero (that is, the corresponding polynomial is a nonzero constant polynomial), then there are zero rows consisting of coefficients of the other polynomial, and the Sylvester matrix is a diagonal matrix of dimension the degree of the non-constant polynomial, with all diagonal coefficients equal to the constant polynomial. If m = n = 0, then the Sylvester matrix is the empty matrix with zero rows and zero columns.
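The construction is straightforward to code. The sketch below (using NumPy; the helper name and the example polynomials are illustrative choices) builds the matrix from coefficient lists given highest degree first and checks that the resultant vanishes for two polynomials sharing the root x = 1:

import numpy as np

def sylvester(p, q):
    """Sylvester matrix of polynomials given by coefficient lists,
    highest degree first: p = [p_m, ..., p_0], q = [q_n, ..., q_0]."""
    m, n = len(p) - 1, len(q) - 1
    size = m + n
    S = np.zeros((size, size))
    for i in range(n):                 # n shifted rows of p's coefficients
        S[i, i:i + m + 1] = p
    for i in range(m):                 # m shifted rows of q's coefficients
        S[n + i, i:i + n + 1] = q
    return S

# p = (x - 1)(x - 2) and q = (x - 1)(x + 5) share the root x = 1, so the
# resultant (the determinant of the Sylvester matrix) should be zero.
p = [1, -3, 2]
q = [1, 4, -5]
print(sylvester(p, q))
print(np.linalg.det(sylvester(p, q)))   # ~0 up to floating-point error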
A variant
The above defined Sylvester matrix appears in a Sylvester paper of 1840. In a paper of 1853, Sylvester introduced the following matrix, which is, up to a permutation of the rows, the Sylvester matrix of p and q, which are both considered as having degree max(m, n).
This is thus a -matrix containing pairs of rows. Assuming it is obtained as follows:
the first pair is:
|
https://en.wikipedia.org/wiki/EPPO%20Code
|
An EPPO code, formerly known as a Bayer code, is an encoded identifier that is used by the European and Mediterranean Plant Protection Organization (EPPO), in a system designed to uniquely identify organisms – namely plants, pests and pathogens – that are important to agriculture and crop protection. EPPO codes are a core component of a database of names, both scientific and vernacular. Although originally started by the Bayer Corporation, the official list of codes is now maintained by EPPO.
EPPO code database
All codes and their associated names are included in a database (EPPO Global Database). In total, there are over 93,500 species listed in the EPPO database, including:
55,000 species of plants (e.g. cultivated, wild plants and weeds)
27,000 species of animals (e.g. insects, mites, nematodes, rodents), biocontrol agents
11,500 microorganism species (e.g. bacteria, fungi, viruses, viroids and virus-like)
Plants are identified by a five-letter code, other organisms by a six-letter one. In many cases the codes are mnemonic abbreviations of the scientific name of the organism, derived from the first three or four letters of the genus and the first two letters of the species. For example, corn, or maize (Zea mays), was assigned the code "ZEAMA"; the code for potato late blight (Phytophthora infestans) is "PHYTIN". The unique and constant code for each organism provides a shorthand method of recording species. The EPPO code avoids many of the problems caused by revisions to scientific names and taxonomy which often result in different synonyms being in use for the same species. When the taxonomy changes, the EPPO code stays the same. The EPPO system is used by governmental organizations, conservation agencies, and researchers.
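The mnemonic pattern mentioned above can be sketched as a tiny heuristic. Real EPPO codes are assigned and curated by EPPO rather than generated by a formula, and many codes deviate from this pattern, so the function below merely reproduces the two examples given in the text:

def mnemonic_code(genus, species, genus_letters=3):
    """Heuristic EPPO-style code: first letters of the genus plus first two of the species."""
    return (genus[:genus_letters] + species[:2]).upper()

print(mnemonic_code("Zea", "mays"))                      # ZEAMA (5-letter plant code)
print(mnemonic_code("Phytophthora", "infestans", 4))     # PHYTIN (6-letter non-plant code)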
Example
External links
EPPO Global Database (lookup EPPO codes)
EPPO Data Services (download EPPO codes)
References
Taxonomy (biology)
Plant pathogens and diseases
|
https://en.wikipedia.org/wiki/Attacking%20Faulty%20Reasoning
|
Attacking Faulty Reasoning is a textbook on logical fallacies by T. Edward Damer that has been used for many years in a number of college courses on logic, critical thinking, argumentation, and philosophy. It explains 60 of the most commonly committed fallacies. Each of the fallacies is concisely defined and illustrated with several relevant examples. For each fallacy, the text gives suggestions about how to address or to "attack" the fallacy when it is encountered. The organization of the fallacies comes from the author’s own fallacy theory, which defines a fallacy as a violation of one of the five criteria of a good argument:
the argument must be structurally well-formed;
the premises must be relevant;
the premises must be acceptable;
the premises must be sufficient in number, weight, and kind;
there must be an effective rebuttal of challenges to the argument.
Each fallacy falls into at least one of Damer's five fallacy categories, which derive from the above criteria.
The five fallacy categories
Fallacies that violate the structural criterion. The structural criterion requires that one who argues for or against a position should use an argument that meets the fundamental structural requirements of a well-formed argument, using premises that are compatible with one another, that do not contradict the conclusion, that do not assume the truth of the conclusion, and that are not involved in any faulty deductive inference. Fallacies such as begging the question, denying the antecedent, or undistributed middle violate this criterion.
Fallacies that violate the relevance criterion. The relevance criterion requires that one who presents an argument for or against a position should attempt to set forth only reasons that are directly related to the merit of the position at issue. Fallacies such as appeal to tradition, appeal to force, or genetic fallacy fail to meet the argumentative demands of relevance.
Fallacies that violate the acceptability criterion. The a
|
https://en.wikipedia.org/wiki/Refugium%20%28population%20biology%29
|
In biology, a refugium (plural: refugia) is a location which supports an isolated or relict population of a once more widespread species. This isolation (allopatry) can be due to climatic changes, geography, or human activities such as deforestation and overhunting.
Present examples of refugial animal species are the mountain gorilla, isolated to specific mountains in central Africa, and the Australian sea lion, isolated to specific breeding beaches along the south-west coast of Australia, due to humans taking so many of their number as game. This resulting isolation, in many cases, can be seen as only a temporary state; however, some refugia may be longstanding, thereby having many endemic species, not found elsewhere, which survive as relict populations. The Indo-Pacific Warm Pool has been proposed to be a longstanding refugium, based on the discovery of the "living fossil" of a marine dinoflagellate called Dapsilidinium pastielsii, currently found in the Indo-Pacific Warm Pool only.
For plants, anthropogenic climate change propels scientific interest in identifying refugial species that were isolated into small or disjunct ranges during glacial episodes of the Pleistocene, yet whose ability to expand their ranges during the warmth of interglacial periods (such as the Holocene) was apparently limited or precluded by topographic, streamflow, or habitat barriers—or by the extinction of coevolved animal dispersers. The concern is that ongoing warming trends will expose them to extirpation or extinction in the decades ahead.
In anthropology, refugia often refers specifically to Last Glacial Maximum refugia, where some ancestral human populations may have been forced back to glacial refugia (similar small isolated pockets on the face of the continental ice sheets) during the last glacial period. Going from west to east, suggested examples include the Franco-Cantabrian region (in northern Iberia), the Italian and Balkan peninsulas, the Ukrainian LGM refuge, and the
|
https://en.wikipedia.org/wiki/Release%20engineering
|
Release engineering, frequently abbreviated as RE or as the clipped compound Releng, is a sub-discipline in software engineering concerned with the compilation, assembly, and delivery of source code into finished products or other software components. Associated with the software release life cycle, it was said by Boris Debic of Google Inc. that release engineering is to software engineering as manufacturing is to an industrial process:
Release engineering is the difference between manufacturing software in small teams or startups and manufacturing software in an industrial way that is repeatable, gives predictable results,
and scales well. These industrial style practices not only contribute to the growth of a company but also are
key factors in enabling growth.
The importance of release engineering in enabling growth of a technology company has been repeatedly argued by John O'Duinn and Bram Adams. While it is not the goal of release engineering to encumber software development with a process overlay, it is often seen as a sign of organizational and developmental maturity.
Modern release engineering is concerned with several aspects of software production:
Identifiability Being able to identify all of the source, tools, environment, and other components that make up a particular release.
Reproducibility The ability to integrate source, third party components, data, and deployment externals of a software system in order to guarantee operational stability.
Consistency The mission to provide a stable framework for development, deployment, audit and accountability for software components.
Agility The ongoing research into the repercussions of modern software engineering practices on productivity in the software cycle, e.g. continuous integration and push on green initiatives.
Release engineering is often the integration hub for more complex software development teams, sitting at the cross between development, product management, quality assurance
|
https://en.wikipedia.org/wiki/Method%20of%20moments%20%28statistics%29
|
In statistics, the method of moments is a method of estimation of population parameters. The same principle is used to derive higher moments like skewness and kurtosis.
It starts by expressing the population moments (i.e., the expected values of powers of the random variable under consideration) as functions of the parameters of interest. Those expressions are then set equal to the sample moments. The number of such equations is the same as the number of parameters to be estimated. Those equations are then solved for the parameters of interest. The solutions are estimates of those parameters.
The method of moments was introduced by Pafnuty Chebyshev in 1887 in the proof of the central limit theorem. The idea of matching empirical moments of a distribution to the population moments dates back at least to Pearson.
Method
Suppose that the problem is to estimate $k$ unknown parameters $\theta_1, \theta_2, \dots, \theta_k$ characterizing the distribution of the random variable $W$. Suppose the first $k$ moments of the true distribution (the "population moments") can be expressed as functions of the $\theta$s:
$\mu_j \equiv \operatorname{E}\!\left[W^j\right] = g_j(\theta_1, \theta_2, \dots, \theta_k), \qquad j = 1, 2, \dots, k.$
Suppose a sample of size $n$ is drawn, resulting in the values $w_1, \dots, w_n$. For $j = 1, \dots, k$, let
$\widehat{\mu}_j = \frac{1}{n} \sum_{i=1}^{n} w_i^j$
be the j-th sample moment, an estimate of $\mu_j$. The method of moments estimator for $\theta_1, \theta_2, \dots, \theta_k$ denoted by $\widehat{\theta}_1, \widehat{\theta}_2, \dots, \widehat{\theta}_k$ is defined to be the solution (if one exists) to the equations:
$\widehat{\mu}_j = g_j(\widehat{\theta}_1, \widehat{\theta}_2, \dots, \widehat{\theta}_k), \qquad j = 1, 2, \dots, k.$
The method described here for single random variables generalizes in an obvious manner to multiple random variables leading to multiple choices for moments to be used. Different choices generally lead to different solutions [5], [6].
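As a concrete worked example (a sketch using NumPy and the gamma distribution; the parameter values, sample size, and the use of the mean and variance in place of the first two raw moments are illustrative choices), solving the two moment equations mean = kθ and variance = kθ² for a gamma(k, θ) sample gives θ̂ = var/mean and k̂ = mean²/var:

import numpy as np

rng = np.random.default_rng(0)
k_true, theta_true = 3.0, 2.0
sample = rng.gamma(shape=k_true, scale=theta_true, size=100_000)

mean = sample.mean()                 # first sample moment
var = sample.var()                   # second central sample moment

theta_hat = var / mean
k_hat = mean ** 2 / var
print(f"k_hat = {k_hat:.3f}, theta_hat = {theta_hat:.3f}")   # close to 3 and 2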
Advantages and disadvantages
The method of moments is fairly simple and yields consistent estimators (under very weak assumptions), though these estimators are often biased.
It is an alternative to the method of maximum likelihood.
However, in some cases the likelihood equations may be intractable without computers, whereas the method-of-moments estimators can be computed much more quickly and easily. Due to easy computability, method-of-momen
|
https://en.wikipedia.org/wiki/Interval%20arithmetic
|
[Figure: Tolerance function (turquoise) and interval-valued approximation (red)]
Interval arithmetic (also known as interval mathematics, interval analysis, or interval computation) is a mathematical technique used to mitigate rounding and measurement errors in mathematical computation by computing function bounds. Numerical methods involving interval arithmetic can guarantee relatively reliable and mathematically correct results. Instead of representing a value as a single number, interval arithmetic or interval mathematics represents each value as a range of possibilities.
Mathematically, instead of working with an uncertain real-valued variable $x$, interval arithmetic works with an interval $[a, b]$ that defines the range of values that $x$ can have. In other words, any value of the variable $x$ lies in the closed interval between $a$ and $b$. A function $f$, when applied to $x$, produces an interval $f([a, b])$ which includes all the possible values for $f(x)$ for all $x \in [a, b]$.
Interval arithmetic is suitable for a variety of purposes; the most common use is in scientific works, particularly when the calculations are handled by software, where it is used to keep track of rounding errors in calculations and of uncertainties in the knowledge of the exact values of physical and technical parameters. The latter often arise from measurement errors and tolerances for components or due to limits on computational accuracy. Interval arithmetic also helps find guaranteed solutions to equations (such as differential equations) and optimization problems.
Introduction
The main objective of interval arithmetic is to provide a simple way of calculating upper and lower bounds of a function's range in one or more variables. These endpoints are not necessarily the true supremum or infimum of a range since the precise calculation of those values can be difficult or impossible; the bounds only need to contain the function's range as a subset.
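A minimal interval type makes both points concrete: the arithmetic below returns valid enclosures, and the example shows a bound that contains, but overestimates, the true range. This is a sketch only; production libraries also round endpoints outward so that floating-point error stays inside the bounds.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1, 2)
# Evaluating f(x) = x*x - 2x naively over [1, 2]:
print(x * x - Interval(2, 2) * x)    # [-3, 2], which contains the true range [-1, 0]
# The bound is valid but not tight: exactly the "subset" point made above.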
This treatment is typically limite
|
https://en.wikipedia.org/wiki/Our%20World%20%281967%20TV%20program%29
|
Our World was the first live multinational multi-satellite television production. National broadcasters from fourteen countries around the world, coordinated by the European Broadcasting Union (EBU), participated in the program. The two-hour event, which was broadcast on Sunday 25 June 1967 in twenty-four countries, had an estimated audience of 400 to 700 million people, the largest television audience up to that date. Four communications satellites were used to provide a worldwide coverage, which was a technological milestone in television broadcasting.
Creative artists, including opera singer Heather Harper, film director Franco Zeffirelli, conductor Leonard Bernstein, sculptor Alexander Calder and painter Joan Miró were invited to perform or appear in separate live segments, each of them produced by one of the participant broadcasters. The most famous segment is one from the United Kingdom starring the Beatles performing their song "All You Need Is Love" for the first time.
Planning
The project was conceived by British Broadcasting Corporation (BBC) producer Aubrey Singer. Due to the magnitude of the production, its coordination was transferred to the European Broadcasting Union (EBU), with Singer as the project's head. Two communications satellites in geosynchronous orbit over the Atlantic Ocean (Intelsat I, known as "Early Bird", and Intelsat II F-3, "Canary Bird"), two over the Pacific Ocean (Intelsat II F-2, "Lani Bird", and NASA's ATS-1) and nine ground stations, in addition to EBU's Eurovision point-to-point communications network, all monitored by technical and production teams in forty-three control rooms, were used to link North America, Europe, Tunisia, Japan and Australia in real time.
The master control room for the broadcast was the TC1 studio control room at the BBC Television Centre in London. Contributions from North America, Japan and Australia were routed to London by the CBS Switching Center in New York (which was rented for the purpose),
|
https://en.wikipedia.org/wiki/Patch%20Tuesday
|
Patch Tuesday (also known as Update Tuesday) is an unofficial term used to refer to when Microsoft, Adobe, Oracle and others regularly release software patches for their software products. It is widely referred to in this way by the industry. Microsoft formalized Patch Tuesday in October 2003. Patch Tuesday is known within Microsoft also as the "B" release, to distinguish it from the "C" and "D" releases that occur in the third and fourth weeks of the month, respectively.
Patch Tuesday occurs on the second Tuesday of each month in North America. Critical security updates are occasionally released outside of the normal Patch Tuesday cycle; these are known as "Out-of-band" releases. As far as the integrated Windows Update (WU) function is concerned, Patch Tuesday begins at 10:00 a.m. Pacific Time. Vulnerability information is immediately available in the Security Update Guide. The updates show up in Download Center before they are added to WU, and the KB articles are unlocked later.
Daily updates consist of malware database refreshes for Microsoft Defender and Microsoft Security Essentials; these updates are not part of the normal Patch Tuesday release cycle.
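The "second Tuesday" rule is easy to compute with the standard library; the small utility below (a sketch that ignores out-of-band releases and time zones) returns the Patch Tuesday date for a given month:

import calendar

def patch_tuesday(year, month):
    cal = calendar.Calendar()
    tuesdays = [d for d in cal.itermonthdates(year, month)
                if d.weekday() == calendar.TUESDAY and d.month == month]
    return tuesdays[1]                       # the second Tuesday of the month

print(patch_tuesday(2003, 10))               # 2003-10-14, the first formalized Patch Tuesday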
History
Starting with Windows 98, Microsoft included Windows Update, which once installed and executed would check for patches to Windows and its components, which Microsoft would release intermittently. With the release of Microsoft Update, this system also checks for updates for other Microsoft products, such as Microsoft Office, Visual Studio and SQL Server.
Earlier versions of Windows Update suffered from two problems:
Less experienced users often remained unaware of Windows Update and did not install it. Microsoft countered this issue in Windows ME with the Automatic Updates component, which displayed availability of updates, with the option of automatic installation.
Customers with multiple copies of Windows, such as corporate users, not only had to update every Windows deployment in the company but
|
https://en.wikipedia.org/wiki/King%27s%20Knight
|
King's Knight is a scrolling shooter video game developed and published by Square for the Nintendo Entertainment System and MSX. The game was released in Japan on September 18, 1986 and in North America in 1989. It was later re-released for the Wii's Virtual Console in Japan on November 27, 2007 and in North America on March 24, 2008. This would be followed by a release on the Virtual Console in Japan on February 4, 2015, for 3DS and July 6, 2016, for Wii U.
The game became Square's first North American release under their Redmond subsidiary Squaresoft, and their first release as an independent company. The 1986 release's title screen credits Workss for programming. King's Knight saw a second release in 1987 on the NEC PC-8801mkII SR and the Sharp X1. These versions of the game were retitled King's Knight Special and released exclusively in Japan. It was the first game designed by Hironobu Sakaguchi for the Famicom. Nobuo Uematsu provided the musical score for King's Knight. It was Uematsu's third work of video game music composition.
Plot
King's Knight follows a basic storyline similar to many NES-era role-playing video games: Princess Claire of Olthea has been kidnapped in the Kingdom of Izander, and the player must choose one of the four heroes (the knight/warrior "Ray Jack", the wizard "Kaliva", the monster/gigant "Barusa" and the (kid) thief "Toby") to train and set forth to attack Gargatua Castle, defeat the evil dragon Tolfida and rescue the princess.
Gameplay
King's Knight is a vertically scrolling shooter, where the main objective is to dodge or destroy all onscreen enemies and obstacles. Various items, however, add depth to the game. As any character, the player can collect various power-ups to increase a character's level (maximum of twenty levels per character): as many as seven Jump Increases, seven Speed Increases, three Weapon Increases, and three Shield Increases. There are also Life Ups, which are collected to increase the character's life meter. There are
|
https://en.wikipedia.org/wiki/Epitope%20mapping
|
In immunology, epitope mapping is the process of experimentally identifying the binding site, or epitope, of an antibody on its target antigen (usually, on a protein). Identification and characterization of antibody binding sites aid in the discovery and development of new therapeutics, vaccines, and diagnostics. Epitope characterization can also help elucidate the binding mechanism of an antibody and can strengthen intellectual property (patent) protection. Experimental epitope mapping data can be incorporated into robust algorithms to facilitate in silico prediction of B-cell epitopes based on sequence and/or structural data.
Epitopes are generally divided into two classes: linear and conformational/discontinuous. Linear epitopes are formed by a continuous sequence of amino acids in a protein. Conformational epitopes are formed by amino acids that are nearby in the folded 3D structure but distant in the protein sequence. Note that conformational epitopes can include some linear segments. B-cell epitope mapping studies suggest that most interactions between antigens and antibodies, particularly autoantibodies and protective antibodies (e.g., in vaccines), rely on binding to discontinuous epitopes.
Importance for antibody characterization
By providing information on mechanism of action, epitope mapping is a critical component in therapeutic monoclonal antibody (mAb) development. Epitope mapping can reveal how a mAb exerts its functional effects - for instance, by blocking the binding of a ligand or by trapping a protein in a non-functional state. Many therapeutic mAbs target conformational epitopes that are only present when the protein is in its native (properly folded) state, which can make epitope mapping challenging. Epitope mapping has been crucial to the development of vaccines against prevalent or deadly viral pathogens, such as chikungunya, dengue, Ebola, and Zika viruses, by determining the antigenic elements (epitopes) that confer long-lasting
|
https://en.wikipedia.org/wiki/Cosmic%20dust
|
Cosmic dust, also called extraterrestrial dust, space dust, or star dust, is dust that occurs in outer space or has fallen onto Earth. Most cosmic dust particles measure between a few molecules and about 0.1 mm (100 micrometers) across, such as micrometeoroids. Larger particles are called meteoroids. Cosmic dust can be further distinguished by its astronomical location: intergalactic dust, interstellar dust, interplanetary dust (as in the zodiacal cloud), and circumplanetary dust (as in a planetary ring). There are several methods to obtain space dust measurement.
In the Solar System, interplanetary dust causes the zodiacal light. Solar System dust includes comet dust, planetary dust (like from Mars), asteroidal dust, dust from the Kuiper belt, and interstellar dust passing through the Solar System. Thousands of tons of cosmic dust are estimated to reach Earth's surface every year, with most grains having a mass between 10^−16 kg (0.1 pg) and 10^−4 kg (0.1 g). The density of the dust cloud through which the Earth is traveling is approximately 10^−6 dust grains/m^3.
Cosmic dust contains some complex organic compounds (amorphous organic solids with a mixed aromatic–aliphatic structure) that could be created naturally, and rapidly, by stars. A smaller fraction of dust in space is "stardust" consisting of larger refractory minerals that condensed as matter left by stars.
Interstellar dust particles were collected by the Stardust spacecraft and samples were returned to Earth in 2006.
Study and importance
Cosmic dust was once solely an annoyance to astronomers, as it obscured objects they wished to observe. When infrared astronomy began, dust particles were found to be significant and vital components of astrophysical processes. Their analysis can reveal information about phenomena like the formation of the Solar System. For example, cosmic dust can drive the mass loss when a star is nearing the end of its life, play a part in the early stages of star formation, and form planets. In the Solar System,
|
https://en.wikipedia.org/wiki/Airport%20problem
|
In mathematics and especially game theory, the airport problem is a type of fair division problem in which it is decided how to distribute the cost of an airport runway among different players who need runways of different lengths. The problem was introduced by S. C. Littlechild and G. Owen in 1973. Their proposed solution is:
Divide the cost of providing the minimum level of required facility for the smallest type of aircraft equally among the number of landings of all aircraft
Divide the incremental cost of providing the minimum level of required facility for the second smallest type of aircraft (above the cost of the smallest type) equally among the number of landings of all but the smallest type of aircraft. Continue thus until finally the incremental cost of the largest type of aircraft is divided equally among the number of landings made by the largest aircraft type.
The authors note that the resulting set of landing charges is the Shapley value for an appropriately defined game.
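As a minimal sketch of the rule just described (the cost and landing figures below are illustrative assumptions, not taken from the literature), each cost increment is split equally among the aircraft types that need it:

# Sketch of the Littlechild–Owen sequential charging rule described above.
def airport_charges(costs, landings):
    """Per-landing charge for each aircraft type (costs sorted ascending)."""
    charges = [0.0] * len(costs)
    prev_cost = 0.0
    for i, cost in enumerate(costs):
        increment = cost - prev_cost           # incremental runway cost
        users = sum(landings[i:])              # landings that use this increment
        share = increment / users
        for j in range(i, len(costs)):         # every type at least this large pays it
            charges[j] += share
        prev_cost = cost
    return charges

costs = [100, 150, 300]      # runway cost required by each aircraft type
landings = [10, 5, 5]        # landings per type
print(airport_charges(costs, landings))   # [5.0, 10.0, 40.0]

The total collected (10×5 + 5×10 + 5×40 = 300) equals the cost of the runway needed by the largest type, and the resulting charges coincide with the Shapley value of the associated airport game.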
Introduction
In an airport problem there is a finite population N and a nonnegative cost function C: N → R. For technical reasons it is assumed that the population is taken from the set of the natural numbers: players are identified with their 'ranking number'. The cost function satisfies the inequality C(i) < C(j) whenever i < j. It is typical for airport problems that the cost C(i) is assumed to be a part of the cost C(j) if i < j, i.e. a coalition S is confronted with costs c(S) := max{C(i) : i ∈ S}. In this way an airport problem generates an airport game (N, c). As the value of each one-person coalition {i} equals C(i), the airport problem can be recovered from the airport game.
Nash Equilibrium
Nash equilibrium, also known as non-cooperative game equilibrium, is an essential concept in game theory, described by John Nash in 1951. A strategy that a player will choose regardless of the opponent's strategy choice is called a dominant strategy. If any par
|
https://en.wikipedia.org/wiki/World%20Wide%20Port%20Name
|
In computing, a World Wide Port Name, WWPN, or WWpN, is a World Wide Name assigned to a port in a Fibre Channel fabric. Used on storage area networks, it performs a function equivalent to the MAC address in Ethernet protocol, as it is supposed to be a unique identifier in the network.
A WWPN is a World Wide Port Name; a unique identifier for each Fibre Channel port presented to a Storage Area Network (SAN). Each port on a Storage Device has a unique and persistent WWPN.
A World Wide Node Name, WWNN, or WWnN, is a World Wide Name assigned to a node (an endpoint, a device) in a Fibre Channel fabric. It is valid for the same WWNN to be seen on many different ports (different addresses) on the network, identifying the ports as multiple network interfaces of a single network node.
External links
Locating the WWPN for a Linux host
Fibre Channel
Identifiers
|
https://en.wikipedia.org/wiki/Fabric%20OS
|
In storage area networking, Fabric OS is the firmware for Brocade Communications Systems's Fibre Channel switches and Fibre Channel directors. It is also known as FOS.
First generation
The first generation of Fabric OS was developed on top of a VxWorks kernel and was mainly used in the Brocade Silkworm 2000 and first 3000 series on Intel i960. Even today, many production environments are still running the older generation Silkworm models.
Second generation
The second generation of Fabric OS was developed on a PowerPC platform, and uses MontaVista Linux, a Linux derivative with real-time performance enhancements. With the move to MontaVista, switches and directors gained hot firmware activation (without downtime for the Fibre Channel fabric) and many useful diagnostic commands.
In accordance with the terms of the relevant free software licenses, Brocade provides access to the source code of the distributed free software on which Fabric OS and other Brocade software products are based.
Additional licensed products
Additional products for Fabric OS are offered by Brocade for a one-time fee. They are licensed for use in a single specific switch (the license key is tied to the device's serial number). These include:
Integrated Routing
Adaptive Networking: Quality of service, Ingress Rate Limiting
Brocade Advanced Zoning (Free with rel 6.1.x)
ISL trunking
Ports on Demand
Extended Fabrics (more than 10 km of switched fabric connectivity, up to 3000 km)
Advanced Performance Monitoring (APM)
Fabric Watch
Secure Fabric OS (obsolete)
VMWare VSPEX integration
Versions
Fabric OS 9.x
9.2:
9.1: Root Access Removal, NTP Server authentication
9.0: Traffic optimizer, Fabric congestion notification, New Web Tools (graphical UI switched from Java to Web)
Fabric OS 8.x
8.2: NVMe capable + REST API
8.1:
8.0: Contains many new software features and enhancements as well as issue resolutions
Fabric OS 7.x
7.4: Switch to Linux 3.10 kernel
7.3:
7.2:
7.1:
7.0:
Fabric OS 6.x
6.4:
6.3: Fill
|
https://en.wikipedia.org/wiki/Oechsle%20scale
|
The Oechsle scale is a hydrometer scale measuring the density of grape must, which is an indication of grape ripeness and sugar content used in wine-making. It is named for Ferdinand Oechsle (1774–1852) and it is widely used in the German, Swiss and Luxembourgish wine-making industries. On the Oechsle scale, one degree Oechsle (°Oe) corresponds to one gram of the difference between the mass of one litre of must at 20 °C and 1 kg (the mass of 1 litre of water). For example, must with a specific mass of 1084 grams per litre has 84 °Oe.
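A minimal sketch of this definition in Python (the example density is the one given above):

def oechsle_from_density(density_g_per_l):
    """Degrees Oechsle = grams by which one litre of must at 20 °C exceeds 1 kg."""
    return density_g_per_l - 1000.0

print(oechsle_from_density(1084))   # 84.0, i.e. 84 °Oe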
Overview
The mass difference between equivalent volumes of must and water is almost entirely due to the dissolved sugar in the must. Since the alcohol in wine is produced by fermentation of the sugar, the Oechsle scale is used to predict the maximal possible alcohol content of the finished wine. This measure is commonly used to select when to harvest grapes. In the vineyard, the must density is usually measured by using a refractometer by crushing a few grapes between the fingers and letting the must drip onto the glass prism of the refractometer. In countries using the Oechsle scale, the refractometer will be calibrated in Oechsle degrees, but this is an indirect reading, as the refractometer actually measures the refractive index of the grape must, and translates it into Oechsle or different wine must scales, based on their relationship with refractive index.
Wine classification
The Oechsle scale forms the basis of most of the German wine classification. In the highest quality category, Prädikatswein (formerly known as Qualitätswein mit Prädikat, QmP), the wine is assigned a Prädikat based on the Oechsle reading of the must. The regulations set out minimum Oechsle readings for each Prädikat, which depend on wine-growing regions and grape variety:
Kabinett – 70–85 °Oe
Spätlese – 76–95 °Oe
Auslese – 83–105 °Oe
Beerenauslese and Eiswein – 110–128 °Oe (Eiswein is made by late harvesting grapes after they have frozen on the vine and no
|
https://en.wikipedia.org/wiki/List%20of%20Fibre%20Channel%20standards
|
Fibre Channel
2005
FC-SATA (under development)
FC-PI-2 INCITS 404
2004
FC-SP ANSI INCITS 1570-D
FC-GS-4 (Fibre Channel Generic Services) ANSI INCITS 387. Includes the following standards:
FC-GS-2 ANSI INCITS 288 (1999)
FC-GS-3 ANSI INCITS 348 (2001)
FC-SW-3 INCITS 384. Includes the following standards:
FC-SW INCITS 321 (1998)
FC-SW-2 INCITS 355 (2001)
FC-DA INCITS TR-36. Includes the following standards:
FC-FLA INCITS TR-20 (1998)
FC-PLDA INCITS TR-19 (1998)
2003
FC-FS INCITS 373. Includes the following standards:
FC-PH ANSI X3.230 (1994)
FC-PH-2 ANSI X3.297 (1997)
FC-PH-3 ANSI X3.303 (1998)
FC-BB-2 INCITS 372
FC-SB-3 INCITS 374. Replaces:
FC-SB ANSI X3.271 (1996)
FC-SB-2 INCITS 374 (2001)
2002
FC-VI INCITS 357
FC-MI INCITS/TR-30
FC-PI INCITS 352
2001
FC-SB-2 INCITS 374. Replaced by: FC-SB-3 INCITS 374 (2003)
FC-SW-2 INCITS 355. Replaced by: FC-SW-3 INCITS 384 (2004)
FC-GS-3 ANSI INCITS 348. Replaced by: FC-GS-4 ANSI INCITS 387 (2004)
1999
FC-AL-2 INCITS 332
FC-TAPE INCITS TR-24
FC-GS-2 ANSI INCITS 288 (1999). Replaced by: FC-GS-4 ANSI INCITS 387 (2004)
1998
FC-PH-3 ANSI X3.303. Replaced by: FC-FS INCITS 373 (2003)
FC-FLA INCITS TR-20. Replaced by: FC-DA INCITS TR-36 (2004)
FC-PLDA INCITS TR-19. Replaced by: FC-DA INCITS TR-36 (2004)
FC-SW INCITS 321. Replaced by: FC-SW-3 INCITS 384 (2004)
1997
FC-PH-2 ANSI X3.297. Replaced by: FC-FS INCITS 373
1996
FC-SB ANSI X3.271. Replaced by: FC-SB-3 INCITS 374
FC-AL ANSI X3.272
1994
FC-PH ANSI X3.230. Replaced by: FC-FS INCITS 373 (2003)
Others:
FC-LS: Fibre Channel Link Services
FC-HBA API for Fibre Channel HBA management
FC-GS-3 CT Fibre Channel Global Services Common Transport
RFCs
- Transmission of IPv6, IPv4, and Address Resolution Protocol (ARP) Packets over Fibre Channel, 2006
- Transmission of IPv6 Packets over Fibre Channel (Obsoleted by: RFC 4338)
- IP and ARP over Fibre Channel (Obsoleted by: RFC 4338)
- Securing Block Storage Protocols over IP
SNMP-related specificatio
|
https://en.wikipedia.org/wiki/Link%20Control%20Protocol
|
In computer networking, the Link Control Protocol (LCP) forms part of the Point-to-Point Protocol (PPP), within the family of Internet protocols. In setting up PPP communications, both the sending and receiving devices send out LCP packets to determine the standards of the ensuing data transmission.
The protocol:
checks the identity of the linked device and either accepts or rejects the device
determines the acceptable packet size for transmission
searches for errors in configuration
can terminate the link if requirements exceed the parameters
Devices cannot use PPP to transmit data over a network until the LCP packet determines the acceptability of the link, but LCP packets are embedded into PPP packets and therefore a basic PPP connection has to be established before LCP can reconfigure it.
PPP frames carrying LCP have the protocol code 0xC021, and their information field contains the LCP packet, which has four fields (Code, ID, Length and Data).
Code: Operation requested: configure link, terminate link, and acknowledge and deny codes
Data: Parameters for the operation
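A small sketch of that packet layout in Python (the example bytes are fabricated for illustration; option type 1 is the Maximum-Receive-Unit option from RFC 1661):

import struct

LCP_CODES = {1: "Configure-Request", 2: "Configure-Ack", 3: "Configure-Nak",
             4: "Configure-Reject", 5: "Terminate-Request", 6: "Terminate-Ack",
             9: "Echo-Request", 10: "Echo-Reply"}

def parse_lcp(packet):
    """Split an LCP packet into its four fields (Code, ID, Length, Data)."""
    code, ident, length = struct.unpack("!BBH", packet[:4])   # network byte order
    data = packet[4:length]
    return LCP_CODES.get(code, "code %d" % code), ident, length, data

# A made-up Configure-Request (ID 0x23) carrying one option: MRU = 1500.
example = bytes([1, 0x23, 0, 8, 1, 4, 0x05, 0xDC])
print(parse_lcp(example))   # ('Configure-Request', 35, 8, b'\x01\x04\x05\xdc')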
External links
: PPP LCP Extensions
: The Point-to-Point Protocol (PPP)
: PPP Reliable Transmission
Link protocols
Internet Standards
|
https://en.wikipedia.org/wiki/Tarski%27s%20axioms
|
Tarski's axioms, due to Alfred Tarski, are an axiom set for the substantial fragment of Euclidean geometry that is formulable in first-order logic with identity and requires no set theory (i.e., that part of Euclidean geometry that is formulable as an elementary theory). Other modern axiomatizations of Euclidean geometry are Hilbert's axioms and Birkhoff's axioms.
Overview
Early in his career Tarski taught geometry and researched set theory. His coworker Steven Givant (1999) explained Tarski's take-off point:
From Enriques, Tarski learned of the work of Mario Pieri, an Italian geometer who was strongly influenced by Peano. Tarski preferred Pieri's system [of his Point and Sphere memoir], where the logical structure and the complexity of the axioms were more transparent.
Givant then says that "with typical thoroughness" Tarski devised his system:
What was different about Tarski's approach to geometry? First of all, the axiom system was much simpler than any of the axiom systems that existed up to that time. In fact the length of all of Tarski's axioms together is not much more than just one of Pieri's 24 axioms. It was the first system of Euclidean geometry that was simple enough for all axioms to be expressed in terms of the primitive notions only, without the help of defined notions. Of even greater importance, for the first time a clear distinction was made between full geometry and its elementary — that is, its first order — part.
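For illustration, a few of the axioms written out in first-order notation over the two primitive relations of Tarski's system, betweenness (B) and congruence (≡); the exact symbolization varies between presentations:

\forall x \, \forall y \; \big( xy \equiv yx \big) \qquad \text{(reflexivity of congruence)}

\forall x \, \forall y \, \forall z \; \big( xy \equiv zz \rightarrow x = y \big) \qquad \text{(identity of congruence)}

\forall x \, \forall y \; \big( B(x, y, x) \rightarrow x = y \big) \qquad \text{(identity of betweenness)}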
Like other modern axiomatizations of Euclidean geometry, Tarski's employs a formal system consisting of symbol strings, called sentences, whose construction respects formal syntactical rules, and rules of proof that determine the allowed manipulations of the sentences. Unlike some other modern axiomatizations, such as Birkhoff's and Hilbert's, Tarski's axiomatization has no primitive objects other than points, so a variable or constant cannot refer to a line or an angle. Because points are the only primitive objects, and because Tars
|
https://en.wikipedia.org/wiki/X-ray%20microtomography
|
In radiography, X-ray microtomography uses X-rays to create cross-sections of a physical object that can be used to recreate a virtual model (3D model) without destroying the original object. It is similar to tomography and X-ray computed tomography. The prefix micro- (symbol: µ) is used to indicate that the pixel sizes of the cross-sections are in the micrometre range. These pixel sizes have also resulted in creation of its synonyms high-resolution X-ray tomography, micro-computed tomography (micro-CT or µCT), and similar terms. Sometimes the terms high-resolution computed tomography (HRCT) and micro-CT are differentiated, but in other cases the term high-resolution micro-CT is used. Virtually all tomography today is computed tomography.
Micro-CT has applications both in medical imaging and in industrial computed tomography. In general, there are two types of scanner setups. In one setup, the X-ray source and detector are typically stationary during the scan while the sample/animal rotates. The second setup, much more like a clinical CT scanner, is gantry based where the animal/specimen is stationary in space while the X-ray tube and detector rotate around. These scanners are typically used for small animals (in vivo scanners), biomedical samples, foods, microfossils, and other studies for which minute detail is desired.
The first X-ray microtomography system was conceived and built by Jim Elliott in the early 1980s. The first published X-ray microtomographic images were reconstructed slices of a small tropical snail, with pixel size about 50 micrometers.
Working principle
Imaging system
Fan beam reconstruction
The fan-beam system is based on a one-dimensional (1D) X-ray detector and an electronic X-ray source, creating 2D cross-sections of the object. Such systems are typically used in human computed tomography scanners.
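As an illustrative sketch of the underlying reconstruction principle (a generic parallel-beam demonstration using the scikit-image library, not the acquisition software of any particular scanner):

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)              # standard test phantom
angles = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
sinogram = radon(image, theta=angles)                    # simulated projection data
reconstruction = iradon(sinogram, theta=angles)          # filtered back-projection
print("RMS error:", np.sqrt(np.mean((reconstruction - image) ** 2)))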
Cone beam reconstruction
The cone-beam system is based on a 2D X-ray detector (camera) and an electronic X-ray source, creating projection images that late
|
https://en.wikipedia.org/wiki/Jonathan%20Bowen
|
Jonathan P. Bowen FBCS FRSA (born 1956) is a British computer scientist and an Emeritus Professor at London South Bank University, where he headed the Centre for Applied Formal Methods. Prof. Bowen is also the Chairman of Museophile Limited and has been a Professor of Computer Science at Birmingham City University, Visiting Professor at the Pratt Institute (New York City), University of Westminster and King's College London, and a visiting academic at University College London.
Early life and education
Bowen was born in Oxford, the son of Humphry Bowen, and was educated at the Dragon School, Bryanston School, prior to his matriculation at University College, Oxford (Oxford University) where he received the MA degree in Engineering Science.
Career
Bowen later worked at Imperial College, London, the Oxford University Computing Laboratory (now the Oxford University Department of Computer Science), the University of Reading, and London South Bank University. His early work was on formal methods in general, and later the Z notation in particular. He was Chair of the Z User Group from the early 1990s until 2011. In 2002, Bowen was elected Chair of the British Computer Society FACS Specialist Group on Formal Aspects of Computing Science. Since 2005, Bowen has been an Associate Editor-in-Chief of the journal Innovations in Systems and Software Engineering. He is also an associate editor on the editorial board for the ACM Computing Surveys journal, covering software engineering and formal methods. From 2008–9, he was an Associate at Praxis High Integrity Systems, working on a large industrial project using the Z notation.
Bowen's other major interest is the area of online museums. In 1994, he founded the Virtual Library museums pages (VLmp), an online museums directory that was soon adopted by the International Council of Museums (ICOM). In the same year he also started the Virtual Museum of Computing. In 2002, he founded Museophile Limited to help museums, especially onl
|
https://en.wikipedia.org/wiki/Fourier-transform%20ion%20cyclotron%20resonance
|
Fourier-transform ion cyclotron resonance mass spectrometry is a type of mass analyzer (or mass spectrometer) for determining the mass-to-charge ratio (m/z) of ions based on the cyclotron frequency of the ions in a fixed magnetic field. The ions are trapped in a Penning trap (a magnetic field with electric trapping plates), where they are excited (at their resonant cyclotron frequencies) to a larger cyclotron radius by an oscillating electric field orthogonal to the magnetic field. After the excitation field is removed, the ions are rotating at their cyclotron frequency in phase (as a "packet" of ions). These ions induce a charge (detected as an image current) on a pair of electrodes as the packets of ions pass close to them. The resulting signal is called a free induction decay (FID), transient or interferogram that consists of a superposition of sine waves. The useful signal is extracted from this data by performing a Fourier transform to give a mass spectrum.
History
FT-ICR was invented by Melvin B. Comisarow and Alan G. Marshall at the University of British Columbia. The first paper appeared in Chemical Physics Letters in 1974. The inspiration was earlier developments in conventional ICR and Fourier-transform nuclear magnetic resonance (FT-NMR) spectrometry. Marshall has continued to develop the technique at The Ohio State University and Florida State University.
Theory
The physics of FTICR is similar to that of a cyclotron at least in the first approximation.
In the simplest idealized form, the relationship between the cyclotron frequency and the mass-to-charge ratio is given by
f = qB/(2πm)
where f = cyclotron frequency, q = ion charge, B = magnetic field strength and m = ion mass.
This is more often represented as an angular frequency:
ω = qB/m
where ω is the angular cyclotron frequency, related to the frequency by the definition ω = 2πf.
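As a numerical illustration of the idealized relation (the ion and field values are round numbers chosen for the example, not taken from the text):

import math

E_CHARGE = 1.602176634e-19      # elementary charge, C
AMU = 1.66053906660e-27         # atomic mass unit, kg

def cyclotron_frequency(mz, charge_state, b_field_tesla):
    """Unperturbed cyclotron frequency f = qB/(2*pi*m), in Hz."""
    q = charge_state * E_CHARGE
    m = mz * charge_state * AMU
    return q * b_field_tesla / (2.0 * math.pi * m)

# A singly charged ion of m/z 500 in a 7 T magnet orbits at roughly 215 kHz.
print("%.1f kHz" % (cyclotron_frequency(500.0, 1, 7.0) / 1e3))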
Because of the quadrupolar electrical field used to trap the ions in the axial direction, this relationship is only approximate. The axial ele
|
https://en.wikipedia.org/wiki/Reid%20W.%20Barton
|
Reid William Barton (born May 6, 1983) is a mathematician and also one of the most successful performers in the International Science Olympiads.
Biography
Barton is the son of two environmental engineers. Barton took part-time classes at Tufts University in chemistry (5th grade), physics (6th grade), and subsequently Swedish, Finnish, French, and Chinese. Since eighth grade he worked part-time with MIT computer scientist Charles E. Leiserson on CilkChess, a computer chess program. Subsequently, he worked at Akamai Technologies with computer scientist Ramesh Sitaraman to build one of the earliest video performance measurement systems that have since become a standard in industry. After Akamai, Barton went to grad school at Harvard to pursue a Ph.D. in mathematics, which he completed in 2019 under the supervision of Michael J. Hopkins. Afterwards, he did research as a post-doctoral fellow at Pittsburgh. As of November 2021 he sits on the committee for the
Mathematical and programming competitions
Barton was the first student to win four gold medals at the International Mathematical Olympiad, culminating in full marks at the 2001 Olympiad held in Washington, D.C., shared with Gabriel Carroll, Xiao Liang and Zhang Zhiqiang.
Barton is one of seven people to have placed among the five top ranked competitors (who are themselves not ranked against each other) in the William Lowell Putnam Competition four times (2001–2004). Barton was a member of the MIT team which finished second in 2001 and first in 2003 and 2004.
Barton has won two gold medals at the International Olympiad in Informatics. In 2001 he finished first with 580 points out of 600, 55 ahead of his nearest competitor, the largest margin in IOI history at the time. Barton was a member of the 2nd and 5th place MIT team at the ACM International Collegiate Programming Contest, and reached the finals in the Topcoder Open (2004), semi-finals (2003, 2006), the TopCoder Collegiate Challenge (2004), semi-finals (2
|
https://en.wikipedia.org/wiki/DEC%20RADIX%2050
|
RADIX 50 or RAD50 (also referred to as RADIX50, RADIX-50 or RAD-50), is an uppercase-only character encoding created by Digital Equipment Corporation (DEC) for use on their DECsystem, PDP, and VAX computers.
RADIX 50's 40-character repertoire (050 in octal) can encode six characters plus four additional bits into one 36-bit machine word (PDP-6, PDP-10/DECsystem-10, DECSYSTEM-20), three characters plus two additional bits into one 18-bit word (PDP-9, PDP-15), or three characters into one 16-bit word (PDP-11, VAX).
The actual encoding differs between the 36-bit and 16-bit systems.
36-bit systems
In 36-bit DEC systems RADIX 50 was commonly used in symbol tables for assemblers or compilers which supported six-character symbol names from a 40-character alphabet. This left four bits to encode properties of the symbol.
For its similarities to the SQUOZE encoding scheme used in IBM's SHARE Operating System for representing object code symbols, DEC's variant was also sometimes called DEC Squoze; however, IBM SQUOZE packed six characters of a 50-character alphabet plus two additional flag bits into one 36-bit word.
RADIX 50 was not normally used in 36-bit systems for encoding ordinary character strings; file names were normally encoded as six six-bit characters, and full ASCII strings as five seven-bit characters and one unused bit per 36-bit word.
18-bit systems
RADIX 50 (also called Radix 508 format) was used in Digital's 18-bit PDP-9 and PDP-15 computers to store symbols in symbol tables, leaving two extra bits per 18-bit word ("symbol classification bits").
16-bit systems
Some strings in DEC's 16-bit systems were encoded as 8-bit bytes, while others used RADIX 50 (then also called MOD40).
In RADIX 50, strings were encoded in successive words as needed, with the first character within each word located in the most significant position.
For example, using the PDP-11 encoding, the string "ABCDEF", with character values 1, 2, 3, 4, 5, and 6, would be encoded as a wor
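A minimal sketch of the 16-bit packing in Python (the character table is the common PDP-11 assignment of space, A–Z, $, ., an unused code, and 0–9; details varied slightly between DEC products):

RAD50_CHARS = " ABCDEFGHIJKLMNOPQRSTUVWXYZ$.%0123456789"

def rad50_encode(text):
    """Pack an uppercase string into 16-bit RADIX 50 words, three characters each."""
    text = text.upper()
    text = text.ljust(-(-len(text) // 3) * 3)      # pad with spaces to a multiple of 3
    words = []
    for i in range(0, len(text), 3):
        a, b, c = (RAD50_CHARS.index(ch) for ch in text[i:i + 3])
        words.append(a * 1600 + b * 40 + c)        # first character is most significant
    return words

print([oct(w) for w in rad50_encode("ABCDEF")])    # ['0o3223', '0o14716']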
|
https://en.wikipedia.org/wiki/Sleep%20mode
|
Sleep mode (or suspend to RAM) is a low power mode for electronic devices such as computers, televisions, and remote controlled devices. These modes save significantly on electrical consumption compared to leaving a device fully on and, upon resume, allow the user to avoid having to reissue instructions or to wait for a machine to boot. Many devices signify this power mode with a pulsed or red colored LED power light.
Computers
In computers, entering a sleep state is roughly equivalent to "pausing" the state of the machine. When restored, the operation continues from the same point, having the same applications and files open.
Sleep
Sleep mode has gone by various names, including Stand By, Suspend and Suspend to RAM. Machine state is held in RAM and, when placed in sleep mode, the computer cuts power to unneeded subsystems and places the RAM into a minimum power state, just sufficient to retain its data. Because of the large power saving, most laptops automatically enter this mode when the computer is running on batteries and the lid is closed. If undesired, the behavior can be altered in the operating system settings of the computer.
A computer must consume some energy while sleeping in order to power the RAM and to be able to respond to a wake-up event. A sleeping PC is on standby power, which is covered by regulations in many countries; in the United States, for example, such power has been limited under the One Watt Initiative since 2010. In addition to a wake-up press of the power button, PCs can also respond to other wake cues, such as from a keyboard, mouse, incoming telephone call on a modem, or local area network signal.
Hibernation
Hibernation, also called Suspend to Disk on Linux, saves all computer operational data on the fixed disk before turning the computer off completely. On switching the computer back on, the computer is restored to its state prior to hibernation, with all programs and files open, and unsaved data intact. In contrast with standby mode
|
https://en.wikipedia.org/wiki/Push%20Access%20Protocol
|
Push Access Protocol (or PAP) is a protocol defined in WAP-164 of the Wireless Application Protocol (WAP) suite from the Open Mobile Alliance. PAP is used for communicating with the Push Proxy Gateway, which is usually part of a WAP Gateway.
PAP is intended for use in delivering content from Push Initiators to Push Proxy Gateways for subsequent delivery to narrow band devices, including mobile phones and pagers. Example messages include news, stock quotes, weather, traffic reports, and notification of events such as email arrival. With Push functionality, users are able to receive information without having to request it. In many cases it is important for the user to get the information as soon as it is available.
The Push Access Protocol is not intended for use over the air.
PAP is designed to be independent of the underlying transport protocol. PAP specifies the following possible operations between the Push Initiator and the Push Proxy Gateway:
Submit a Push
Cancel a Push
Query for status of a Push
Query for wireless device capabilities
Result notification
The interaction between the Push Initiators and the Push Proxy Gateways is in the form of XML messages.
Operations
Push Submission
The purpose of the Push Submission is to deliver a Push message from a Push Initiator to a PPG, which should then deliver the message to a user agent in a device on the wireless network. The Push message contains a control entity and a content entity, and MAY contain a capabilities entity. The control entity is an XML document that contains control information (push-message) for the PPG to use in processing the message for delivery. The content entity represents content to be sent to the wireless device. The capabilities entity contains client capabilities assumed by the Push Initiator and is in the RDF [RDF] format as defined in the User Agent Profile [UAPROF]. The PPG MAY use the capabilities information to validate that the message is appropriate for the cli
|
https://en.wikipedia.org/wiki/Conditional%20access
|
Conditional access (CA) is a term commonly used in relation to software and to digital television systems. Conditional access is the ‘just-in-time’ evaluation that ensures the person seeking access to content is authorized to access that content. Said another way, conditional access is a type of access management. Access is managed by requiring certain criteria to be met before granting access to the content.
In software
Conditional access is a function that lets you manage people’s access to the software in question, such as email, applications, and documents. It is usually offered as SaaS (Software-as-a-Service) and deployed in organizations to keep company data safe. By setting conditions on the access to this data, the organization has more control over who accesses the data and where and in what way the information is accessed.
When setting up conditional access, access can be limited to or prevented based on the policy defined by the system administrator. For example, a policy might require access is available from certain networks, or access is blocked when a specific web browser is requesting the access.
In digital television
Under the Digital Video Broadcasting (DVB) standard, conditional access system (CAS) standards are defined in the specification documents for DVB-CA (conditional access), DVB-CSA (the common scrambling algorithm) and DVB-CI (the Common Interface). These standards define a method by which one can obfuscate a digital-television stream, with access provided only to those with valid decryption smart-cards. The DVB specifications for conditional access are available from the standards page on the DVB website.
This is achieved by a combination of scrambling and encryption. The data stream is scrambled with a 48-bit secret key, called the control word. Knowing the value of the control word at a given moment is of relatively little value, as under normal conditions, content providers will change the control word several times per minut
|
https://en.wikipedia.org/wiki/Josh%20Fisher
|
Joseph A. "Josh" Fisher is an American and Spanish computer scientist noted for his work on VLIW architectures, compiling, and instruction-level parallelism, and for founding Multiflow Computer. He is a Hewlett-Packard Senior Fellow (Emeritus).
Biography
Fisher holds a BA (1968) in mathematics (with honors) from New York University and obtained a Master's and PhD degree (1979) in Computer Science from The Courant Institute of Mathematics of New York University.
Fisher joined the Yale University Department of Computer Science in 1979 as an assistant professor, and was promoted to associate professor in 1983. In 1984 Fisher left Yale to found Multiflow Computer with Yale colleagues John O'Donnell and John Ruttenberg. Fisher joined HP Labs upon the closing of Multiflow in 1990. He directed HP Labs in Cambridge, MA USA from its founding in 1994, and became an HP Fellow (2000) and then Senior Fellow (2002) upon the inception of those titles at Hewlett-Packard. Fisher retired from HP Labs in 2006.
Fisher is married (1967) to Elizabeth Fisher; they have a son, David Fisher, and a daughter, Dora Fisher. He holds Spanish citizenship due to his Sephardic heritage.
Work
Trace Scheduling
In his Ph.D. dissertation, Fisher created the Trace Scheduling compiler algorithm and coined the term Instruction-level parallelism to characterize VLIW, superscalar, dataflow and other architecture styles that involve fine-grained parallelism among simple machine-level instructions. Trace scheduling was the first practical algorithm to find large amounts of parallelism between instructions that occupied different basic blocks. This greatly increased the potential speed-up for instruction-level parallel architectures.
The VLIW architecture style
Because of the difficulty of applying trace scheduling to idiosyncratic systems (such as 1970s-era DSPs) that in theory should have been suitable targets for a trace scheduling compiler, Fisher put forward the VLIW architectural style. VLIW
|
https://en.wikipedia.org/wiki/Storage%20Management%20Initiative%20%E2%80%93%20Specification
|
The Storage Management Initiative Specification, commonly called SMI-S, is a computer data storage management standard developed and maintained by the Storage Networking Industry Association (SNIA). It has also been ratified as an ISO standard. SMI-S is based upon the Common Information Model and the Web-Based Enterprise Management standards defined by the Distributed Management Task Force, which define management functionality via HTTP. The most recent approved version of SMI-S is available on the SNIA website.
The main objective of SMI-S is to enable broad interoperable management of heterogeneous storage vendor systems. The current version is SMI-S 1.8.0 Rev 5. Over 1,350 storage products are certified as conformant to SMI-S.
Basic concepts
SMI-S defines CIM management profiles for storage systems. The entire SMI Specification is categorized in profiles and subprofiles. A profile describes the behavioral aspects of an autonomous, self-contained management domain. SMI-S includes profiles for Arrays, Switches, Storage Virtualizers, Volume Management and several other management domains. In DMTF parlance, an SMI-S provider is an implementation for a specific profile or set of profiles. A subprofile describes a part of a management domain, and can be a common part in more than one profile.
At a very basic level, SMI-S entities are divided into two categories:
Clients are management software applications that can reside virtually anywhere within a network, provided they have a communications link (either within the data path or outside the data path) to providers.
Servers are the devices under management. Servers can be disk arrays, virtualization engines, host bus adapters, switches, tape drives, etc.
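A hypothetical client-side sketch using the open-source pywbem WBEM library (the URL, credentials, namespace and class name below are placeholders; actual providers document their own interop namespace and profile classes):

import pywbem

conn = pywbem.WBEMConnection(
    "https://smi-provider.example.com:5989",
    creds=("admin", "password"),
    default_namespace="interop",
    no_verification=True,          # lab use only; verify certificates in production
)

# Enumerate the managed systems exposed by the provider.
for system in conn.EnumerateInstances("CIM_ComputerSystem"):
    print(system["ElementName"], system["OperationalStatus"])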
SMI-S timeline
2000 – A collection of computer storage industry leaders led by Roger Reich begins building an interoperable management backbone for storage and storage networks (named Bluefin) in a small consortium called the Partner Development Process.
2002 – B
|
https://en.wikipedia.org/wiki/Bioprocessor
|
A bioprocessor is a miniaturized bioreactor capable of culturing mammalian, insect and microbial cells. Bioprocessors are capable of mimicking the performance of large-scale bioreactors, which makes them well suited to laboratory-scale experimentation with cell culture processes. Bioprocessors are also used for concentrating bioparticles (such as cells) in bioanalytical systems. Microfluidic processes such as electrophoresis can be implemented by bioprocessors to aid in DNA isolation and purification.
References
Biochemical engineering
Biotechnology
|
https://en.wikipedia.org/wiki/Ciphertext%20indistinguishability
|
Ciphertext indistinguishability is a property of many encryption schemes. Intuitively, if a cryptosystem possesses the property of indistinguishability, then an adversary will be unable to distinguish pairs of ciphertexts based on the message they encrypt. The property of indistinguishability under chosen plaintext attack is considered a basic requirement for most provably secure public key cryptosystems, though some schemes also provide indistinguishability under chosen ciphertext attack and adaptive chosen ciphertext attack. Indistinguishability under chosen plaintext attack is equivalent to the property of semantic security, and many cryptographic proofs use these definitions interchangeably.
A cryptosystem is considered secure in terms of indistinguishability if no adversary, given an encryption of a message randomly chosen from a two-element message space determined by the adversary, can identify the message choice with probability significantly better than that of random guessing (1/2). If any adversary can succeed in distinguishing the chosen ciphertext with a probability significantly greater than 1/2, then this adversary is considered to have an "advantage" in distinguishing the ciphertext, and the scheme is not considered secure in terms of indistinguishability. This definition encompasses the notion that in a secure scheme, the adversary should learn no information from seeing a ciphertext. Therefore, the adversary should be able to do no better than if it guessed randomly.
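The game can be sketched in a few lines of Python; the "cipher" below is a deliberately broken deterministic XOR, chosen so that the adversary's advantage is easy to see (it is purely illustrative and not a real cryptosystem):

import os, secrets

KEY = os.urandom(16)

def encrypt(message):
    # Deterministic XOR "encryption" - insecure, used only to exercise the game.
    return bytes(m ^ k for m, k in zip(message, KEY))

def ind_cpa_game(adversary):
    m0, m1 = adversary.choose_messages()      # two equal-length messages
    b = secrets.randbelow(2)                  # challenger's secret bit
    guess = adversary.guess(encrypt(m0 if b == 0 else m1))
    return guess == b                         # True = adversary wins this round

class Distinguisher:
    """Wins every round because the scheme is deterministic (it can query encrypt itself)."""
    def choose_messages(self):
        self.m0, self.m1 = b"A" * 16, b"B" * 16
        return self.m0, self.m1
    def guess(self, challenge):
        return 0 if challenge == encrypt(self.m0) else 1

wins = sum(ind_cpa_game(Distinguisher()) for _ in range(1000))
print("adversary wins %d/1000 rounds" % wins)   # far above the 1/2 baseline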
Formal definitions
Security in terms of indistinguishability has many definitions, depending on assumptions made about the capabilities of the attacker. It is normally presented as a game, where the cryptosystem is considered secure if no adversary can win the game with significantly greater probability than an adversary who must guess randomly. The most common definitions used in cryptography are indistinguishability under chosen plaintext attack (abbreviated IND-CPA), indistinguishabil
|
https://en.wikipedia.org/wiki/DBFS
|
Decibels relative to full scale (dBFS or dB FS) is a unit of measurement for amplitude levels in digital systems, such as pulse-code modulation (PCM), which have a defined maximum peak level. The unit is similar to the units dBov and decibels relative to overload (dBO).
The level of 0 dBFS is assigned to the maximum possible digital level. For example, a signal that reaches 50% of the maximum level has a level of −6 dBFS, which is 6 dB below full scale. Conventions differ for root mean square (RMS) measurements, but all peak measurements smaller than the maximum are negative levels.
A digital signal that does not contain any samples at 0dBFS can still clip when converted to analog form due to the signal reconstruction process interpolating between samples. This can be prevented by careful digital-to-analog converter circuit design. Measurements of the true inter-sample peak levels are notated as dBTP or dB TP ("decibels true peak").
RMS levels
Since a peak measurement is not useful for qualifying the noise performance of a system, or measuring the loudness of an audio recording, for instance, RMS measurements are often used instead.
A potential for ambiguity exists when assigning a level on the dBFS scale to a waveform rather than to a specific amplitude, because some engineers follow the mathematical definition of RMS, which for sinusoidal signals is −3 dB below the peak value, while others choose the reference level so that RMS and peak measurements of a sine wave produce the same result.
The unit dB FS or dBFS is defined in AES Standard AES17-1998, IEC 61606, and ITU-T Recs. P.381 and P.382, such that the RMS value of a full-scale sine wave is designated 0 dB FS. This means a full-scale square wave would have an RMS value of +3 dB FS. This convention is used in Wolfson and Cirrus Logic digital microphone specs, etc.
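A numerical check of those conventions (using the AES17 convention, in which a full-scale sine is the 0 dB FS RMS reference; the sample rate and test frequency are arbitrary):

import numpy as np

def dbfs_rms(samples, full_scale=1.0):
    rms = np.sqrt(np.mean(np.square(samples)))
    reference = full_scale / np.sqrt(2.0)     # RMS of a full-scale sine wave
    return 20.0 * np.log10(rms / reference)

t = np.linspace(0.0, 1.0, 48000, endpoint=False)
sine = np.sin(2.0 * np.pi * 997.0 * t)        # full-scale sine
square = np.sign(sine)                        # full-scale square wave
print("sine:   %+.2f dB FS" % dbfs_rms(sine))     # about +0.00
print("square: %+.2f dB FS" % dbfs_rms(square))   # about +3.01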
The unit dBov is defined in the ITU-T G.100.1 telephony standard such that the RMS value of a full-scale square wave is designated 0 dBov. All po
|
https://en.wikipedia.org/wiki/Universal%20Satellites%20Automatic%20Location%20System
|
Universal Satellites Automatic Location System (USALS), also known (unofficially) as DiSEqC 1.3, Go X or Go to XX is a satellite dish motor protocol that automatically creates a list of available satellite positions in a motorised satellite dish setup. It is used in conjunction with the DiSEqC 1.2 protocol. It was developed by STAB, an Italian motor manufacturer, who still make the majority of USALS compatible motors.
Software on the satellite receiver (or external positioner) calculates the position of all available satellites from an initial location (input by the user), which is the latitude and longitude relative to Earth. Calculated positions can differ ±0.1 degrees from the offset. This is adjusted automatically and does not require previous technical knowledge.
Compared to DiSEqC 1.2, it is not necessary to manually search and store every known satellite position. Pointing to a known satellite position (for example 19.2°E) is enough; this position will act as the central point, and the USALS system will then calculate the positions of visible satellites within the offset.
Receivers are aligned to the satellite most southern to their position in the northern hemisphere, or the northernmost in the southern hemisphere.
As it is not an open standard, for a receiver to carry the USALS logo it must undergo a certification test by STAB's laboratories. If successful the manufacturer can include a USALS settings entry in its own menu, as well as place the logo on the front of their unit. However, a large number of manufacturers of both receivers and motors provide compatible modes which have not received certification, leading to use of unofficial terms.
USALS is a program and not a communication protocol. It calculates the dish's angular position from the dish's longitude/latitude and the position of the satellite in geostationary orbit. It then sends the angular position to the positioner using the DiSEqC 1.2 protocol. This calculation is straightforward using geo
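A hypothetical sketch of that geometry (a simple ECEF calculation of the rotation angle about an axis parallel to the Earth's axis; certified USALS implementations may use a more refined model):

import math

R_EARTH = 6378.137     # km, equatorial radius
R_GEO = 42164.0        # km, geostationary orbital radius

def motor_rotation_angle(site_lat_deg, site_lon_deg, sat_lon_deg):
    """Signed rotation angle in degrees from the local meridian (positive = east)."""
    lat = math.radians(site_lat_deg)
    dlon = math.radians(sat_lon_deg - site_lon_deg)
    # Project the site-to-satellite vector onto the equatorial plane and take
    # its angle from the site's meridian plane.
    return math.degrees(math.atan2(
        R_GEO * math.sin(dlon),
        R_GEO * math.cos(dlon) - R_EARTH * math.cos(lat)))

# Example: a site at 48.0 N, 11.5 E driving the dish to the 19.2 E position.
print("%+.2f degrees" % motor_rotation_angle(48.0, 11.5, 19.2))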
|
https://en.wikipedia.org/wiki/Dancing%20mania
|
Dancing mania (also known as dancing plague, choreomania, St. John's Dance, tarantism and St. Vitus' Dance) was a social phenomenon that occurred primarily in mainland Europe between the 14th and 17th centuries. It involved groups of people dancing erratically, sometimes thousands at a time. The mania affected adults and children who danced until they collapsed from exhaustion and injuries. One of the first major outbreaks was in Aachen, in the Holy Roman Empire (in modern-day Germany), in 1374, and it quickly spread throughout Europe; one particularly notable outbreak occurred in Strasbourg in 1518 in Alsace, also in the Holy Roman Empire (now in modern-day France).
Affecting thousands of people across several centuries, dancing mania was not an isolated event, and was well documented in contemporary reports. It was nevertheless poorly understood, and remedies were based on guesswork. Often musicians accompanied dancers, due to a belief that music would treat the mania, but this tactic sometimes backfired by encouraging more to join in. There is no consensus among modern-day scholars as to the cause of dancing mania.
The several theories proposed range from religious cults being behind the processions to people dancing to relieve themselves of stress and put the poverty of the period out of their minds. It is speculated to have been a mass psychogenic illness, in which physical symptoms with no known physical cause are observed to affect a group of people, as a form of social influence.
Definition
"Dancing mania" is derived from the term "choreomania", from the Greek choros (dance) and mania (madness), and is also known as "dancing plague". The term was coined by Paracelsus, and the condition was initially considered a curse sent by a saint, usually St. John the Baptist or St. Vitus, and was therefore known as "St. Vitus' Dance" or "St. John's Dance". Victims of dancing mania often ended their processions at places dedicated to that saint, who was prayed to in a
|
https://en.wikipedia.org/wiki/Drinking%20bird
|
Drinking birds, also known as insatiable birdies, dunking birds, drinky birds, water birds, dipping birds, and “Sippy Chickens” are toy heat engines that mimic the motions of a bird drinking from a water source. They are sometimes incorrectly considered examples of a perpetual motion device.
Construction and materials
A drinking bird consists of two glass bulbs joined by a glass tube (the bird's neck/body). The tube extends nearly all the way into the bottom bulb, and attaches to the top bulb but does not extend into it.
The space inside the bird contains a fluid, usually colored for visibility. (This dye might fade when exposed to light, with the rate depending on the dye/color).
The fluid is typically dichloromethane (DCM), also known as methylene chloride.
Earlier versions contained trichlorofluoromethane.
Miles V. Sullivan's 1945 patent suggested ether, alcohol, carbon tetrachloride, or chloroform.
Air is removed from the apparatus during manufacture, so the space inside the body is filled by vapor evaporated from the fluid. The upper bulb has a "beak" attached which, along with the head, is covered in a felt-like material. The bird is typically decorated with paper eyes, a plastic top hat, and one or more tail feathers. The whole device pivots on a crosspiece attached to the body.
Heat engine steps
The drinking bird is a heat engine that exploits a temperature difference to convert heat energy to a pressure difference within the device, and performs mechanical work. Like all heat engines, the drinking bird works through a thermodynamic cycle. The initial state of the system is a bird with a wet head oriented vertically.
The process operates as follows:
The water evaporates from the felt on the head.
Evaporation lowers the temperature of the glass head (heat of vaporization).
The temperature decrease causes some of the dichloromethane vapor in the head to condense.
The lower temperature and condensation together cause the pressure to drop in the head
|
https://en.wikipedia.org/wiki/Western%20Latin%20character%20sets%20%28computing%29
|
Several 8-bit character sets (encodings) were designed for binary representation of common Western European languages (Italian, Spanish, Portuguese, French, German, Dutch, English, Danish, Swedish, Norwegian, and Icelandic), which use the Latin alphabet, a few additional letters and ones with precomposed diacritics, some punctuation, and various symbols (including some Greek letters). These character sets also happen to support many other languages such as Malay, Swahili, and Classical Latin.
This material is technically obsolete, having been functionally replaced by Unicode. However it continues to have historical interest.
Summary
The ISO-8859 series of 8-bit character sets encodes all Latin character sets used in Europe, albeit with the same code points having multiple uses, which caused some difficulty (including mojibake, or garbled characters, and communication issues). The arrival of Unicode, with a unique code point for every glyph, resolved these issues.
ISO/IEC 8859-1 or Latin-1 is the most used and also defines the first 256 codepoints in Unicode.
ISO/IEC 8859-15 modifies ISO-8859-1 to fully support Estonian, Finnish and French and add the euro sign.
Windows-1252 is a superset of ISO-8859-1 that includes the printable characters from ISO/IEC 8859-15 and popular punctuation such as curved quotation marks (also known as smart quotes, as produced by Microsoft Word and similar programs). It is common for web page tools on Windows to use Windows-1252 but label the web page as using ISO-8859-1; this has been addressed in HTML5, which mandates that pages labeled as ISO-8859-1 be interpreted as Windows-1252.
IBM CP437, being intended for English only, has very little in the way of accented letters (particularly uppercase) but has far more graphics characters than the other IBM code pages listed here and also some mathematical and Greek characters that are useful as technical symbols.
IBM CP850 has all the printable characters that ISO-8859-1 has
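A quick illustration of how the same bytes decode differently under these legacy encodings (all of which ship with Python's standard codecs; the byte values are arbitrary examples):

sample = bytes([0xE9, 0xA4, 0x80])

for encoding in ("latin-1", "iso8859-15", "cp1252", "cp437", "cp850"):
    try:
        print("%-12s -> %r" % (encoding, sample.decode(encoding)))
    except UnicodeDecodeError as err:
        print("%-12s -> undefined byte (%s)" % (encoding, err.reason))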
|
https://en.wikipedia.org/wiki/IRC%20subculture
|
IRC subculture refers to the particular set of social features common to interaction on the various Internet Relay Chat (IRC) systems around the world, and the culture associated with them. IRC is particularly popular among programmers, hackers, and computer gamers.
Overview
Internet Relay Chat is an Internet-based chat system that has existed in one form or another since 1988. Networks are connected groups of IRC servers which share a common userbase. Channels are the "chat rooms" on said networks. IRC channel operators (commonly referred to as chops or chanops) are the individuals who run any given channel.
While there are many different IRC networks, and across those networks there are usually large numbers of IRC channels, there are some unifying features common to the social structures of them all. Many of the features of the IRC subculture mesh with other Internet subcultures, such as various forum subcultures. This is especially prevalent in IRC channels or networks that are directly related to other Internet phenomena, such as an IRC channel created by and for the users of a particular Internet forum.
Communication on IRC
IRC has much in common with a regular in-person conversation. It is real-time many-on-many communication that is not logged by the server for posterity (many IRC clients offer a logging feature, but those logs are not generally publicly available). Some bots may also provide logging facilities. Users on IRC usually describe participants as "saying" something (instead of posting it) to reflect the similarity with face-to-face communication.
Because IRC is a text-based communication medium, the obvious limitation of this metaphor is that the participants of a conversation on IRC do not actually see or hear each other, so alternative ways must be employed to convey the information that would otherwise be gained from facial expressions, tones of voice, and other audio-visual clues. It is common practice among IRC users to use emotico
|
https://en.wikipedia.org/wiki/Microstructure
|
Microstructure is the very small scale structure of a material, defined as the structure of a prepared surface of material as revealed by an optical microscope above 25× magnification. The microstructure of a material (such as metals, polymers, ceramics or composites) can strongly influence physical properties such as strength, toughness, ductility, hardness, corrosion resistance, high/low temperature behaviour or wear resistance. These properties in turn govern the application of these materials in industrial practice.
Microstructure at scales smaller than can be viewed with optical microscopes is often called nanostructure, while the structure in which individual atoms are arranged is known as crystal structure. The nanostructure of biological specimens is referred to as ultrastructure. A microstructure's influence on the mechanical and physical properties of a material is primarily governed by the different defects present or absent in the structure. These defects can take many forms, but the primary ones are pores. Even though such pores play a very important role in defining the characteristics of a material, so does its composition. In fact, for many materials, different phases can exist at the same time. These phases have different properties and, if managed correctly, can prevent the fracture of the material.
Methods
The concept of microstructure is observable in macrostructural features in commonplace objects. Galvanized steel, such as the casing of a lamp post or road divider, exhibits a non-uniformly colored patchwork of interlocking polygons of different shades of grey or silver. Each polygon is a single crystal of zinc adhering to the surface of the steel beneath. Zinc and lead are two common metals which form large crystals (grains) visible to the naked eye. The atoms in each grain are organized into one of seven 3D stacking arrangements or crystal lattices (cubic, tetragonal, hexagonal, monoclinic, triclinic, rhombohedral and orthorhombic).
|
https://en.wikipedia.org/wiki/Corner%20solution
|
In mathematics and economics, a corner solution is a special solution to an agent's maximization problem in which the quantity of one of the arguments in the maximized function is zero. In non-technical terms, a corner solution is when the chooser is either unwilling or unable to make a trade-off between goods.
In economics
In the context of economics the corner solution is best characterised by the case in which the highest attainable indifference curve is not tangential to the budget line; in this scenario the consumer puts their entire budget into purchasing as much of one of the goods as possible and none of any other. When the slope of the indifference curve is greater than the slope of the budget line, the consumer is willing to give up more of good 1 for a unit of good 2 than is required by the market. Thus, if the slope of the indifference curve is strictly greater than the slope of the budget line, the result will be a corner solution intersecting the x-axis. The converse is also true for a corner solution resulting from an intercept through the y-axis.
Examples
Real-world examples of a corner solution occur when someone says "I wouldn't buy that at any price", "Why would I buy X when Y is cheaper?" or "I will do X no matter the cost". This could be for any number of reasons, e.g. a bad brand experience, loyalty to a specific brand, or the existence of a cheaper version of the same good.
Another example is "zero-tolerance" policies, such as a parent who is unwilling to expose their children to any risk, no matter how small and no matter what the benefits of the activity might be. "Nothing is more important than my child's safety" is a corner solution in its refusal to admit there might be trade-offs. The term "corner solution" is sometimes used by economists in a more colloquial fashion to refer to these sorts of situations.
Another situation in which a corner solution may arise is when the two goods in question are perfect substitutes. The word "corner" re
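As a worked sketch with assumed utility and prices (not taken from the article), perfect substitutes land at a corner whenever the marginal rate of substitution differs from the price ratio:

\max_{x_1, x_2 \ge 0} \; a x_1 + b x_2 \quad \text{subject to} \quad p_1 x_1 + p_2 x_2 = m

\frac{a}{b} > \frac{p_1}{p_2} \;\Rightarrow\; (x_1^*, x_2^*) = \Big( \tfrac{m}{p_1},\, 0 \Big), \qquad \frac{a}{b} < \frac{p_1}{p_2} \;\Rightarrow\; (x_1^*, x_2^*) = \Big( 0,\, \tfrac{m}{p_2} \Big).

\text{For example, } a = 2,\ b = 1,\ p_1 = p_2 = 1,\ m = 10 \;\Rightarrow\; (x_1^*, x_2^*) = (10, 0).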
|
https://en.wikipedia.org/wiki/Comfort%20noise
|
Comfort noise (or comfort tone) is synthetic background noise used in radio and wireless communications to fill the artificial silence in a transmission resulting from voice activity detection or from the audio clarity of modern digital lines.
Some modern telephone systems (such as wireless and VoIP) use voice activity detection (VAD), a form of squelching where low volume levels are ignored by the transmitting device. In digital audio transmissions, this saves bandwidth of the communications channel by transmitting nothing when the source volume is under a certain threshold, leaving only louder sounds (such as the speaker's voice) to be sent. However, improvements in background noise reduction technologies can occasionally result in the complete removal of all noise. Although maximizing call quality is of primary importance, exhaustive removal of noise may not properly simulate the typical behavior of terminals on the PSTN system.
The result of receiving total silence, especially for a prolonged period, has a number of unwanted effects on the listener, including the following:
the listener may believe that the transmission has been lost, and therefore hang up prematurely
the speech may sound "choppy" (see noise gate) and difficult to understand
the sudden change in sound level can be jarring to the listener.
To counteract these effects, comfort noise is added, usually on the receiving end in wireless or VoIP systems, to fill in the silent portions of transmissions with artificial noise. The noise generated is at a low but audible volume level, and can vary based on the average volume level of received signals to minimize jarring transitions.
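An illustrative receive-side sketch of the idea (a toy level detector and noise fill, not the RFC 3389 comfort-noise payload format; the frame size and thresholds are assumed values):

import numpy as np

FRAME = 160                  # samples per frame (20 ms at 8 kHz, assumed)
SILENCE_THRESHOLD = 0.01     # frame RMS below this counts as "no speech"

def add_comfort_noise(signal, noise_level=SILENCE_THRESHOLD / 2.0):
    out = signal.copy()
    rng = np.random.default_rng(0)
    for start in range(0, len(signal) - FRAME + 1, FRAME):
        frame = signal[start:start + FRAME]
        if np.sqrt(np.mean(frame ** 2)) < SILENCE_THRESHOLD:
            # Fill the silent frame with low-level noise instead of pure silence.
            out[start:start + FRAME] = rng.normal(0.0, noise_level, FRAME)
    return out

# One second of "speech" followed by one second of digital silence, at 8 kHz.
t = np.linspace(0.0, 2.0, 16000, endpoint=False)
voice = 0.3 * np.sin(2.0 * np.pi * 200.0 * t) * (t < 1.0)
filled = add_comfort_noise(voice)
print(np.abs(filled[12000:12005]))   # small but nonzero samples in the silent half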
In many VoIP products, users may control how VAD and comfort noise are configured, or disable the feature entirely.
As part of the RTP audio video profile, RFC 3389 defines a standard for distributing comfort noise information in VoIP systems.
A similar concept is that of sidetone, the effect of sound that is picked
|
https://en.wikipedia.org/wiki/Complete%20market
|
In economics, a complete market (aka Arrow-Debreu market or complete system of markets) is a market with two conditions:
Negligible transaction costs and therefore also perfect information,
Every asset in every possible state of the world has a price.
In such a market, the complete set of possible bets on future states of the world can be constructed with existing assets without friction. Here, goods are state-contingent; that is, a good includes the time and state of the world in which it is consumed. For instance, an umbrella tomorrow if it rains is a distinct good from an umbrella tomorrow if it is clear. The study of complete markets is central to state-preference theory. The theory can be traced to the work of Kenneth Arrow (1964), Gérard Debreu (1959), Arrow & Debreu (1954) and Lionel McKenzie (1954). Arrow and Debreu were awarded the Nobel Memorial Prize in Economics (Arrow in 1972, Debreu in 1983), largely for their work in developing the theory of complete markets and applying it to the problem of general equilibrium.
States of the world
A state of the world is a complete specification of the values of all relevant variables over the relevant time horizon. A state-contingent claim, or state claim, is a contract whose future payoffs depend on future states of the world. For example, suppose you can bet on the outcome of a coin toss. If you guess the outcome correctly, you will win one dollar, and otherwise you will lose one dollar. A bet on heads is a state claim, with payoff of one dollar if heads is the outcome, and payoff of negative one dollar if tails is the outcome. "Heads" and "tails" are the states of the world in this example. A state-contingent claim can be represented as a payoff vector with one element for each state of the world, e.g. (payoff if heads, payoff if tails). So a bet on heads can be represented as ($1, −$1) and a bet on tails can be represented as (−$1, $1). Notice that by placing one bet on heads and one bet on tails, you have
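A small numerical sketch of what completeness means in this two-state setting (the riskless bond added alongside the bet on heads is an assumption for illustration): the market is complete when the assets' payoff vectors span all states, i.e. the payoff matrix has full column rank, so any state-contingent claim can be replicated.

import numpy as np

payoffs = np.array([
    [1.0,  1.0],    # riskless bond: pays 1 whether heads or tails
    [1.0, -1.0],    # bet on heads: +1 if heads, -1 if tails
])                  # rows = assets, columns = states (heads, tails)

n_states = payoffs.shape[1]
print("complete:", np.linalg.matrix_rank(payoffs) == n_states)   # True

# Replicate an arbitrary claim, e.g. (3 if heads, 1 if tails):
weights = np.linalg.solve(payoffs.T, np.array([3.0, 1.0]))
print("portfolio:", weights)   # 2 units of the bond + 1 bet on heads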
|
https://en.wikipedia.org/wiki/Bearer-Independent%20Call%20Control
|
The Bearer-Independent Call Control (BICC) is a signaling protocol based on N-ISUP that is used for supporting narrowband Integrated Services Digital Network (ISDN) service over a broadband backbone network. BICC is designed to interwork with existing transport technologies. BICC is specified in ITU-T recommendation Q.1901.
BICC signaling messages are nearly identical to those in ISDN User Part (ISUP); the main difference being that the narrowband circuit identification code (CIC) has been modified. The BICC architecture consists of interconnected serving nodes that provide the call service function and the bearer control function. The call service function uses BICC signaling for call setup and may also interwork with ISUP. The bearer control function receives directives from the call service function via BICC Bearer Control Protocol (ITU-T recommendation Q.1950) and is responsible for setup and teardown of bearer paths on a set of physical transport links. Transport links are most commonly Asynchronous Transfer Mode (ATM) or Internet Protocol (IP).
According to the ITU, the completion of the BICC protocols is a historic step toward broadband multimedia networks because it enables the seamless migration from circuit-switched TDM networks to high-capacity broadband multimedia networks.
The Third-Generation Partnership Project (3GPP) has included BICC CS 2 in the Universal Mobile Telecommunications System (UMTS) release 4.
References
ITU-T Recommendation Q.1901 : Bearer Independent Call Control protocol
ITU-T Recommendation Q.1902.1 : Bearer Independent Call Control protocol (Capability Set 2): Functional description
ITU-T Recommendation Q.1950 : Bearer independent call bearer control protocol
ITU-T Press Release : Agreement on BICC protocols: a historic step for evolution towards next-generation server-based networks
3GPP TS 29.205 : Application of Q.1900 series to Bearer Independent CS Network architecture; Stage 3
Network protocols
|
https://en.wikipedia.org/wiki/Matrix-assisted%20laser%20desorption/ionization
|
In mass spectrometry, matrix-assisted laser desorption/ionization (MALDI) is an ionization technique that uses a laser energy-absorbing matrix to create ions from large molecules with minimal fragmentation. It has been applied to the analysis of biomolecules (biopolymers such as DNA, proteins, peptides and carbohydrates) and various organic molecules (such as polymers, dendrimers and other macromolecules), which tend to be fragile and fragment when ionized by more conventional ionization methods. It is similar in character to electrospray ionization (ESI) in that both techniques are relatively soft (low fragmentation) ways of obtaining ions of large molecules in the gas phase, though MALDI typically produces far fewer multi-charged ions.
MALDI methodology is a three-step process. First, the sample is mixed with a suitable matrix material and applied to a metal plate. Second, a pulsed laser irradiates the sample, triggering ablation and desorption of the sample and matrix material. Finally, the analyte molecules are ionized by being protonated or deprotonated in the hot plume of ablated gases, and then they can be accelerated into whichever mass spectrometer is used to analyse them.
History
The term matrix-assisted laser desorption ionization (MALDI) was coined in 1985 by Franz Hillenkamp, Michael Karas and their colleagues. These researchers found that the amino acid alanine could be ionized more easily if it was mixed with the amino acid tryptophan and irradiated with a pulsed 266 nm laser. The tryptophan was absorbing the laser energy and helping to ionize the non-absorbing alanine. Peptides up to the 2843 Da peptide melittin could be ionized when mixed with this kind of "matrix". The breakthrough for large molecule laser desorption ionization came in 1987 when Koichi Tanaka of Shimadzu Corporation and his co-workers used what they called the "ultra fine metal plus liquid matrix method" that combined 30 nm cobalt particles in glycerol with a 337 nm nitrogen la
|
https://en.wikipedia.org/wiki/Tree%20%28descriptive%20set%20theory%29
|
In descriptive set theory, a tree on a set X is a collection of finite sequences of elements of X such that every prefix of a sequence in the collection also belongs to the collection.
Definitions
Trees
The collection of all finite sequences of elements of a set X is denoted X^{<ω}.
With this notation, a tree is a nonempty subset T of X^{<ω}, such that if
⟨x_0, x_1, …, x_{n−1}⟩ is a sequence of length n in T, and if 0 ≤ m < n,
then the shortened sequence ⟨x_0, x_1, …, x_{m−1}⟩ also belongs to T. In particular, choosing m = 0 shows that the empty sequence belongs to every tree.
Branches and bodies
A branch through a tree T is an infinite sequence of elements of X, each of whose finite prefixes belongs to T. The set of all branches through T is denoted [T] and called the body of the tree T.
A tree that has no branches is called wellfounded; a tree with at least one branch is illfounded. By Kőnig's lemma, a tree on a finite set with an infinite number of sequences must necessarily be illfounded.
Terminal nodes
A finite sequence t that belongs to a tree T is called a terminal node if it is not a prefix of a longer sequence in T. Equivalently, t is terminal if there is no element x of X such that the extended sequence t⌢⟨x⟩ belongs to T. A tree that does not have any terminal nodes is called pruned.
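A minimal computational sketch of these definitions, assuming the tree is given as a finite set of Python tuples (so only finite fragments can be checked); the function names are invented for the illustration.

def is_tree(T):
    # A finite fragment of a tree: nonempty and closed under taking prefixes.
    return bool(T) and all(s[:k] in T for s in T for k in range(len(s)))

def terminal_nodes(T):
    # Sequences in T that are not proper prefixes of longer sequences in T.
    return {s for s in T if not any(len(t) > len(s) and t[:len(s)] == s for t in T)}

# The tree of all prefixes of the sequence (0, 1, 1):
T = {(), (0,), (0, 1), (0, 1, 1)}
assert is_tree(T)
assert terminal_nodes(T) == {(0, 1, 1)}   # so this tree is not pruned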
Relation to other types of trees
In graph theory, a rooted tree is a directed graph in which every vertex except for a special root vertex has exactly one outgoing edge, and in which the path formed by following these edges from any vertex eventually leads to the root vertex.
If T is a tree in the descriptive set theory sense, then it corresponds to a graph with one vertex for each sequence in T, and an outgoing edge from each nonempty sequence that connects it to the shorter sequence formed by removing its last element. This graph is a tree in the graph-theoretic sense. The root of the tree is the empty sequence.
In order theory, a different notion of a tree is used: an order-theoretic tree is a partially ordered set with one minimal element in which each element has a well-ordered set of pre
|
https://en.wikipedia.org/wiki/Internal%20set
|
In mathematical logic, in particular in model theory and nonstandard analysis, an internal set is a set that is a member of a model.
The concept of internal sets is a tool in formulating the transfer principle, which concerns the logical relation between the properties of the real numbers R, and the properties of a larger field denoted *R called the hyperreal numbers. The field *R includes, in particular, infinitesimal ("infinitely small") numbers, providing a rigorous mathematical justification for their use. Roughly speaking, the idea is to express analysis over R in a suitable language of mathematical logic, and then point out that this language applies equally well to *R. This turns out to be possible because at the set-theoretic level, the propositions in such a language are interpreted to apply only to internal sets rather than to all sets (note that the term "language" is used in a loose sense in the above).
Edward Nelson's internal set theory is an axiomatic approach to nonstandard analysis (see also Palmgren at constructive nonstandard analysis). Conventional infinitary accounts of nonstandard analysis also use the concept of internal sets.
Internal sets in the ultrapower construction
Relative to the ultrapower construction of the hyperreal numbers as equivalence classes of sequences of reals, an internal subset [A_n] of *R is one defined by a sequence ⟨A_n⟩ of real sets, where a hyperreal [x_n] is said to belong to the set [A_n] if and only if the set of indices n such that x_n ∈ A_n is a member of the ultrafilter used in the construction of *R.
More generally, an internal entity is a member of the natural extension of a real entity. Thus, every element of *R is internal; a subset of *R is internal if and only if it is a member of the natural extension of the power set of R; etc.
Internal subsets of the reals
Every internal subset of *R that is a subset of (the embedded copy of) R is necessarily finite (see Theorem 3.9.1 Goldblatt, 1998). In other words, every inter
|
https://en.wikipedia.org/wiki/Forward%20secrecy
|
In cryptography, forward secrecy (FS), also known as perfect forward secrecy (PFS), is a feature of specific key-agreement protocols that gives assurances that session keys will not be compromised even if long-term secrets used in the session key exchange are compromised. For HTTPS, the long-term secret is typically the private key of the server. Forward secrecy protects past sessions against future compromises of keys or passwords. By generating a unique session key for every session a user initiates, the compromise of a single session key will not affect any data other than that exchanged in the specific session protected by that particular key. This by itself is not sufficient for forward secrecy which additionally requires that a long-term secret compromise does not affect the security of past session keys.
Forward secrecy protects data on the transport layer of a network that uses common transport layer security implementations, such as OpenSSL, when their long-term secret keys are compromised, as with the Heartbleed security bug. If forward secrecy is used, encrypted communications and sessions recorded in the past cannot be retrieved and decrypted should long-term secret keys or passwords be compromised in the future, even if the adversary actively interfered, for example via a man-in-the-middle (MITM) attack.
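A minimal sketch of the idea using the pyca/cryptography X25519 primitives (an illustrative construction, not any particular protocol's handshake): each side generates a fresh ephemeral key pair, derives the session key from the shared secret, and then discards the ephemeral private keys, so a later compromise of long-term keys reveals nothing about this session.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates a fresh (ephemeral) key pair for this session only.
alice_eph = X25519PrivateKey.generate()
bob_eph = X25519PrivateKey.generate()

# Only the public halves are exchanged; in TLS they would additionally be
# signed with long-term keys for authentication, but the long-term keys
# never encrypt traffic and never protect the session key itself.
alice_shared = alice_eph.exchange(bob_eph.public_key())
bob_shared = bob_eph.exchange(alice_eph.public_key())
assert alice_shared == bob_shared

# Derive the session key, then discard the ephemeral private keys.
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"demo session").derive(alice_shared)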
The value of forward secrecy is that it protects past communication. This reduces the motivation for attackers to compromise keys. For instance, if an attacker learns a long-term key, but the compromise is detected and the long-term key is revoked and updated, relatively little information is leaked in a forward secure system.
The value of forward secrecy depends on the assumed capabilities of an adversary. Forward secrecy has value if an adversary is assumed to be able to obtain secret keys from a device (read access) but is either detected or unable to modify the way session keys are generated in the device (full compromise). In some cases an adversary
|
https://en.wikipedia.org/wiki/Chinese%20numerology
|
Some numbers are believed by some to be auspicious or lucky (吉利) or inauspicious or unlucky (不吉) based on the Chinese word that the number sounds similar to. The numbers 2, 3, 6, and 8 are generally considered to be lucky, while 4 is considered unlucky. These traditions are not unique to Chinese culture; other countries with a history of using Han characters have similar beliefs stemming from these concepts.
Zero
The number 0 (零) is the beginning of all things and is generally considered a good number, because it sounds like 良 (pinyin: liáng), which means 'good'.
One
The number 1 (一) is neither auspicious nor inauspicious. It is a number given to winners to indicate the first place. But it can also symbolize loneliness or being single. For example: November 11 is the Singles' Day in China, as the date has four '1's, which stand for singles.
Two
The number 2 (二, cardinal, or 兩, used with units) is most often considered a good number in Chinese culture. In Cantonese, 2 (二 or 兩) is homophonous with the characters for "easy" (易) and "bright" (亮), respectively. There is a Chinese saying: "good things come in pairs". It is common to repeat characters in product brand names; for example, the character 喜 can be repeated to form the character 囍.
24 in Cantonese sounds like "easy die" (易死).
28 in Cantonese sounds like "easy prosper" (易發).
Three
The number 3 (三) sounds like 生, which means "to live" or "life", so it is considered a good number. It is significant since it is one of three important stages in a person's life (birth, marriage, and death). On the other hand, the number 3 also sounds like 散, which means "to split", "to separate", "to part ways", or "to break up with", so it is a bad number too.
Four
While not traditionally considered an unlucky number, 4 has in recent times gained an association with bad luck because of its pronunciation, predominantly among Cantonese speakers.
The belief that the number 4 is unlucky originated
|
https://en.wikipedia.org/wiki/WPFO
|
WPFO (channel 23) is a television station licensed to Waterville, Maine, United States, serving the Portland area as an affiliate of the Fox network. It is owned by Cunningham Broadcasting, which maintains a local marketing agreement (LMA) with Sinclair Broadcast Group, owner of CBS affiliate WGME-TV (channel 13), for the provision of certain services. However, Sinclair effectively owns WPFO as the majority of Cunningham's stock is owned by the family of deceased group founder Julian Smith. The stations share studios on Northport Drive in the North Deering section of Portland, while WPFO's transmitter is located on Brown Hill west of Raymond.
History
The station began broadcasting on August 27, 1999, as WMPX-TV and was a Pax TV (now Ion Television) affiliate owned by Paxson Communications (now Ion Media Networks). In addition to Pax programming, WMPX carried a small amount of local programming and in 2001, the station began airing rebroadcasts of NBC affiliate WCSH (channel 6)'s 11 p.m. newscasts when NBC had a partnership with Pax. Paxson sold the station in December 2002 to Corporate Media Consultants Group who changed the call sign to the current WPFO. The new calls reflected an affiliation change to Fox, which took place on April 7, 2003, filling a gap created in fall 2001 when WPXT (channel 51) switched to The WB. In the interim, prime time and children's programming from the Fox network was provided exclusively on WFXT (which was owned by the network at the time) for those living on the New Hampshire side of the market, and on Foxnet for cable subscribers throughout the entire state of Maine; WCKD-LP served as a secondary affiliate of Fox during that time, but only carried the network's sports programming. In July 2007, WPFO debuted a new logo and updated website. The website's design was outsourced to Fox Interactive Media which also develops websites for Fox's owned-and-operated stations. WPFO switched website providers to Broadcast Interactive Media in Ma
|
https://en.wikipedia.org/wiki/American%20Society%20for%20Artificial%20Internal%20Organs
|
American Society for Artificial Internal Organs (ASAIO) is an organization of individuals and groups that are interested in artificial internal organs and their development.
It supports research into artificial internal organs and holds an annual meeting, which attracts industry, researchers and government officials. ASAIO's most heavily represented areas are nephrology, cardiopulmonary devices (artificial hearts, heart-lung machines) and biomaterials. It publishes a peer-reviewed publication, the ASAIO Journal, 10 times a year.
References
External links
American Society for Artificial Internal Organs home page
ASAIO Journal - home
Implants (medicine)
Medical associations based in the United States
Prosthetics
Medical and health organizations based in Florida
|
https://en.wikipedia.org/wiki/CPU%20core%20voltage
|
The CPU core voltage (VCORE) is the power supply voltage supplied to the CPU (which is a digital circuit), GPU, or other device containing a processing core. The amount of power a CPU uses, and thus the amount of heat it dissipates, is the product of this voltage and the current it draws.
In modern CPUs, which are CMOS circuits, the current is almost proportional to the clock speed, the CPU drawing almost no current between clock cycles. (See, however, subthreshold leakage.)
Power saving and clock speed
To conserve power and manage heat, many laptop and desktop processors have a power management feature that software (usually the operating system) can use to adjust the clock speed and core voltage dynamically.
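The benefit of lowering voltage together with frequency follows from the usual approximation for dynamic CMOS switching power, P ≈ C·V²·f; the capacitance, voltage, and frequency values in the sketch below are purely illustrative.

def dynamic_power(c_eff, v_core, f_clk):
    # Approximate dynamic switching power of a CMOS circuit: P = C * V^2 * f.
    return c_eff * v_core ** 2 * f_clk

# Illustrative values only: 1 nF of effective switched capacitance.
full_speed = dynamic_power(c_eff=1e-9, v_core=1.2, f_clk=3.0e9)   # ~4.3 W
scaled = dynamic_power(c_eff=1e-9, v_core=0.9, f_clk=1.5e9)       # ~1.2 W

# Halving the clock alone would halve power; lowering the core voltage as
# well gives a further, roughly quadratic, saving.
print(full_speed, scaled, scaled / full_speed)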
Often a voltage regulator module converts from 5 V or 12 V or some other voltage to whatever CPU core voltage is required by the CPU.
The trend is towards lower core voltages, which conserve power. This presents the CMOS designer with a challenge, because in CMOS the voltages swing only between ground and the supply voltage, so the source, gate, and drain terminals of the FETs have only the supply voltage or zero volts across them.
The long-channel MOSFET saturation formula, I_D = (μ C_ox / 2)(W/L)(V_GS − V_th)², says that the current supplied by the FET grows with the gate-source voltage in excess of a threshold voltage V_th, which depends on the geometrical shape of the FET's channel and gate and their physical properties, especially capacitance. To reduce V_th (necessary to reduce supply voltage and increase current) one must increase capacitance. However, the load being driven is another FET gate, so the current it requires is proportional to capacitance, which thus requires the designer to keep capacitance low.
The trend towards lower supply voltage therefore works against the goal of high clock speed.
Only improvements in photolithography and reduction in threshold voltage allow both to improve at once. On another note, the formula shown above is for long channel MOSFETs. With the area of the MOSFETs halving every
|
https://en.wikipedia.org/wiki/CAVNET
|
CAVNET was a secure military forum which became operational in April 2004. A part of SIPRNet, it allows fast access to knowledge acquired on the ground in combat.
It was used in the Iraq War, and helped US military forces counter the insurgents' adaptive tactics by providing data laterally and on a broader scale than traditional reports.
The data shared between patrols on "The Net" (as it is sometimes referred to by soldiers) has already played a crucial role in dismantling grenade traps hidden behind posters of Moqtada al-Sadr, which US soldiers often rip down.
References
Wide area networks
History of cryptography
United States government secrecy
|
https://en.wikipedia.org/wiki/RAYDAC
|
The RAYDAC (for Raytheon Digital Automatic Computer) was a one-of-a-kind computer built by Raytheon. It was started in 1949 and finished in 1953. It was installed at the Naval Air Missile Test Center at Point Mugu, California.
The RAYDAC used 5,200 vacuum tubes and 18,000 crystal diodes. It had 1,152 words of memory (36 bits per word), using delay-line memory, with an access time of up to 305 microseconds. Its addition time was 38 microseconds, multiplication time was 240 microseconds, and division time was 375 microseconds. (These times exclude the memory-access time.)
See also
List of vacuum-tube computers
External links
Erwin Tomash photo of General Front View From Right Side of RAYDAC Test Control Board (image)
Erwin Tomash drawing of RADAC Computer Control Room Showing Main Computer and Operator's Console (image)
References
One-of-a-kind computers
Vacuum tube computers
36-bit computers
Raytheon Company products
|
https://en.wikipedia.org/wiki/LDAP%20Data%20Interchange%20Format
|
The LDAP Data Interchange Format (LDIF) is a standard plain text data interchange format for representing Lightweight Directory Access Protocol (LDAP) directory content and update requests. LDIF conveys directory content as a set of records, one record for each object (or entry). It also represents update requests, such as Add, Modify, Delete, and Rename, as a set of records, one record for each update request.
LDIF was designed in the early 1990s by Tim Howes, Mark C. Smith, and Gordon Good while at the University of Michigan. LDIF was updated and extended in the late 1990s for use with Version 3 of LDAP. This later version of LDIF is called version 1 and is formally specified in RFC 2849, an IETF Standard Track RFC. RFC 2849 is authored by Gordon Good and was published in June 2000. It is currently a Proposed Standard.
A number of extensions to LDIF have been proposed over the years. One extension has been formally specified by the IETF and published. RFC 4525, authored by Kurt Zeilenga, extended LDIF to support the LDAP Modify-Increment extension. It is expected that additional extensions will be published by the IETF in the future.
Content record format
Each content record is represented as a group of attributes, with records separated from one another by blank lines. The individual attributes of a record are represented as single logical lines (represented as one or more physical lines via a line-folding mechanism), comprising "name: value" pairs. Value data that do not fit within a portable subset of ASCII characters are marked with '::' after the attribute name and encoded into ASCII using base64 encoding. The content record format is a subset of the Internet Directory Information type (RFC 2425).
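The sketch below builds one LDIF content record in Python to show the "name: value" layout and the '::' base64 convention for values outside the portable ASCII subset; the entry's DN and attributes are invented for illustration, and the ASCII test is deliberately simplified.

import base64

def ldif_attr(name, value):
    # Render one attribute line; base64-encode values outside printable ASCII
    # (a simplified test -- the real rules also cover leading spaces, ':' and '<').
    if all(32 <= ord(ch) < 127 for ch in value):
        return f"{name}: {value}"
    encoded = base64.b64encode(value.encode("utf-8")).decode("ascii")
    return f"{name}:: {encoded}"

record = "\n".join([
    ldif_attr("dn", "cn=Barbara Jensen,dc=example,dc=com"),
    ldif_attr("objectClass", "person"),
    ldif_attr("cn", "Barbara Jensen"),
    ldif_attr("sn", "Jensen"),
    ldif_attr("description", "Résumé on file"),   # non-ASCII -> '::' base64 line
])
print(record)   # records are separated from one another by blank lines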
Tools that employ LDIF
The OpenLDAP utilities include tools for exporting data from LDAP servers to LDIF content records, importing data from LDIF content records to LDAP servers, and applying LDIF change records to LDAP servers.
LDI
|
https://en.wikipedia.org/wiki/Rectification%20%28geometry%29
|
In Euclidean geometry, rectification, also known as critical truncation or complete-truncation, is the process of truncating a polytope by marking the midpoints of all its edges, and cutting off its vertices at those points. The resulting polytope will be bounded by vertex figure facets and the rectified facets of the original polytope.
A rectification operator is sometimes denoted by the letter r with a Schläfli symbol. For example, r{4,3} is the rectified cube, also called a cuboctahedron. A rectified cuboctahedron, rr{4,3}, is a rhombicuboctahedron.
Conway polyhedron notation uses a (for ambo) as this operator. In graph theory this operation creates a medial graph.
The rectification of any regular self-dual polyhedron or tiling will result in another regular polyhedron or tiling with a tiling order of 4, for example the tetrahedron becoming an octahedron. As a special case, a square tiling will turn into another square tiling under a rectification operation.
Example of rectification as a final truncation to an edge
Rectification is the final point of a truncation process. For example, on a cube this sequence shows four steps of a continuum of truncations between the regular and rectified form:
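As a concrete sketch of rectification, the following code computes the edge midpoints of the cube with vertices at (±1, ±1, ±1); the twelve midpoints obtained are the vertices of a cuboctahedron (the choice of this particular cube is just for illustration).

from itertools import combinations

# Vertices of the cube [-1, 1]^3.
cube_vertices = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]

def is_edge(p, q):
    # Two cube vertices share an edge when they differ in exactly one coordinate.
    return sum(a != b for a, b in zip(p, q)) == 1

# Rectification: mark the midpoint of every edge and cut the vertices off there.
midpoints = {tuple((a + b) / 2 for a, b in zip(p, q))
             for p, q in combinations(cube_vertices, 2) if is_edge(p, q)}

print(len(midpoints))   # 12 -- the vertices of a cuboctahedron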
Higher degree rectifications
Higher degree rectification can be performed on higher-dimensional regular polytopes. The highest degree of rectification creates the dual polytope. A rectification truncates edges to points. A birectification truncates faces to points. A trirectification truncates cells to points, and so on.
Example of birectification as a final truncation to a face
This sequence shows a birectified cube as the final sequence from a cube to the dual where the original faces are truncated down to a single point:
In polygons
The dual of a polygon is the same as its rectified form. New vertices are placed at the center of the edges of the original polygon.
In polyhedra and plane tilings
Each platonic solid an
|
https://en.wikipedia.org/wiki/Fungiculture
|
Fungiculture is the cultivation of fungi such as mushrooms. Cultivating fungi can yield foods (which include mostly mushrooms), medicine, construction materials and other products. A mushroom farm is involved in the business of growing fungi.
The word is also commonly used to refer to the practice of cultivation of fungi by animals such as leafcutter ants, termites, ambrosia beetles, and marsh periwinkles.
Overview
Mushrooms are fungi and require different conditions than plants for optimal growth. Plants develop through photosynthesis, a process that converts atmospheric carbon dioxide into carbohydrates, especially cellulose. While sunlight provides an energy source for plants, mushrooms derive all of their energy and growth materials from their growth medium, through biochemical decomposition processes. This does not mean that light is an irrelevant requirement, since some fungi use light as a signal for fruiting. However, all the materials for growth must already be present in the growth medium. Mushrooms grow well at relative humidity levels of around 95–100%, and substrate moisture levels of 50 to 75%.
Instead of seeds, mushrooms reproduce through spores. Spores can be contaminated with airborne microorganisms, which will interfere with mushroom growth and prevent a healthy crop.
Mycelium, or actively growing mushroom culture, is placed on a substrate—usually sterilized grains such as rye or millet—and induced to grow into those grains. This is called inoculation. Inoculated grains (or plugs) are referred to as spawn. Spores are another inoculation option, but are less developed than established mycelium. Since they are also contaminated easily, they are only manipulated in laboratory conditions with a laminar flow cabinet.
Techniques
All mushroom growing techniques require the correct combination of humidity, temperature, substrate (growth medium) and inoculum (spawn or starter culture). Wild harvests, outdoor log inoculation and indoor trays all provi
|
https://en.wikipedia.org/wiki/Heliometer
|
A heliometer (from Greek ἥλιος hḗlios "sun" and μέτρον métron "measure") is an instrument originally designed for measuring the variation of the Sun's diameter at different seasons of the year, but applied now to the modern form of the instrument which is capable of much wider use.
Description
The basic concept is to introduce a split element into a telescope's optical path so as to produce a double image. If one element is moved using a screw micrometer, precise angle measurements can be made. The simplest arrangement is to split the object lens in half, with one half fixed and the other attached to the micrometer screw and slid along the cut diameter. To measure the diameter of the sun, for example, the micrometer is first adjusted so that the two images of the solar disk coincide (the "zero" position where the split elements form essentially a single element). The micrometer is then adjusted so that diametrically opposite sides of the two images of the solar disk just touch each other. The difference in the two micrometer readings so obtained is the (angular) diameter of the sun. Similarly, a precise measurement of the apparent separation between two nearby stars, A and B, is made by first superimposing the two images of the stars and then adjusting the double image so that star A in one image coincides with star B in the other. The difference in the two micrometer readings so obtained is the apparent separation or angular distance between the two stars.
History
The Syrian Arab astronomer Mu'ayyad al-Din al-Urdi, in his book, described a device called "the instrument with the two holes," which he used to measure and observe the apparent diameters of the sun and the moon.
The first application of the divided object-glass and the employment of double images in astronomical measures is due to Servington Savery of Shilstone in 1743. Pierre Bouguer, in 1748, originated the true conception of measurement by double image without the auxiliary aid of a filar micrometer, that is by c
|
https://en.wikipedia.org/wiki/Channel%20spacing
|
Channel spacing, also known as bandwidth, is a term used in radio frequency planning. It describes the frequency difference between adjacent allocations in a frequency plan. Channels for mediumwave radio stations, for example, are allocated in internationally agreed steps of 9 or 10 kHz: 10 kHz in ITU Region 2 (the Americas), and 9 kHz elsewhere in the world.
References
Broadcast engineering
|
https://en.wikipedia.org/wiki/Headphone%20amplifier
|
A headphone amplifier is a low-powered audio amplifier designed particularly to drive headphones worn on or in the ears, instead of loudspeakers in speaker enclosures. Most commonly, headphone amplifiers are found embedded in electronic devices that have a headphone jack, such as integrated amplifiers, portable music players (e.g., iPods), and televisions. However, standalone units are used, especially in audiophile markets and in professional audio applications, such as music studios. Headphone amplifiers are available in consumer-grade models used by hi-fi enthusiasts and audiophiles and professional audio models, which are used in recording studios.
Consumer models
Consumer headphone amplifiers are commercially available separate devices, sold to a niche audiophile market of hi-fi enthusiasts. These devices allow for higher possible volumes and superior current capacity compared to the smaller, less expensive headphone amplifiers that are used in most audio players. In the case of the extremely high-end electrostatic headphones, such as the Stax SR-007, a specialized electrostatic headphone amplifier or transformer step-up box and power amplifier is required to use the headphones, as only a dedicated electrostatic headphone amplifier or transformer can provide the voltage levels necessary to drive the headphones. Most headphone amplifiers provide power between 10 mW and 2 W depending on the specific headphone being used and the design of the amplifier. Certain high power designs can provide up to 6W of power into low impedance loads, although the benefit of such power output with headphones is unclear, as the few orthodynamic headphones that have sufficiently low sensitivities to function with such power levels will reach dangerously high volume levels with such amplifiers.
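The voltage and current an amplifier must deliver follow directly from Ohm's law, P = V²/R; the impedances and power level in the sketch below are illustrative examples, not figures from any particular product.

import math

def drive_requirements(power_w, impedance_ohm):
    # RMS voltage and current needed to deliver a given power into a load.
    v_rms = math.sqrt(power_w * impedance_ohm)   # from P = V^2 / R
    i_rms = v_rms / impedance_ohm                # from I = V / R
    return v_rms, i_rms

# Illustrative case: 10 mW into three common headphone impedances.
for z in (32, 300, 600):
    v, i = drive_requirements(0.010, z)
    print(f"{z:>3} ohm: {v:.2f} V rms, {i * 1000:.1f} mA rms")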
Effectively, a headphone amplifier is a small power amplifier that can be connected to a standard headphone jack or the line output of an audio source. Electrically, a headphone amplifier
|
https://en.wikipedia.org/wiki/Invention%20of%20the%20telephone
|
The invention of the telephone was the culmination of work done by more than one individual, and led to an array of lawsuits relating to the patent claims of several individuals and numerous companies.
Early development
The concept of the telephone dates back to the string telephone or lover's telephone that has been known for centuries, comprising two diaphragms connected by a taut string or wire. Sound waves are carried as mechanical vibrations along the string or wire from one diaphragm to the other. The classic example is the tin can telephone, a children's toy made by connecting the two ends of a string to the bottoms of two metal cans, paper cups or similar items. The essential idea of this toy was that a diaphragm can collect voice sounds, and a string or wire can carry them for reproduction at a distance. One precursor to the development of the electromagnetic telephone originated in 1833 when Carl Friedrich Gauss and Wilhelm Eduard Weber invented an electromagnetic device for the transmission of telegraphic signals at the University of Göttingen, in Lower Saxony, helping to create the fundamental basis for the technology that was later used in similar telecommunication devices. Gauss's and Weber's invention is purported to be the world's first electromagnetic telegraph.
Charles Grafton Page
In 1840, American Charles Grafton Page passed an electric current through a coil of wire placed between the poles of a horseshoe magnet. He observed that connecting and disconnecting the current caused a ringing sound in the magnet. He called this effect "galvanic music".
Innocenzo Manzetti
Innocenzo Manzetti considered the idea of a telephone as early as 1844, and may have made one in 1864, as an enhancement to an automaton built by him in 1849.
Charles Bourseul was a French telegraph engineer who proposed (but did not build) the first design of a "make-and-break" telephone in 1854. That is about the same time that Meucci later claimed to have created his first attempt at the
|
https://en.wikipedia.org/wiki/Buffer%20credits
|
Buffer credits, also called buffer-to-buffer credits (BBC) are used as a flow control method by Fibre Channel technology and represent the number of frames a port can store.
Each time a port transmits a frame, that port's BB Credit is decremented by one; for each R_RDY received, that port's BB Credit is incremented by one. If the BB Credit is zero, the corresponding node cannot transmit until an R_RDY is received back.
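A toy model of the credit mechanism just described (the class and method names are invented for the sketch): the sender's credit counter is decremented per transmitted frame, replenished by each R_RDY, and transmission stalls at zero.

class BBPort:
    # Minimal model of buffer-to-buffer credit flow control on one port.

    def __init__(self, bb_credit):
        self.credit = bb_credit          # frames the peer can still buffer

    def can_transmit(self):
        return self.credit > 0

    def transmit_frame(self):
        if not self.can_transmit():
            raise RuntimeError("credit exhausted: wait for R_RDY")
        self.credit -= 1                 # one receive buffer is now in use

    def receive_r_rdy(self):
        self.credit += 1                 # the peer has freed a buffer

port = BBPort(bb_credit=2)
port.transmit_frame()
port.transmit_frame()
assert not port.can_transmit()           # stalled until an R_RDY arrives
port.receive_r_rdy()
port.transmit_frame()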
The benefits of a large data buffer are particularly evident in long-distance applications, when operating at higher data rates (2 Gbit/s, 4 Gbit/s), or in systems with a heavily loaded PCI bus.
See also
Fibre Channel
Host adapter
Fibre Channel
|
https://en.wikipedia.org/wiki/EtherChannel
|
EtherChannel is a port link aggregation technology or port-channel architecture used primarily on Cisco switches. It allows grouping of several physical Ethernet links to create one logical Ethernet link for the purpose of providing fault-tolerance and high-speed links between switches, routers and servers. An EtherChannel can be created from between two and eight active Fast, Gigabit or 10-Gigabit Ethernet ports, with an additional one to eight inactive (failover) ports which become active as the other active ports fail. EtherChannel is primarily used in the backbone network, but can also be used to connect end user machines.
EtherChannel technology was invented by Kalpana in the early 1990s. Kalpana was acquired by Cisco Systems in 1994. In 2000, the IEEE passed 802.3ad, which is an open standard version of EtherChannel.
Benefits
Using an EtherChannel has numerous advantages, and probably the most desirable aspect is the bandwidth. Using the maximum of 8 active ports, a total bandwidth of 800 Mbit/s, 8 Gbit/s or 80 Gbit/s is possible depending on port speed. This assumes a mixture of traffic flows, as the aggregate speed is not available to any single flow. It can be used with Ethernet running on twisted pair wiring, single-mode and multimode fiber.
Because EtherChannel takes advantage of existing wiring, it is very scalable. It can be used at all levels of the network to create higher bandwidth links as the traffic needs of the network increase. All Cisco switches have the ability to support EtherChannel.
When an EtherChannel is configured all adapters that are part of the channel share the same Layer 2 (MAC) address. This makes the EtherChannel transparent to network applications and users because they only see the one logical connection; they have no knowledge of the individual links.
EtherChannel aggregates the traffic across all the available active ports in the channel. The port is selected using a Cisco-proprietary hash algorithm, based on source o
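Cisco's actual hash is proprietary, but the general idea can be sketched as a deterministic function of frame fields that pins each flow to one member link so frames of a flow stay in order; the field choice and hash below are assumptions for illustration.

import hashlib

def select_member_link(src_mac, dst_mac, active_links):
    # Illustrative (non-Cisco) hash: pin each src/dst pair to one member link.
    digest = hashlib.sha256(f"{src_mac}-{dst_mac}".encode()).digest()
    return active_links[digest[0] % len(active_links)]

links = ["Gi1/0/1", "Gi1/0/2", "Gi1/0/3", "Gi1/0/4"]
# All frames between the same pair of hosts take the same physical link,
# which preserves frame ordering within a flow.
print(select_member_link("00:11:22:33:44:55", "66:77:88:99:aa:bb", links))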
|
https://en.wikipedia.org/wiki/Decomposition%20%28computer%20science%29
|
Decomposition in computer science, also known as factoring, is breaking a complex problem or system into parts that are easier to conceive, understand, program, and maintain.
Overview
There are different types of decomposition defined in computer sciences:
In structured programming, algorithmic decomposition breaks a process down into well-defined steps.
Structured analysis breaks down a software system from the system context level to system functions and data entities, as described by Tom DeMarco (Structured Analysis and System Specification, Yourdon, 1978).
Object-oriented decomposition, on the other hand, breaks a large system down into progressively smaller classes or objects that are responsible for some part of the problem domain.
According to Booch, algorithmic decomposition is a necessary part of object-oriented analysis and design, but object-oriented systems start with and emphasize decomposition into objects.
More generally, functional decomposition in computer science is a technique for mastering the complexity of the function of a model. A functional model of a system is thereby replaced by a series of functional models of subsystems.
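A small illustration of functional decomposition (the task and function names are invented): a word-count report broken into functions that each own one well-defined step, with a top-level function that only composes them.

def read_text(path):
    # Subproblem 1: obtain the raw text.
    with open(path, encoding="utf-8") as f:
        return f.read()

def count_words(text):
    # Subproblem 2: compute the statistic of interest.
    return len(text.split())

def format_report(path, n):
    # Subproblem 3: present the result.
    return f"{path}: {n} words"

def word_count_report(path):
    # The top-level function only composes the well-defined steps.
    return format_report(path, count_words(read_text(path)))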
Decomposition topics
Decomposition paradigm
A decomposition paradigm in computer programming is a strategy for organizing a program as a number of parts, and it usually implies a specific way to organize a program text. Usually the aim of using a decomposition paradigm is to optimize some metric related to program complexity, for example the modularity of the program or its maintainability.
Most decomposition paradigms suggest breaking down a program into parts so as to minimize the static dependencies among those parts, and to maximize the cohesiveness of each part. Some popular decomposition paradigms are the procedural, modules, abstract data type and object oriented ones.
The concept of decomposition paradigm is entirely independent and different from that
|
https://en.wikipedia.org/wiki/Cisco%20Inter-Switch%20Link
|
Cisco Inter-Switch Link (ISL) is a Cisco Systems proprietary protocol that maintains VLAN information in Ethernet frames as traffic flows between switches and routers, or switches and switches. ISL is Cisco's VLAN encapsulation protocol and is supported only on some Cisco equipment over the Fast and Gigabit Ethernet links. It is offered as an alternative to the IEEE 802.1Q standard, a widely used VLAN tagging protocol, although the use of ISL for new sites is deprecated by Cisco.
With ISL, an Ethernet frame is encapsulated with a header that transports VLAN IDs between switches and routers. With IEEE 802.1Q the tag is internal. This is a key advantage for IEEE 802.1Q as it means tagged frames can be sent over standard Ethernet links.
ISL does add overhead to the frame as a 26-byte header containing a 10-bit VLAN ID. In addition, a 4-byte CRC is appended to the end of each frame. This CRC is in addition to any frame checking that the Ethernet frame requires. The fields in an ISL header identify the frame as belonging to a particular VLAN.
A VLAN ID is added only if the frame is forwarded out a port configured as a trunk link. If the frame is to be forwarded out a port configured as an access link, the ISL encapsulation is removed.
The size of an Ethernet encapsulated ISL frame can be expected to start from 94 bytes and increase up to 1548 bytes because of the overhead (additional fields) the protocol creates via encapsulation. ISL adds a 26-byte header (whose 15-bit VLAN field carries the 10-bit VLAN ID) and a 4-byte CRC trailer to the frame. ISL functions at the data-link layer of the OSI model.
Another related Cisco protocol, Dynamic Inter-Switch Link Protocol (DISL), simplifies the creation of an ISL trunk from two interconnected Fast Ethernet devices. Fast EtherChannel technology enables aggregation of two full-duplex Fast Ethernet links for high-capacity backbone connections. DISL minimizes VLAN trunk configuration procedures because only one end of a link needs to be
|
https://en.wikipedia.org/wiki/Parafoil
|
A parafoil is a nonrigid (textile) airfoil with an aerodynamic cell structure which is inflated by the wind. Ram-air inflation forces the parafoil into a classic wing cross-section. Parafoils are most commonly constructed out of ripstop nylon.
The device was developed in 1964 by Domina Jalbert (1904–1991). Jalbert had a history of designing kites and was involved in the development of hybrid balloon-kite aerial platforms for carrying scientific instruments. He envisaged the parafoil would be used to suspend an aerial platform or for the recovery of space equipment. A patent was granted in 1966.
Deployment shock prevented the parafoil's immediate acceptance as a parachute. It was not until the addition of a drag canopy on the riser lines (known as a "slider") which slowed their spread that the parafoil became a suitable parachute. Compared to a simple round canopy, a parafoil parachute has greater steerability, will glide further and allows greater control of the rate of descent; the parachute format is mechanically a glider of the free-flight kite type and such aspects spawned paraglider use.
The air flow into the parafoil is coming more from below than the flight path might suggest, so the frontmost ropes tow against the airflow. When gliding, the angle of attack is lowered and the airflow meets the parafoil head on. This makes it difficult to achieve an optimum gliding angle without the parafoil deflating.
In 1984 Jalbert was awarded the Fédération Aéronautique Internationale (FAI) Gold Parachuting Medal for inventing the parafoil.
Parafoils see wide use in a variety of windsports such as kite flying, powered parachutes, paragliding, kitesurfing, speed flying, wingsuit flying and skydiving. The world's largest kite is a parafoil-variant.
Today, SpaceX uses steerable parafoils to recover the fairings of its Falcon 9 rocket on two ships, GO Ms. Tree and GO Ms. Chief.
Patents
Multi-cell wing type aerial device, filed October 1964, issued November 1966
See
|
https://en.wikipedia.org/wiki/Resultant
|
In mathematics, the resultant of two polynomials is a polynomial expression of their coefficients that is equal to zero if and only if the polynomials have a common root (possibly in a field extension), or, equivalently, a common factor (over their field of coefficients). In some older texts, the resultant is also called the eliminant.
The resultant is widely used in number theory, either directly or through the discriminant, which is essentially the resultant of a polynomial and its derivative. The resultant of two polynomials with rational or polynomial coefficients may be computed efficiently on a computer. It is a basic tool of computer algebra, and is a built-in function of most computer algebra systems. It is used, among others, for cylindrical algebraic decomposition, integration of rational functions and drawing of curves defined by a bivariate polynomial equation.
The resultant of n homogeneous polynomials in n variables (also called multivariate resultant, or Macaulay's resultant for distinguishing it from the usual resultant) is a generalization, introduced by Macaulay, of the usual resultant. It is, with Gröbner bases, one of the main tools of elimination theory.
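As a computational sketch, the resultant can be evaluated numerically as the determinant of the Sylvester matrix; the implementation below is written from scratch for illustration (a computer algebra system would normally be used), with coefficients given in descending order of degree.

import numpy as np

def resultant(a, b):
    # Resultant via the Sylvester matrix determinant.
    # a, b: coefficient lists in descending powers, e.g. x^2 - 1 -> [1, 0, -1].
    m, n = len(a) - 1, len(b) - 1            # degrees of the two polynomials
    s = np.zeros((m + n, m + n))
    for i in range(n):                       # n shifted copies of a's coefficients
        s[i, i:i + m + 1] = a
    for i in range(m):                       # m shifted copies of b's coefficients
        s[n + i, i:i + n + 1] = b
    return np.linalg.det(s)

print(resultant([1, 0, -1], [1, -1]))   # ~0: x^2 - 1 and x - 1 share the root x = 1
print(resultant([1, 0, -1], [1, -2]))   # ~3: no common root, so the resultant is nonzero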
Notation
The resultant of two univariate polynomials A and B is commonly denoted res(A, B) or Res(A, B).
In many applications of the resultant, the polynomials depend on several indeterminates and may be considered as univariate polynomials in one of their indeterminates, with polynomials in the other indeterminates as coefficients. In this case, the indeterminate that is selected for defining and computing the resultant is indicated as a subscript: res_x(A, B) or Res_x(A, B).
The degrees of the polynomials are used in the definition of the resultant. However, a polynomial of degree d may also be considered as a polynomial of higher degree where the leading coefficients are zero. If such a higher degree is used for the resultant, it is usually indicated as a subscript or a superscript, such as res_{d,e}(A, B) or res^{d,e}(A, B).
Definition
The resultant of two u
|
https://en.wikipedia.org/wiki/California%20Senate%20Bill%201386%20%282002%29
|
California S.B. 1386 was a bill passed by the California legislature that amended the California law regulating the privacy of personal information: civil codes 1798.29, 1798.82 and 1798.84. An early example of the many security breach notification laws later enacted in the U.S. and internationally, it was introduced by California State Senator Steve Peace on February 12, 2002, and became operative on July 1, 2003.
Sections
Enactment of a requirement for notification to any resident of California whose unencrypted personal information was, or is reasonably believed to have been, acquired by an unauthorized person. This requires an agency, person or business that conducts business in California and owns or licenses computerized 'personal information' to disclose any breach of security (to any resident whose unencrypted data is believed to have been disclosed).
The bill mandates various mechanisms and procedures with respect to many aspects of this scenario, subject also to other defined provisions.
Any agency that owns or licenses computerized data that includes personal information shall disclose any breach of the security of the system following discovery or notification of the breach in the security of the data to any resident of California whose unencrypted personal information was, or is reasonably believed to have been, acquired by an unauthorized person. An out-of-state corporation that has personal information relating to a California resident would fall under this statute. A question on minimum contacts would then ensue as to whether an action may be brought in California to enforce the California resident's rights under the statute.
Corporations with no physical locations in California are not subject to California law. SB 1386 no more impacts a Delaware corporation with no presence in California than do California laws regarding vehicle emissions. That SB 1386 would affect an out-of-state corporation is based on the notion of 'quasi in rem' jurisdiction, a no
|
https://en.wikipedia.org/wiki/Apparent%20death
|
Apparent death is a behavior in which animals take on the appearance of being dead. It is an immobile state most often triggered by a predatory attack and can be found in a wide range of animals from insects and crustaceans to mammals, birds, reptiles, amphibians, and fish. Apparent death is separate from the freezing behavior seen in some animals.
Apparent death is a form of animal deception considered to be an anti-predator strategy, but it can also be used as a form of aggressive mimicry. When induced by humans, the state is sometimes colloquially known as animal hypnosis. The earliest written record of "animal hypnosis" dates back to the year 1646 in a report by Athanasius Kircher, in which he subdued chickens.
Description
Tonic immobility (also known as the act of feigning death, or exhibiting thanatosis) is a behaviour in which some animals become apparently temporarily paralysed and unresponsive to external stimuli. Tonic immobility is most generally considered to be an anti-predator behavior because it occurs most often in response to an extreme threat such as being captured by a (perceived) predator. Some animals use it to attract prey or facilitate reproduction. For example, in sharks exhibiting the behaviour, some scientists relate it to mating, arguing that biting by the male immobilizes the female and thus facilitates mating.
Despite appearances, some animals remain conscious throughout tonic immobility. Evidence for this includes the occasional responsive movement, scanning of the environment and animals in tonic immobility often taking advantage of escape opportunities. Tonic immobility is preferred in the literature because it has neutral connotations compared to 'thanatosis' which has a strong association with death.
Difference from freezing
Tonic immobility is different from freezing behavior in animals. A deer in headlights and an opossum "playing dead" are common examples of an animal freezing and playing dead, respectively. Freezing occur
|
https://en.wikipedia.org/wiki/IP%20%28complexity%29
|
In computational complexity theory, the class IP (interactive proof) is the class of problems solvable by an interactive proof system. It is equal to the class PSPACE. The result was established in a series of papers: the first by Lund, Karloff, Fortnow, and Nisan showed that co-NP had multiple prover interactive proofs; and the second, by Shamir, employed their technique to establish that IP=PSPACE. The result is a famous example where the proof does not relativize.
The concept of an interactive proof system was first introduced by Shafi Goldwasser, Silvio Micali, and Charles Rackoff in 1985. An interactive proof system consists of two machines, a prover, P, which presents a proof that a given string n is a member of some language, and a verifier, V, that checks that the presented proof is correct. The prover is assumed to have unlimited computation and storage, while the verifier is a probabilistic polynomial-time machine with access to a random bit string whose length is polynomial in the size of n. These two machines exchange a polynomial number, p(n), of messages and once the interaction is completed, the verifier must decide whether or not n is in the language, with only a 1/3 chance of error. (So any language in BPP is in IP, since then the verifier could simply ignore the prover and make the decision on its own.)
Definition
A language L belongs to IP if there exist a verifier V and a prover P such that, for all provers Q and all strings w:
if w ∈ L, then the interaction of V with P accepts w with probability at least 2/3, and
if w ∉ L, then the interaction of V with any prover Q accepts w with probability at most 1/3.
The Arthur–Merlin protocol, introduced by László Babai, is similar in nature, except that the number of rounds of interaction is bounded by a constant rather than a polynomial.
Goldwasser et al. have shown that public-coin protocols, where the random numbers used by the verifier are provided to the prover along with the challenges, are no less powerful than private-coin protocols. At most two additional rounds of interaction are required to replicate the effect of a private-coin protocol. The opposite inclusion is straightforward, because the verifier can always s
|
https://en.wikipedia.org/wiki/Harvard%20Mark%20III
|
The Harvard Mark III, also known as ADEC (for Aiken Dahlgren Electronic Calculator) was an early computer that was partially electronic and partially electromechanical. It was built at Harvard University under the supervision of Howard Aiken for use at Naval Surface Warfare Center Dahlgren Division.
Technical overview
The Mark III processed numbers of 16 decimal digits (plus sign), each digit encoded with four bits, though using a form of encoding that is different to conventional binary-coded decimal today. Numbers were read and processed serially, meaning one decimal digit at a time, but the four bits for the digit were read in parallel. The instruction length, however, was 38 bits, read in parallel.
It used 5,000 vacuum tubes and 1,500 crystal diodes. It weighed . It used magnetic drum memory of 4,350 words. Its addition time was 4,400 microseconds and the multiplication time was 13,200 microseconds (times include memory access time). Aiken boasted that the Mark III was the fastest electronic computer in the world.
The Mark III used nine magnetic drums (one of the first computers to do so). One drum could contain 4,000 instructions and had an access time of 4,400 microseconds; thus it was a stored-program computer. The arithmetic unit could access two other drums – one contained 150 words of constants and the other contained 200 words of variables. Both of these drums also had an access time of 4,400 microseconds. This separation of data and instructions is now sometimes referred to as the Harvard architecture although that term was not coined until the 1970s (in the context of microcontrollers). There were six other drums that held a total of 4,000 words of data, but the arithmetic unit couldn't access these drums directly. Data had to be transferred between these drums and the drum the arithmetic unit could access via registers implemented by electromechanical relays. This was a bottleneck in the computer and made the access time to data on these drums long
|
https://en.wikipedia.org/wiki/Dentinogenesis
|
Dentinogenesis is the formation of dentin, a substance that forms the majority of teeth. Dentinogenesis is performed by odontoblasts, which are a special type of biological cell on the outer wall of dental pulps, and it begins at the late bell stage of a tooth development. The different stages of dentin formation after differentiation of the cell result in different types of dentin: mantle dentin, primary dentin, secondary dentin, and tertiary dentin.
Odontoblast differentiation
Odontoblasts differentiate from cells of the dental papilla. This differentiation is triggered by signaling molecules and growth factors from the inner enamel epithelium (IEE).
Formation of mantle dentin
The odontoblasts begin secreting an organic matrix around the area directly adjacent to the IEE, closest to the area of the future cusp of a tooth. The organic matrix contains collagen fibers with large diameters (0.1-0.2 μm). The odontoblasts begin to move toward the center of the tooth, forming an extension called the odontoblast process. Thus, dentin formation proceeds toward the inside of the tooth. The odontoblast process causes the secretion of hydroxyapatite crystals and mineralization of the matrix (mineralization occurs due to matrix vesicles). This area of mineralization is known as mantle dentin and is a layer usually about 20-150 μm thick.
Formation of primary dentin
Whereas mantle dentin forms from the preexisting ground substance of the dental papilla, primary dentin forms through a different process. Odontoblasts increase in size, eliminating the availability of any extracellular resources to contribute to an organic matrix for mineralization. Additionally, the larger odontoblasts cause collagen to be secreted in smaller amounts, which results in more tightly arranged, heterogeneous nucleation that is used for mineralization. Other materials (such as lipids, phosphoproteins, and phospholipids) are also secreted. There is some dispute about the control of mineralization during de
|
https://en.wikipedia.org/wiki/Unified%20Science
|
"Unified Science" can refer to any of three related strands in contemporary thought.
Belief in the unity of science was a central tenet of logical positivism. Different logical positivists construed this doctrine in several different ways, e.g. as a reductionist thesis, that the objects investigated by the special sciences reduce to the objects of a common, putatively more basic domain of science, usually thought to be physics; as the thesis that all of the theories and results of the various sciences can or ought to be expressed in a common language or "universal slang"; or as the thesis that all the special sciences share a common method.
The writings of Edward Haskell and a few associates, seeking to rework science into a single discipline employing a common artificial language. This work culminated in the 1972 publication of Full Circle: The Moral Force of Unified Science. The vast part of the work of Haskell and his contemporaries remains unpublished, however. Timothy Wilken and Anthony Judge have recently revived and extended the insights of Haskell and his coworkers.
Unified Science has been a consistent thread since the 1940s in Howard T. Odum's systems ecology and the associated Emergy Synthesis, modeling the "ecosystem": the geochemical, biochemical, and thermodynamic processes of the lithosphere and biosphere. Modeling such earthly processes in this manner requires a science uniting geology, physics, biology, and chemistry (H.T.Odum 1995). With this in mind, Odum developed a common language of science based on electronic schematics, with applications to ecological and economic systems (H.T.Odum 1994).
See also
Consilience — the unification of knowledge, e.g. science and the humanities
Tree of knowledge system
References
Odum, H.T. 1994. Ecological and General Systems: An Introduction to Systems Ecology. Colorado University Press, Colorado.
Odum, H.T. 1995. 'Energy Systems and the Unification of Science', in Hall, C.S. (ed.) Maximum Power: The
|
https://en.wikipedia.org/wiki/MakeIndex
|
MakeIndex is a computer program which provides a sorted index from unsorted raw data. MakeIndex can process raw data output by various programs, however, it is generally used with LaTeX and troff.
MakeIndex was written around the year 1986 by Pehong Chen in the C programming language and is free software. Six pages of documentation titled "MakeIndex: An Index Processor for LaTeX" by Leslie Lamport are available on the web and dated "17 February 1987."
See also
xindy
References
Wikibooks: LaTeX/Indexing
Pehong Chen and Michael A. Harrison: Index preparation and processing (distributed with MakeIndex)
Leslie Lamport: MakeIndex: an index processor for LaTeX
Frank Mittelbach et al., The LaTeX Companion, Addison-Wesley Professional, 2nd edition, 2004,
Free software programmed in C
Free TeX software
Troff
Index (publishing)
|
https://en.wikipedia.org/wiki/Cross-covariance
|
In probability and statistics, given two stochastic processes {X(t)} and {Y(t)}, the cross-covariance is a function that gives the covariance of one process with the other at pairs of time points. With the usual notation E for the expectation operator, if the processes have the mean functions μ_X(t) and μ_Y(t), then the cross-covariance is given by K_XY(t_1, t_2) = cov(X(t_1), Y(t_2)) = E[(X(t_1) − μ_X(t_1))(Y(t_2) − μ_Y(t_2))] = E[X(t_1) Y(t_2)] − μ_X(t_1) μ_Y(t_2).
Cross-covariance is related to the more commonly used cross-correlation of the processes in question.
In the case of two random vectors X = (X_1, …, X_p) and Y = (Y_1, …, Y_q), the cross-covariance would be a p × q matrix K_XY (often denoted cov(X, Y)) with entries K_XY(i, j) = cov(X_i, Y_j). Thus the term cross-covariance is used in order to distinguish this concept from the covariance of a random vector X, which is understood to be the matrix of covariances between the scalar components of X itself.
In signal processing, the cross-covariance is often called cross-correlation and is a measure of similarity of two signals, commonly used to find features in an unknown signal by comparing it to a known one. It is a function of the relative time between the signals, is sometimes called the sliding dot product, and has applications in pattern recognition and cryptanalysis.
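In the discrete, finite-sample setting this signal-processing usage can be sketched with NumPy as an illustrative estimator (not any library's canonical definition): subtract the sample means, correlate at all lags, and read off the lag of the peak.

import numpy as np

def cross_covariance(x, y):
    # Sample cross-covariance of two equal-length signals at every lag.
    x = np.asarray(x, dtype=float) - np.mean(x)
    y = np.asarray(y, dtype=float) - np.mean(y)
    return np.correlate(x, y, mode="full") / len(x)   # lags -(N-1) .. +(N-1)

x = np.sin(np.linspace(0, 4 * np.pi, 200))
y = np.roll(x, 10)                        # y is x delayed by 10 samples
k = cross_covariance(x, y)
lag = np.argmax(k) - (len(x) - 1)         # convert peak index to a lag
print(lag)   # -10: the peak recovers the shift (sign follows numpy's convention)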
Cross-covariance of random vectors
Cross-covariance of stochastic processes
The definition of cross-covariance of random vectors may be generalized to stochastic processes as follows:
Definition
Let {X(t)} and {Y(t)} denote stochastic processes. Then the cross-covariance function of the processes is defined by:
K_XY(t_1, t_2) = cov(X(t_1), Y(t_2)) = E[(X(t_1) − μ_X(t_1))(Y(t_2) − μ_Y(t_2))], where μ_X(t) = E[X(t)] and μ_Y(t) = E[Y(t)].
If the processes are complex-valued stochastic processes, the second factor needs to be complex conjugated: K_XY(t_1, t_2) = E[(X(t_1) − μ_X(t_1)) conj(Y(t_2) − μ_Y(t_2))].
Definition for jointly WSS processes
If $\{X_t\}$ and $\{Y_t\}$ are jointly wide-sense stationary, then the following are true:

$\mu_X(t_1) = \mu_X(t_2) \triangleq \mu_X$ for all $t_1, t_2$,

$\mu_Y(t_1) = \mu_Y(t_2) \triangleq \mu_Y$ for all $t_1, t_2$

and

$K_{XY}(t_1, t_2) = K_{XY}(t_2 - t_1, 0)$ for all $t_1, t_2$.

By setting $\tau = t_2 - t_1$ (the time lag, or the amount of time by which the signal has been shifted), we may define

$K_{XY}(\tau) = K_{XY}(t_2 - t_1) \triangleq K_{XY}(t_1, t_2)$.

The cross-covariance function of two jointly WSS processes is therefore given by:

$K_{XY}(\tau) = \operatorname{E}[(X_t - \mu_X)(Y_{t+\tau} - \mu_Y)]$

which is equivalent to

$K_{XY}(\tau) = \operatorname{E}[X_t Y_{t+\tau}] - \mu_X \mu_Y$.
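A minimal NumPy sketch of how this lag-dependent function might be estimated from finite samples; the estimator, the test signals, and the random seed are illustrative assumptions rather than anything prescribed above:

```python
import numpy as np

def cross_covariance(x, y, lag):
    """Sample estimate of K_XY(lag) = E[(X_t - mu_X)(Y_{t+lag} - mu_Y)]."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc = x - x.mean()
    yc = y - y.mean()
    if lag >= 0:
        return np.mean(xc[: len(xc) - lag] * yc[lag:])
    return np.mean(xc[-lag:] * yc[: len(yc) + lag])

# Illustrative signals: y is a delayed, noisy copy of x, so the estimated
# cross-covariance should peak near lag = 5.
rng = np.random.default_rng(1)
x = rng.normal(size=10_000)
y = np.roll(x, 5) + 0.1 * rng.normal(size=x.size)

lags = range(-10, 11)
k = [cross_covariance(x, y, lag) for lag in lags]
print(max(zip(k, lags)))   # largest value and the lag where it occurs (about 5)
```

The peak of the estimated cross-covariance near lag 5 recovers the delay built into the test signal.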
Uncorrelatedness
Two stochastic processes and are called uncorrelated i
|
https://en.wikipedia.org/wiki/Dedicated%20hosting%20service
|
A dedicated hosting service, dedicated server, or managed hosting service is a type of Internet hosting in which the client leases an entire server not shared with anyone else. This is more flexible than shared hosting, as organizations have full control over the server(s), including choice of operating system, hardware, etc.
There is also another level of dedicated or managed hosting, commonly referred to as complex managed hosting. Complex managed hosting applies to physical dedicated servers, hybrid servers, and virtual servers, with many companies choosing a hybrid (combination of physical and virtual) hosting solution.
There are many similarities between standard and complex managed hosting but the key difference is the level of administrative and engineering support that the customer pays for – owing to both the increased size and complexity of the infrastructure deployment. The provider steps in to take over most of the management, including security, memory, storage and IT support. The service is primarily proactive in nature. Server administration can usually be provided by the hosting company as an add-on service. In some cases a dedicated server can offer less overhead and a larger return on investment. Dedicated servers are hosted in data centers, often providing redundant power sources and HVAC systems. In contrast to colocation, the server hardware is owned by the provider and in some cases they will provide support for operating systems or applications.
Using a dedicated hosting service offers the benefits of high performance, security, email stability, and control. Due to the relatively high price of dedicated hosting, it is mostly used by websites that receive a large volume of traffic.
Operating system support
Availability, price and employee familiarity often determines which operating systems are offered on dedicated servers. Variations of Linux and Unix (open source operating systems) are often included at no charge to the customer. Com
|
https://en.wikipedia.org/wiki/Sephadex
|
Sephadex is a cross-linked dextran gel used for gel filtration. It was launched by Pharmacia in 1959, after development work by Jerker Porath and Per Flodin. The name is derived from separation Pharmacia dextran. It is normally manufactured in a bead form and most commonly used for gel filtration columns. By varying the degree of cross-linking, the fractionation properties of the gel can be altered.
These highly specialized gel filtration and chromatographic media are composed of macroscopic beads synthetically derived from the polysaccharide dextran. The organic chains are cross-linked to give a three-dimensional network having functional ionic groups attached by ether linkages to glucose units of the polysaccharide chains.
Available forms include anion and cation exchangers, as well as gel filtration resins, with varying degrees of porosity; bead sizes fall in discrete ranges between 20 and 300 µm.
Sephadex is also used for ion-exchange chromatography.
Sephadex is crosslinked with epichlorohydrin.
Applications
Sephadex is used to separate molecules by molecular weight. Sephadex is a faster alternative to dialysis (de-salting), requiring a low dilution factor (as little as 1.4:1), with high activity recoveries. Sephadex is also used for buffer exchange and the removal of small molecules during the preparation of large biomolecules, such as ampholytes, detergents, radioactive or fluorescent labels, and phenol (during DNA purification).
A special hydroxypropylated form of Sephadex resin, named Sephadex LH-20, is used for the separation and purification of small organic molecules such as steroids, terpenoids, and lipids. An example of use is the purification of cholesterol.
Fractionation
[Tables not reproduced in this extract: exclusion chromatography grades with their fractionation ranges of globular proteins and dextrans (Da), and ion-exchange chromatography media.]
See also
PEGylation
Size exclusion chromatography
Superose
Sepharose
References
Biochemistry methods
Chromatography
Swedish brands
|
https://en.wikipedia.org/wiki/Soil%20crust
|
Soil crusts are soil surface layers that are distinct from the rest of the bulk soil, often hardened with a platy surface. Depending on the manner of formation, soil crusts can be biological or physical. Biological soil crusts are formed by communities of microorganisms that live on the soil surface whereas physical crusts are formed by physical impact such as that of raindrops.
Biological soil crusts
Biological soil crusts are communities of living organisms on the soil surface in arid and semi-arid ecosystems. They are found throughout the world with varying species composition and cover depending on topography, soil characteristics, climate, plant community, microhabitats, and disturbance regimes. Biological soil crusts perform important ecological roles including carbon fixation, nitrogen fixation, and soil stabilization; they also alter soil albedo and water relations, and affect germination and nutrient levels in vascular plants. They can be damaged by fire, recreational activity, grazing, and other disturbances, and can require long time periods to recover their composition and function. Biological soil crusts are also known as cryptogamic, microbiotic, microphytic, or cryptobiotic soils.
Physical soil crusts
Physical (as opposed to biological) soil crusts result from raindrop or trampling impacts. They are often hardened relative to uncrusted soil due to the accumulation of salts and silica. These can coexist with biological soil crusts, but have a different ecological impact due to their differences in formation and composition. Physical soil crusts often reduce water infiltration, can inhibit plant establishment, and when disrupted can be eroded rapidly.
References
External links
Cryptobiotic soils by the USGS
Crust, soil
Soil physics
Lichenology
|
https://en.wikipedia.org/wiki/The%20PracTeX%20Journal
|
The PracTeX Journal, or simply PracTeX, also known as TPJ, was an online journal focussing on practical use of the TeX typesetting system. The first issue appeared in March 2005. It was published by the TeX Users Group and intended to be a complement to their primary print journal, TUGboat. The PracTeX Journal was last published in October 2012.
Topics covered in PracTeX included:
publishing projects or activities accomplished through the use of TeX
problems that were resolved through the use of TeX or problems with TeX that were resolved
how to use certain LaTeX packages
questions & answers
introductions for beginners
The editorial board included many long-time and well-known TeX developers, including Lance Carnes, Arthur Ogawa, and Hans Hagen.
References
External links
Journal home page
Online magazines published in the United States
Defunct computer magazines published in the United States
Magazines established in 2005
Magazines disestablished in 2012
TeX
Typesetting
|
https://en.wikipedia.org/wiki/Angle%20of%20arrival
|
The angle of arrival (AoA) of a signal is the direction from which the signal (e.g. radio, optical or acoustic) is received.
Measurement
The AoA can be measured either by determining the direction of propagation of a radio-frequency wave incident on an antenna array, or from the direction of maximum signal strength during antenna rotation.
The AoA can be calculated by measuring the time difference of arrival (TDOA) between individual elements of the array.
Generally this TDOA measurement is made by measuring the difference in received phase at each element in the antenna array. This can be thought of as beamforming in reverse. In beamforming, the signal from each element is weighted to "steer" the gain of the antenna array; in AoA estimation, the delay of arrival at each element is measured directly and converted to an AoA measurement.
Consider, for example, a two-element array spaced apart by one-half the wavelength of an incoming RF wave. If a wave is incident upon the array at boresight, it will arrive at each antenna simultaneously, yielding a 0° phase difference between the two antenna elements, equivalent to a 0° AoA. If a wave is incident along the array axis (endfire), then a 180° phase difference will be measured between the elements, corresponding to a 90° AoA.
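A short NumPy sketch of the phase-to-angle conversion behind this example; the function name is hypothetical, and the half-wavelength spacing mirrors the two-element case above:

```python
import numpy as np

def aoa_from_phase(delta_phi_deg, spacing, wavelength):
    """Angle of arrival (degrees from boresight) for a two-element array.

    The path-length difference d*sin(theta) gives a phase difference
    delta_phi = 2*pi*d*sin(theta)/lambda, so theta = arcsin(delta_phi*lambda/(2*pi*d)).
    """
    delta_phi = np.deg2rad(delta_phi_deg)
    sin_theta = delta_phi * wavelength / (2 * np.pi * spacing)
    return np.rad2deg(np.arcsin(sin_theta))

wavelength = 1.0           # arbitrary units
spacing = wavelength / 2   # half-wavelength element spacing, as in the example

print(aoa_from_phase(0.0, spacing, wavelength))    # 0 deg  (boresight)
print(aoa_from_phase(180.0, spacing, wavelength))  # 90 deg (along the array axis)
print(aoa_from_phase(90.0, spacing, wavelength))   # 30 deg
```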
In optics, AoA can be calculated using interferometry.
Applications
An application of AoA is in the geolocation of cell phones. The aim is either for the cell system to report the location of a cell phone placing an emergency call or to provide a service to tell the user of the cell phone where they are. Multiple receivers on a base station would calculate the AoA of the cell phone's signal, and this information would be combined to determine the phone's location.
AoA is generally used to discover the location of pirate radio stations or of any military radio transmitter.
In submarine acoustics, AoA is used to localize objects with active or passive ranging.
Limitation
Limitations on the acc
|
https://en.wikipedia.org/wiki/Recurrence%20quantification%20analysis
|
Recurrence quantification analysis (RQA) is a method of nonlinear data analysis (cf. chaos theory) for the investigation of dynamical systems. It quantifies the number and duration of recurrences of a dynamical system presented by its phase space trajectory.
Background
The recurrence quantification analysis (RQA) was developed in order to quantify differently appearing recurrence plots (RPs), based on the small-scale structures therein. Recurrence plots are tools which visualise the recurrence behaviour of the phase space trajectory of dynamical systems:

$R_{i,j} = \Theta(\varepsilon - \|\vec{x}_i - \vec{x}_j\|), \qquad i, j = 1, \ldots, N,$

where $\Theta$ is the Heaviside function and $\varepsilon$ a predefined tolerance.
Recurrence plots mostly contain single dots and lines which are parallel to the mean diagonal (line of identity, LOI) or which are vertical/horizontal. Lines parallel to the LOI are referred to as diagonal lines and the vertical structures as vertical lines. Because an RP is usually symmetric, horizontal and vertical lines correspond to each other, and, hence, only vertical lines are considered. The lines correspond to a typical behaviour of the phase space trajectory: whereas the diagonal lines represent such segments of the phase space trajectory which run parallel for some time, the vertical lines represent segments which remain in the same phase space region for some time.
If only a time series $u_i$ is available, the phase space can be reconstructed by using a time delay embedding (see Takens' theorem):

$\vec{x}_i = (u_i, u_{i+\tau}, \ldots, u_{i+(m-1)\tau}),$

where $u_i$ is the time series, $m$ the embedding dimension and $\tau$ the time delay.
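A compact NumPy sketch of the two steps just described, delay embedding followed by the thresholded recurrence matrix; the embedding parameters, tolerance, and test signal are illustrative assumptions:

```python
import numpy as np

def delay_embed(u, m, tau):
    """Time-delay embedding: x_i = (u_i, u_{i+tau}, ..., u_{i+(m-1)tau})."""
    n = len(u) - (m - 1) * tau
    return np.column_stack([u[i * tau : i * tau + n] for i in range(m)])

def recurrence_matrix(x, eps):
    """R[i, j] = 1 if ||x_i - x_j|| <= eps, else 0 (Heaviside of eps - distance)."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return (d <= eps).astype(int)

# Illustrative data: a noisy sine wave, embedded with m = 3, tau = 5, eps = 0.3.
t = np.linspace(0, 8 * np.pi, 400)
u = np.sin(t) + 0.05 * np.random.default_rng(2).normal(size=t.size)
x = delay_embed(u, m=3, tau=5)
R = recurrence_matrix(x, eps=0.3)
print(R.shape, R.mean())   # recurrence rate: fraction of recurrent point pairs
```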
The RQA quantifies the small-scale structures of recurrence plots, which present the number and duration of the recurrences of a dynamical system. The measures introduced for the RQA were developed heuristically between 1992 and 2002 (Zbilut & Webber 1992; Webber & Zbilut 1994; Marwan et al. 2002). They are actually measures of complexity. The main advantage of the recurrence quantification analysis is that it can provide useful information even for short and non-stationary d
|
https://en.wikipedia.org/wiki/City%20Connection
|
City Connection is a 1985 platform game developed and published as an arcade video game by Jaleco. It was released in North America by Kitkorp as Cruisin'. The player controls Clarice in her Honda City hatchback and must drive over elevated roads to paint them. Clarice is pursued by police cars, which she can stun by hitting them with oil cans. The design was inspired by maze chase games like Pac-Man (1980) and Crush Roller (1981).
City Connection was ported to the Nintendo Entertainment System, MSX, and ZX Spectrum. In Japan, the game has maintained a loyal following, and the NES version is seen as a classic for the platform. It was re-released in several Jaleco game collections and on services such as the Wii Virtual Console. These releases received mixed responses in North America, with critics disliking the game's simplicity, lack of replay value, and poor controls. Others felt it possessed a cute aesthetic and a unique concept and found it entertaining. Clarice is one of the first female protagonists in a console game.
Jaleco released a sequel, City Connection Rocket, for Japanese mobile phones in 2004.
Gameplay
In City Connection, the player controls Clarice, a blue-haired teen driving an orange Honda City hatchback, as she travels around the world on a quest to find herself the perfect man. Clarice traverses twelve side-scrolling stages set in famous locations around the world, including New York, London, and Japan. To clear these levels, the player must drive over each of the elevated highways to change their color from white to green. The car can jump over large gaps to reach higher sections of the stage.
Clarice is constantly being pursued by police cars that follow her around the stage, and must also avoid flag-waving cats that block her from moving past them. Clarice can collect and launch oil cans at police cars and traffic vehicles to temporarily stun them; ramming into them while stunned will knock them off the stage. Cats are invulnerable to oil cans, and cann
|
https://en.wikipedia.org/wiki/Defeminization
|
In developmental biology and zoology, defeminization is an aspect of the process of sexual differentiation by which a potential female-specific structure, function, or behavior is changed by one of the processes of male development.
See also
Sexual differentiation
Defeminization and masculinization
Virilization
Feminization
References
Sexual anatomy
Zoology
Physiology
|
https://en.wikipedia.org/wiki/Thermal%20decomposition
|
Thermal decomposition (or thermolysis) is a chemical decomposition caused by heat. The decomposition temperature of a substance is the temperature at which the substance chemically decomposes. The reaction is usually endothermic, as heat is required to break chemical bonds in the compound undergoing decomposition. If decomposition is sufficiently exothermic, a positive feedback loop is created, producing thermal runaway and possibly an explosion or other chemical reaction.
Decomposition temperature definition
A simple substance (like water) may exist in equilibrium with its thermal decomposition products, effectively halting the decomposition. The equilibrium fraction of decomposed molecules increases with the temperature.
Since thermal decomposition is a kinetic process, the observed temperature of its beginning in most instances will be a function of the experimental conditions and sensitivity of the experimental setup. For rigorous depiction of the process, the use of thermokinetic modeling is recommended.
Examples
Calcium carbonate (limestone or chalk) decomposes into calcium oxide and carbon dioxide when heated. The chemical reaction is as follows:
CaCO3 → CaO + CO2
The reaction is used to make quicklime, which is an industrially important product.
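As a rough worked example (using standard molar masses rather than figures from the article), the mass balance of this reaction per kilogram of limestone can be sketched as follows:

```python
# Mass balance for CaCO3 -> CaO + CO2, per kilogram of calcium carbonate heated.
# Molar masses in g/mol (standard values, rounded).
M_CaCO3 = 100.09
M_CaO = 56.08
M_CO2 = 44.01

mass_caco3 = 1000.0                  # g of calcium carbonate
moles = mass_caco3 / M_CaCO3         # about 10 mol
mass_cao = moles * M_CaO             # about 560 g of quicklime
mass_co2 = moles * M_CO2             # about 440 g of carbon dioxide released

print(f"{mass_cao:.0f} g CaO and {mass_co2:.0f} g CO2 from {mass_caco3:.0f} g CaCO3")
```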
Another example of thermal decomposition is 2Pb(NO3)2 → 2PbO + O2 + 4NO2.
Some oxides, especially those of weakly electropositive metals, decompose when heated to a high enough temperature. A classical example is the decomposition of mercuric oxide to give oxygen and mercury metal. The reaction was used by Joseph Priestley to prepare samples of gaseous oxygen for the first time.
When water is heated to well over 2000 °C, a small percentage of it will decompose into OH, monatomic oxygen, monatomic hydrogen, O2, and H2.
The compound with the highest known decomposition temperature is carbon monoxide at ≈3870 °C (≈7000 °F).
Decomposition of nitrates, nitrites and ammonium compounds
Ammonium dichromate on heating yields nitro
|