https://en.wikipedia.org/wiki/RMX%20%28operating%20system%29
|
Real-time Multitasking eXecutive (iRMX) is a real-time operating system designed for use with the Intel 8080 and 8086 family of processors.
Overview
Intel developed iRMX in the 1970s and originally released RMX/80 in 1976 and RMX/86 in 1980 to support and create demand for their processors and Multibus system platforms.
The functional specification for RMX/86 was authored by Bruce Schafer and Miles Lewitt and was completed in the summer of 1978 soon after Intel relocated the entire Multibus business from Santa Clara, California to Aloha, Oregon. Schafer and Lewitt went on to each manage one of the two teams that developed the RMX/86 product for release on schedule in 1980.
Since 2000, iRMX has been supported, maintained, and licensed worldwide by TenAsys Corporation, under an exclusive licensing arrangement with Intel.
iRMX is a layered design, containing a kernel, nucleus, basic I/O system, extended I/O system and human interface. An installation need include only the components required: intertask synchronization, communication subsystems, a filesystem, extended memory management, command shell, etc. The native filesystem is specific to iRMX, but has many similarities to the original Unix (V6) filesystem, such as 14-character path name components, file nodes, sector lists, application-readable directories, etc.
iRMX supports multiple processes (known as jobs in RMX parlance), and multiple threads (tasks) are supported within each process. In addition, interrupt handlers and threads can run in response to hardware interrupts. Thus, iRMX is a multi-processing, multi-threaded, pre-emptive, real-time operating system (RTOS).
Commands
The following commands are supported by iRMX 86:
ATTACHDEVICE
ATTACHFILE
BACKUP
COPY
CREATEDIR
DATE
DEBUG
DELETE
DETACHDEVICE
DETACHFILE
DIR
DISKVERIFY
DOWNCOPY
FORMAT
INITSTATUS
JOBDELETE
LOCDATA
LOCK
LOGICALNAMES
MEMORY
PATH
PERMIT
RENAME
RESTORE
SUBMIT
SUPER
TIME
UPCOPY
VERSION
WHOAMI
History
|
https://en.wikipedia.org/wiki/DVK
|
DVK (Russian: ДВК, Диалоговый вычислительный комплекс, "Interactive Computing Complex") is a Soviet PDP-11-compatible personal computer.
Overview
The design is also known as Elektronika MS-0501 and Elektronika MS-0502.
Early models of the DVK series are based on the K1801VM1 or K1801VM2 microprocessor with a 16-bit address bus. Later models use the KM1801VM3 microprocessor with a 22-bit extended address bus.
Models
DVK-1
DVK-1M
DVK-2
DVK-2M
DVK-3
DVK-3M2
Kvant 4C (aka DVK-4)
DVK-4M
See also
Elektronika BK-0010
SM EVM
UKNC
External links
Articles about the USSR Computers history
Images of the DVK computers
Archive of software and documentation for the Soviet computers UKNC, DVK and BK-0010.
Microcomputers
Ministry of the Electronics Industry (Soviet Union) computers
PDP-11
|
https://en.wikipedia.org/wiki/Bicyclic%20semigroup
|
In mathematics, the bicyclic semigroup is an algebraic object important for the structure theory of semigroups. Although it is in fact a monoid, it is usually referred to as simply a semigroup. It is perhaps most easily understood as the syntactic monoid describing the Dyck language of balanced pairs of parentheses. Thus, it finds common applications in combinatorics, such as describing binary trees and associative algebras.
History
The first published description of this object was given by Evgenii Lyapin in 1953. Alfred H. Clifford and Gordon Preston claim that one of them, working with David Rees, discovered it independently (without publication) at some point before 1943.
Construction
There are at least three standard ways of constructing the bicyclic semigroup, and various notations for referring to it. Lyapin called it P; Clifford and Preston used 𝒞; and most recent papers have tended to use B. This article will use the modern style throughout.
From a free semigroup
The bicyclic semigroup is the quotient of the free monoid on two generators p and q by the congruence generated by the relation p q = 1. Thus, each semigroup element is a string of those two letters, with the proviso that the subsequence "p q" does not appear.
The semigroup operation is concatenation of strings, which is clearly associative.
It can then be shown that all elements of B in fact have the form q^a p^b, for some natural numbers a and b. The composition operation simplifies to
(q^a p^b)(q^c p^d) = q^(a + c − min{b, c}) p^(d + b − min{b, c}).
From ordered pairs
The way in which these exponents are constrained suggests that the "p and q structure" can be discarded, leaving only operations on the "a and b" part.
So B is the semigroup of pairs of natural numbers (including zero), with operation
(a, b) (c, d) = (a + c − min{b, c}, d + b − min{b, c}).
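This pair operation is easy to experiment with. A minimal Python sketch (function and variable names are arbitrary), verifying the defining relation p q = 1 and spot-checking associativity:

def mul(x, y):
    # (a, b)(c, d) = (a + c - min(b, c), d + b - min(b, c))
    (a, b), (c, d) = x, y
    m = min(b, c)
    return (a + c - m, d + b - m)

e, q, p = (0, 0), (1, 0), (0, 1)   # identity, q = (1, 0), p = (0, 1)
assert mul(p, q) == e              # p q = 1
assert mul(q, p) == (1, 1)         # but q p != 1

from itertools import product
elems = list(product(range(4), repeat=2))
assert all(mul(mul(x, y), z) == mul(x, mul(y, z))
           for x in elems for y in elems for z in elems)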
This is sufficient to define B so that it is the same object as in the original construction. Just as p and q generated B originally, with the e
|
https://en.wikipedia.org/wiki/Temporal%20difference%20learning
|
Temporal difference (TD) learning refers to a class of model-free reinforcement learning methods which learn by bootstrapping from the current estimate of the value function. These methods sample from the environment, like Monte Carlo methods, and perform updates based on current estimates, like dynamic programming methods.
While Monte Carlo methods only adjust their estimates once the final outcome is known, TD methods adjust predictions to match later, more accurate, predictions about the future before the final outcome is known. This is a form of bootstrapping, as illustrated with the following example:
Suppose you wish to predict the weather for Saturday, and you have some model that predicts Saturday's weather, given the weather of each day in the week. In the standard case, you would wait until Saturday and then adjust all your models. However, when it is, for example, Friday, you should have a pretty good idea of what the weather would be on Saturday – and thus be able to change, say, Saturday's model before Saturday arrives.
Temporal difference methods are related to the temporal difference model of animal learning.
Mathematical formulation
The tabular TD(0) method is one of the simplest TD methods. It is a special case of more general stochastic approximation methods. It estimates the state value function of a finite-state Markov decision process (MDP) under a policy π. Let V^π denote the state value function of the MDP with states S_t, rewards R_t and discount rate γ under the policy π:
V^π(s) = E[ Σ_{t=0}^∞ γ^t R_{t+1} | S_0 = s ].
We drop the action from the notation for convenience. V^π satisfies the Hamilton–Jacobi–Bellman equation:
V^π(s) = E[ R_1 + γ V^π(S_1) | S_0 = s ],
so R_1 + γ V^π(S_1) is an unbiased estimate for V^π(s). This observation motivates the following algorithm for estimating V^π.
The algorithm starts by initializing a table V(s) arbitrarily, with one value for each state of the MDP. A positive learning rate α is chosen.
We then repeatedly evaluate the policy π, obtain a reward R_{t+1} and update the value function for the old state using the rule:
V(S_t) ← V(S_t) + α ( R_{t+1} + γ V(S_{t+1}) − V(S_t) ),
where α is the learning rate and γ the discount rate.
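A short Python sketch of tabular TD(0) on a toy two-state chain (the MDP, its rewards and the state names are invented here purely for illustration):

def step(s):
    # hypothetical deterministic dynamics: A -> B (reward 0), B -> terminal (reward 1)
    return ("B", 0.0) if s == "A" else ("terminal", 1.0)

def td0(episodes=1000, alpha=0.1, gamma=0.9):
    V = {"A": 0.0, "B": 0.0, "terminal": 0.0}   # arbitrarily initialized table
    for _ in range(episodes):
        s = "A"
        while s != "terminal":
            s_next, r = step(s)                              # sample the environment
            V[s] += alpha * (r + gamma * V[s_next] - V[s])   # TD(0) update rule
            s = s_next
    return V

print(td0())   # V(B) approaches 1.0 and V(A) approaches gamma * 1.0 = 0.9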
|
https://en.wikipedia.org/wiki/Depaneling
|
Depaneling is a process step in high-volume electronics assembly production. In order to increase the throughput of printed circuit board (PCB) manufacturing and surface-mount technology (SMT) lines, PCBs are often designed so that they consist of many smaller individual PCBs that will be used in the final product. This PCB cluster is called a panel or multiblock. The large panel is broken up or "depaneled" at a certain step in the process; depending on the product, this may happen right after the SMT process, after in-circuit test (ICT), after soldering of through-hole elements, or even right before the final assembly of the PCBA into the enclosure.
Risks
When selecting a depaneling technique, it is important to be mindful of the risks, including:
Mechanical Strain: depaneling can be a violent operation and may bend the PCB causing some components to fracture, or in the worst case, break traces. Ways to mitigate this are avoiding placing components near the edge of the PCBA, and orienting components parallel to the break line.
Tolerance: some methods of depaneling may result in the PCBA being a different size than intended. Ways to mitigate this are to communicate with the manufacturer about which dimensions are critical and to select a depaneling method that meets your needs. Hand depaneling has the loosest tolerance; laser depaneling the tightest.
Main depanel technologies
There are six main depaneling cutting techniques currently in use:
hand break
pizza cutter / V-cut
punch
router
saw
laser
Hand break
This method is suitable for strain-resistant circuits (e.g. without SMD components). The operator simply breaks the PCB, usually along a prepared V-groove line, with the help of a proper fixture.
Pizza cutter / V-cut
A pizza cutter is a rotary blade, sometimes rotating using its own motor. The operator moves a pre-scored PCB along a V-groove line, usually with the help of a special fixture. This method is often used only for cutting huge panels into smaller ones. The equip
|
https://en.wikipedia.org/wiki/Conax
|
Conax develops television encryption, conditional access and content security for digital television. Conax provides CAS technology to pay-TV operators in 85 countries. The company has offices in Norway (headquarters), Russia, Germany, Brazil, the United States, Canada, Mexico, Indonesia, the Philippines, Thailand, China, Singapore, and India, with a 24/7 global support center in India.
Conax stems from Telenor Research Labs in the 1980s. It was incorporated as a separate company Conax AS in 1994.
In March 2014, the company was sold by Telenor Group to Swiss-based Kudelski Group for NOK 1.5 billion.
Conax CAS exists in several versions, namely Conax CAS 3, Conax CAS 5, Conax CAS 7, Conax CAS 7.5 and Conax Contego. These versions are split across two types of CAM, chipset pairing and generic (non-chipset) pairing, and a TV smart card compatible with one type may not support the other. The company also provides a DRM solution for streaming services based on Microsoft PlayReady and Google Widevine.
A few pay-TV operators using Conax conditional access are (in alphabetical order):
4TV Myanmar
AKTA Telecom Romania
Allente (Norway) (previously Viasat/Canal Digital Satellite)
Antik SAT (Slovakia)
Cignal Philippines
Cyfrowy Polsat, Platforma Canal+ and Orange Polska (Poland)
DigitAlb (Albania)
Digicable India
Dish TV (India)
DMAX (Germany)
Focus Sat (operated by UPC Romania, later M7 Group)
HOMESAT (Lebanon)
Joyne Netherlands
JSTV (Europe)
K-Vision (Indonesia)
Malivision (Mali)
Mindig TV Hungary
RiksTV (Norway)
SBB (Serbia)
SitiCable India
StarTimes Media (SSA)
Telenor (Norway and Sweden)
TeleRed (Argentina)
TVR Romania
Conax is also used by MNC Media's free-to-air channels (RCTI, MNCTV and GTV; iNews was also encrypted during sports programming) and by K-Vision to encrypt some programs on satellite transmissions, in order to prevent illegal redistribution by unlicensed local cable operators or other third parties and to protect all copyrighted content on the MNC Media transponder at Telkom-4.
References
|
https://en.wikipedia.org/wiki/Rotating%20reference%20frame
|
A rotating frame of reference is a special case of a non-inertial reference frame that is rotating relative to an inertial reference frame. An everyday example of a rotating reference frame is the surface of the Earth. (This article considers only frames rotating about a fixed axis. For more general rotations, see Euler angles.)
Fictitious forces
All non-inertial reference frames exhibit fictitious forces; rotating reference frames are characterized by three:
the centrifugal force,
the Coriolis force,
and, for non-uniformly rotating reference frames,
the Euler force.
Scientists in a rotating box can measure the rotation speed and axis of rotation by measuring these fictitious forces. For example, Léon Foucault was able to show the Coriolis force that results from Earth's rotation using the Foucault pendulum. If Earth were to rotate many times faster, these fictitious forces could be felt by humans, as they are when on a spinning carousel.
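The three fictitious accelerations have standard closed forms: centrifugal −ω×(ω×r), Coriolis −2ω×v, and Euler −(dω/dt)×r, where r and v are measured in the rotating frame. A brief Python sketch evaluating them; the Earth-like numbers below are illustrative only:

import numpy as np

def fictitious_accelerations(omega, domega_dt, r, v):
    centrifugal = -np.cross(omega, np.cross(omega, r))  # -w x (w x r)
    coriolis = -2.0 * np.cross(omega, v)                # -2 w x v
    euler = -np.cross(domega_dt, r)                     # -(dw/dt) x r
    return centrifugal, coriolis, euler

omega = np.array([0.0, 0.0, 7.292e-5])   # Earth's rotation rate, rad/s, about the z-axis
r = np.array([6.371e6, 0.0, 0.0])        # a point on the equator, m
v = np.array([0.0, 10.0, 0.0])           # 10 m/s eastward, measured in the rotating frame
for a in fictitious_accelerations(omega, np.zeros(3), r, v):
    print(a)   # centrifugal ~0.034 m/s^2 outward; Coriolis ~1.5e-3 m/s^2; Euler zero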
Centrifugal force
In classical mechanics, centrifugal force is an outward force associated with rotation. Centrifugal force is one of several so-called pseudo-forces (also known as inertial forces), so named because, unlike real forces, they do not originate in interactions with other bodies situated in the environment of the particle upon which they act. Instead, centrifugal force originates in the rotation of the frame of reference within which observations are made.
Coriolis force
The mathematical expression for the Coriolis force appeared in an 1835 paper by the French scientist Gaspard-Gustave Coriolis in connection with hydrodynamics, and also in the tidal equations of Pierre-Simon Laplace in 1778. Early in the 20th century, the term Coriolis force began to be used in connection with meteorology.
Perhaps the most commonly encountered rotating reference frame is the Earth. Moving objects on the surface of the Earth experience a Coriolis force, and appear to veer to the right in the northern hemisphere, and to the l
|
https://en.wikipedia.org/wiki/Security%20token
|
A security token is a peripheral device used to gain access to an electronically restricted resource. The token is used in addition to, or in place of, a password. It acts like an electronic key to access something. Examples of security tokens include wireless keycards used to open locked doors, or a banking token used as a digital authenticator for signing in to online banking, or signing a transaction such as a wire transfer.
Security tokens can be used to store information such as passwords, cryptographic keys used to generate digital signatures, or biometric data (such as fingerprints). Some designs incorporate tamper resistant packaging, while others may include small keypads to allow entry of a PIN or a simple button to start a generating routine with some display capability to show a generated key number. Connected tokens utilize a variety of interfaces including USB, near-field communication (NFC), radio-frequency identification (RFID), or Bluetooth. Some tokens have audio capabilities designed for those who are vision-impaired.
Password types
All tokens contain some secret information that is used to prove identity. There are four different ways in which this information can be used:
Static password token The device contains a password which is physically hidden (not visible to the possessor), but which is transmitted for each authentication. This type is vulnerable to replay attacks.
Synchronous dynamic password token A timer is used to rotate through various combinations produced by a cryptographic algorithm. The token and the authentication server must have synchronized clocks (a concrete sketch follows this list).
Asynchronous password token A one-time password is generated without the use of a clock, either from a one-time pad or cryptographic algorithm.
Challenge–response token Using public key cryptography, it is possible to prove possession of a private key without revealing that key. The authentication server encrypts a challenge (typically a random number, or at least data
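The text does not fix any particular algorithm for the synchronous case; the widely deployed TOTP construction (RFC 6238, built on the HOTP truncation of RFC 4226) is one concrete instance, sketched here in Python:

import hmac, hashlib, struct, time

def synchronous_otp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    # Both sides derive the same counter from their (synchronized) clocks.
    counter = int(time.time()) // interval
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(synchronous_otp(b"shared-secret"))   # token and server agree while their clocks agree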
|
https://en.wikipedia.org/wiki/MRV%20Communications
|
MRV Communications is a communications equipment and services company based in Chatsworth, California. Through its business units, the company is a provider of optical communications network infrastructure equipment and services to a broad range of telecom concerns, including multi-national telecommunications operators, local municipalities, MSOs, corporate and consumer high-speed Internet service providers, and data storage and cloud computing providers.
MRV Communications was acquired by ADVA Optical Networking on August 14, 2017.
History
MRV was founded in 1988 by Prof. Shlomo Margalit and Dr. Zeev Rav-Noy as a maker of Metro and Access optoelectronic components. MRV's Metro and Access transceivers enable network equipment to be deployed across large campuses or in municipal and regional networks. To expand this business, MRV established LuminentOIC, an independent subsidiary. LuminentOIC makes fiber-to-the-premises (FTTP) components, an activity initiated by Rob Goldman, who founded the FTTx business unit in Luminent in 2001.
In the 1990s, MRV produced Ethernet switching and Optical Transport for Metro and campus environments. MRV began building switches and routers used by carriers implementing Metro Ethernet networks that provide Ethernet services to enterprise customers and multi-dwelling residential buildings.
Significant Milestones – Acquisitions, Product Development:
1992 – Created NBASE Communications through Galcom and Ace acquisitions.
The acquisitions created a networking company with a focus on technology for the Token Ring LAN, IBM connectivity, and multi-platform network management for the IBM NetView and HP OpenView platforms. Following the acquisitions, the company consolidated these operations in Israel with its networking operations in the U.S.
1996 – Acquired Fibronics from Elbit Ltd.
Enhanced the development of Fast Ethernet and Gigabit Ethernet functions through the acquisitions of the Fibronics GigaHub family of products – Serv
|
https://en.wikipedia.org/wiki/MOS%20Technology%206551
|
The 6551 Asynchronous Communications Interface Adapter (ACIA) is an integrated circuit made by MOS Technology. It served as a companion UART chip for the widely popular 6502 microprocessor. Intended to implement RS-232, its specifications called for a maximum speed of 19,200 bits per second with its onboard baud-rate generator, or 125 kbit/s using an external 16× clock. The 6551 was used in several computers of the 1970s and 1980s, including the Commodore PET and Plus/4. It was also used by Apple Computer on the Apple II Super Serial Card, and by Radio Shack on the Deluxe RS-232 Program Pak for their TRS-80 Color Computer.
Several companies, including Dr. Evil Labs and Creative Micro Designs, marketed add-on cartridges containing a 6551 and an industry-standard RS-232 port to allow the C64 and C128 to use high-speed modems from companies such as U.S. Robotics and Hayes Communications. The Dr. Evil and CMD cartridges pushed the 6551 to 38,400 baud, and with a faster-still clock crystal, some end users reported getting 115,200 bit/s from the 6551. The ADTPro file transfer program disables the baud rate generator in the 6551, allowing 115,200 bit/s transfers with an unmodified clock crystal.
Variants
The Rockwell 65C52 combines two CMOS 6551s on a chip.
Similar chips
The Motorola 6850 is a similar chip to the MOS Technology 6551, but without an onboard bit rate generator. The 6850 is often used for MIDI.
The Western Design Center WDC 65C51 is designed as a drop-in replacement for the original MOS 6551: electrically, physically and programming-compatible with most 6551 and 6850 derivatives from most other suppliers. The WDC 65C51 has errata in which the transmitter "ready" bit is "stuck" in the ready state.
External links
Datasheet
MOS Technology integrated circuits
Input/output integrated circuits
|
https://en.wikipedia.org/wiki/Eight-to-fourteen%20modulation
|
Eight-to-fourteen modulation (EFM) is a data encoding technique – formally, a line code – used by compact discs (CD), laserdiscs (LD) and pre-Hi-MD MiniDiscs. EFMPlus is a related code, used in DVDs and Super Audio CDs (SACDs).
EFM and EFMPlus were both invented by Kees A. Schouhamer Immink. According to European Patent Office former President Benoît Battistelli, "Immink's invention of EFM made a decisive contribution to the digital revolution."
Technological classification
EFM belongs to the class of DC-free run-length limited (RLL) codes; these have the following two properties:
the spectrum (power density function) of the encoded sequence vanishes at the low-frequency end, and
both the minimum and maximum number of consecutive bits of the same kind are within specified bounds.
In optical recording systems, servo mechanisms accurately follow the track in three dimensions: radial, focus, and rotational speed. Everyday handling damage, such as dust, fingerprints, and tiny scratches, not only affects retrieved data, but also disrupts the servo functions. In some cases, the servos may skip tracks or get stuck. Specific sequences of pits and lands are particularly susceptible to disc defects, and disc playability can be improved if such sequences are barred from recording. The use of EFM produces a disc that is highly resilient to handling and solves the engineering challenge in a very efficient manner.
How it works
Under EFM rules, the data to be stored is first broken into eight-bit blocks (bytes). Each eight-bit block is translated into a corresponding fourteen-bit codeword using a lookup table.
The 14-bit words are chosen such that binary ones are always separated by a minimum of two and a maximum of ten binary zeroes. This is because bits are encoded with NRZI encoding, or modulo-2 integration, so that a binary one is stored on the disc as a change from a land to a pit or a pit to a land, while a binary zero is indicated by no change. A sequence 0011 woul
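A small Python sketch of the two rules just described, the run-length constraint within a 14-bit codeword and the NRZI mapping of ones to pit/land transitions (merging bits between codewords are ignored here):

def satisfies_runlength(codeword: str) -> bool:
    # Between two binary ones there must be at least 2 and at most 10 zeroes.
    ones = [i for i, bit in enumerate(codeword) if bit == "1"]
    return all(2 <= j - i - 1 <= 10 for i, j in zip(ones, ones[1:]))

def nrzi(bits: str, level: str = "land") -> list:
    # A one toggles between land and pit; a zero keeps the current level.
    out = []
    for bit in bits:
        if bit == "1":
            level = "pit" if level == "land" else "land"
        out.append(level)
    return out

print(satisfies_runlength("01001000100100"))   # True: gaps of 2, 3 and 2 zeroes
print(nrzi("01001000100100")[:6])              # ['land', 'pit', 'pit', 'pit', 'land', 'land']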
|
https://en.wikipedia.org/wiki/Greenspun%27s%20tenth%20rule
|
Greenspun's tenth rule of programming is an aphorism in computer programming and especially programming language circles that states:
Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
Overview
The rule expresses the opinion that the argued flexibility and extensibility designed into the programming language Lisp includes all functionality that is theoretically needed to write any complex computer program, and that the features required to develop and manage such complexity in other programming languages are equivalent to some subset of the methods used in Lisp.
Other programming languages, while claiming to be simpler, require programmers to reinvent in a haphazard way a significant amount of needed functionality that is present in Lisp as a standard, time-proven base.
It can also be interpreted as a satiric critique of systems that include complex, highly configurable sub-systems. Rather than including a custom interpreter for some domain-specific language, Greenspun's rule suggests using a widely accepted, fully featured language like Lisp.
Paul Graham also highlights the satiric nature of the concept, albeit based on real issues:
The rule was written sometime around 1993 by Philip Greenspun. Although it is known as his tenth rule, this is a misnomer. There are in fact no preceding rules, only the tenth. The reason for this according to Greenspun:
Hacker Robert Morris later declared a corollary, which clarifies the set of "sufficiently complicated" programs to which the rule applies: "…including Common Lisp."
This corollary jokingly refers to the fact that many Common Lisp implementations (especially those available in the early 1990s) depend upon a low-level core of compiled C, which sidesteps the issue of bootstrapping but may itself be somewhat variable in quality, at least compared to a cleanly self-hosting Common Lisp.
See also
Inner-platform effect
Software Peter principle
Turing tarpit
Zawinski's law of software envelopment
References
Computer architecture statements
Lisp (programming language)
1993 neologisms
Progra
|
https://en.wikipedia.org/wiki/Abstract%20polytope
|
In mathematics, an abstract polytope is an algebraic partially ordered set which captures the dyadic property of a traditional polytope without specifying purely geometric properties such as points and lines.
A geometric polytope is said to be a realization of an abstract polytope in some real N-dimensional space, typically Euclidean. This abstract definition allows more general combinatorial structures than traditional definitions of a polytope, thus allowing new objects that have no counterpart in traditional theory.
Introductory concepts
Traditional versus abstract polytopes
In Euclidean geometry, two shapes that are not similar can nonetheless share a common structure. For example, a square and a trapezoid both comprise an alternating chain of four vertices and four sides, which makes them quadrilaterals. They are said to be isomorphic or “structure preserving”.
This common structure may be represented in an underlying abstract polytope, a purely algebraic partially ordered set which captures the pattern of connections (or incidences) between the various structural elements. The measurable properties of traditional polytopes such as angles, edge-lengths, skewness, straightness and convexity have no meaning for an abstract polytope.
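As a concrete illustration of such a poset, here is the abstract quadrilateral written out in Python (the face labels are arbitrary); ranks run from −1 for the least face to 2 for the greatest:

ranks = {
    "F_min": -1,                          # least face (the empty set)
    "a": 0, "b": 0, "c": 0, "d": 0,       # vertices
    "ab": 1, "bc": 1, "cd": 1, "da": 1,   # edges
    "F_max": 2,                           # greatest face (the body)
}
covers = {                                # covering incidences of the Hasse diagram
    "F_min": {"a", "b", "c", "d"},
    "a": {"ab", "da"}, "b": {"ab", "bc"},
    "c": {"bc", "cd"}, "d": {"cd", "da"},
    "ab": {"F_max"}, "bc": {"F_max"}, "cd": {"F_max"}, "da": {"F_max"},
}
# The dyadic (diamond) property at ranks 0/1: every edge contains exactly two vertices.
for edge in ("ab", "bc", "cd", "da"):
    assert sum(edge in covers[v] for v in covers["F_min"]) == 2

No lengths or angles appear anywhere: a square and a trapezoid both realize exactly this poset.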
What is true for traditional polytopes (also called classical or geometric polytopes) may not be so for abstract ones, and vice versa. For example, a traditional polytope is regular if all its facets and vertex figures are regular, but this is not necessarily so for an abstract polytope.
Realizations
A traditional polytope is said to be a realization of the associated abstract polytope. A realization is a mapping or injection of the abstract object into a real space, typically Euclidean, to construct a traditional polytope as a real geometric figure.
The six quadrilaterals shown are all distinct realizations of the abstract quadrilateral, each with different geometric properties. Some of them do not conform to traditional def
|
https://en.wikipedia.org/wiki/Statement%20of%20work
|
A statement of work (SOW) is a document routinely employed in the field of project management. It is the narrative description of a project's work requirement. It defines project-specific activities, deliverables and timelines for a vendor providing services to the client. The SOW typically also includes detailed requirements and pricing, with standard regulatory and governance terms and conditions. It is often an important accompaniment to a master service agreement or request for proposal (RFP).
Overview
Many formats and styles of statement of work document templates have been specialized for the hardware or software solutions described in the request for proposal. Many companies create their own customized version of SOWs that are specialized or generalized to accommodate typical requests and proposals they receive. However, it is usually informed by the goals of the top management as well as input from the customer and/or user groups.
Note that in many cases the statement of work is a binding contract. Master service agreements or consultant/training service agreements postpone certain work-specific contractual components that are addressed in individual statements of work. The master service agreement serves as a master contract governing the terms across potentially multiple SOWs. The term sometimes refers to a scope of work: for instance, if a project is done on contract, the scope statement included as part of it can be used as the SOW, since it also outlines the work of the project in clear and concise terms.
Areas addressed
A statement of work typically addresses these subjects.
Purpose: Why are we doing this project? A purpose statement attempts to answer this.
Scope of work: This describes the work to be done and specifies the hardware and software involved. The definition of scope becomes the scope statement.
Location of work: This describes where the work is to be performed, including the location of hardware and software and where people will meet to
|
https://en.wikipedia.org/wiki/Nicole-Reine%20Lepaute
|
Nicole-Reine Lepaute (5 January 1723 – 6 December 1788), also erroneously known as Hortense Lepaute, was a French astronomer and human computer. Lepaute, along with Alexis Clairaut and Jérôme Lalande, calculated the date of the return of Halley's Comet. Her other astronomical feats include calculating the 1764 solar eclipse and producing almanacs from 1759 to 1783. She was also a member of the Scientific Academy of Béziers.
The asteroid 7720 Lepaute is named in her honour, as is the lunar crater Lepaute.
Early life
Nicole-Reine Lepaute was born on 5 January 1723 in the Luxembourg Palace in Paris as the daughter of Jean Étable, a valet in the service of Louise Élisabeth d'Orléans. Her father had worked for the royal family for a long time, in the service of both the duchesse de Berry and her sister Louise. Louise Élisabeth d'Orléans was the widow of Louis I of Spain; when her husband died, she returned to Paris and was given part of the Luxembourg Palace to live in and to house her staff. Lepaute was the sixth of nine children. As a child she was described as precocious and intelligent, being mostly self-taught. She stayed up all night "devouring" books and read every book in the library, with Jérôme Lalande saying of her that even as a child "she had too much spirit not to be curious". Lepaute's interest in astronomy began at a young age, when she most likely witnessed a comet in the sky; she became eager to learn about it and began studying comets, which led to one of the biggest discoveries of her time.
In August 1748, she married Jean-André Lepaute, a royal clockmaker in the Luxembourg Palace.
Career
Mathematics of clockmaking
Her marriage gave her the freedom to exercise her scientific skill. At the same time as she kept the household's accounts, she studied astronomy, mathematics, and "she observed, she calculated, and she described the inventions of her husband".
She met Jérôme Lalande, with whom she would work for thirty years, i
|
https://en.wikipedia.org/wiki/All%20American%20Five
|
The term All American Five (abbreviated AA5) is a colloquial name for mass-produced, superheterodyne radio receivers that used five vacuum tubes in their design. These radio sets were designed to receive amplitude modulation (AM) broadcasts in the medium wave band, and were manufactured in the United States from the mid-1930s until the early 1960s. By eliminating a power transformer, cost of the units was kept low; the same principle was later applied to television receivers. Variations in the design for lower cost, shortwave bands, better performance or special power supplies existed, although many sets used an identical set of vacuum tubes.
Philosophy
The radio was called the "All American Five" because the design typically used five vacuum tubes, and such sets comprised the majority of radios manufactured for home use in the USA and Canada in the tube era.
They were manufactured in the millions by hundreds of manufacturers from the 1930s onward, with the last examples being made in Japan. The heaters of the tubes were connected in series, all requiring the same current but with different voltages across them. The standard lineup of tubes was designed so that the total rated voltage of the five tubes was 121 volts, slightly more than the electricity supply voltage of 110–117 V; an extra dropper resistor was therefore not required. Transformerless designs had a metal chassis connected to one side of the power line, which was a dangerous electric shock hazard and required a thoroughly insulated cabinet. Transformerless radios could be powered by either AC or DC (and were consequently called AC/DC receivers), since DC supplies were still not uncommon. When operated on DC, they would only work if the plug was inserted with the correct polarity. Also, if run from a DC supply, the radio had reduced performance because the B+ voltage would only be 120 volts, compared with 160–170 volts when operated from AC.
The philosophy of the design was simple: it had to be as cheap to make as possib
|
https://en.wikipedia.org/wiki/Hping
|
hping is an open-source packet generator and analyzer for the TCP/IP protocol created by Salvatore Sanfilippo (also known as Antirez).
It is one of the common tools used for security auditing and testing of firewalls and networks, and was used to exploit the idle scan technique (also invented by the hping author), now implemented in the Nmap Security Scanner. The new version of hping, hping3, is scriptable using the Tcl language and implements an engine for string-based, human-readable description of TCP/IP packets, so that the programmer can write scripts for low-level TCP/IP packet manipulation and analysis in a short time.
See also
Nmap Security Scanner: Nmap and hping are often considered complementary to one another.
Mausezahn: Another fast and versatile packet generator that also supports Ethernet header manipulation.
Packet Sender: A packet generator with a focus on ease of use.
External links
The Hping Website
The Hping Wiki
Idle Scanning, paper by Nmap author Fyodor.
Hping 2 Fixed for Windows XP SP2 (Service Pack 2)
Free network-related software
Unix network-related software
Network analyzers
|
https://en.wikipedia.org/wiki/Least%20fixed%20point
|
In order theory, a branch of mathematics, the least fixed point (lfp or LFP, sometimes also smallest fixed point) of a function from a partially ordered set to itself is the fixed point which is less than each other fixed point, according to the order of the poset. A function need not have a least fixed point, but if it does then the least fixed point is unique.
Examples
With the usual order on the real numbers, the least fixed point of the real function f(x) = x^2 is x = 0 (since the only other fixed point is 1 and 0 < 1). In contrast, f(x) = x + 1 has no fixed points at all, so has no least one, and f(x) = x has infinitely many fixed points, but has no least one.
Let G = (V, E) be a directed graph and v a vertex. The set of vertices accessible from v can be defined as the least fixed point of the function f: P(V) → P(V), defined as
f(X) = {v} ∪ {u ∈ V : (x, u) ∈ E for some x ∈ X}.
The set of vertices which are co-accessible from v is defined by a similar least fixed point. The strongly connected component of v is the intersection of those two least fixed points.
Let a context-free grammar be given with nonterminal symbols N and rules R. The set of symbols which produce the empty string can be obtained as the least fixed point of the function f: P(N) → P(N), defined as
f(X) = {A ∈ N : R contains a rule A → w in which every symbol of w belongs to X (including the case of the empty w)},
where P(N) denotes the power set of N.
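On a finite poset such as P(V), the least fixed point of a monotone function can be computed by Kleene iteration from the least element. A small Python sketch, applied to the graph-accessibility example above (the graph itself is hypothetical):

def lfp(f, bottom=frozenset()):
    # Iterate f from the least element until the value stabilizes.
    x = bottom
    while (y := f(x)) != x:
        x = y
    return x

edges = {("v", "a"), ("a", "b"), ("b", "a"), ("c", "d")}
f = lambda X: frozenset({"v"}) | frozenset(u for (x, u) in edges if x in X)
print(sorted(lfp(f)))   # ['a', 'b', 'v'] -- the vertices accessible from v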
Applications
Many fixed-point theorems yield algorithms for locating the least fixed point. Least fixed points often have desirable properties that arbitrary fixed points do not.
Denotational semantics
In computer science, the denotational semantics approach uses least fixed points to obtain from a given program text a corresponding mathematical function, called its semantics. To this end, an artificial mathematical object, ⊥, is introduced, denoting the exceptional value "undefined".
Given e.g. the program datatype int, its mathematical counterpart is defined as ℤ_⊥ = ℤ ∪ {⊥};
it is made a partially ordered set by defining ⊥ ≤ n for each n ∈ ℤ and letting any two different members of ℤ be incomparable with respect to ≤.
The semantics of a program definition int f(int n){...} is some mathematical function If t
|
https://en.wikipedia.org/wiki/Virtual%20work
|
In mechanics, virtual work arises in the application of the principle of least action to the study of forces and movement of a mechanical system. The work of a force acting on a particle as it moves along a displacement is different for different displacements. Among all the possible displacements that a particle may follow, called virtual displacements, one will minimize the action. This displacement is therefore the displacement followed by the particle according to the principle of least action. The work of a force on a particle along a virtual displacement is known as the virtual work.
Historically, virtual work and the associated calculus of variations were formulated to analyze systems of rigid bodies, but they have also been developed for the study of the mechanics of deformable bodies.
History
The principle of virtual work had always been used in some form since antiquity in the study of statics. It was used by the Greeks, medieval Arabs and Latins, and Renaissance Italians as "the law of lever". The idea of virtual work was invoked by many notable physicists of the 17th century, such as Galileo, Descartes, Torricelli, Wallis, and Huygens, in varying degrees of generality, when solving problems in statics. Working with Leibnizian concepts, Johann Bernoulli systematized the virtual work principle and made explicit the concept of infinitesimal displacement. He was able to solve problems for both rigid bodies as well as fluids. Bernoulli's version of virtual work law appeared in his letter to Pierre Varignon in 1715, which was later published in Varignon's second volume of Nouvelle mécanique ou Statique in 1725. This formulation of the principle is today known as the principle of virtual velocities and is commonly considered as the prototype of the contemporary virtual work principles. In 1743 D'Alembert published his Traité de Dynamique where he applied the principle of virtual work, based on Bernoulli's work, to solve various problems in dynamics. His
|
https://en.wikipedia.org/wiki/GEOS%20%2816-bit%20operating%20system%29
|
GEOS (later renamed GeoWorks Ensemble, NewDeal Office, and Breadbox Ensemble) is a computer operating environment, graphical user interface (GUI), and suite of application software. Originally released as PC/GEOS, it runs on DOS-based, IBM PC compatible computers. Versions for some handheld platforms were also released and licensed to some companies.
PC/GEOS was first created by Berkeley Softworks, which later became GeoWorks Corporation. Version 4.0 was developed in 2001 by Breadbox Computer Company LLC and was renamed Breadbox Ensemble. In 2015, Frank Fischer, the CEO of Breadbox, died, and work on the operating system stopped until 2017, when it was bought by blueway.Softworks.
PC/GEOS should not be confused with the 8-bit GEOS product from the same company, which runs on the Commodore 64 and Apple II.
PC/GEOS
GeoWorks Ensemble
In 1990, GeoWorks (formerly Berkeley Softworks) released PC/GEOS for IBM PC compatible systems. Commonly referred to as GeoWorks Ensemble, it was incompatible with the earlier 8-bit versions of GEOS for Commodore and Apple II computers, but provided numerous enhancements, including scalable fonts and multitasking on IBM PC XT- and AT-class PC clones. GeoWorks saw a market opportunity to provide a graphical user interface for the 16 million older model PCs that were unable to run Microsoft Windows 2.x.
GEOS was packaged with a suite of productivity applications, each with a name prefixed by "Geo": GeoWrite, GeoDraw, GeoManager, GeoPlanner, GeoDex, and GeoComm. It was also bundled with many PCs at the time, but like other GUI environments for the PC platform, such as Graphics Environment Manager (GEM), it ultimately proved less successful in the marketplace than Windows. A former CEO of GeoWorks claims that GEOS faded away "because Microsoft threatened to withdraw supply of MS-DOS to hardware manufacturers who bundled Geoworks with their machines".
In December 1992, NEC and Sony bundled an origina
|
https://en.wikipedia.org/wiki/Frank%20Wanlass
|
Frank Marion Wanlass (May 17, 1933 in Thatcher, AZ – September 9, 2010 in Santa Clara, California) was an American electrical engineer. He is best known for inventing CMOS (complementary MOS) logic with Chih-Tang Sah in 1963. CMOS has since become the standard semiconductor device fabrication process for MOSFETs (metal–oxide–semiconductor field-effect transistors).
Biography
He obtained his PhD from the University of Utah. Wanlass invented CMOS (complementary MOS) logic circuits with Chih-Tang Sah in 1963, while working at Fairchild Semiconductor. Wanlass was given U.S. patent #3,356,858 for "Low Stand-By Power Complementary Field Effect Circuitry" in 1967.
In 1963, while studying MOSFET (metal–oxide–semiconductor field-effect transistor) structures, he noted the movement of charge through oxide onto a gate. While he did not pursue it, this idea would later become the basis for EPROM (erasable programmable read-only memory) technology.
In 1964, Wanlass moved to General Microelectronics (GMe), where he made the first commercial MOS integrated circuits, and a year later to the General Instrument Microelectronics Division in New York, where he developed four-phase logic.
He was also remembered for his contribution to solving the threshold voltage instability in MOS transistors caused by sodium ion drift.
In 1991, Wanlass was awarded the IEEE Solid-State Circuits Award.
In 2009, on the 50th anniversary of both the MOSFET and the integrated circuit, Frank Wanlass was inducted into the National Inventors Hall of Fame for his invention of CMOS logic. He was part of the 2009 class celebrating semiconductor pioneers, along with inventors of semiconductor technologies such as the MOSFET (Mohamed Atalla and Dawon Kahng), planar process (Jean Hoerni), EPROM (Dov Frohman) and molecular beam epitaxy (Alfred Y. Cho).
Wanlass died on 9 September 2010.
References
External links
Invent Now, Inventor Profile – Frank Wanlass
University of Utah, Alumni Profile – Frank Wanlass
Obituary (
|
https://en.wikipedia.org/wiki/Diomidis%20Spinellis
|
Diomidis D. Spinellis (born 2 February 1967 in Athens) is a Greek computer science academic and author of the books Code Reading, Code Quality, Beautiful Architecture (co-author) and Effective Debugging.
Education
Spinellis holds a Master of Engineering degree in Software Engineering and a Ph.D. in Computer Science both from Imperial College London. His PhD was supervised by Susan Eisenbach and Sophia Drossopoulou.
Career and research
He is a professor at the Department of Management Science and Technology at the Athens University of Economics and Business, and a member of the IEEE Software editorial board, contributing the Tools of the Trade column. Since 2014, he has also been editor-in-chief of IEEE Software. Spinellis is a four-time winner of the International Obfuscated C Code Contest, in 1988, 1990, 1991 and 1995.
He is also a committer in the FreeBSD project, and the author of a number of popular free or open-source systems: the UMLGraph declarative UML diagram generator, the bib2xhtml BibTeX-to-XHTML converter, the outwit tool suite for integrating Microsoft Windows data with command-line programs, the CScout source code analyzer and refactoring browser, the socketpipe fast inter-process communication plumbing utility, and dgsh, the directed graph Unix shell for big data and stream-processing pipelines.
In 2008, together with a collaborator, Spinellis claimed that "red links" (Wikipedia slang for wikilinks that lead to non-existing pages) are what drives Wikipedia's growth.
On 5 November 2009 he was appointed the General Secretary of Information Systems at the Greek Ministry of Finance. In October 2011, he resigned citing personal reasons.
On 20 March 2015 he was elected President of the Open Technologies Alliance (GFOSS). GFOSS is a non-profit organization founded in 2008; 36 universities and research centers are its shareholders. The main goal of GFOSS is to promote Openness through the use and the development of Open Standards and Open Technologies in Ed
|
https://en.wikipedia.org/wiki/VME%20eXtensions%20for%20Instrumentation
|
VME eXtensions for Instrumentation bus (VXI bus) refers to standards for automated test based upon the VMEbus.
VXI defines additional bus lines for timing and triggering as well as mechanical requirements and standard protocols for configuration, message-based communication, multi-chassis extension, and other features. In 2004, the 2eVME extension was added to the VXI bus specification, giving it a maximum data rate of 160 MB/s.
The basic building block of a VXI system is the mainframe or chassis. This contains up to 13 slots into which various modules (instruments) can be added. The mainframe also contains all the power supply requirements for the rack and the instruments it contains. Instruments in the form of VXI Modules then fit the slots in the rack. VXI bus modules are typically 6U in height (see Eurocard) and C-size (unlike VME bus modules which are more commonly B-size). It is therefore possible to configure a system to meet a particular requirement by selecting the required instruments.
The basic architecture of the instrument system is described in US patent 4,707,834. This patent was freely licensed by Tektronix to the VXIbus Consortium. The VXIbus grew from the VMEbus specification and was established in 1987 by Hewlett Packard (now Keysight Technologies), Racal Instruments (now Astronics Test Systems), Colorado Data Systems, Wavetek and Tektronix. VXI is promoted by the VXIbus Consortium, whose sponsor members are currently (in alphabetical order) Astronics Test Systems (formerly Racal Instruments), Bustec, Keysight Technologies, National Instruments, Teradyne, and VTI Instruments (formerly known as VXI Technology).
ZTEC Instruments is a participating executive member. VXI's core market is in military and avionics automatic test systems.
The VXIplug&play Alliance specified additional hardware and software interoperability standards, such as the Virtual Instrument Software Architecture (VISA), although the alliance was eventually merged with the I
|
https://en.wikipedia.org/wiki/Release%20notes
|
Release notes are documents that are distributed with software or hardware products, sometimes while the product is still in a development or test state (e.g., a beta release). For products that have already been in use by clients, the release note is delivered to the customer when an update is released. Release notes are also known as changelogs, release logs, software changes, revision histories, updates or README files. In some cases, however, the release notes and the changelog are published separately; this split is for clarity, differentiating feature highlights from bug fixes, change requests (CRs) and improvements.
Purpose
Release notes are documents that are shared with end users, customers and clients of an organization. The terms 'end users', 'clients' and 'customers' are relative in nature and can have various interpretations depending on the specific context. For instance, the quality assurance group within a software development organization can be interpreted as an internal customer.
Content
Release notes detail the corrections, changes or enhancements (functional or non-functional) made to the service or product the company provides.
They might also be provided as an artifact accompanying the deliverables for System Testing and System Integration Testing and other managed environments especially with reference to an information technology organization.
Release notes can also contain test results and information about the test procedure. This kind of information gives readers of the release note more confidence in the fix/change made; it also enables the implementer of the change to conduct rudimentary acceptance tests.
They differ from end-user license agreements, since they do not (and should not) contain any legal terms of the software product or service. The focus should be on the software release itself, not, for example, legal conditions.
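A minimal, entirely hypothetical example of the form such a document can take (product, version and issue IDs invented for illustration):

ExampleApp 2.4.1 – Release Notes (15 January 2024)
New features
 - CSV export added to the reporting module.
Fixes
 - Crash when opening files with non-ASCII names resolved (CR-1042).
Known issues
 - Printing from the preview window is not yet supported.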
Release notes can also be interpreted as d
|
https://en.wikipedia.org/wiki/Exonuclease
|
Exonucleases are enzymes that work by cleaving nucleotides one at a time from the end (exo) of a polynucleotide chain. A hydrolyzing reaction that breaks phosphodiester bonds at either the 3′ or the 5′ end occurs. Their close relative is the endonuclease, which cleaves phosphodiester bonds in the middle (endo) of a polynucleotide chain. Eukaryotes and prokaryotes have three types of exonucleases involved in the normal turnover of mRNA: the 5′ to 3′ exonuclease (Xrn1), which is a decapping-dependent protein; the 3′ to 5′ exonuclease, a decapping-independent protein; and the poly(A)-specific 3′ to 5′ exonuclease.
In both archaea and eukaryotes, one of the main routes of RNA degradation is performed by the multi-protein exosome complex, which consists largely of 3′ to 5′ exoribonucleases.
Significance to polymerase
RNA polymerase II is known to be in effect during transcriptional termination; it works with a 5′ exonuclease (human gene Xrn2) to degrade the newly formed transcript downstream of the polyadenylation site, simultaneously displacing the polymerase. This process involves the exonuclease catching up to Pol II and terminating the transcription.
During replication, DNA polymerase I (Pol I) removes the RNA primer and then synthesizes DNA nucleotides in its place. DNA polymerase I also has 3′ to 5′ and 5′ to 3′ exonuclease activity, which is used in editing and proofreading DNA for errors. The 3′ to 5′ activity can only remove one mononucleotide at a time, while the 5′ to 3′ activity can remove mononucleotides or up to 10 nucleotides at a time.
E. coli types
In 1971, I. R. Lehman discovered exonuclease I in E. coli. Since that time, there have been numerous further discoveries, including exonucleases II, III, IV, V, VI, VII, and VIII. Each type of exonuclease has a specific function or requirement.
Exonuclease I breaks apart single-stranded DNA in a 3' → 5' direction, releasing deoxyribonucleoside 5'-monophosphates one after another. It does not cleave DNA strands without terminal 3'-OH groups because they ar
|
https://en.wikipedia.org/wiki/Micronization
|
Micronization is the process of reducing the average diameter of a solid material's particles. Traditional techniques for micronization focus on mechanical means, such as milling and grinding. Modern techniques make use of the properties of supercritical fluids and manipulate the principles of solubility.
The term micronization usually refers to the reduction of average particle diameters to the micrometer range, but can also describe further reduction to the nanometer scale. Common applications include the production of active chemical ingredients, foodstuff ingredients, and pharmaceuticals. These chemicals need to be micronized to increase efficacy.
Traditional techniques
Traditional micronization techniques are based on friction to reduce particle size. Such methods include milling, bashing and grinding. A typical industrial mill is composed of a cylindrical metallic drum that usually contains steel spheres. As the drum rotates the spheres inside collide with the particles of the solid, thus crushing them towards smaller diameters. In the case of grinding, the solid particles are formed when the grinding units of the device rub against each other while particles of the solid are trapped in between.
Methods like crushing and cutting are also used for reducing particle diameter, but produce more rough particles compared to the two previous techniques (and are therefore the early stages of the micronization process). Crushing employs hammer-like tools to break the solid into smaller particles by means of impact. Cutting uses sharp blades to cut the rough solid pieces into smaller ones.
Modern techniques
Modern methods use supercritical fluids in the micronization process. These methods use supercritical fluids to induce a state of supersaturation, which leads to precipitation of individual particles. The most widely applied techniques of this category include the RESS process (Rapid Expansion of Supercritical Solutions), the SAS method (Supercritical Anti-Solven
|
https://en.wikipedia.org/wiki/Heliotropism
|
Heliotropism, a form of tropism, is the diurnal or seasonal motion of plant parts (flowers or leaves) in response to the direction of the Sun.
The habit of some plants to move in the direction of the Sun, a form of tropism, was already known by the Ancient Greeks. They named one of those plants after that property Heliotropium, meaning "sun turn". The Greeks assumed it to be a passive effect, presumably the loss of fluid on the illuminated side, that did not need further study. Aristotle's logic that plants are passive and immobile organisms prevailed. In the 19th century, however, botanists discovered that growth processes in the plant were involved, and conducted increasingly in-depth experiments. A. P. de Candolle called this phenomenon in any plant heliotropism (1832). It was renamed phototropism in 1892, because it is a response to light rather than to the sun, and because the phototropism of algae in lab studies at that time strongly depended on the brightness (positive phototropic for weak light, and negative phototropic for bright light, like sunlight). A botanist studying this subject in the lab, at the cellular and subcellular level, or using artificial light, is more likely to employ the more abstract word phototropism, a term which includes artificial light as well as natural sunlight. The French scientist Jean-Jacques d'Ortous de Mairan was one of the first to study heliotropism when he experimented with the Mimosa pudica plant. The phenomenon was studied by Charles Darwin and published in his penultimate 1880 book The Power of Movement in Plants, a work which included other stimuli to plant movement such as gravity, moisture and touch.
Floral heliotropism
Heliotropic flowers track the Sun's motion across the sky from east to west. Daisies or Bellis perennis close their petals at night but open in the morning light and then follow the sun as the day progresses. During the night, the flowers may assume a random orientation, while at dawn they turn ag
|
https://en.wikipedia.org/wiki/Michael%20Jackson%27s%20Moonwalker
|
Michael Jackson's Moonwalker is the name of several video games based on the 1988 Michael Jackson film Moonwalker. Sega developed two beat 'em ups, released in 1990; one released in arcades and another released for the Sega Genesis and Master System consoles. U.S. Gold also published various games for home computers the same year. Each of the games' plots loosely follows the "Smooth Criminal" segment of the film, in which Jackson rescues kidnapped children from the evil Mr. Big, and incorporates synthesized versions of some of the musician's songs. Following Moonwalker, Jackson collaborated with Sega on several other video games.
Sega arcade version
Michael Jackson's Moonwalker is an arcade video game by Sega (programming) and Triumph International (audiovisuals), made with Jackson's involvement and released on the Sega System 18 hardware. The arcade game has distinctively different gameplay from its computer and console counterparts, focusing more on beat 'em up elements than on platform gameplay.
Gameplay
The game is essentially a beat 'em up drawn using isometric video game graphics, although Jackson attacks with magic powers instead of physical contact and can shoot short-ranged magic power at enemies. The magic power can be charged by holding the attack button to increase its range and damage. If up close to enemies, Jackson executes a spinning melee attack using magic power.
If the cabinet supports it, up to three people can play simultaneously. All three players play as Jackson, dressed in his suit (white for player 1, red for player 2, black for player 3).
Jackson's special attack is termed "Dance Magic". There are three different dance routines that may be performed, and the player starts with one to three of these attacks per credit (depending on how the machine is set up).
Bubbles, Michael's real-life pet chimpanzee, appears in each level. Once collected or rescued, Bubbles transforms Michael into a robotic version, with th
|
https://en.wikipedia.org/wiki/Master%20control
|
Master control is the technical hub of a broadcast operation common among most over-the-air television stations and television networks. It is distinct from a production control room (PCR) in television studios where the activities such as switching from camera to camera are coordinated. A transmission control room (TCR) is usually smaller in size and is a scaled down version of centralcasting.
Master control is the final point before a signal is transmitted over-the-air for terrestrial television or cablecast, satellite provider for broadcast, or sent on to a cable television operator. Television master control rooms include banks of video monitors, satellite receivers, videotape machines, video servers, transmission equipment, and, more recently, computer broadcast automation equipment for recording and playback of television programming.
Master control is generally staffed with one or two master control operators around-the-clock to ensure continuous operation. Master control operators are responsible for monitoring the quality and accuracy of the on-air product, ensuring the transmission meets government regulations, troubleshooting equipment malfunctions, and preparing programming for playout. Regulations include both technical ones (such as those against over-modulation and dead air), as well as content ones (such as indecency and station ID).
Many television networks and radio networks or station groups have consolidated facilities and now operate multiple stations from one regional master control or centralcasting center. An example of this centralized broadcast programming system on a large scale is NBC's "hub-spoke project" that enables a single "hub" to have control of dozens of stations' automation systems and to monitor their air signals, thus reducing or eliminating some responsibilities of local employees at their owned-and-operated (O&O) stations.
Outside the United States, the Canadian Broadcasting Corporation (CBC) manages four radio network
|
https://en.wikipedia.org/wiki/Basic%20access%20authentication
|
In the context of an HTTP transaction, basic access authentication is a method for an HTTP user agent (e.g. a web browser) to provide a user name and password when making a request. In basic HTTP authentication, a request contains a header field in the form of Authorization: Basic <credentials>, where credentials is the Base64 encoding of ID and password joined by a single colon :.
It was originally implemented by Ari Luotonen at CERN in 1993 and defined in the HTTP 1.0 specification in 1996.
It is specified in RFC 7617 from 2015, which obsoletes RFC 2617 from 1999.
Features
HTTP Basic authentication (BA) implementation is the simplest technique for enforcing access controls to web resources because it does not require cookies, session identifiers, or login pages; rather, HTTP Basic authentication uses standard fields in the HTTP header.
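For illustration, here is a minimal Python sketch (not from the article) of constructing such a header from a user name and password; the "Aladdin"/"open sesame" pair is the classic example used in the specifications:
import base64

def basic_auth_header(user, password):
    # credentials are "user:password", Base64-encoded, prefixed with "Basic "
    credentials = (user + ":" + password).encode("utf-8")
    return "Basic " + base64.b64encode(credentials).decode("ascii")

print(basic_auth_header("Aladdin", "open sesame"))
# prints: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==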
Security
The BA mechanism does not provide confidentiality protection for the transmitted credentials. They are merely encoded with Base64 in transit and not encrypted or hashed in any way. Therefore, basic authentication is typically used in conjunction with HTTPS to provide confidentiality.
Because the BA field has to be sent in the header of each HTTP request, the web browser needs to cache credentials for a reasonable period of time to avoid constantly prompting the user for their username and password. Caching policy differs between browsers.
HTTP does not provide a method for a web server to instruct the client to "log out" the user. However, there are a number of methods to clear cached credentials in certain web browsers. One of them is redirecting the user to a URL on the same domain, using credentials that are intentionally incorrect. However, this behavior is inconsistent between various browsers and browser versions. Microsoft Internet Explorer offers a dedicated JavaScript method to clear cached credentials:
<script>document.execCommand('ClearAuthenticationCache');</script>
In modern browsers, cached credentials for basic auth
|
https://en.wikipedia.org/wiki/Centipede%20game
|
In game theory, the centipede game, first introduced by Robert Rosenthal in 1981, is an extensive form game in which two players take turns choosing either to take a slightly larger share of an increasing pot, or to pass the pot to the other player. The payoffs are arranged so that if one passes the pot to one's opponent and the opponent takes the pot on the next round, one receives slightly less than if one had taken the pot on this round, but after an additional switch the potential payoff will be higher. Therefore, although at each round a player has an incentive to take the pot, it would be better for them to wait. Although the traditional centipede game had a limit of 100 rounds (hence the name), any game with this structure but a different number of rounds is called a centipede game.
The unique subgame perfect equilibrium (and every Nash equilibrium) of these games results in the first player taking the pot on the first round of the game; however, in empirical tests, relatively few players do so, and as a result, achieve a higher payoff than in the subgame perfect and Nash equilibria. These results are taken to show that subgame perfect equilibria and Nash equilibria fail to predict human play in some circumstances. The centipede game is commonly used in introductory game theory courses and texts to highlight the concept of backward induction and the iterated elimination of dominated strategies, which show a standard way of providing a solution to the game.
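To make the backward-induction argument concrete, here is a minimal Python sketch; the payoff scheme (taking on round k gives the mover k + 2 coins and the opponent k coins, with an even split if nobody ever takes) is a hypothetical parametrization chosen only to satisfy the centipede structure described above:
def solve(n_rounds):
    # Backward induction: work from the last round to the first.
    # (cont_mover, cont_opp) is the value of play continuing, seen
    # from the perspective of whoever moves next.
    cont_mover, cont_opp = n_rounds, n_rounds  # even split if nobody takes
    plan = [None] * n_rounds
    for k in reversed(range(n_rounds)):
        take_mover, take_opp = k + 2, k
        # Passing swaps roles: this mover becomes next round's opponent.
        pass_mover, pass_opp = cont_opp, cont_mover
        if take_mover >= pass_mover:
            plan[k] = ("take", take_mover, take_opp)
            cont_mover, cont_opp = take_mover, take_opp
        else:
            plan[k] = ("pass", pass_mover, pass_opp)
            cont_mover, cont_opp = pass_mover, pass_opp
    return plan

print(solve(6)[0])  # ('take', 2, 0): the game unravels to taking at once
Because passing and having the opponent take on the next round yields one coin less than taking now, every round unravels, reproducing the subgame perfect equilibrium described above.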
Play
One possible version of a centipede game could be played as follows:
The addition of coins is taken to be an externality, as it is not contributed by either player.
Formal definition
The centipede game may be formalized as a two-player extensive-form game with at most n rounds. Players 1 and 2 alternate, starting with player 1, and may on each turn play a move from {take, pass}. The game terminates when take is played for the first time, or otherwise upon the n-th move, if take is never played.
Suppose the game ends on round with player making t
|
https://en.wikipedia.org/wiki/Gluconic%20acid
|
Gluconic acid is an organic compound with molecular formula C6H12O7 and condensed structural formula HOCH2(CHOH)4CO2H. A white solid, it forms the gluconate anion in neutral aqueous solution. The salts of gluconic acid are known as "gluconates". Gluconic acid, gluconate salts, and gluconate esters occur widely in nature because such species arise from the oxidation of glucose. Some drugs are injected in the form of gluconates.
Chemical structure
The chemical structure of gluconic acid consists of a six-carbon chain, with five hydroxyl groups positioned in the same way as in the open-chained form of glucose, terminating in a carboxylic acid group. It is one of the 16 stereoisomers of 2,3,4,5,6-pentahydroxyhexanoic acid.
Production
Gluconic acid is typically produced by the aerobic oxidation of glucose in the presence of the enzyme glucose oxidase. The conversion affords gluconolactone and hydrogen peroxide. The lactone spontaneously hydrolyzes to gluconic acid in water.
Variations of this process use the oxidation of glucose (or of other carbohydrate-containing substrates) by fermentation or by noble metal catalysis.
Gluconic acid was first prepared in 1870 by Hlasiwetz and Habermann, by the chemical oxidation of glucose. In 1880, Boutroux prepared and isolated gluconic acid by fermentation of glucose.
Occurrence and uses
Gluconic acid occurs naturally in fruit, honey, and wine. As a food additive (E574), it is an acidity regulator.
The gluconate anion chelates Ca2+, Fe2+, Al3+, and other metals, including lanthanides and actinides. It is also used in cleaning products, where it dissolves mineral deposits, especially in alkaline solution.
Zinc gluconate injections are used to neuter male dogs.
Gluconate is also used in building and construction as a concrete admixture (retarder) to slow down the cement hydration reactions, and to delay the cement setting time. It allows for a longer time to lay the concrete, or to spread the cement hydration heat o
|
https://en.wikipedia.org/wiki/Radiant%20intensity
|
In radiometry, radiant intensity is the radiant flux emitted, reflected, transmitted or received, per unit solid angle, and spectral intensity is the radiant intensity per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. These are directional quantities. The SI unit of radiant intensity is the watt per steradian (W/sr), while that of spectral intensity in frequency is the watt per steradian per hertz (W·sr−1·Hz−1) and that of spectral intensity in wavelength is the watt per steradian per metre (W·sr−1·m−1)—commonly the watt per steradian per nanometre (W·sr−1·nm−1). Radiant intensity is distinct from irradiance and radiant exitance, which are often called intensity in branches of physics other than radiometry. In radio-frequency engineering, radiant intensity is sometimes called radiation intensity.
Mathematical definitions
Radiant intensity
Radiant intensity, denoted Ie,Ω ("e" for "energetic", to avoid confusion with photometric quantities, and "Ω" to indicate this is a directional quantity), is defined as
Ie,Ω = ∂Φe / ∂Ω
where
∂ is the partial derivative symbol;
Φe is the radiant flux emitted, reflected, transmitted or received;
Ω is the solid angle.
In general, Ie,Ω is a function of viewing angle θ and potentially azimuth angle. For the special case of a Lambertian surface, Ie,Ω follows Lambert's cosine law Ie,Ω = I0 cos θ.
When calculating the radiant intensity emitted by a source, Ω refers to the solid angle into which the light is emitted. When calculating the radiant intensity received by a detector, Ω refers to the solid angle subtended by the source as viewed from that detector.
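As a numerical aside (not part of the article), integrating the Lambertian intensity I0 cos θ over the hemisphere recovers the total radiant flux Φ = π·I0, which is easy to check in Python:
import numpy as np

I0 = 1.0                                  # peak radiant intensity, in W/sr
theta = np.linspace(0.0, np.pi / 2, 2001) # polar angle over the hemisphere
# dΩ = sin(θ) dθ dφ; the azimuthal integral contributes a factor of 2π.
flux = 2 * np.pi * np.trapz(I0 * np.cos(theta) * np.sin(theta), theta)
print(flux, np.pi * I0)                   # both ≈ 3.14159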
Spectral intensity
Spectral intensity in frequency, denoted Ie,Ω,ν, is defined as
Ie,Ω,ν = ∂Ie,Ω / ∂ν
where ν is the frequency.
Spectral intensity in wavelength, denoted Ie,Ω,λ, is defined as
Ie,Ω,λ = ∂Ie,Ω / ∂λ
where λ is the wavelength.
Radio-frequency engineering
Radiant intensity is used to characterize the emission of radiation by an antenna:
Ie,Ω = Ee · r²
where
Ee is the irradiance of the antenna;
r is the dist
|
https://en.wikipedia.org/wiki/Pullulan
|
Pullulan is a polysaccharide polymer consisting of maltotriose units, also known as α-1,4-;α-1,6-glucan. Three glucose units in maltotriose are connected by an α-1,4 glycosidic bond, whereas consecutive maltotriose units are connected to each other by an α-1,6 glycosidic bond. Pullulan is produced from starch by the fungus Aureobasidium pullulans. Pullulan is mainly used by the cell to resist desiccation and predation. The presence of this polysaccharide also facilitates diffusion of molecules both into and out of the cell.
As an edible, mostly tasteless polymer, the chief commercial use of pullulan is in the manufacture of edible films that are used in various breath freshener or oral hygiene products such as Listerine Cool Mint of Johnson and Johnson (USA) and Meltz Super Thin Mints of Avery Bio-Tech Private Ltd. (India). Pullulan and HPMC can also be used as vegetarian substitutes for gelatine in drug capsules. As a food additive, it is known by the E number E1204.
Pullulan has also been explored as a natural polymeric biomaterial for fabricating injectable scaffolds for bone tissue engineering, cartilage tissue engineering, and intervertebral disc regeneration.
See also
Pullulanase
Desiccation
References
Characteristics of pullulan based edible films (abstract)
Pullulan
Pullulan chemical diagram
Biotechnology products
Food additives
Polysaccharides
Packaging materials
E-number additives
Further reading
Saman Zarei; Faramarz Khodaiyan; Seyed Saeid Hosseini; John F. Kennedy (October 2020). "Pullulan Production Using Molasses and Corn Steep Liquor as Agroindustrial Wastes: Physiochemical, Thermal and Rheological Properties". Applied Food Biotechnology, Vol. 7 No. 4, 18 August 2020, pages 263–272. https://doi.org/10.22037/afb.v7i4.29747
|
https://en.wikipedia.org/wiki/Geraniol
|
Geraniol is a monoterpenoid and an alcohol. It is the primary component of citronella oil and a primary component of rose oil and palmarosa oil. It is a colorless oil, although commercial samples can appear yellow. It has low solubility in water, but it is soluble in common organic solvents. The functional group derived from geraniol (in essence, geraniol lacking the terminal −OH) is called geranyl.
Uses and occurrence
In addition to rose oil, palmarosa oil, and citronella oil, it also occurs in small quantities in geranium, lemon, and many other essential oils. With a rose-like scent, it is commonly used in perfumes. It is used in flavors such as peach, raspberry, grapefruit, red apple, plum, lime, orange, lemon, watermelon, pineapple, and blueberry.
Geraniol is produced by the scent glands of honeybees to mark nectar-bearing flowers and locate the entrances to their hives. It is also commonly used as an insect repellent, especially for mosquitoes.
The scent of geraniol is reminiscent of, but chemically unrelated to, 2-ethoxy-3,5-hexadiene, also known as geranium taint, a wine fault resulting from fermentation of sorbic acid by lactic acid bacteria.
Geranyl pyrophosphate is important in biosynthesis of other terpenes such as myrcene and ocimene. It is also used in the biosynthesis pathway of many cannabinoids in the form of CBGA.
Reactions
In acidic solutions, geraniol is converted to the cyclic terpene α-terpineol. The alcohol group undergoes expected reactions. It can be converted to the tosylate, which is a precursor to the chloride. Geranyl chloride also arises by the Appel reaction by treating geraniol with triphenylphosphine and carbon tetrachloride. It can be hydrogenated. It can be oxidized to the aldehyde geranial.
Health and safety
Geraniol is classified as D2B (Toxic materials causing other effects) using the Workplace Hazardous Materials Information System (WHMIS).
History
Geraniol was first isolated in pure form in 1871 by the German chemi
|
https://en.wikipedia.org/wiki/Wiener%20filter
|
In signal processing, the Wiener filter is a filter used to produce an estimate of a desired or target random process by linear time-invariant (LTI) filtering of an observed noisy process, assuming known stationary signal and noise spectra, and additive noise. The Wiener filter minimizes the mean square error between the estimated random process and the desired process.
Description
The goal of the Wiener filter is to compute a statistical estimate of an unknown signal using a related signal as an input and filtering that known signal to produce the estimate as an output. For example, the known signal might consist of an unknown signal of interest that has been corrupted by additive noise. The Wiener filter can be used to filter out the noise from the corrupted signal to provide an estimate of the underlying signal of interest. The Wiener filter is based on a statistical approach, and a more statistical account of the theory is given in the minimum mean square error (MMSE) estimator article.
Typical deterministic filters are designed for a desired frequency response. However, the design of the Wiener filter takes a different approach. One is assumed to have knowledge of the spectral properties of the original signal and the noise, and one seeks the linear time-invariant filter whose output would come as close to the original signal as possible. Wiener filters are characterized by the following:
Assumption: signal and (additive) noise are stationary linear stochastic processes with known spectral characteristics or known autocorrelation and cross-correlation
Requirement: the filter must be physically realizable/causal (this requirement can be dropped, resulting in a non-causal solution)
Performance criterion: minimum mean-square error (MMSE)
This filter is frequently used in the process of deconvolution; for this application, see Wiener deconvolution.
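Before turning to the solutions, here is a minimal numerical sketch (not from the article) of the non-causal, frequency-domain form of the filter under the stated assumptions: the signal spectrum is taken as known, and the additive noise is white with known variance. All names are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 64)        # underlying signal of interest
observed = signal + rng.normal(scale=0.5, size=n)

S = np.abs(np.fft.fft(signal)) ** 2        # signal power spectrum (assumed known)
N = np.full(n, n * 0.5 ** 2)               # white-noise power spectrum
H = S / (S + N)                            # Wiener frequency response
estimate = np.real(np.fft.ifft(H * np.fft.fft(observed)))

print(np.mean((observed - signal) ** 2))   # mean square error before filtering
print(np.mean((estimate - signal) ** 2))   # smaller mean square error after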
Wiener filter solutions
Let be an unknown signal which must be estimated from a measurement signal . W
|
https://en.wikipedia.org/wiki/Ernest%20Nagel
|
Ernest Nagel (November 16, 1901 – September 20, 1985) was an American philosopher of science. Along with Rudolf Carnap, Hans Reichenbach, and Carl Hempel, he is sometimes seen as one of the major figures of the logical positivist movement. His 1961 book The Structure of Science is considered a foundational work in the logic of scientific explanation.
Life and career
Nagel was born in Nové Mesto nad Váhom (now in Slovakia, then Vágújhely and part of the Austro-Hungarian Empire) to Jewish parents. His mother, Frida Weiss, was from the nearby town of Vrbové (or Verbo). He emigrated to the United States at the age of 10 and became a U.S. citizen in 1919. He received a BSc from the City College of New York in 1923, and earned his PhD from Columbia University in 1931, with a dissertation on the concept of measurement. Except for one year (1966–1967) at Rockefeller University, he spent his entire academic career at Columbia. He became the first John Dewey Professor of Philosophy there in 1955, and then University Professor from 1967 until his retirement in 1970, after which he continued to teach. In 1977, he was one of the few philosophers elected to the National Academy of Sciences.
His work concerned the philosophy of mathematical fields such as geometry and probability, quantum mechanics, and the status of reductive and inductive theories of science. His book The Structure of Science (1961) practically inaugurated the field of analytic philosophy of science. He expounded the different kinds of explanation in different fields, and was sceptical about attempts to unify the nature of scientific laws or explanations. He was the first to propose that by positing analytic equivalencies (or "bridge laws") between the terms of different sciences, one could eliminate all ontological commitments except those required by the most basic science. He also upheld the view that social sciences are scientific, and should adopt the same standards as natural sciences.
Nagel wrote An I
|
https://en.wikipedia.org/wiki/SCSI%20initiator%20and%20target
|
In computer data storage, a SCSI initiator is the endpoint that initiates a SCSI session, that is, sends a SCSI command. The initiator usually does not provide any Logical Unit Numbers (LUNs).
On the other hand, a SCSI target is the endpoint that does not initiate sessions, but instead waits for initiators' commands and provides required input/output data transfers. The target usually provides to the initiators one or more LUNs, because otherwise no read or write command would be possible.
Detailed information
Typically, a computer is an initiator and a data storage device is a target. As in a client–server architecture, an initiator is analogous to the client, and a target is analogous to the server. Each SCSI address (each identifier on a SCSI bus) displays behavior of initiator, target, or (rarely) both at the same time. There is nothing in the SCSI protocol that prevents an initiator from acting as a target or vice versa.
SCSI initiators are sometimes wrongly called controllers.
Other protocols
Initiator and target terms are applicable not only to traditional parallel SCSI, but also to Fibre Channel Protocol (FCP), iSCSI (see iSCSI target), HyperSCSI, (in some sense) SATA, ATA over Ethernet (AoE), InfiniBand, DSSI and many other storage networking protocols.
Address versus port
In most of these protocols, an address (whether it is initiator or target) is roughly equivalent to a physical device's port. Situations where a single physical port hosts multiple addresses, or where a single address is accessible from one device's multiple ports, are not very common. Even when using multipath I/O to achieve fault tolerance, the device driver switches between different targets or initiators statically bound on physical ports, instead of sharing a static address between physical ports.
See also
Master/slave (technology)
SCSI architectural model
iSCSI target
SCSI
Computer storage devices
|
https://en.wikipedia.org/wiki/Discrete-time%20Fourier%20transform
|
In mathematics, the discrete-time Fourier transform (DTFT), also called the finite Fourier transform, is a form of Fourier analysis that is applicable to a sequence of values.
The DTFT is often used to analyze samples of a continuous function. The term discrete-time refers to the fact that the transform operates on discrete data, often samples whose interval has units of time. From uniformly spaced samples it produces a function of frequency that is a periodic summation of the continuous Fourier transform of the original continuous function. Under certain theoretical conditions, described by the sampling theorem, the original continuous function can be recovered perfectly from the DTFT and thus from the original discrete samples. The DTFT itself is a continuous function of frequency, but discrete samples of it can be readily calculated via the discrete Fourier transform (DFT) (see ), which is by far the most common method of modern Fourier analysis.
Both transforms are invertible. The inverse DTFT is the original sampled data sequence. The inverse DFT is a periodic summation of the original sequence. The fast Fourier transform (FFT) is an algorithm for computing one cycle of the DFT, and its inverse produces one cycle of the inverse DFT.
Definition
The discrete-time Fourier transform of a discrete sequence of real or complex numbers x[n], for all integers n, is a trigonometric series, which produces a periodic function of a frequency variable. When the frequency variable, ω, has normalized units of radians/sample, the periodicity is 2π, and the DTFT series is:
X(ω) = Σ_{n=−∞}^{∞} x[n] e^{−iωn}
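As noted above, discrete samples of the DTFT can be computed with the DFT; here is a small Python check (illustrative, not from the article) that a direct evaluation of the series at ω = 2πk/N matches numpy's FFT:
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])   # finite sequence, zero elsewhere
N = len(x)
n = np.arange(N)
X = np.fft.fft(x)                    # DFT = DTFT sampled at omega = 2*pi*k/N
for k in range(N):
    omega = 2 * np.pi * k / N
    dtft_k = np.sum(x * np.exp(-1j * omega * n))   # the series above
    print(np.allclose(dtft_k, X[k]))               # True for every k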
The discrete-time Fourier transform is analogous to a Fourier series, except instead of starting with a periodic function of time and producing a discrete sequence over frequency, it starts with a discrete sequence in time and produces a periodic function in frequency. The utility of this frequency domain function is rooted in the Poisson summation formula. Let X(f) be the Fourier transform of any function,
|
https://en.wikipedia.org/wiki/Spectrohelioscope
|
A spectrohelioscope is a type of solar telescope designed by George Ellery Hale in 1924 to allow the Sun to be viewed in a selected wavelength of light. The name comes from Latin- and Greek-based words: "Spectro," referring to the optical spectrum, "helio," referring to the Sun, and "scope," as in telescope.
The basic spectrohelioscope is a complex machine that uses a spectroscope to scan the surface of the Sun. The image from the objective lens is focused on a narrow slit revealing only a thin portion of the Sun's surface. The light is then passed through a prism or diffraction grating to spread the light into a spectrum. The spectrum is then focused on another slit that allows only a narrow part of the spectrum (the desired wavelength of light for viewing) to pass. The light is finally focused on an eyepiece so the surface of the Sun can be seen. The view, however, would be only a narrow strip of the Sun's surface. The slits are moved in unison to scan across the whole surface of the Sun giving a full image. Independently nodding mirrors can be used instead of moving slits to produce the same scan: the first mirror selects a slice of the Sun, the second selects the desired wavelength.
The spectroheliograph is a similar device, but images the Sun at a particular wavelength photographically and is still in use in professional observatories.
See also
Helioscope
Heliometer
References
External links
An amateur-made spectrohelioscope
Measuring instruments
Spectroscopy
Telescope types
Astronomical instruments
|
https://en.wikipedia.org/wiki/Short-rate%20model
|
A short-rate model, in the context of interest rate derivatives, is a mathematical model that describes the future evolution of interest rates by describing the future evolution of the short rate, usually written r_t.
The short rate
Under a short rate model, the stochastic state variable is taken to be the instantaneous spot rate. The short rate, r_t, then, is the (continuously compounded, annualized) interest rate at which an entity can borrow money for an infinitesimally short period of time from time t. Specifying the current short rate does not specify the entire yield curve. However, no-arbitrage arguments show that, under some fairly relaxed technical conditions, if we model the evolution of r_t as a stochastic process under a risk-neutral measure Q, then the price at time t of a zero-coupon bond maturing at time T with a payoff of 1 is given by
P(t,T) = E^Q[exp(−∫_t^T r_s ds) | F_t]
where F_t is the natural filtration for the process. The interest rates implied by the zero coupon bonds form a yield curve, or more precisely, a zero curve. Thus, specifying a model for the short rate specifies future bond prices. This means that instantaneous forward rates are also specified by the usual formula
f(t,T) = −∂ ln P(t,T) / ∂T
Short rate models are often classified as endogenous and exogenous. Endogenous short rate models are short rate models where the term structure of interest rates, or of zero-coupon bond prices, is an output of the model, so it is "inside the model" (endogenous) and is determined by the model parameters. Exogenous short rate models are models where such term structure is an input, as the model involves some time-dependent functions or shifts that allow for inputting a given market term structure, so that the term structure comes from outside (exogenous).
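To illustrate the bond-price expectation above, here is a rough Monte Carlo sketch for one particular short-rate model, the Vasicek model dr = a(b − r)dt + σ dW, chosen purely as an example; the parameters are arbitrary:
import numpy as np

a, b, sigma, r0 = 0.5, 0.04, 0.01, 0.03   # illustrative model parameters
T, steps, paths = 5.0, 500, 20000
dt = T / steps

rng = np.random.default_rng(1)
r = np.full(paths, r0)
integral = np.zeros(paths)                # accumulates ∫_0^T r_s ds per path
for _ in range(steps):
    integral += r * dt                    # Euler discretization of the integral
    r += a * (b - r) * dt + sigma * np.sqrt(dt) * rng.normal(size=paths)

print(np.mean(np.exp(-integral)))         # estimate of P(0,T) = E[exp(−∫ r ds)]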
Particular short-rate models
Throughout this section, W_t represents a standard Brownian motion under a risk-neutral probability measure and dW_t its differential. Where the model is lognormal, a variable X_t is assumed to follow an Ornstein–Uhlenbeck process and r_t is assumed to foll
|
https://en.wikipedia.org/wiki/Concrete%20mixer
|
A concrete mixer (also cement mixer) is a device that homogeneously combines cement, aggregate (e.g. sand or gravel), and water to form concrete. A typical concrete mixer uses a revolving drum to mix the components. For smaller volume works, portable concrete mixers are often used so that the concrete can be made at the construction site, giving the workers ample time to use the concrete before it hardens. An alternative to a machine is mixing concrete by hand. This is usually done in a wheelbarrow; however, several companies have recently begun to sell modified tarps for this purpose.
The concrete mixer was invented by Columbus, Ohio industrialist Gebhardt Jaeger.
History
One of the first concrete mixers was developed in 1900 by T.L. Smith in Milwaukee. The mixer already exhibited the still-common basic construction, with a tiltable conical drum (at that time a double cone) with blades. In 1925, at least two mixers built 25 years earlier were still in use (serial numbers 37 and 82). The Smith Mascot in essence has the same construction as the small mixers used today. In the 1920s, the T.L. Smith Company in Milwaukee built the world's largest concrete mixers. Mixers of this company were used, for example, in the construction of the Wilson Dam (six 2-yard and two 4-yard mixers, at the time the largest single installation of the largest concrete mixers in the world), the first stadium of the Ohio State University, and the Exchequer Dam.
Industrial mixers
Today's market increasingly requires consistent homogeneity and short mixing times for the industrial production of ready-mix concrete, and more so for precast/prestressed concrete. This has resulted in refinement of mixing technologies for concrete production. Different styles of stationary mixers have been developed, each with its own inherent strengths targeting different parts of the concrete production market. The most common mixers used today fall into three categories:
Twin-shaft mixers, known for their high inte
|
https://en.wikipedia.org/wiki/Gliwice%20Radio%20Tower
|
The Gliwice Radio Tower is a transmission tower in the Szobiszowice district of Gliwice, Upper Silesia, Poland.
Structure
The Gliwice Radio Tower is 111 m tall, with a wooden framework of impregnated Siberian larch linked by brass connectors. It was nicknamed "the Silesian Eiffel Tower" by the local population. The tower has four platforms, which are 40.4 m, 55.3 m, 80.0 m and 109.7 m above ground. The top platform measures 2.13 x 2.13 m. A ladder with 365 steps provides access to the top.
The tower is the tallest wooden structure in Europe. The tower was originally designed to carry aerials for medium wave broadcasting, but that transmitter is no longer in service because the final stage is missing. Today, the Gliwice Radio Tower carries multiple transceiver antennas for mobile phone services and a low-power FM transmitter broadcasting on 93.4 MHz.
History
The tower was erected beginning 1 August 1934 as Sendeturm Gleiwitz (Gleiwitz Radio Tower), when the territory was part of Germany. It was operated by the Reichssender Breslau (the former Schlesische Funkstunde broadcasting corporation) of the Reichs-Rundfunk-Gesellschaft radio network. The tower was modeled on the Mühlacker radio transmitter; it replaced a smaller transmitter in Gleiwitz situated nearby on Raudener Straße and went into service on 23 December 1935.
On 31 August 1939, the German SS staged a 'Polish' attack on Gleiwitz radio station, which was later used as justification for the invasion of Poland. The transmission facility was not demolished in World War II. From 4 October 1945 until the inauguration of the new transmitter in Ruda Śląska in 1955 the Gliwice transmitter was used for medium-wave transmissions by the Polish state broadcaster Polskie Radio. After 1955, it was used to jam medium-wave stations (such as Radio Free Europe) broadcasting Polish-language programmes from Western Europe.
Transmitted programmes
Radio
See also
Gleiwitz incident
List of tallest wooden buildings
Trivia
The shape of
|
https://en.wikipedia.org/wiki/Analysis%20paralysis
|
Analysis paralysis (or paralysis by analysis) describes an individual or group process where overanalyzing or overthinking a situation can cause forward motion or decision-making to become "paralyzed", meaning that no solution or course of action is decided upon within a natural time frame. A situation may be deemed too complicated and a decision is never made, or made much too late, due to anxiety that a potentially larger problem may arise. A person may desire a perfect solution, but may fear making a decision that could result in error, while on the way to a better solution. Equally, a person may hold that a superior solution is a short step away, and stall in its endless pursuit, with no concept of diminishing returns. On the opposite end of the time spectrum is the phrase extinct by instinct, which is making a fatal decision based on hasty judgment or a gut reaction.
Analysis paralysis is when the fear of either making an error or forgoing a superior solution outweighs the realistic expectation or potential value of success in a decision made in a timely manner. This imbalance results in suppressed decision-making in an unconscious effort to preserve existing options. An overload of options can overwhelm the situation and cause this "paralysis", rendering one unable to come to a conclusion. It can become a larger problem in critical situations where a decision needs to be reached, but a person is not able to provide a response fast enough, potentially causing a bigger issue than they would have had, had they made a decision.
History
The basic idea has been expressed through narrative a number of times. In one "Aesop's fable" that is recorded even before Aesop's time, The Fox and the Cat, the fox boasts of "hundreds of ways of escaping" while the cat has "only one". When they hear the hounds approaching, the cat scampers up a tree while "the fox in his confusion was caught up by the hounds". The fable ends with the moral, "Better one safe way than a hundred on
|
https://en.wikipedia.org/wiki/Reclaimer
|
A reclaimer is a large machine used in bulk material handling applications. A reclaimer's function is to recover bulk material such as ores and cereals from a stockpile. A stacker is used to stack the material.
Reclaimers are volumetric machines and are rated in m3/h (cubic meters per hour) for capacity, which is often converted to t/h (tonnes per hour) based on the average bulk density of the material being reclaimed. Reclaimers normally travel on a rail between stockpiles in the stockyard. A bucket wheel reclaimer can typically move in three directions: horizontally along the rail; vertically by "luffing" its boom and rotationally by slewing its boom. Reclaimers are generally electrically powered by means of a trailing cable.
Reclaimer types
Bucket wheel reclaimers use "bucket wheels" to remove material from the pile they are reclaiming. Scraper reclaimers use a series of scrapers on a chain to reclaim the material.
The reclaimer structure can be of a number of types, including portal and bridge. Reclaimers are named based on their type, for example, "Bridge reclaimer." Portal and bridge reclaimers can both use either bucket wheels or scrapers to reclaim the product. Bridge type reclaimers blend the stacked product as it is reclaimed.
Whenever material is laid down during any reclaiming process, it creates a pile. Blending bed stacker reclaimers form such piles in a circular fashion: they take reclaimed material and pass it through a conveyor system that rotates around the center of the pile to create a circle. This allows the pile to be evenly spread out during the reclaiming process and allows the oldest material in the pile to be reclaimed before the newer material. During this process, a harrow tool is used to cut through the reclaimed material so that the material can be combined. Some blending bed reclaimers are equipped with rakes to ensure that no material gets stuck in the machine. These rakes are made with various materials a
|
https://en.wikipedia.org/wiki/Diapause
|
In animal dormancy, diapause is the delay in development in response to regular and recurring periods of adverse environmental conditions. It is a physiological state with very specific initiating and inhibiting conditions. The mechanism is a means of surviving predictable, unfavorable environmental conditions, such as temperature extremes, drought, or reduced food availability. Diapause is observed in all the life stages of arthropods, especially insects.
Activity levels of diapausing stages can vary considerably among species. Diapause may occur in a completely immobile stage, such as the pupae and eggs, or it may occur in very active stages that undergo extensive migrations, such as the adult monarch butterfly, Danaus plexippus. In cases where the insect remains active, feeding is reduced and reproductive development is slowed or halted.
Embryonic diapause, a somewhat similar phenomenon, occurs in over 130 species of mammals, possibly even in humans, and in the embryos of many of the oviparous species of fish in the order Cyprinodontiformes.
Phases of insect diapause
Diapause in insects is a dynamic process consisting of several distinct phases. While diapause varies considerably from one taxon of insects to another, these phases can be characterized by particular sets of metabolic processes and responsiveness of the insect to certain environmental stimuli. For example, Sepsis cynipsea flies primarily use temperature to determine when to enter diapause. Diapause can occur during any stage of development in arthropods, but each species exhibits diapause in specific phases of development. Reduced oxygen consumption is typical, as are reduced movement and feeding. In Polistes exclamans, a social wasp, only the queen is said to be able to undergo diapause.
Comparison of diapause periods
The sensitive stage is the period when a stimulus must occur to trigger diapause in the organism. Examples of sensitive stage/diapause periods in various insects:
Induction
The indu
|
https://en.wikipedia.org/wiki/Nutraceutical
|
A nutraceutical is a pharmaceutical alternative which claims physiological benefits. In the US, nutraceuticals are largely unregulated, as they fall into the same category as dietary supplements and food additives under the FDA's authority derived from the Federal Food, Drug, and Cosmetic Act. The word "nutraceutical" is a portmanteau, blending the words "nutrition" and "pharmaceutical".
Regulation
Nutraceuticals are treated differently in different jurisdictions.
Canada
Under Canadian law, a nutraceutical can either be marketed as a food or as a drug; the terms "nutraceutical" and "functional food" have no legal distinction, referring to "a product isolated or purified from foods that is generally sold in medicinal forms not usually associated with food [and] is demonstrated to have a physiological benefit or provide protection against chronic disease."
United States
The term "nutraceutical" is not defined by US law. Depending on its ingredients and the claims with which it is marketed, a product is regulated as a drug, dietary supplement, food ingredient, or food.
Other sources
In the global market, there are significant product quality issues. Nutraceuticals from the international market may claim to use organic or exotic ingredients, yet the lack of regulation may compromise the safety and effectiveness of products. Companies looking to create a wide profit margin may create unregulated products overseas with low-quality or ineffective ingredients.
Classification of nutraceuticals
Nutraceuticals are products derived from food sources that are purported to provide extra health benefits, in addition to the basic nutritional value found in foods. Depending on the jurisdiction, products may claim to prevent chronic diseases, improve health, delay the aging process, increase life expectancy, or support the structure or function of the body.
Dietary supplements
In the United States, the Dietary Supplement Health and Education Act (DSHEA) of 1994 defined the t
|
https://en.wikipedia.org/wiki/Speakon%20connector
|
The Speakon (stylized speakON) is a trademarked name for an electrical connector, originally manufactured by Neutrik, mostly used in professional audio systems for connecting loudspeakers to amplifiers. Other manufacturers make compatible products, often under the name "speaker twist connector".
Speakon connectors are rated for at least 30 A RMS continuous current, higher than 1/4-inch TS phone connectors, two-pole twist lock, and XLR connectors for loudspeakers.
Design
A Speakon connector is designed with a locking system that may be designed for soldered or screw-type connections. Line connectors (male) mate with (female) panel connectors and typically a cable will have identical connectors at both ends. If it is needed to join cables, a coupler can be used (which essentially consists of two panel connectors mounted on the ends of a plastic tube). The design was conceived in 1987.
Speakon connectors are designed to be unambiguous in their use in speaker cables. With 1/4" speaker jacks and XLR connections, it is possible for users to erroneously use low-current shielded microphone or instrument cables in a high-current speaker application. Speakon cables are intended solely for use in high current audio applications.
Speakon connectors arrange their contacts in two concentric rings, with the inner contacts named +1, +2, etc. and the outer contacts (in the four-pole and eight-pole versions only) named −1, −2, etc. The phase convention is that positive voltage on the + contact causes air to be pushed away from the speaker.
Speakon connectors are made in two-, four- and eight-pole configurations. The two-pole line connector will mate with the four-pole panel connector, connecting to +1 and −1, but the reverse combination will not work. The eight-pole connector is physically larger to accommodate the extra poles. The four-pole connector is the most common, at least judging from the availability of ready-made leads, as it allows for things like bi-amping (two of
|
https://en.wikipedia.org/wiki/Sturm%27s%20theorem
|
In mathematics, the Sturm sequence of a univariate polynomial P is a sequence of polynomials associated with P and its derivative by a variant of Euclid's algorithm for polynomials. Sturm's theorem expresses the number of distinct real roots of P located in an interval in terms of the number of changes of signs of the values of the Sturm sequence at the bounds of the interval. Applied to the interval of all the real numbers, it gives the total number of real roots of P.
Whereas the fundamental theorem of algebra readily yields the overall number of complex roots, counted with multiplicity, it does not provide a procedure for calculating them. Sturm's theorem counts the number of distinct real roots and locates them in intervals. By subdividing the intervals containing some roots, it can isolate the roots into arbitrarily small intervals, each containing exactly one root. This yields the oldest real-root isolation algorithm, and arbitrary-precision root-finding algorithm for univariate polynomials.
For computing over the reals, Sturm's theorem is less efficient than other methods based on Descartes' rule of signs. However, it works on every real closed field, and, therefore, remains fundamental for the theoretical study of the computational complexity of decidability and quantifier elimination in the first order theory of real numbers.
The Sturm sequence and Sturm's theorem are named after Jacques Charles François Sturm, who discovered the theorem in 1829.
The theorem
The Sturm chain or Sturm sequence of a univariate polynomial P with real coefficients is the sequence of polynomials P0, P1, P2, ..., such that
P0 = P,
P1 = P′,
Pi+1 = −rem(Pi−1, Pi)
for i ≥ 1, where P′ is the derivative of P, and rem(Pi−1, Pi) is the remainder of the Euclidean division of Pi−1 by Pi. The length of the Sturm sequence is at most the degree of P.
The number of sign variations at ξ of the Sturm sequence of P is the number of sign changes—ignoring zeros—in the sequence of real numbers
P0(ξ), P1(ξ), P2(ξ), ..., Pm(ξ).
This number of sign variations is denoted here V(ξ).
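A small Python sketch of this construction (illustrative only, ignoring numerical-precision subtleties), building the sequence with numpy's polynomial division and counting sign variations at a point:
import numpy as np

def sturm_sequence(p):
    # p: coefficients, highest degree first; P0 = P, P1 = P'
    p = np.array(p, dtype=float)
    seq = [p, np.polyder(p)]
    while True:
        rem = np.polydiv(seq[-2], seq[-1])[1]   # Euclidean remainder
        if np.allclose(rem, 0):
            break
        seq.append(-np.trim_zeros(rem, "f"))    # P_{i+1} = -rem(P_{i-1}, P_i)
    return seq

def variations(seq, x):
    # count sign changes, ignoring zeros, in P0(x), P1(x), ...
    vals = [v for v in (np.polyval(q, x) for q in seq) if v != 0]
    return sum(1 for u, v in zip(vals, vals[1:]) if u * v < 0)

p = [1, 0, -5, 0, 4]          # x^4 - 5x^2 + 4 = (x-1)(x+1)(x-2)(x+2)
seq = sturm_sequence(p)
print(variations(seq, -3) - variations(seq, 3))   # 4 distinct real roots in (-3, 3]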
Sturm's theorem states that, if P is a
|
https://en.wikipedia.org/wiki/Pine%20tar
|
Pine tar is a form of wood tar produced by the high temperature carbonization of pine wood in anoxic conditions (dry distillation or destructive distillation). The wood is rapidly decomposed by applying heat and pressure in a closed container; the primary resulting products are charcoal and pine tar.
Pine tar consists primarily of aromatic hydrocarbons, tar acids, and tar bases. Components of tar vary according to the pyrolytic process (e.g. method, duration, temperature) and origin of the wood (e.g. age of pine trees, type of soil, and moisture conditions during tree growth). The choice of wood, design of kiln, burning, and collection of the tar can vary. Only pine stumps and roots are used in the traditional production of pine tar.
Pine tar has a long history as a wood preservative, as a wood sealant for maritime use, in roofing construction and maintenance, in soaps, and in the treatment of carbuncles and skin diseases, such as psoriasis, eczema, and rosacea. It is used in baseball to enhance the grip of a hitter's bat; it is also sometimes used by pitchers to improve their grip on the ball, in violation of the rules.
History
Pine tar has long been used in Scandinavian nations as a preservative for wood which may be exposed to harsh conditions, including outdoor furniture and ship decking and rigging. The high-grade pine tar used in this application is often called Stockholm tar since, for many years, a single company held a royal monopoly on its export out of Stockholm, Sweden. It is also known as "Archangel Tar". Tar and pitch for maritime use was in such demand that it became an important export for the American colonies, which had extensive pine forests. North Carolinians became known as "Tar Heels."
Use
Pine tar was used as a preservative on the bottoms of traditional Nordic-style skis until modern synthetic materials replaced wood in their construction. It also helped waxes adhere, which aided such skis’ grip and glide.
Pine tar is widely used as a
|
https://en.wikipedia.org/wiki/Windows%20Nashville
|
Nashville (previously Cleveland), popularly known as Windows 96 by contemporary press, was the codename for a cancelled release of Microsoft Windows scheduled to be released in 1996, between "Chicago" (Windows 95) and "Memphis" (Windows 98, at the time scheduled for release in 1996, later 1997). Nashville was intended to be a minor release focusing on a tighter integration between Windows and Internet Explorer, in order to better compete with Netscape Navigator.
Microsoft claimed that Nashville would add Internet integration features to the Windows 95 and NT 4.0 desktop, building on the new features in the Internet Explorer 3.0 web browser (due for release a few months before Nashville). Touted features included a combined file manager and web browser, the ability to seamlessly open Microsoft Office documents from within Internet Explorer using ActiveX technology and a way to place dynamic web pages directly on the desktop in place of the regular static wallpaper.
A leaked build had version number 4.10.999 (in comparison to Windows 95's 4.00.950, Windows 95 OSR2's 4.00.1111, Windows 98's 4.10.1998, Windows 98 Second Edition's 4.10.2222 A, and Windows ME's 4.90.3000). The project was eventually cancelled as a full release of Windows, with Windows 95 OSR2 being shipped as an interim release instead. The codename "Nashville" was then reused for the Windows Desktop Update that shipped with Internet Explorer 4.0 and delivered most of the features promised for Nashville. The Athena PIM application would be released as Microsoft Internet Mail and News in 1996 along with IE3, which would later be renamed to Outlook Express in 1997 with IE4.
See also
List of Microsoft codenames
Notes
References
Microsoft confidential
Comes v. Microsoft. Plaintiff's Exhibit 2013: "Desktop Operating Systems Mission—Draft". Microsoft Confidential (February 4, 1994).
Comes v. Microsoft. Plaintiff's Exhibit 3208: "Desktop Operating Systems Mission Memo". Microsoft Confidential.
Comes v. Micr
|
https://en.wikipedia.org/wiki/IBM%204758
|
The IBM 4758 PCI Cryptographic Coprocessor is a secure cryptoprocessor implemented on a high-security, tamper resistant, programmable PCI expansion card. Specialized cryptographic electronics, microprocessor, memory, and random number generator housed within a tamper-responding environment provide a highly secure subsystem in which data processing and cryptography can be performed.
IBM supplies two cryptographic-system implementations, and toolkits for custom application development:
The PKCS#11, version 2.01 implementation creates a high-security solution for application programs developed for this industry-standard API.
The IBM Common Cryptographic Architecture implementation provides many functions of special interest in the finance industry and a base on which custom processing and cryptographic functions can be added.
As of June 2005, the 4758 was discontinued and was replaced by an improved, faster model called the IBM 4764.
References
External links
IBM CryptoCards
How to hack one that's running an old CCA version
Satan’s Computer Revisited: API Security explains flaws
Puff65537's IBM 4758 teardown
Teardown of an IBM 4758 that is FIPS-140 level 4 Certified
Cryptographic hardware
4758
|
https://en.wikipedia.org/wiki/Generative%20model
|
In statistical classification, two main approaches are called the generative approach and the discriminative approach. These compute classifiers by different approaches, differing in the degree of statistical modelling. Terminology is inconsistent, but three major types can be distinguished:
A generative model is a statistical model of the joint probability distribution P(X, Y) on a given observable variable X and target variable Y;
A discriminative model is a model of the conditional probability P(Y | X = x) of the target Y, given an observation x; and
Classifiers computed without using a probability model are also referred to loosely as "discriminative".
The distinction between these last two classes is not consistently made; some authors refer to these three classes as generative learning, conditional learning, and discriminative learning, while others distinguish only two classes, calling them generative classifiers (joint distribution) and discriminative classifiers (conditional distribution or no distribution), not distinguishing between the latter two classes. Analogously, a classifier based on a generative model is a generative classifier, while a classifier based on a discriminative model is a discriminative classifier, though this term also refers to classifiers that are not based on a model.
Standard examples of each, all of which are linear classifiers, are:
generative classifiers:
naive Bayes classifier and
linear discriminant analysis
discriminative model:
logistic regression
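The contrast between the listed examples can be made concrete in a few lines; below is an illustrative sketch on one-dimensional data, pairing a hand-rolled generative Gaussian model of the joint p(x, y), turned into p(y | x) by Bayes' rule, with scikit-learn's logistic regression as the discriminative counterpart (scikit-learn is assumed to be available):
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x0 = rng.normal(-1.0, 1.0, 200)           # class 0 observations
x1 = rng.normal(+1.0, 1.0, 200)           # class 1 observations
X = np.concatenate([x0, x1]).reshape(-1, 1)
y = np.array([0] * 200 + [1] * 200)

def gaussian(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# Generative: estimate p(x | y) and p(y), then apply Bayes' rule.
mu0, var0, mu1, var1 = x0.mean(), x0.var(), x1.mean(), x1.var()
def p_y1_given_x(x):
    joint0 = 0.5 * gaussian(x, mu0, var0)  # p(x, y=0), with prior p(y=0) = 0.5
    joint1 = 0.5 * gaussian(x, mu1, var1)  # p(x, y=1)
    return joint1 / (joint0 + joint1)

# Discriminative: model p(y | x) directly.
clf = LogisticRegression().fit(X, y)

x_test = 0.3
print(p_y1_given_x(x_test))                # generative estimate of p(y=1 | x)
print(clf.predict_proba([[x_test]])[0, 1]) # discriminative estimate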
In application to classification, one wishes to go from an observation x to a label y (or probability distribution on labels). One can compute this directly, without using a probability distribution (distribution-free classifier); one can estimate the conditional probability P(y | x) of a label given an observation (discriminative model), and base classification on that; or one can estimate the joint distribution P(x, y) (generative model), from that compute the conditional probability P(y | x), and then base classification on
|
https://en.wikipedia.org/wiki/Tame%20group
|
In mathematical group theory, a tame group is a certain kind of group defined in model theory.
Formally, we define a bad field as a structure of the form (K, T), where K is an algebraically closed field and T is an infinite, proper, distinguished subgroup of K, such that (K, T) is of finite Morley rank in its full language. A group G is then called a tame group if no bad field is interpretable in G.
References
A. V. Borovik, Tame groups of odd and even type, pp. 341–366, in Algebraic Groups and their Representations, R. W. Carter and J. Saxl, eds. (NATO ASI Series C: Mathematical and Physical Sciences, vol. 517), Kluwer Academic Publishers, Dordrecht, 1998.
Model theory
Infinite group theory
Properties of groups
|
https://en.wikipedia.org/wiki/Tiny%20Internet%20Interface
|
The Tiny Internet Interface (known as TINI or MxTNI) is a microcontroller that includes the facilities necessary to connect to the Internet. The MxTNI platform is a microcontroller-based development platform that executes code for embedded web servers. The platform is a combination of broad-based I/O, a full TCP/IP stack, and an extensible Java runtime environment that simplifies development of network-connected equipment.
MxTNI was developed by Dallas Semiconductor, now Maxim Integrated Products. The interface was originally known as "TINI". The company renamed it MxTNI in December 2011.
References
External links
MxTNI definition.
MxTNI Application Notes
Microcontrollers
|
https://en.wikipedia.org/wiki/Help%20authoring%20tool
|
A Help Authoring Tool or HAT is a software program used by technical writers to create online help systems.
Functions
The basic functions of a Help Authoring Tool (HAT) can be divided into the following categories:
File input
HATs obtain their source text either by importing it from a file produced by another program, or by allowing the author to create the text within the tool by using an editor. File formats that can be imported vary from HAT to HAT. Acceptable file formats can include ASCII, HTML, OpenOffice Writer and Microsoft Word, and compiled Help formats such as Microsoft WinHelp and Microsoft Compiled HTML Help.
Help output
The output from a HAT can be either a compiled Help file in a format such as WinHelp (*.HLP) or Microsoft Compiled HTML Help (*.CHM), or noncompiled file formats such as Adobe PDF, XML, HTML or JavaHelp.
Auxiliary functions
Some HATs provide extra functions such as:
Automatic or assisted Index generation
Automatic Table of Contents
Spelling checker
Image editing
Image hotspot editing
Import and export of text in XML files, for exchange with computer-assisted translation programs
Common help authoring tools
Some common HATs include:
HelpNDoc
Adobe RoboHelp
HelpSmith
Doc-To-Help
MadCap Flare
Help & Manual
Sandcastle
AsciiDoc
Related software
Technical writers often use content management systems and version control systems to manage their work.
See also
List of help authoring tools
User assistance
Technical communication
Technical communication tools
Online help
|
https://en.wikipedia.org/wiki/Sguil
|
Sguil (pronounced sgweel or squeal) is a collection of free software components for Network Security Monitoring (NSM) and event driven analysis of IDS alerts. The sguil client is written in Tcl/Tk and can be run on any operating system that supports these. Sguil integrates alert data from Snort, session data from SANCP, and full content data from a second instance of Snort running in packet logger mode.
Sguil is an implementation of a Network Security Monitoring system. NSM is defined as "collection, analysis, and escalation of indications and warnings to detect and respond to intrusions."
Sguil is released under the GPL 3.0.
Tools that make up Sguil
See also
Sagan
Intrusion detection system (IDS)
Intrusion prevention system (IPS)
Network intrusion detection system (NIDS)
Metasploit Project
nmap
Host-based intrusion detection system comparison
References
External links
Sguil Homepage
Computer network security
Linux security software
Free network management software
Software that uses Tk (software)
|
https://en.wikipedia.org/wiki/M-325
|
In the history of cryptography, M-325, also known as SIGFOY, was an American rotor machine designed by William F. Friedman and built in 1944. Between 1944 and 1946, more than 1,100 machines were deployed within the United States Foreign Service. Its use was discontinued in 1946 because of faults in operation. Friedman applied for a patent on the M-325 on 11 August 1944; it was granted on 17 March 1959 (US patent #2,877,565).
Like the Enigma, the M-325 contains three intermediate rotors and a reflecting rotor.
See also
Hebern rotor machine
SIGABA
References
Further reading
Louis Kruh, Converter M-325(T), Cryptologia 1, 1977, pp. 143–149.
External links
Operating and Keying Instructions for Converter M-325(T) Headquarters, Army Security Agency, July 1948, scanned and transcribed by Bob Lord.
Friedman M-325 — information and photographs.
Rotor machines
Cryptographic hardware
World War II American electronics
|
https://en.wikipedia.org/wiki/Boss%20key
|
A boss key, or boss button, is a special keyboard shortcut used in PC games or other programs to hide the program quickly, possibly displaying a special screen that appears to be a normal productivity program (such as a spreadsheet application). One of the earliest implementations was by Friendlyware, a suite of entertainment and general interest programs written in BASIC and sold with the original IBM AT and XT computers from 1982 to 1985. When activated (by pressing F10), an ASCII bar graph with generic "Productivity" and "Time" labels appeared. Pressing F10 again would return to the Friendlyware application.
In PC games
The nominal purpose of the boss key is to make it appear to superiors and coworkers that employees are doing their job when they are actually playing games or using the Internet for non work-related tasks. It was a fairly common feature in early computer games for personal computers, because at the time people often did not have home computers and playing at work was their only option. Most boss keys were used to show dummy DOS prompts. The use has faded somewhat, as modern multitasking operating systems have evolved. However, some programs still retain a boss key feature, such as instant messaging clients or their add-ons or comic book viewers like MComix.
The boss key was first used in the Apple II game "Bezare", published by Southwestern Data Systems. The idea of it was proposed by Roger Wagner (founder of Southwestern Data Systems, and later Roger Wagner Publishing) on a hang-gliding trip in Mexico in March, 1981, in a conversation between Roger Wagner and Doug Carlston (of Broderbund Software). Steve Wozniak, Andy Hertzfeld and a number of other early personal computing pioneers were also part of that event. Wes Cherry, the author of the original Microsoft Solitaire, had included a boss key to display a fake spreadsheet or random C code, but was asked by his superiors to remove this on release.
Another early example of the boss key is in
|
https://en.wikipedia.org/wiki/Fhourstones
|
In computer science, Fhourstones is an integer benchmark that efficiently solves positions in the game of Connect-4. It was written by John Tromp in 1996-2008, and is incorporated into the Phoronix Test Suite. The measurements are reported as the number of game positions searched per second.
Available in both ANSI-C and Java, it is quite portable and compact (under 500 lines of source), and uses 50 MB of memory.
The benchmark involves (after warming up on three easier positions) solving the entire game, which takes about ten minutes on contemporary PCs, scoring between 1000 and 12,000 kpos/sec. It has been described as more realistic than some other benchmarks.
Fhourstones was named as a pun on Dhrystone (itself a pun on Whetstone), as "dhry" sounds the same as "drei", German for "three": fhourstones is an increment on dhrystones.
References
External links
Fhourstones homepage
Benchmarks (computing)
|
https://en.wikipedia.org/wiki/Philips%2068070
|
The SCC68070 is a Philips Semiconductors-branded, Motorola 68000-based 16/32-bit processor produced under license. While marketed externally as a high-performance microcontroller, it has been almost exclusively used combined with the Philips SCC66470 VSC (Video- and Systems Controller) in the Philips CD-i interactive entertainment product line.
Additions to the Motorola 68000 core include:
Operation from 4 to 17.5 MHz
Inclusion of a minimal, segmented MMU supporting up to 16 MB of memory
Built-in DMA controller
I²C bus controller
UART
16-bit counter/timer unit
2 match/count/capture registers allowing the implementation of a pulse generator, event counter or reference timer
Clock generator
Differences from the Motorola 68000 core include these:
Instruction execution timing is completely different
Interrupt handling has been simplified
The SCC68070 has MC68010-style bus-error recovery, but the two schemes are not compatible, so exception error processing is different.
The SCC68070 lacks a dedicated address generation unit (AGU), so operations requiring address calculation run slower due to contention with the shared ALU. This means that most instructions take more cycles to execute, for some instructions significantly more, than a 68000.
The MMU is not compatible with the Motorola 68451 or any other "standard" Motorola MMU, so operating system code dealing with memory protection and address translation is not generally portable. Enabling the MMU also costs a wait state on each memory access.
While the SCC68070 is mostly binary compatible with the Motorola 68000, there is no equivalent chip in the Motorola 680x0 series. In particular, the SCC68070 is not a follow-on to the Motorola 68060.
Even though the SCC68070 is a 32-bit processor internally, it has a 24-bit address bus, giving a theoretical maximum of 16 MB (2^24 bytes) of addressable memory. In practice the full 16 MB cannot be used for RAM, as all of the on-board peripherals are mapped into the same address space.
External links
xs4all.nl/~ganswijk/chipdir/reg/68070.tx
|
https://en.wikipedia.org/wiki/Spoof%20%28game%29
|
Spoof is a strategy game, typically played as a gambling game, often in bars and pubs where the loser buys the other participants a round of drinks. The exact origin of the game is unknown, but one scholarly paper addressed it, and more general n-coin games, in 1959. It is an example of a zero-sum game. The version with three coins is sometimes known under the name Three Coin.
Gameplay
Spoof is played by any number of players in a series of rounds. In each round the objective is to guess the aggregate number of coins held in concealment by all the players, with each player being allowed to conceal up to three coins in their hand. (Some versions of the game may vary this number.) The coins may be of any denomination, and the values of the coins are irrelevant: in fact, any suitable objects could be used in place of coins, e.g. matches.
For the first round an initial player is selected in some fashion, such as spinning a burnt match to see who it points at.
At the beginning of every round each player conceals a quantity of coins, or no coins at all, in their closed fist, extended into the circle of play. The initial player calls what they think is the total number of coins in play. Play proceeds clockwise around the circle until each player has ventured a call regarding the total number of coins, and no player can call the same total as any other player. The call of "Spoof!" is sometimes used to mean "zero".
After all players have made their calls, they open their fists and display their coins for the group to count the total. It is illegal for a player to open their hand without making a call. The player who has correctly guessed the total number of coins withdraws from the game and the remainder of the group proceeds to the next round. If no player guesses the total correctly, the entire group continues play in the next round. The starting player for each subsequent round is the next remaining player, clockwise from the starter of the previous round.
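The round structure is easy to simulate. Below is a minimal, hypothetical sketch in C: each simulated player conceals a random number of coins, and the calls are drawn as distinct random guesses (real players, of course, call strategically using knowledge of their own hand).

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define PLAYERS 4
#define MAX_COINS 3

int main(void)
{
    int holds[PLAYERS], calls[PLAYERS], total = 0, i, j;
    srand((unsigned)time(NULL));

    for (i = 0; i < PLAYERS; i++) {
        holds[i] = rand() % (MAX_COINS + 1);    /* conceal 0..3 coins */
        total += holds[i];
    }
    for (i = 0; i < PLAYERS; i++) {             /* distinct calls in 0..PLAYERS*MAX_COINS */
        int c, taken;
        do {
            c = rand() % (PLAYERS * MAX_COINS + 1);
            taken = 0;
            for (j = 0; j < i; j++)
                if (calls[j] == c)
                    taken = 1;                  /* no player may repeat another's call */
        } while (taken);
        calls[i] = c;
    }
    printf("Total in play: %d\n", total);
    for (i = 0; i < PLAYERS; i++)
        if (calls[i] == total)
            printf("Player %d guessed correctly and withdraws.\n", i + 1);
    return 0;
}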
P
|
https://en.wikipedia.org/wiki/Online%20help
|
Online help is topic-oriented, procedural or reference information delivered through computer software. It is a form of user assistance. The purpose of most online help is to assist in using a software application, web application or operating system. However, it can also present information on a broad range of subjects. Online help linked to the application's state (what the user is doing) is called Context-sensitive help.
Benefits
Online help has largely replaced live customer support. Before its availability, support could only be given through printed documentation, postal mail, or telephone.
Platforms
Online help is created using help authoring tools or component content management systems. It is delivered in a wide variety of formats, some proprietary and some open-standard, including:
Hypertext Markup Language (HTML), which includes HTML Help, HTML-based Help, JavaHelp, and Oracle Help
Adobe Portable Document Format (PDF)
Online help is also provided via live chat systems, one step removed from telephone calls. This allows the support person to conduct several support sessions simultaneously, thus reducing costs. The transcript is immediately available and can be sent to the customer after the session ends.
The chat format also reduces the intense negativity that can be directed at customer support personnel, since it requires the customer to calm down and articulate their thoughts more clearly.
DITA and DocBook
The open-source tool DocBook XSL can also generate help files and is a resource for single-source publishing. From one source, DocBook can generate PDF, JavaHelp, WebHelp, eBook and many more formats (even .chm files if required). The same applies to DITA, which is often favored for this purpose.
Microsoft help platforms
Microsoft develops the platforms for delivering help systems in the Microsoft Windows operating system.
Other platforms
See also
Balloon help
Darwin Information Typing Architecture
DocBook
Frequently Asked Questions
List of help authoring
|
https://en.wikipedia.org/wiki/Polyelectrolyte
|
Polyelectrolytes are polymers whose repeating units bear an electrolyte group. Polycations and polyanions are polyelectrolytes. These groups dissociate in aqueous solutions (water), making the polymers charged. Polyelectrolyte properties are thus similar to both electrolytes (salts) and polymers (high molecular weight compounds) and are sometimes called polysalts. Like salts, their solutions are electrically conductive. Like polymers, their solutions are often viscous. Charged molecular chains, commonly present in soft matter systems, play a fundamental role in determining structure, stability and the interactions of various molecular assemblies. Theoretical approaches to describing their statistical properties differ profoundly from those of their electrically neutral counterparts, while technological and industrial fields exploit their unique properties. Many biological molecules are polyelectrolytes. For instance, polypeptides, glycosaminoglycans, and DNA are polyelectrolytes. Both natural and synthetic polyelectrolytes are used in a variety of industries.
Charge
Acids are classified as either weak or strong (and bases similarly may be either weak or strong). Similarly, polyelectrolytes can be divided into "weak" and "strong" types. A "strong" polyelectrolyte is one that dissociates completely in solution for most reasonable pH values. A "weak" polyelectrolyte, by contrast, has a dissociation constant (pKa or pKb) in the range of ~2 to ~10, meaning that it will be partially dissociated at intermediate pH. Thus, weak polyelectrolytes are not fully charged in solution, and moreover their fractional charge can be modified by changing the solution pH, counter-ion concentration, or ionic strength.
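For a weak polyacid, the fractional charge at a given pH can be estimated from the Henderson–Hasselbalch relation, α = 1 / (1 + 10^(pKa − pH)). This treats each ionizable group independently, which is a simplification: neighbouring charges on the chain shift the effective pKa. A small sketch in C, using an arbitrary illustrative pKa of 4.5:

#include <stdio.h>
#include <math.h>

/* Fraction of ionized groups for a weak polyacid, assuming independent
   sites with a single effective pKa (Henderson-Hasselbalch). */
static double fractional_charge(double pKa, double pH)
{
    return 1.0 / (1.0 + pow(10.0, pKa - pH));
}

int main(void)
{
    const double pKa = 4.5;   /* hypothetical value for illustration */
    double pH;
    for (pH = 2.0; pH <= 8.0; pH += 1.0)
        printf("pH %.1f -> charge fraction %.3f\n", pH, fractional_charge(pKa, pH));
    return 0;
}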
The physical properties of polyelectrolyte solutions are usually strongly affected by this degree of charging. Since the polyelectrolyte dissociation releases counter-ions, this necessarily affects the solution's ionic strength, and therefore the Debye length. This in tu
|
https://en.wikipedia.org/wiki/MPEG%20user%20data
|
The MPEG user data feature provides a means to inject application-specific data into an MPEG elementary stream. User data can be inserted on three different levels:
The sequence level
The group of pictures (GOP) level
The picture data level
Applications that process MPEG data do not need to be able to understand data encapsulated in this way, but should be able to preserve it.
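In MPEG-1 and MPEG-2 video elementary streams, each user data block is introduced by the user_data start code 0x000001B2 and extends to the next start-code prefix (0x000001). A minimal, hypothetical scanner in C (error handling omitted):

#include <stdio.h>
#include <stddef.h>

/* Print the offset of each user_data start code (00 00 01 B2) in buf. */
static void find_user_data(const unsigned char *buf, size_t len)
{
    size_t i;
    for (i = 0; i + 3 < len; i++) {
        if (buf[i] == 0x00 && buf[i + 1] == 0x00 &&
            buf[i + 2] == 0x01 && buf[i + 3] == 0xB2) {
            printf("user_data at offset %zu\n", i);
            /* payload follows until the next 00 00 01 start-code prefix */
        }
    }
}

int main(void)
{
    /* Toy buffer: a user_data start code followed by three payload bytes. */
    unsigned char buf[] = { 0x00, 0x00, 0x01, 0xB2, 0x10, 0x20, 0x30 };
    find_user_data(buf, sizeof buf);
    return 0;
}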
Examples of information embedded in MPEG streams as user data are:
Aspect ratio information
"Hidden" information per the Active Format Descriptor specification
Closed captioning per the EIA-708 standard
External links
ATSC
DVB
ATSC
Digital television
MPEG
|
https://en.wikipedia.org/wiki/Streamium
|
Streamium was a line of IP-enabled entertainment products by Dutch electronics multi-national Philips Consumer Electronics. Streamium products use Wi-Fi to stream multimedia content from desktop computers or Internet-based services to home entertainment devices. A Streamium device plugged into the local home network will be able to see multimedia files that are in different UPnP-enabled computers, PDAs and other networking devices that run UPnP AV MediaServer software.
Streamium products may also support internet radio, internet photo sharing and movie trailer services directly. Subscriptions to web-based services were managed through the 'Club Philips' portal.
In all cases, using a computer with an RSS receiver together with a UPnP AV MediaServer, it is possible to play back audio and video podcasts. Popular feeds include BBC live, Geekbrief, Reuters, Metacafe and YouTube. Although most of these video podcasts use codec formats not supported by Streamium, playback is still possible by using software transcoders on the PC to convert them to MPEG format.
Philips Media Manager is, since SimpleCenter version 4, a free open source UPnP AV MediaServer for Windows and Macintosh that is bundled with Streamium. Version 3 of SimpleCenter was initially developed for inclusion with the Streamium line of products. Since Streamium devices also support photos and videos, SimpleCenter ships with video and image support under the name 'Philips Media Manager' (PMM).
History
In 2000 Philips' consumer electronics division (business unit Audio) invented the Streamium brand for a "Connected Home". A number of products were released between January 2000 and June 2003. In 2003 the "Connected Home" would be broadened to the "Connected Planet" accompanied by an attempt to steer product development and industrialization from Eindhoven and to include other business units. The "Connected Planet" was less successful, leaving a limited number of product
|
https://en.wikipedia.org/wiki/Internal%20transcribed%20spacer
|
Internal transcribed spacer (ITS) is the spacer DNA situated between the small-subunit ribosomal RNA (rRNA) and large-subunit rRNA genes in the chromosome or the corresponding transcribed region in the polycistronic rRNA precursor transcript.
Across life domains
In bacteria and archaea, there is a single ITS, located between the 16S and 23S rRNA genes. Conversely, there are two ITSs in eukaryotes: ITS1 is located between 18S and 5.8S rRNA genes, while ITS2 is between 5.8S and 28S (in opisthokonts, or 25S in plants) rRNA genes. ITS1 corresponds to the ITS in bacteria and archaea, while ITS2 originated as an insertion that interrupted the ancestral 23S rRNA gene.
Organization
In bacteria and archaea, the ITS occurs in one to several copies, as do the flanking 16S and 23S genes. When there are multiple copies, these do not occur adjacent to one another. Rather, they occur in discrete locations in the circular chromosome. It is not uncommon in bacteria to carry tRNA genes in the ITS.
In eukaryotes, genes encoding ribosomal RNA and spacers occur in tandem repeats that are thousands of copies long, each separated by regions of non-transcribed DNA termed intergenic spacer (IGS) or non-transcribed spacer (NTS).
Each eukaryotic ribosomal cluster contains the 5' external transcribed spacer (5' ETS), the 18S rRNA gene, the ITS1, the 5.8S rRNA gene, the ITS2, the 26S or 28S rRNA gene, and finally the 3' ETS.
During rRNA maturation, ETS and ITS pieces are excised. As non-functional by-products of this maturation, they are rapidly degraded.
Use in phylogenetic inference
Sequence comparison of the eukaryotic ITS regions is widely used in taxonomy and molecular phylogeny because of several favorable properties:
It is routinely amplified thanks to its small size, combined with the availability of highly conserved flanking sequences.
It is easy to detect even from small quantities of DNA due to the high copy number of the rRNA clusters.
It undergoes rapid concerted evolut
|
https://en.wikipedia.org/wiki/Adams%20operation
|
In mathematics, an Adams operation, denoted ψk for natural numbers k, is a cohomology operation in topological K-theory, or any allied operation in algebraic K-theory or other types of algebraic construction, defined on a pattern introduced by Frank Adams. The basic idea is to implement some fundamental identities in symmetric function theory, at the level of vector bundles or other representing object in more abstract theories.
Adams operations can be defined more generally in any λ-ring.
Adams operations in K-theory
Adams operations ψk on K-theory (algebraic or topological) are characterized by the following properties:
ψk are ring homomorphisms.
ψk(l) = lk (the k-th tensor power) if l is the class of a line bundle.
ψk are functorial.
The fundamental idea is that for a vector bundle V on a topological space X, there is an analogy between Adams operators and exterior powers, in which
ψk(V) is to Λk(V)
as
the power sum Σ αk is to the k-th elementary symmetric function σk
of the roots α of a polynomial P(t). (Cf. Newton's identities.) Here Λk denotes the k-th exterior power. From classical algebra it is known that the power sums are certain integral polynomials Qk in the σk. The idea is to apply the same polynomials to the Λk(V), taking the place of σk. This calculation can be defined in a K-group, in which vector bundles may be formally combined by addition, subtraction and multiplication (tensor product). The polynomials here are called Newton polynomials (not, however, the Newton polynomials of interpolation theory).
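Concretely, applying the first three Newton polynomials (p1 = σ1, p2 = σ1² − 2σ2, p3 = σ1³ − 3σ1σ2 + 3σ3) with the exterior powers in place of the σk gives, multiplying in the ring K(X):

\psi^1(V) = V, \qquad
\psi^2(V) = V^2 - 2\Lambda^2(V), \qquad
\psi^3(V) = V^3 - 3V\,\Lambda^2(V) + 3\Lambda^3(V).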
Justification of the expected properties comes from the line bundle case, where V is a Whitney sum of line bundles. In this special case the result of any Adams operation is naturally a vector bundle, not a linear combination of ones in K-theory. Treating the line bundle direct factors formally as roots is something rather standard in algebraic topology (cf. the Leray–Hirsch theorem). In general a mechanism for reducing to that case comes from the splitting princi
|
https://en.wikipedia.org/wiki/Cohomology%20operation
|
In mathematics, the cohomology operation concept became central to algebraic topology, particularly homotopy theory, from the 1950s onwards, in the shape of the simple definition that if F is a functor defining a cohomology theory, then a cohomology operation should be a natural transformation from F to itself. Throughout there have been two basic points:
the operations can be studied by combinatorial means; and
the effect of the operations is to yield an interesting bicommutant theory.
The origin of these studies was the work of Pontryagin, Postnikov, and Norman Steenrod, who first defined the Pontryagin square, Postnikov square, and Steenrod square operations for singular cohomology, in the case of mod 2 coefficients. The combinatorial aspect there arises as a formulation of the failure of a natural diagonal map, at cochain level. The general theory of the Steenrod algebra of operations has been brought into close relation with that of the symmetric group.
In the Adams spectral sequence the bicommutant aspect is implicit in the use of Ext functors, the derived functors of Hom-functors; if there is a bicommutant aspect, taken over the Steenrod algebra acting, it is only at a derived level. The convergence is to groups in stable homotopy theory, about which information is hard to come by. This connection established the deep interest of the cohomology operations for homotopy theory, and has been a research topic ever since. An extraordinary cohomology theory has its own cohomology operations, and these may exhibit a richer set of constraints.
Formal definition
A cohomology operation of type (n, q; π, G) is a natural transformation of functors θ : H^n(−; π) → H^q(−; G) defined on CW complexes.
Relation to Eilenberg–MacLane spaces
Cohomology of CW complexes is representable by an Eilenberg–MacLane space, so by the Yoneda lemma a cohomology operation of type (n, q; π, G) is given by a homotopy class of maps K(π, n) → K(G, q). Using representability once again, the cohomology operation is given by an element of H^q(K(π, n); G).
Symbolically, the operations of type (n, q; π, G) correspond bijectively to [K(π, n), K(G, q)] ≅ H^q(K(π, n); G).
|
https://en.wikipedia.org/wiki/X-machine
|
The X-machine (XM) is a theoretical model of computation introduced by Samuel Eilenberg in 1974 (S. Eilenberg, Automata, Languages and Machines, Vol. A, Academic Press, London, 1974).
The X in "X-machine" represents the fundamental data type on which the machine operates; for example, a machine that operates on databases (objects of type database) would be a database-machine. The X-machine model is structurally the same as the finite-state machine, except that the symbols used to label the machine's transitions denote relations of type X→X. Crossing a transition is equivalent to applying the relation that labels it (computing a set of changes to the data type X), and traversing a path in the machine corresponds to applying all the associated relations, one after the other.
Original theory
Eilenberg's original X-machine was a completely general theoretical model of computation (subsuming the Turing machine, for example), which admitted deterministic, non-deterministic and non-terminating computations. His seminal work presented many variants of the basic X-machine model, each of which generalized the finite-state machine in a slightly different way.
In the most general model, an X-machine is essentially a "machine for manipulating objects of type X". Suppose that X is some datatype, called the fundamental datatype, and that Φ is a set of (partial) relations φ: X → X. An X-machine is a finite-state machine whose arrows are labelled by relations in Φ. In any given state, one or more transitions may be enabled if the domain of the associated relation φi accepts (a subset of) the current values stored in X. In each cycle, all enabled transitions are assumed to be taken. Each recognised path through the machine generates a list φ1 ... φn of relations. We call the composition φ1 o ... o φn of these relations the path relation corresponding to that path. The behaviour of the X-machine is defined to be the union of all the behaviours compu
|
https://en.wikipedia.org/wiki/Failure%20cause
|
Failure causes are defects in design, process, quality, or part application, which are the underlying cause of a failure or which initiate a process which leads to failure. Where failure depends on the user of the product or process, then human error must be considered.
Component failure / failure modes
A part failure mode is the way in which a component failed "functionally" on the component level. Often a part has only a few failure modes. For example, a relay may fail to open or close contacts on demand. The failure mechanism that caused this can be of many different kinds, and often multiple factors play a role at the same time. They include corrosion, welding of contacts due to an abnormal electric current, return spring fatigue failure, unintended command failure, dust accumulation and blockage of mechanism, etc. Seldom can a single cause (hazard) be identified that creates system failures. In theory, the real root causes can in most cases be traced back to some kind of human error, e.g. design failures, operational errors, management failures, maintenance-induced failures, specification failures, etc.
Failure scenario
A scenario is the complete identified possible sequence and combination of events, failures (failure modes), conditions, system states, leading to an end (failure) system state. It starts from causes (if known) leading to one particular end effect (the system failure condition). A failure scenario is for a system the same as the failure mechanism is for a component. Both result in a failure mode (state) of the system / component.
Rather than the simple description of symptoms that many product users or process participants might use, the term failure scenario / mechanism refers to a rather complete description, including the preconditions under which failure occurs, how the thing was being used, proximate and ultimate/final causes (if known), and any subsidiary or resulting failures that result.
The term is part of the engineering lexicon, es
|
https://en.wikipedia.org/wiki/Clarence%20Birdseye
|
Clarence Birdseye (December 9, 1886 – October 7, 1956) was an American inventor, entrepreneur, and naturalist, considered the founder of the modern frozen food industry. He founded the frozen food company Birds Eye. Among his inventions during his career was the double belt freezer.
One of nine children, Birdseye grew up in New York City before attending Amherst College and beginning his scientific career with the U.S. government. A biography of his life was published by Doubleday over half a century after his death.
Early life and education
Clarence Birdseye was the sixth of nine children of Clarence Frank Birdseye, a lawyer in an insurance firm, and Ada Jane Underwood. His first years were spent in New York, New York, where his family owned a townhouse in Cobble Hill. From childhood, Birdseye was obsessed with natural science and with taxidermy, which he taught himself by correspondence. At the age of eleven he advertised his courses in the subject. When he was fourteen, the family moved to the suburb of Montclair, New Jersey, where Birdseye graduated from Montclair High School. He matriculated at Amherst College, where his father and elder brother had earned degrees. There he excelled at science, although an average student in other subjects. His obsession with collecting insects led his college classmates to nickname him "Bugs".
In the summer after his freshman year, Birdseye worked for the United States Department of Agriculture (USDA) in New Mexico and Arizona as an “assistant naturalist”, at a time when the agency was concerned with helping farmers and ranchers get rid of predators, chiefly coyotes.
In 1908, family finances forced Birdseye to withdraw from college after his second year. In 1917, Birdseye's father and elder brother Kellogg went to prison for defrauding their employer; whether this was related to Birdseye's withdrawal from Amherst is unclear.
Birdseye was once again hired by the USDA, this time for a project surveying animals in the America
|
https://en.wikipedia.org/wiki/Geodetic%20airframe
|
A geodetic airframe is a type of construction for the airframes of aircraft, developed in the 1930s by British aeronautical engineer Barnes Wallis (who sometimes spelt it "geodesic"). Earlier, it was used by Prof. Schütte for the Schütte-Lanz airship SL 1 in 1909. It makes use of a space frame formed from a spirally crossing basket-weave of load-bearing members. The principle is that two geodesic arcs can be drawn to intersect on a curving surface (the fuselage) in such a manner that the torsional load on each cancels out that on the other.
Early examples
The "diagonal rider" structural element was used by Joshua Humphreys in the first US Navy sail frigates in 1794. Diagonal riders are viewable in the interior hull structure of the preserved USS Constitution on display in Boston Harbor. The structure was a pioneering example of placing "non-orthogonal" structural components within an otherwise conventional structure for its time. The "diagonal riders" were included in these American naval vessels' construction as one of five elements to reduce the problem of hogging in the ship's hull, and did not make up the bulk of the vessel's structure, they do not constitute a completely "geodetic" space frame.
Calling any diagonal wood brace (as used on gates, buildings, ships or other structures with cantilevered or diagonal loads) an example of geodesic design is a misnomer. In a geodetic structure, the strength and structural integrity, and indeed the shape, come from the diagonal "braces" - the structure does not need the "bits in between" for part of its strength (implicit in the name space frame) as does a more conventional wooden structure.
Aeroplanes
The earliest-known use of a geodetic airframe design for any aircraft was for the pre-World War I Schütte-Lanz SL1 rigid airship's envelope structure of 1911, with the airship capable of up to a 38.3 km/h (23.8 mph) top airspeed.
The Latécoère 6 was a French four-engined biplane bomber of the early 1920s. It was of advanc
|
https://en.wikipedia.org/wiki/Solid%20Logic%20Technology
|
Solid Logic Technology (SLT) was IBM's method for hybrid packaging of electronic circuitry introduced in 1964 with the IBM System/360 series of computers and related machines. IBM chose to design custom hybrid circuits using discrete, flip chip-mounted, glass-encapsulated transistors and diodes, with silk-screened resistors on a ceramic substrate, forming an SLT module. The circuits were either encapsulated in plastic or covered with a metal lid. Several of these SLT modules (20 in the image on the right) were then mounted on a small multi-layer printed circuit board to make an SLT card. Each SLT card had a socket on one edge that plugged into pins on the computer's backplane (the exact reverse of how most other companies' modules were mounted).
IBM considered monolithic integrated circuit technology too immature at the time. SLT was a revolutionary technology for 1964, with much higher circuit densities and improved reliability over earlier packaging techniques such as the Standard Modular System. It helped propel the IBM System/360 mainframe family to overwhelming success during the 1960s. SLT research produced ball chip assembly, wafer bumping, trimmed thick-film resistors, printed discrete functions, chip capacitors and one of the first volume uses of hybrid thick-film technology.
SLT replaced the earlier Standard Modular System, although some later SMS cards held SLT modules.
Details
SLT used silicon planar glass-encapsulated transistors and diodes.
SLT uses dual diode chips and individual transistor chips each approximately square. The chips are mounted on a square substrate with silk-screened resistors and printed connections. The whole is encapsulated to form a square module. Up to 36 modules are mounted on each card, though a few card types had just discrete components and no modules. Cards plug into boards which are connected to form gates which form frames.
SLT voltage levels, logic low to logic high, varied by circuit speed:
High speed (5-10 ns)
|
https://en.wikipedia.org/wiki/Horn-satisfiability
|
In formal logic, Horn-satisfiability, or HORNSAT, is the problem of deciding whether a given set of propositional Horn clauses is satisfiable or not. Horn-satisfiability and Horn clauses are named after Alfred Horn.
Basic definitions and terminology
A Horn clause is a clause with at most one positive literal, called the head of the clause, and any number of negative literals, forming the body of the clause. A Horn formula is a propositional formula formed by conjunction of Horn clauses.
The problem of Horn satisfiability is solvable in linear time.
The problem of deciding the truth of quantified Horn formulas can be also solved in polynomial time.
A polynomial-time algorithm for Horn satisfiability is based on the rule of unit propagation: if the formula contains a clause composed of a single literal ℓ (a unit clause), then all clauses containing ℓ (except the unit clause itself) are removed, and all clauses containing ¬ℓ have this literal removed. The result of the second rule may itself be a unit clause, which is propagated in the same manner. If there are no unit clauses, the formula can be satisfied by simply setting all remaining variables negative. The formula is unsatisfiable if this transformation generates a pair of opposite unit clauses ℓ and ¬ℓ. Horn satisfiability is actually one of the "hardest" or "most expressive" problems which is known to be computable in polynomial time, in the sense that it is a P-complete problem.
This algorithm also allows determining a truth assignment of satisfiable Horn formulae: all variables contained in a unit clause are set to the value satisfying that unit clause; all other variables are set to false. The resulting assignment is the minimal model of the Horn formula, that is, the assignment having a minimal set of variables assigned to true, where comparison is made using set containment.
Using a linear algorithm for unit propagation, the algorithm is linear in the size of the formula.
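A compact sketch in C of the propagation just described (a toy quadratic version for clarity; the linear-time algorithm additionally indexes clauses by variable so each literal is touched only once):

#include <stdio.h>

#define MAXC 16            /* max clauses in this toy encoding */
#define MAXL 8             /* max literals per clause */
#define MAXV 64            /* max variables */

/* Literals are nonzero ints: +v is the positive, -v the negative literal. */
static int clause[MAXC][MAXL];
static int csize[MAXC];
static int alive[MAXC];
static int nclauses;
static int value[MAXV];    /* assignment; 0 = false (the default), 1 = true */

/* Unit propagation: returns 1 if satisfiable, 0 if unsatisfiable. */
static int horn_sat(void)
{
    for (;;) {
        int i, j, unit = -1;
        for (i = 0; i < nclauses; i++)
            if (alive[i] && csize[i] == 1) { unit = i; break; }
        if (unit < 0)
            return 1;      /* no unit clause left: remaining variables stay false */
        {
            int lit = clause[unit][0];
            int var = lit > 0 ? lit : -lit;
            value[var] = (lit > 0);
            alive[unit] = 0;
            for (i = 0; i < nclauses; i++) {
                if (!alive[i]) continue;
                for (j = 0; j < csize[i]; j++) {
                    if (clause[i][j] == lit) {     /* clause satisfied: drop it */
                        alive[i] = 0;
                        break;
                    }
                    if (clause[i][j] == -lit) {    /* falsified literal: remove it */
                        clause[i][j--] = clause[i][--csize[i]];
                        if (csize[i] == 0)
                            return 0;              /* empty clause: unsatisfiable */
                    }
                }
            }
        }
    }
}

int main(void)
{
    /* Example Horn formula: (x1) AND (NOT x1 OR x2) AND (NOT x2 OR NOT x3) */
    nclauses = 3;
    clause[0][0] = 1;                      csize[0] = 1; alive[0] = 1;
    clause[1][0] = -1; clause[1][1] = 2;   csize[1] = 2; alive[1] = 1;
    clause[2][0] = -2; clause[2][1] = -3;  csize[2] = 2; alive[2] = 1;

    if (horn_sat())
        printf("satisfiable (minimal model): x1=%d x2=%d x3=%d\n",
               value[1], value[2], value[3]);
    else
        printf("unsatisfiable\n");
    return 0;
}

On this example it prints x1=1 x2=1 x3=0, the minimal model.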
A generalization of the class of Ho
|
https://en.wikipedia.org/wiki/Diploblasty
|
Diploblasty is a condition of the blastula in which there are two primary germ layers: the ectoderm and endoderm.
Diploblastic organisms are organisms which develop from such a blastula, and include cnidaria and ctenophora, formerly grouped together in the phylum Coelenterata, but later understanding of their differences resulted in their being placed in separate phyla.
The endoderm allows them to develop true tissue. This includes tissue associated with the gut and associated glands. The ectoderm, on the other hand, gives rise to the epidermis, the nervous tissue, and if present, nephridia.
Simpler animals, such as sea sponges, have one germ layer and lack true tissue organization.
All the more complex animals (from flat worms to humans) are triploblastic with three germ layers (a mesoderm as well as ectoderm and endoderm). The mesoderm allows them to develop true organs.
Groups of diploblastic animals alive today include jellyfish, corals, sea anemones and comb jellies.
See also
Triploblasty
Developmental biology
References
|
https://en.wikipedia.org/wiki/Econet
|
Econet was Acorn Computers's low-cost local area network system, intended for use by schools and small businesses. It was widely used in those areas, and was supported by a large number of different computer and server systems produced both by Acorn and by other companies.
Econet software was later mostly superseded by Acorn Universal Networking (AUN), though some suppliers were still offering bridging kits to interconnect old and new networks. AUN was in turn superseded by the Acorn Access+ software.
Implementation history
Econet was specified in 1980, and first developed for the Acorn Atom and Acorn System 2/3/4 computers in 1981. Also in that year the BBC Micro was released, initially with provision for floppy disc and Econet interface ports but without the necessary supporting ICs fitted; these could optionally be added in a post-sale upgrade.
In 1982, the Tasmania Department of Education requested a tender for the supply of personal computers to their schools. Earlier that year Barson Computers, Acorn's Australian computer distributor, had released the BBC Microcomputer with floppy disc storage as part of a bundle. Acorn's Hermann Hauser and Chris Curry agreed to allow it to be also offered with Econet fitted, as they had previously done with the disc interface. As previously with the Disc Filing System, they stipulated that Barson would need to adapt the network filing system from the System 2 without assistance from Acorn. Barson's engineers applied a few modifications to fix bugs on the early BBC Micro motherboards, which were adopted by Acorn in later releases. With both floppy disc and networking available, the BBC Micro was approved for use in schools by all state and territory education authorities in Australia and New Zealand, and quickly overtook the Apple II as the computer of choice in private schools.
With no other supporting documentation available, the head of Barson's Acorn division, Rob Napier, published Networking with the BBC Microcomputer, the
|
https://en.wikipedia.org/wiki/Internet%20privacy
|
Internet privacy involves the right or mandate of personal privacy concerning the storage, re-purposing, provision to third parties, and display of information pertaining to oneself via the Internet. Internet privacy is a subset of data privacy. Privacy concerns have been articulated from the beginnings of large-scale computer sharing and especially relate to mass surveillance.
Privacy can entail either personally identifiable information (PII) or non-PII information such as a site visitor's behavior on a website. PII refers to any information that can be used to identify an individual. For example, age and physical address alone could identify who an individual is without explicitly disclosing their name, as these two factors are typically unique enough to identify a specific person. Other forms of PII may include GPS tracking data used by apps, as the daily commute and routine information can be enough to identify an individual.
It has been suggested that the "appeal of online services is to broadcast personal information on purpose." On the other hand, in his essay "The Value of Privacy", security expert Bruce Schneier says, "Privacy protects us from abuses by those in power, even if we're doing nothing wrong at the time of surveillance."
Levels of privacy
Internet and digital privacy are viewed differently from traditional expectations of privacy. Internet privacy is primarily concerned with protecting user information. Law Professor Jerry Kang explains that the term privacy expresses space, decision, and information. In terms of space, individuals have an expectation that their physical spaces (e.g. homes, cars) not be intruded. Information privacy is in regard to the collection of user information from a variety of sources.
In the United States, the 1997 Information Infrastructure Task Force (IITF) created under President Clinton defined information privacy as "an individual's claim to control the terms under which personal information — information ident
|
https://en.wikipedia.org/wiki/GameSpy%20Arcade
|
GameSpy Arcade was a shareware multiplayer game server browsing utility. GameSpy Arcade allowed players to view and connect to available multiplayer games, and chat with other users of the service. It was initially released by GameSpy Industries on November 13, 2000, to replace the aging GameSpy3D and Mplayer.com programs. Version 2.0.5 was the latest offering of the software, boasting additional features such as increased speed and advanced server-sorting abilities.
History
Discussing GameSpy Arcade's history, Ian Birnbaum of PC Gamer US wrote, "GameSpy began in 1996 as a fan-hosted server for the original Quake. By the early 2000s, GameSpy was the online multiplayer platform, adding dozens of games every year." The service closed in May 2014.
Features
GameSpy Arcade included various other features which enhance its overall functionality:
Ability for users to have their own profiles.
Scanning a user's hard disk for Arcade compatible games.
A basic web browser.
Voice Chat
Buddy Instant Messaging
Game Staging rooms
Dedicated Server browsing
User Rooms
Subscription
GameSpy Arcade was free. However, the software maintained shareware status, as there were three different subscription levels, each providing benefits according to its price.
Basic users
"Basic users" could be created for free at the GameSpy website. They were able to display and connect to games, as well as chat to users inside the game lobby. Advertisements were shown inside the program for these users.
GameSpy Arcade
"GameSpy Arcade", the first pay subscription service ($4.95/Month, $24.95/Year), provided extended functionality such as no advertisements and in-game voice chat, as well as priority service for technicians to assist in solving problems with the software.
Founders Club
"Founders Club", the second pay subscription service ($9.95/Month, $79.95/Year), was the highest possible level of membership to GameSpy arcade. This subscription had benefits which applied across the
|
https://en.wikipedia.org/wiki/StrataCom
|
StrataCom, Inc. was a supplier of Asynchronous Transfer Mode (ATM) and Frame Relay high-speed wide area network (WAN) switching equipment. StrataCom was founded in Cupertino, California, United States, in January 1986, by 26 former employees of the failing Packet Technologies, Inc. StrataCom produced the first commercial cell switch, also known as a fast-packet switch. ATM was one of the technologies underlying the world's communications systems in the 1990s.
Origins of the IPX at Packet Technologies
Internet pioneer Paul Baran was an employee of Packet Technologies and provided a spark of invention at the initiation of the Integrated Packet Exchange (IPX) project (StrataCom's IPX communication system is unrelated to Novell's IPX Internetwork Packet Exchange protocol).
The IPX was initially known as the PacketDAX, which was a play on words of Digital access and cross-connect system (or DACS). A rich collection of inventions was contained in the IPX, and many were provided by the other members of the development team. The names on the original three IPX patents are Paul Baran, Charles Corbalis, Brian Holden, Jim Marggraff, Jon Masatsugu, David Owen and Pete Stonebridge. StrataCom's implementation of ATM was pre-standard and used 24-byte cells instead of standards-based ATM's 53-byte cells. However, many of the concepts and details found in the ATM set of standards were derived directly from StrataCom's technology, including the use of CRC-based framing on its links.
The IPX development
The IPX's first use was as a 4:1 voice compression system. It implemented Voice Activity Detection (VAD) and ADPCM, which together gave 4:1 compression, allowing 96 telephone calls to fit into the space of 24. The IPX was also used as an enterprise voice-data networking system as well as a global enterprise networking system. McGraw-Hill's Data Communications magazine included the IPX in its list of "20 Most Significant Communications Products of the Last 20 Years" in a 1992
|
https://en.wikipedia.org/wiki/Gyromagnetic%20ratio
|
In physics, the gyromagnetic ratio (also sometimes known as the magnetogyric ratio in other disciplines) of a particle or system is the ratio of its magnetic moment to its angular momentum, and it is often denoted by the symbol γ (gamma). Its SI unit is the radian per second per tesla (rad⋅s−1⋅T−1) or, equivalently, the coulomb per kilogram (C⋅kg−1).
The term "gyromagnetic ratio" is often used as a synonym for a different but closely related quantity, the -factor. The -factor only differs from the gyromagnetic ratio in being dimensionless.
For a classical rotating body
Consider a nonconductive charged body rotating about an axis of symmetry. According to the laws of classical physics, it has both a magnetic dipole moment due to the movement of charge and an angular momentum due to the movement of mass arising from its rotation. It can be shown that as long as its charge and mass density and flow are distributed identically and rotationally symmetric, its gyromagnetic ratio is

γ = q / (2m),

where q is its charge and m is its mass.
The derivation of this relation is as follows. It suffices to demonstrate this for an infinitesimally narrow circular ring within the body, as the general result then follows from an integration. Suppose the ring has radius r, area A = πr², mass m, charge q, and angular momentum L = mvr. Then the magnitude of the magnetic dipole moment is

μ = I·A = (qv / 2πr) · πr² = qvr / 2 = (q / 2m) · mvr = (q / 2m) L.
For an isolated electron
An isolated electron has an angular momentum and a magnetic moment resulting from its spin. While an electron's spin is sometimes visualized as a literal rotation about an axis, it cannot be attributed to mass distributed identically to the charge. The above classical relation does not hold, giving the wrong result by the absolute value of the electron's g-factor, which is denoted ge:

γe = |ge| μB / ħ = |ge| e / (2me),

where μB is the Bohr magneton.
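Numerically (using |ge| ≈ 2.0023, μB ≈ 9.274 × 10−24 J/T and ħ ≈ 1.055 × 10−34 J·s), this evaluates to γe ≈ 1.760 × 1011 rad⋅s−1⋅T−1.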
The gyromagnetic ratio due to electron spin is twice that due to the orbiting of an electron.
In the framework of relativistic quantum mechanics,

|ge| = 2 (1 + α/2π + ⋯),

where α is the fine-structure constant. Here the sma
|
https://en.wikipedia.org/wiki/Robocrane
|
The Robocrane is a kind of manipulator resembling a Stewart platform but using an octahedral assembly of cables instead of struts. Like the Stewart platform, the Robocrane has six degrees of freedom (x, y, z, pitch, roll, & yaw).
It was developed by Dr. James S. Albus of the US National Institute of Standards and Technology (NIST), using the Real-Time Control System which is a hierarchical control system. Given its unusual ability to "fly" tools around a work site, it has many possible applications, including stone carving, ship building, bridge construction, inspection, pipe or beam fitting and welding.
Albus invented and developed a new generation of robot cranes based on six cables and six winches configured as a Stewart platform. The NIST RoboCrane has the capacity to lift and precisely manipulate heavy loads over large volumes with fine control in all six degrees of freedom. Laboratory RoboCranes have demonstrated the ability to manipulate tools such as saws, grinders, and welding torches, and to lift and precisely position heavy objects such as steel beams and cast iron pipe. In 1992, the RoboCrane was selected by Construction Equipment magazine as one of the 100 most significant new products of the year for construction and related industries. It was also selected by Popular Science magazine for the "Best of What's New" award as one of the 100 top products, technologies, and scientific achievements of 1992.
A version of the RoboCrane has been commercially developed for the United States Air Force to enable rapid paint stripping, inspection, and repainting of very large military aircraft such as the C-5 Galaxy. RoboCrane is expected to save the United States Air Force $8 million annually at each of its maintenance facilities. This project was recognized in 2008 by a National Laboratories Award for technology transfer. Potential future applications of the RoboCrane include ship building, construction of high rise buildings, highways, bridges, tunnels, an
|
https://en.wikipedia.org/wiki/CSR%20plc
|
CSR plc (formerly Cambridge Silicon Radio) was a multinational fabless semiconductor company headquartered in Cambridge, United Kingdom. Its main products were connectivity, audio, imaging and location chips. CSR was listed on the London Stock Exchange and was a constituent of the FTSE 250 Index until it was acquired by Qualcomm in August 2015. Under Qualcomm's ownership, the company was renamed Qualcomm Technologies International, Ltd.
History
The company was founded in 1998 and split away from Cambridge Consultants as Cambridge Silicon Radio or CSR in 1999. The founding directors, who were all at Cambridge Consultants at the time, were Phil O'Donovan, James Collier and Glenn Collinson. It was first listed on the London Stock Exchange in 2004.
In 2005 the company acquired Clarity Technologies, a leading clear voice capture (CVC) business and UbiNetics, a 3G wireless (WCDMA/UMTS/HSDPA) technology company. In 2007, CSR acquired Nordnav, a Swedish-based GPS software company, and CPS, a Cambridge-based GPS software company producing Enhanced GPS in partnership with Motorola.
In February 2009, CSR announced it was merging with SiRF, the biggest global supplier of GPS chips, in a share deal worth $136 million; in July 2010, CSR announced the acquisition of Belfast-based APT Licensing Ltd. (APT) and its aptX audio technology and in February 2011, CSR announced it was merging with Zoran, a video and imaging technology company.
In May 2012, CSR acquired Direct Digital Feedback Amplifier (DDFA) technology, a proprietary, highly scalable digital Class-D audio amplifier technology; in June 2012, CSR announced that it had acquired the MAPX (formerly MAP-X) audio product line from Trident Microsystems, Inc and in July 2012, Samsung Electronics agreed to acquire CSR's mobile phone connectivity (Bluetooth and Wi-Fi) and location (GPS/GNSS) businesses and associated IP for US$310 million (£198 million). As part of the deal Samsung acquired a stake of 4.9% in CSR.
In June 201
|
https://en.wikipedia.org/wiki/Sequence%20point
|
In C and C++, a sequence point defines any point in a computer program's execution at which it is guaranteed that all side effects of previous evaluations will have been performed, and no side effects from subsequent evaluations have yet been performed. They are a core concept for determining the validity of and, if valid, the possible results of expressions. Adding more sequence points is sometimes necessary to make an expression defined and to ensure a single valid order of evaluation.
With C11 and C++11, usage of the term sequence point has been replaced by sequencing. There are three possibilities:
An expression's evaluation can be sequenced before that of another expression, or equivalently the other expression's evaluation is sequenced after that of the first.
The expressions' evaluation is indeterminately sequenced, meaning one is sequenced before the other, but which is unspecified.
The expressions' evaluation is unsequenced.
The execution of unsequenced evaluations can overlap, leading to potentially catastrophic undefined behavior if they share state. This situation can arise in parallel computations, causing race conditions, but undefined behavior can also result in single-threaded situations. For example, a[i] = i++; (where a is an array and i is an integer) has undefined behavior.
Examples of ambiguity
Consider two functions f() and g(). In C and C++, the + operator is not associated with a sequence point, and therefore in the expression f()+g() it is possible that either f() or g() will be executed first. The comma operator introduces a sequence point, and therefore in the code f(),g() the order of evaluation is defined: first f() is called, and then g() is called.
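A self-contained illustration in C: whether "f" or "g" prints first in the initializer of x is unspecified, while in the initializer of y the comma operator forces f() to complete first.

#include <stdio.h>

static int f(void) { puts("f"); return 1; }
static int g(void) { puts("g"); return 2; }

int main(void)
{
    int x = f() + g();    /* no sequence point in '+': either call may run first */
    int y = (f(), g());   /* comma operator: f() is sequenced before g() */
    printf("x=%d y=%d\n", x, y);   /* x=3; y=2, the value of the right operand */
    return 0;
}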
Sequence points also come into play when the same variable is modified more than once within a single expression. An often-cited example is the C expression i=i++, which apparently both assigns i its previous value and increments i. The final value of i is ambiguous, because, depending on
|
https://en.wikipedia.org/wiki/Cray-2
|
The Cray-2 is a supercomputer with four vector processors made by Cray Research starting in 1985. At 1.9 GFLOPS peak performance, it was the fastest machine in the world when it was released, replacing the Cray X-MP in that spot. It was, in turn, replaced in that spot by the Cray Y-MP in 1988.
The Cray-2 was the first of Seymour Cray's designs to successfully use multiple CPUs. This had been attempted in the CDC 8600 in the early 1970s, but the emitter-coupled logic (ECL) transistors of the era were too difficult to package into a working machine. The Cray-2 addressed this through the use of ECL integrated circuits, packing them in a novel 3D wiring scheme that greatly increased circuit density.
The dense packaging and resulting heat loads were a major problem for the Cray-2. This was solved in a unique fashion by forcing the electrically inert Fluorinert liquid through the circuitry under pressure and then cooling it outside the processor box. The unique "waterfall" cooler system came to represent high-performance computing in the public eye and was found in many informational films and as a movie prop for some time.
Unlike the original Cray-1, the Cray-2 had difficulties delivering peak performance. Other machines from the company, like the X-MP and Y-MP, outsold the Cray-2 by a wide margin. When Cray began development of the Cray-3, the company chose to develop the Cray C90 series instead. This is the same sequence of events that occurred when the 8600 was being developed, and as in that case, Cray left the company.
Initial design
With the successful launch of his famed Cray-1, Seymour Cray turned to the design of its successor. By 1979 he had become fed up with management interruptions in what was now a large company, and as he had done in the past, decided to resign his management post and move to form a new lab. As with his original move to Chippewa Falls, Wisconsin from Control Data HQ in Minneapolis, Minnesota, Cray management understood his needs and support
|
https://en.wikipedia.org/wiki/Tropical%20sprue
|
Tropical sprue is a malabsorption disease commonly found in tropical regions, marked by abnormal flattening of the villi and inflammation of the lining of the small intestine. It differs significantly from coeliac sprue. It appears to be a more severe form of environmental enteropathy.
Signs and symptoms
The illness usually starts with an attack of acute diarrhoea, fever and malaise following which, after a variable period, the patient settles into the chronic phase of diarrhoea, steatorrhoea, weight loss, anorexia, malaise, and nutritional deficiencies.
The symptoms of tropical sprue are:
Diarrhoea
Steatorrhoea or fatty stool (often foul-smelling and whitish in colour)
Indigestion
Cramps
Weight loss and malnutrition
Fatigue
Left untreated, nutrient and vitamin deficiencies may develop in patients with tropical sprue. These deficiencies may have these symptoms:
Vitamin A deficiency: hyperkeratosis or skin scales
Vitamin B12 and folic acid deficiencies: anaemia, numbness, and tingling sensation
Vitamin D and calcium deficiencies: spasm, bone pain, muscle weakness
Vitamin K deficiency: bruises
Cause
The cause of tropical sprue is not known. It may be caused by persistent bacterial, viral, amoebal, or parasitic infections. Folic acid deficiency, effects of malabsorbed fat on intestinal motility, and persistent small intestinal bacterial overgrowth may combine to cause the disorder. A link between small intestinal bacterial overgrowth and tropical sprue has been proposed to be involved in the aetiology of post-infectious irritable bowel syndrome (IBS). Intestinal immunologic dysfunction, including deficiencies in secretory immunoglobulin A (IgA), may predispose people to malabsorption and bacterial colonization, so tropical sprue may be triggered in susceptible individuals following an acute enteric infection.
Diagnosis
Diagnosis of tropical sprue can be complicated because many diseases have similar symptoms. The following investigation results are
|
https://en.wikipedia.org/wiki/Biological%20immortality
|
Biological immortality (sometimes referred to as bio-indefinite mortality) is a state in which the rate of mortality from senescence is stable or decreasing, thus decoupling it from chronological age. Various unicellular and multicellular species, including some vertebrates, achieve this state either throughout their existence or after living long enough. A biologically immortal living being can still die from means other than senescence, such as through injury, poison, disease, predation, lack of available resources, or changes to environment.
This definition of immortality has been challenged in the Handbook of the Biology of Aging, because the increase in the rate of mortality as a function of chronological age may be negligible at extremely old ages, an idea referred to as the late-life mortality plateau. The rate of mortality may cease to increase in old age, but in most cases that rate is still very high.
The term is also used by biologists to describe cells that are not subject to the Hayflick limit on how many times they can divide.
Cell lines
Biologists chose the word "immortal" to designate cells that are not subject to the Hayflick limit, the point at which cells can no longer divide due to DNA damage or shortened telomeres. Prior to Leonard Hayflick's theory, Alexis Carrel hypothesized that all normal somatic cells were immortal.
The term "immortalization" was first applied to cancer cells that expressed the telomere-lengthening enzyme telomerase, and thereby avoided apoptosis—i.e. cell death caused by intracellular mechanisms. Among the most commonly used cell lines are HeLa and Jurkat, both of which are immortalized cancer cell lines. These cells have been and still are widely used in biological research such as creation of the polio vaccine, sex hormone steroid research, and cell metabolism. Embryonic stem cells and germ cells have also been described as immortal.
Immortal cell lines of cancer cells can be created by induction of oncogenes or l
|
https://en.wikipedia.org/wiki/Weighted%20geometric%20mean
|
In statistics, the weighted geometric mean is a generalization of the geometric mean using the weighted arithmetic mean.
Given a sample x = (x1, x2, ..., xn) and weights w = (w1, w2, ..., wn), it is calculated as:

x̄ = (x1^w1 · x2^w2 ··· xn^wn)^(1 / (w1 + w2 + ... + wn)) = exp( (w1 ln x1 + w2 ln x2 + ... + wn ln xn) / (w1 + w2 + ... + wn) )
The second form above illustrates that the logarithm of the geometric mean is the weighted arithmetic mean of the logarithms of the individual values.
If all the weights are equal, the weighted geometric mean simplifies to the ordinary unweighted geometric mean.
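A short sketch in C of the second (logarithmic) form; the sample values and weights are arbitrary illustrative numbers:

#include <stdio.h>
#include <math.h>

/* Weighted geometric mean via the log form: exp(sum(w*ln x) / sum(w)). */
static double weighted_geomean(const double *x, const double *w, int n)
{
    double num = 0.0, den = 0.0;
    int i;
    for (i = 0; i < n; i++) {
        num += w[i] * log(x[i]);
        den += w[i];
    }
    return exp(num / den);
}

int main(void)
{
    double x[] = { 1.0, 4.0, 16.0 };
    double w[] = { 1.0, 1.0, 2.0 };
    /* prints about 5.657, i.e. 2^2.5; equal weights would give the
       ordinary (unweighted) geometric mean */
    printf("%f\n", weighted_geomean(x, w, 3));
    return 0;
}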
See also
Average
Central tendency
Summary statistics
Weighted arithmetic mean
Weighted harmonic mean
External links
Non-Newtonian calculus website
Means
Mathematical analysis
Non-Newtonian calculus
|
https://en.wikipedia.org/wiki/Photopigment
|
Photopigments are unstable pigments that undergo a chemical change when they absorb light. The term is generally applied to the non-protein chromophore moiety of photosensitive chromoproteins, such as the pigments involved in photosynthesis and photoreception. In medical terminology, "photopigment" commonly refers to the photoreceptor proteins of the retina.
Photosynthetic pigments
Photosynthetic pigments convert light into biochemical energy. Examples for photosynthetic pigments are chlorophyll, carotenoids and phycobilins. These pigments enter a high-energy state upon absorbing a photon which they can release in the form of chemical energy. This can occur via light-driven pumping of ions across a biological membrane (e.g. in the case of the proton pump bacteriorhodopsin) or via excitation and transfer of electrons released by photolysis (e.g. in the photosystems of the thylakoid membranes of plant chloroplasts). In chloroplasts, the light-driven electron transfer chain in turn drives the pumping of protons across the membrane.
Photoreceptor pigments
The pigments in photoreceptor proteins either change their conformation or undergo photoreduction when they absorb a photon. This change in the conformation or redox state of the chromophore then affects the protein conformation or activity and triggers a signal transduction cascade.
Examples of photoreceptor pigments include:
retinal (in rhodopsin)
flavin (in cryptochrome)
bilin (in phytochrome)
Photopigments of the vertebrate retina
In medical terminology, the term photopigment is applied to opsin-type photoreceptor proteins, specifically rhodopsin and photopsins, the photoreceptor proteins in the retinal rods and cones of vertebrates that are responsible for visual perception, but also melanopsin and others.
See also
Biological pigment
References
Molecular biology
Pigments
Photosynthetic pigments
Sensory receptors
|
https://en.wikipedia.org/wiki/Cyrillic%20numerals
|
Cyrillic numerals are a numeral system derived from the Cyrillic script, developed in the First Bulgarian Empire in the late 10th century. It was used in the First Bulgarian Empire and by South and East Slavic peoples. The system was used in Russia as late as the early 18th century, when Peter the Great replaced it with Arabic numerals as part of his civil script reform initiative. Cyrillic numbers played a role in Peter the Great's currency reform plans, too, with silver wire kopecks issued after 1696 and mechanically minted coins issued between 1700 and 1722 inscribed with the date using Cyrillic numerals. By 1725, Russian Imperial coins had transitioned to Arabic numerals. The Cyrillic numerals may still be found in books written in the Church Slavonic language.
General description
The system is a quasi-decimal alphabetic numeral system, equivalent to the Ionian numeral system but written with the corresponding graphemes of the Cyrillic script. The order is based on the original Greek alphabet rather than the standard Cyrillic alphabetical order.
A separate letter is assigned to each unit (1, 2, ... 9), each multiple of ten (10, 20, ... 90), and each multiple of one hundred (100, 200, ... 900). To distinguish numbers from text, a titlo () is sometimes drawn over the numbers, or they are set apart with dots. The numbers are written as pronounced in Slavonic, generally from the high value position to the low value position, with the exception of 11 through 19, which are written and pronounced with the ones unit before the tens; for example, ЗІ (17) is "семнадсять" (literally seven-on-ten, cf. the English seven-teen).
Examples:
() – 1706
() – 7118
A long titlo may be used for long runs of numbers: .
To evaluate a Cyrillic number, the values of all the figures are added up: for example, ѰЗ is 700 + 7, making 707. If the number is greater than 999 (ЦЧѲ), the thousands sign (҂) is used to multiply the number's value: for example, ҂Ѕ is 6000, while ҂Л҂В is pars
|
https://en.wikipedia.org/wiki/Augmented%20truncated%20cube
|
In geometry, the augmented truncated cube is one of the Johnson solids (J66). As its name suggests, it is created by attaching a square cupola (J4) onto one octagonal face of a truncated cube.
References
Norman W. Johnson, "Convex Solids with Regular Faces", Canadian Journal of Mathematics, 18, 1966, pages 169–200. Contains the original enumeration of the 92 solids and the conjecture that there are no others.
The first proof that there are only 92 Johnson solids.
External links
Johnson solids
|
https://en.wikipedia.org/wiki/Biaugmented%20truncated%20cube
|
In geometry, the biaugmented truncated cube is one of the Johnson solids (J67). As its name suggests, it is created by attaching two square cupolas (J4) onto two parallel octagonal faces of a truncated cube.
External links
Johnson solids
|
https://en.wikipedia.org/wiki/Augmented%20truncated%20dodecahedron
|
In geometry, the augmented truncated dodecahedron is one of the Johnson solids (J68). As its name suggests, it is created by attaching a pentagonal cupola (J5) onto one decagonal face of a truncated dodecahedron.
External links
Johnson solids
|
https://en.wikipedia.org/wiki/Parabiaugmented%20truncated%20dodecahedron
|
In geometry, the parabiaugmented truncated dodecahedron is one of the Johnson solids (J69). As its name suggests, it is created by attaching two pentagonal cupolas (J5) onto two parallel decagonal faces of a truncated dodecahedron.
External links
Johnson solids
|
https://en.wikipedia.org/wiki/Metabiaugmented%20truncated%20dodecahedron
|
In geometry, the metabiaugmented truncated dodecahedron is one of the Johnson solids (J70). As its name suggests, it is created by attaching two pentagonal cupolas (J5) onto two nonadjacent, nonparallel decagonal faces of a truncated dodecahedron.
External links
Johnson solids
|
https://en.wikipedia.org/wiki/Triaugmented%20truncated%20dodecahedron
|
In geometry, the triaugmented truncated dodecahedron is one of the Johnson solids (J71); of them, it has the greatest volume in proportion to the cube of the side length. As its name suggests, it is created by attaching three pentagonal cupolas (J5) onto three nonadjacent decagonal faces of a truncated dodecahedron.
External links
Johnson solids
|
https://en.wikipedia.org/wiki/Median%20voter%20theorem
|
The median voter theorem is a proposition relating to ranked preference voting put forward by Duncan Black in 1948. It states that if voters and policies are distributed along a one-dimensional spectrum, with voters ranking alternatives in order of proximity, then any voting method which satisfies the Condorcet criterion will elect the candidate closest to the median voter. In particular, a majority vote between two options will do so.
The theorem is associated with public choice economics and statistical political science. Partha Dasgupta and Eric Maskin have argued that it provides a powerful justification for voting methods based on the Condorcet criterion. Plott's majority rule equilibrium theorem extends this to two dimensions.
A loosely related assertion had been made earlier (in 1929) by Harold Hotelling. It is not a true theorem and is more properly known as the median voter theory or median voter model. It says that in a representative democracy, politicians will converge to the viewpoint of the median voter.
Statement and proof of the theorem
Assume that there is an odd number of voters and at least two candidates, and assume that opinions are distributed along a spectrum. Assume that each voter ranks the candidates in order of proximity, such that the candidate closest to the voter receives their first preference, the next closest receives their second preference, and so forth. Then there is a median voter, and we will show that the election will be won by the candidate who is closest to him or her; call this candidate Charles. Consider any rival candidate, say Brian, positioned (say) to Charles's left. The midpoint between Brian and Charles lies to the left of the median voter, since Charles is the nearer of the two; hence the median voter, and every voter to his or her right, is closer to Charles than to Brian and prefers Charles. These voters constitute a majority of the electorate, and the symmetric argument disposes of rivals to Charles's right. Charles is therefore preferred to every other candidate by a majority of the electorate.
The Condorcet criterion is defined as being satisfied by any voting method which ensures that a candidate who is preferred to every other candidate by a majority of the electorate will be the winner, and this is precisely the case with Charles; so it follows that Charles will win any election conducted using a method satisfying the Condorcet criterion.
Hence under any voting method which satisfies the Condorcet criterion the winner will be the candidate closest to the median voter.
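The argument is easy to check numerically. The Python sketch below is purely illustrative (the voter and candidate positions, including the reuse of the name Charles for one candidate, are invented for the demonstration): it computes the pairwise-majority winner among candidates on a line and confirms that it is the candidate closest to the median voter.

from statistics import median

# Invented one-dimensional positions for the demonstration.
voters = [0.1, 0.3, 0.45, 0.6, 0.9]                  # odd number of voters
candidates = {"Amy": 0.2, "Brian": 0.5, "Charles": 0.55}

def pairwise_winner(a: str, b: str) -> str:
    """Return whichever of a, b a strict majority of voters sits closer to."""
    a_votes = sum(abs(v - candidates[a]) < abs(v - candidates[b]) for v in voters)
    return a if a_votes > len(voters) / 2 else b

# A Condorcet winner beats every rival head-to-head.
condorcet = [c for c in candidates
             if all(pairwise_winner(c, r) == c for r in candidates if r != c)]

closest_to_median = min(candidates, key=lambda c: abs(candidates[c] - median(voters)))
print(condorcet, closest_to_median)                  # both name 'Brian'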
|
https://en.wikipedia.org/wiki/Kleene%20fixed-point%20theorem
|
In the mathematical areas of order and lattice theory, the Kleene fixed-point theorem, named after American mathematician Stephen Cole Kleene, states the following:
Kleene Fixed-Point Theorem. Suppose (L, ⊑) is a directed-complete partial order (dcpo) with a least element ⊥, and let f : L → L be a Scott-continuous (and therefore monotone) function. Then f has a least fixed point, which is the supremum of the ascending Kleene chain of f.
The ascending Kleene chain of f is the chain

⊥ ⊑ f(⊥) ⊑ f(f(⊥)) ⊑ ⋯ ⊑ fⁿ(⊥) ⊑ ⋯

obtained by iterating f on the least element ⊥ of L. Expressed in a formula, the theorem states that

lfp(f) = sup { fⁿ(⊥) : n ∈ ℕ }

where lfp denotes the least fixed point.
Although Tarski's fixed point theorem does not consider how fixed points can be computed by iterating f from some seed (also, it pertains to monotone functions on complete lattices), this result is often attributed to Alfred Tarski, who proved it for additive functions. Moreover, the Kleene fixed-point theorem can be extended to monotone functions using transfinite iterations.
Proof
We first have to show that the ascending Kleene chain of f exists in L. To show that, we prove the following:

Lemma. If L is a dcpo with a least element ⊥, and f : L → L is Scott-continuous, then fⁿ(⊥) ⊑ fⁿ⁺¹(⊥) for all n ∈ ℕ.
Proof. We use induction:
Assume n = 0. Then f⁰(⊥) = ⊥ ⊑ f¹(⊥), since ⊥ is the least element.
Assume n > 0. Then we have to show that fⁿ(⊥) ⊑ fⁿ⁺¹(⊥). By rewriting, this becomes f(fⁿ⁻¹(⊥)) ⊑ f(fⁿ(⊥)). By the inductive assumption, we know that fⁿ⁻¹(⊥) ⊑ fⁿ(⊥) holds, and because f is monotone (a property of Scott-continuous functions), the result holds as well.
As a corollary of the Lemma we have the following directed ω-chain:

M = { ⊥, f(⊥), f(f(⊥)), … }.

From the definition of a dcpo it follows that M has a supremum; call it m. What remains now is to show that m is the least fixed point.

First, we show that m is a fixed point, i.e. that f(m) = m. Because f is Scott-continuous, f(sup M) = sup f(M), that is, f(m) = sup f(M). Also, f(M) = { fⁿ(⊥) : n ≥ 1 }, and since ⊥ has no influence in determining the supremum, we have sup f(M) = sup M = m. It follows that f(m) = m, making m a fixed point of f.
The proof that m is in fact the least fixed point can be done by showing that any element of M is smaller than any fixed point of f; by the defining property of the supremum, m is then also smaller than any fixed point of f.
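On a finite lattice the ascending Kleene chain stabilizes after finitely many steps, so the least fixed point can be computed by literal iteration. The Python sketch below is an illustration, not part of the article: the lattice is the powerset of an invented graph's node set ordered by inclusion, ⊥ is the empty set, and the monotone map f seeds the set with a start node and adds one-step successors, so its least fixed point is exactly the set of nodes reachable from 'a'.

GRAPH = {"a": {"b"}, "b": {"c"}, "c": {"b"}, "d": {"a"}}

def f(s: frozenset) -> frozenset:
    """Monotone map: seed with 'a', then add one-step successors of s."""
    step = {succ for node in s for succ in GRAPH[node]}
    return frozenset({"a"} | s | step)

def kleene_lfp(func, bottom=frozenset()):
    """Iterate bottom, f(bottom), f(f(bottom)), ... until the chain stabilizes."""
    x = bottom
    while True:
        fx = func(x)
        if fx == x:            # supremum of the ascending chain reached
            return x
        x = fx

print(sorted(kleene_lfp(f)))   # ['a', 'b', 'c'], the nodes reachable from 'a'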
|
https://en.wikipedia.org/wiki/TDM%20over%20IP
|
In computer networking and telecommunications, TDM over IP (TDMoIP) is the emulation of time-division multiplexing (TDM) over a packet-switched network (PSN). TDM refers to a T1, E1, T3 or E3 signal, while the PSN is based either on IP or MPLS or on raw Ethernet. A related technology is circuit emulation, which enables transport of TDM traffic over cell-based (ATM) networks.
TDMoIP is a type of pseudowire (PW). However, unlike other traffic types that can be carried over pseudowires (e.g. ATM, Frame Relay and Ethernet), TDM is a real-time bit stream, which gives TDMoIP unique characteristics. In addition, conventional TDM networks have numerous special features, in particular those required in order to carry voice-grade telephony channels. These features imply signaling systems that support a wide range of telephony features, a rich standardization literature and well-developed Operations, Administration and Maintenance (OAM) mechanisms. All of these factors must be taken into account when emulating TDM over PSNs.
One critical issue in implementing TDM PWs is clock recovery. In native TDM networks the physical layer carries highly accurate timing information along with the TDM data, but when emulating TDM over PSNs this synchronization is absent. TDM timing standards can be exacting and conformance with these may require innovative mechanisms to adaptively reproduce the TDM timing.
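One common adaptive approach is sketched below in Python, purely as an illustration: RFC 5087 does not mandate any particular algorithm, and the controller gains, target fill level and rate trim here are invented. The idea is to steer the playout clock with a proportional-integral controller that holds the jitter buffer near a constant fill level, so the recovered clock converges toward the far-end transmission rate.

NOMINAL_HZ = 8000.0        # assumed nominal TDM payload rate
TARGET_FILL = 50.0         # desired jitter-buffer depth, in packets
KP, KI = 0.02, 0.001       # controller gains; tuning is deployment-specific

def adaptive_clock(fill_levels):
    """Yield a corrected playout rate for each observed buffer fill level."""
    integral = 0.0
    for fill in fill_levels:
        error = fill - TARGET_FILL     # buffer too full -> play out faster
        integral += error
        correction = KP * error + KI * integral
        yield NOMINAL_HZ * (1.0 + correction * 1e-4)   # small ppm-style trim

for hz in adaptive_clock([50, 53, 55, 52, 49, 48, 50]):
    print(f"{hz:.3f} Hz")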
Another issue that must be addressed is TDMoIP packet loss concealment (PLC). Since TDM data is delivered at a constant rate over a dedicated channel, the native service may have bit errors but data is never lost in transit. All PSNs suffer to some degree from packet loss and this must be compensated when delivering TDM over a PSN.
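The simplest concealment strategy for a constant-rate stream, shown here only as a sketch (the frame size and replacement policy are assumptions; real implementations may instead insert an idle pattern or interpolate), is to replay the most recently received payload in place of each lost packet so that the outgoing bit rate never falters.

def conceal(packets, frame_size=192):
    """packets: iterable of payload bytes, with None marking a lost packet."""
    last = bytes(frame_size)           # play out silence until the first arrival
    for p in packets:
        last = p if p is not None else last
        yield last

stream = [b"\x01" * 192, None, b"\x02" * 192]
assert [p[0] for p in conceal(stream)] == [1, 1, 2]   # the lost frame is replayed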
In December 2007 TDMoIP was approved as an IETF RFC 5087 authored by Dr. Yaakov Stein, Ronen Shashua, Ron Insler, and Motti Anavi of RAD Data Communications.
Background
Communications service providers and enterprise customers are interested in deployment of voice a
|
https://en.wikipedia.org/wiki/PCF%20theory
|
PCF theory is the name of a mathematical theory, introduced by Saharon Shelah, that deals with the cofinality of the ultraproducts of ordered sets. It gives strong upper bounds on the cardinalities of power sets of singular cardinals, and has many more applications as well. The abbreviation "PCF" stands for "possible cofinalities".
Main definitions
If A is an infinite set of regular cardinals and D is an ultrafilter on A, then we let cf(∏A/D) denote the cofinality of the ordered set ∏A/D of functions f with f(a) ∈ a for every a ∈ A, where the ordering is defined as follows:

f ≤_D g if { a ∈ A : f(a) ≤ g(a) } ∈ D.

pcf(A) is the set of cofinalities that occur if we consider all ultrafilters on A, that is,

pcf(A) = { cf(∏A/D) : D is an ultrafilter on A }.
Main results
Obviously, pcf(A) consists of regular cardinals. Considering ultrafilters concentrated on elements of A, we get that A ⊆ pcf(A). Shelah proved that if |A| < min(A), then pcf(A) has a largest element, and there are subsets { B_θ : θ ∈ pcf(A) } of A such that for each ultrafilter D on A, cf(∏A/D) is the least element θ of pcf(A) such that B_θ ∈ D. Consequently, |pcf(A)| ≤ 2^|A|.
Shelah also proved that if A is an interval of regular cardinals (i.e., A is the set of all regular cardinals between two cardinals), then pcf(A) is also an interval of regular cardinals and |pcf(A)| < |A|^{+4}, the fourth successor cardinal of |A|.
This implies the famous inequality 2^ℵω < ℵω₄, assuming that ℵω is a strong limit.
If λ is an infinite cardinal, then J<λ is the following ideal on A: B ∈ J<λ if cf(∏A/D) < λ holds for every ultrafilter D with B ∈ D. Then J<λ is the ideal generated by the sets { B_θ : θ ∈ pcf(A), θ < λ }. There exist scales, i.e., for every λ ∈ pcf(A) there is a sequence of length λ of elements of ∏B_λ which is both increasing and cofinal mod J<λ. This implies that the cofinality of ∏A under pointwise dominance is max(pcf(A)).
Another consequence is that if λ is singular and no regular cardinal less than λ is Jónsson, then also λ+ is not Jónsson. In particular, there is a Jónsson algebra on ℵω+1, which settles an old conjecture.
Unsolved problems
The most notorious conjecture in pcf theory states that |pcf(A)| = |A| holds for every set A of regular cardinals with |A| < min(A). This would imply that if ℵω is a strong limit, then the sharp bound 2^ℵω < ℵω₁ holds.
|