https://en.wikipedia.org/wiki/ToolTalk
|
ToolTalk is an interapplication communications system developed by Sun Microsystems (SunSoft) in order to allow applications to communicate with each other at runtime.
Applications supporting ToolTalk can construct "high-level" messages and hand them off to the system's ToolTalk server, which determines the proper recipients and (after applying permission checks) forwards the message to them. Although originally available only on SunOS and Solaris, ToolTalk was chosen as the application framework for the Common Desktop Environment (CDE) and thus became part of a number of Unix distributions as well as OpenVMS.
While ToolTalk had "object oriented" and "procedural" messages and a complex "pattern" structure which allowed dispatch of messages to processes based on object names, message names, and parameter types, actual desktop protocols never took full advantage of its power. Simpler pattern-matching systems like Apple Computer's AppleScript system did just as well.
The D-Bus standard has superseded ToolTalk in some Unix-like desktop environments.
References
Inter-process communication
Sun Microsystems software
|
https://en.wikipedia.org/wiki/Jtest
|
Jtest is an automated Java software testing and static analysis product developed by Parasoft. The product includes technology for data-flow analysis, unit test-case generation and execution, static analysis, and more. Jtest is used by companies such as Cisco Systems and TransCore. It is also used by Lockheed Martin for the F-35 Joint Strike Fighter program (JSF).
Awards
Jtest received the Dr. Dobb's Journal Jolt Award for Excellence in 2000.
It was granted a Codie award from the Software and Information Industry Association for "Best Software Testing Solution" in 2005 and 2007. It also won "Technology of the Year" award as "Best Application Test Tool" from InfoWorld two years in a row in 2006 and 2007.
See also
Automated testing
List of unit testing frameworks
List of tools for static code analysis
Regression testing
Software testing
System testing
Test case
Test-driven development
xUnit, a family of unit testing frameworks
References
External links
Jtest page
Abstract interpretation
Computer security software
Extreme programming
Java platform
Java development tools
Security testing tools
Software review
Software testing tools
Static program analysis tools
Unit testing
Unit testing frameworks
|
https://en.wikipedia.org/wiki/Object%20orgy
|
In computer programming, an object orgy is a situation in which objects are insufficiently encapsulated via information hiding, allowing unrestricted access to their internals. This is a common failure (or anti-pattern) in object-oriented design or object-oriented programming, and it can lead to increased maintenance needs and problems, and even unmaintainable complexity.
Consequences
The results of an object orgy are mainly a loss of the benefits of encapsulation, including:
Unrestricted access makes it hard for a reader to reason about the behaviour of an object. This is because direct access to its internal state means any other part of the system can manipulate it, increasing the amount of code to examine, and creating means for future abuse.
As a consequence of the difficulty of reasoning, design by contract is effectively impossible.
If much code takes advantage of the lack of encapsulation, the result is a scarcely maintainable maze of interactions, commonly known as a rat's nest or spaghetti code.
The original design is obscured by the excessively broad interfaces to objects.
The broad interfaces make it harder to re-implement a class without disturbing the rest of the system. This is especially hard when clients of a class are developed by a different team or organisation.
Forms
Encapsulation may be weakened in several ways, including:
By declaring internal members public, or by providing free access to data via public accessor and mutator methods (getters and setters).
By providing broader, non-private access levels: for example, Java's package-private access modifier or the internal accessibility level in C#.
In C++, via some of the above means, and by declaring friend classes or functions.
An object may also make its internal data accessible by passing references to them as arguments to methods or constructors of other classes, which may retain references.
In contrast, objects holding references to one another, though sometimes described as a form of object orgy, do not by themselves breach encapsulation.
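As a hypothetical illustration (the class and field names below are invented for this sketch, not taken from any source), the following Java fragment shows two of the forms listed above: internal members declared public, and an internal collection leaked by reference through a getter. Any client can then manipulate the object's state directly, so the class can no longer guarantee its own invariants; the second class shows a conventionally encapsulated alternative.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical example: encapsulation weakened in two of the ways described above.
class LeakyAccount {
    public double balance;                              // internal member declared public
    public final List<String> log = new ArrayList<>();  // internal collection exposed directly

    public List<String> getLog() {                      // "getter" that leaks a mutable internal reference
        return log;
    }
}

// A better-encapsulated version: state is private and only reachable through
// methods that preserve the class's invariants.
class GuardedAccount {
    private double balance;
    private final List<String> log = new ArrayList<>();

    public void deposit(double amount) {
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        balance += amount;
        log.add("deposit " + amount);
    }

    public double getBalance() { return balance; }

    public List<String> getLog() {                      // defensive view: callers cannot mutate internals
        return Collections.unmodifiableList(log);
    }
}

public class ObjectOrgyDemo {
    public static void main(String[] args) {
        LeakyAccount leaky = new LeakyAccount();
        leaky.balance = -1_000_000;                      // nothing prevents arbitrary external manipulation
        leaky.getLog().add("forged entry");              // internals mutated through the leaked reference

        GuardedAccount guarded = new GuardedAccount();
        guarded.deposit(100);
        System.out.println(guarded.getBalance());        // 100.0
    }
}
```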
|
https://en.wikipedia.org/wiki/Microgrid
|
A microgrid is a local electrical grid with defined electrical boundaries, acting as a single and controllable entity. It is able to operate in grid-connected and in island mode. A 'Stand-alone microgrid' or 'isolated microgrid' only operates off-the-grid and cannot be connected to a wider electric power system.
A grid-connected microgrid normally operates connected to and synchronous with the traditional wide area synchronous grid (macrogrid), but is able to disconnect from the interconnected grid and to function autonomously in "island mode" as technical or economic conditions dictate. In this way, it improves the security of supply within the microgrid cell and can supply emergency power, changing between island and connected modes. Grids of this kind are called 'islandable microgrids'.
A stand-alone microgrid has its own sources of electricity, supplemented with an energy storage system. They are used where power transmission and distribution from a major centralized energy source is too far and costly to operate. They offer an option for rural electrification in remote areas and on smaller geographical islands. A stand-alone microgrid can effectively integrate various sources of distributed generation (DG), especially renewable energy sources (RES).
Control and protection are difficulties to microgrids, as all ancillary services for system stabilization must be generated within the microgrid and low short-circuit levels can be challenging for selective operation of the protection systems. An important feature is also to provide multiple useful energy needs, such as heating and cooling besides electricity, since this allows energy carrier substitution and increased energy efficiency due to waste heat utilization for heating, domestic hot water, and cooling purposes (cross sectoral energy usage).
Definitions
The United States Department of Energy Microgrid Exchange Group defines a microgrid as a group of interconnected loads and distributed energy resources
|
https://en.wikipedia.org/wiki/Gerard%20J.%20Holzmann
|
Gerard J. Holzmann (born 1951) is a Dutch-American computer scientist and researcher at Bell Labs and NASA, best known as the developer of the SPIN model checker.
Biography
Holzmann was born in Amsterdam, Netherlands and received an Engineer's degree in electrical engineering from the Delft University of Technology in 1976. He subsequently also received his PhD degree from Delft University in 1979 under Willem van der Poel and J.L. de Kroes with a thesis entitled Coordination problems in multiprocessing systems. After receiving a Fulbright Scholarship he was a post-graduate student at the University of Southern California for another year, where he worked with Per Brinch Hansen.
In 1980 he started at Bell Labs in Murray Hill for a year. Back in the Netherlands he was assistant professor at the Delft University of Technology for two years. In 1983 he returned to Bell Labs where he worked in the Computing Science Research Center (the former Unix research group). In 2003 he joined NASA, where he leads the NASA JPL Laboratory for Reliable Software in Pasadena, California and is a JPL fellow.
In 1981 Holzmann was awarded the Prof. Bahler Prize by the Royal Dutch Institute of Engineers, the Software System Award (for SPIN) in 2001 by the Association for Computing Machinery (ACM), the Paris Kanellakis Theory and Practice Award in 2005, and the NASA Exceptional Engineering Achievement Medal in October 2012. Holzmann was elected a member of the US National Academy of Engineering in 2005 for the creation of model-checking systems for software verification. In 2011 he was inducted as a Fellow of the Association for Computing Machinery. In 2015 he was awarded the IEEE Harlan D. Mills Award.
Work
Holzmann is known for the development of the SPIN model checker (SPIN is short for Simple Promela Interpreter) in the 1980s at Bell Labs. This tool can verify the correctness of concurrent software and has been freely available since 1991.
Books
Publications, a selection:
The Spin Mode
|
https://en.wikipedia.org/wiki/Brun%27s%20theorem
|
In number theory, Brun's theorem states that the sum of the reciprocals of the twin primes (pairs of prime numbers which differ by 2) converges to a finite value known as Brun's constant, usually denoted by B2 . Brun's theorem was proved by Viggo Brun in 1919, and it has historical importance in the introduction of sieve methods.
Asymptotic bounds on twin primes
The convergence of the sum of reciprocals of twin primes follows from bounds on the density of the sequence of twin primes.
Let $\pi_2(x)$ denote the number of primes $p \le x$ for which $p + 2$ is also prime (i.e. $\pi_2(x)$ is the number of twin primes with the smaller at most x). Then we have
$$\pi_2(x) = O\!\left(\frac{x\,(\log\log x)^2}{(\log x)^2}\right).$$
That is, twin primes are less frequent than prime numbers by nearly a logarithmic factor.
This bound gives the intuition that the sum of the reciprocals of the twin primes converges, or stated in other words, the twin primes form a small set. In explicit terms, the sum
$$B_2 = \left(\frac{1}{3}+\frac{1}{5}\right)+\left(\frac{1}{5}+\frac{1}{7}\right)+\left(\frac{1}{11}+\frac{1}{13}\right)+\left(\frac{1}{17}+\frac{1}{19}\right)+\left(\frac{1}{29}+\frac{1}{31}\right)+\cdots$$
either has finitely many terms or has infinitely many terms but is convergent: its value is known as Brun's constant.
If it were the case that the sum diverged, then that fact would imply that there are infinitely many twin primes. Because the sum of the reciprocals of the twin primes instead converges, it is not possible to conclude from this result that there are finitely many or infinitely many twin primes. Brun's constant could be an irrational number only if there are infinitely many twin primes.
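As a rough numerical illustration (the limit of 100,000 below is arbitrary and far too small to approximate Brun's constant well), the following Java sketch sums the reciprocals of twin-prime pairs up to a small bound using a sieve of Eratosthenes; the partial sums grow only very slowly toward the constant, as the next section notes.

```java
// Sketch: partial sum of 1/p + 1/(p+2) over twin primes p <= LIMIT,
// using a sieve of Eratosthenes. The limit is arbitrary; the series
// converges extremely slowly, so this only illustrates the definition.
public class BrunPartialSum {
    public static void main(String[] args) {
        final int LIMIT = 100_000;
        boolean[] composite = new boolean[LIMIT + 3];
        for (int i = 2; (long) i * i <= LIMIT + 2; i++) {
            if (!composite[i]) {
                for (int j = i * i; j <= LIMIT + 2; j += i) composite[j] = true;
            }
        }
        double sum = 0.0;
        for (int p = 3; p <= LIMIT; p += 2) {
            if (!composite[p] && !composite[p + 2]) {   // p and p + 2 are both prime
                sum += 1.0 / p + 1.0 / (p + 2);
            }
        }
        System.out.printf("Partial sum over twin primes up to %d: %.6f%n", LIMIT, sum);
    }
}
```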
Numerical estimates
The series converges extremely slowly. Thomas Nicely remarks that after summing the first billion (10^9) terms, the relative error is still more than 5%.
By calculating the twin primes up to 10^14 (and discovering the Pentium FDIV bug along the way), Nicely heuristically estimated Brun's constant to be 1.902160578. Nicely has extended his computation to 1.6×10^15 as of 18 January 2010, but this is not the largest computation of its type.
In 2002, Pascal Sebah and Patrick Demichel used all twin primes up to 10^16 to give the estimate that B2 ≈ 1.902160583104. Henc
|
https://en.wikipedia.org/wiki/Call%20sign
|
In broadcasting and radio communications, a call sign (also known as a call name or call letters—and historically as a call signal—or abbreviated as a call) is a unique identifier for a transmitter station. A call sign can be formally assigned by a government agency, informally adopted by individuals or organizations, or even cryptographically encoded to disguise a station's identity.
The use of call signs as unique identifiers dates to the landline railroad telegraph system. Because there was only one telegraph line linking all railroad stations, there needed to be a way to address each one when sending a telegram. In order to save time, two-letter identifiers were adopted for this purpose. This pattern continued in radiotelegraph operation; radio companies initially assigned two-letter identifiers to coastal stations and stations on board ships at sea. These were not globally unique, so a one-letter company identifier (for instance, 'M' and two letters as a Marconi station) was later added. By 1912, the need to quickly identify stations operated by multiple companies in multiple nations required an international standard; an ITU prefix would be used to identify a country, and the rest of the call sign an individual station in that country.
Transportation
Maritime
Merchant and naval vessels are assigned call signs by their national licensing authorities. In the case of states such as Liberia or Panama, which are flags of convenience for ship registration, call signs for larger vessels consist of the national prefix plus three letters (for example, 3LXY, and sometimes followed by a number, i.e. 3LXY2). United States merchant vessels are given call signs beginning with the letters "W" or "K" while US naval ships are assigned call signs beginning with "N". Originally, both ships and broadcast stations were assigned call signs in this series consisting of three or four letters. Ships equipped with Morse code radiotelegraphy, or life boat radio sets, Aviation ground
|
https://en.wikipedia.org/wiki/Carboxymethyl%20cellulose
|
Carboxymethyl cellulose (CMC) or cellulose gum is a cellulose derivative with carboxymethyl groups (-CH2-COOH) bound to some of the hydroxyl groups of the glucopyranose monomers that make up the cellulose backbone. It is often used as its sodium salt, sodium carboxymethyl cellulose. It used to be marketed under the name Tylose, a registered trademark of SE Tylose.
Preparation
Carboxymethyl cellulose is synthesized by the alkali-catalyzed reaction of cellulose with chloroacetic acid. The polar (organic acid) carboxyl groups render the cellulose soluble and chemically reactive. Fabrics made of cellulose—e.g. cotton or viscose rayon—may also be converted into CMC.
Following the initial reaction, the resultant mixture produces approximately 60% CMC and 40% salts (sodium chloride and sodium glycolate); this product is the so-called technical CMC, which is used in detergents. An additional purification process is used to remove salts to produce pure CMC, which is used for alimentary and pharmaceutical applications. An intermediate "semi-purified" grade is also produced, typically used in paper applications such as the restoration of archival documents.
Structure and properties
The functional properties of CMC depend on the degree of substitution of the cellulose structure [i.e., how many of the hydroxyl groups have been converted to carboxymethylene(oxy) groups in the substitution reaction], as well as the chain length of the cellulose backbone structure and the degree of clustering of the carboxymethyl substituents.
Uses
Introduction
Carboxymethyl cellulose (CMC) is used in a variety of applications ranging from food production to medical treatments. It is commonly used as a viscosity modifier or thickener, and to stabilize emulsions in various products, both food and non-food. It is used primarily because it has high viscosity, is nontoxic, and is generally considered to be hypoallergenic, as the major source fiber is either softwood pulp or cotton linter. Non
|
https://en.wikipedia.org/wiki/Evolutionary%20grade
|
A grade is a taxon united by a level of morphological or physiological complexity. The term was coined by British biologist Julian Huxley, to contrast with clade, a strictly phylogenetic unit.
Phylogenetics
In order to fully understand evolutionary grades, one must first get a better understanding of phylogenetics: the study of the evolutionary history and relationships among or within groups of organisms. These relationships are determined by phylogenetic inference methods that focus on observed heritable traits, such as DNA sequences, protein amino acid sequences, or morphology. The result of such an analysis is a phylogenetic tree—a diagram containing a hypothesis of relationships that reflects the evolutionary history of a group of organisms.
Definition of an evolutionary grade
An evolutionary grade is a group of species united by morphological or physiological traits, that has given rise to another group that has major differences from the ancestral group's condition, and is thus not considered part of the ancestral group, while still having enough similarities that we can group them under the same clade. The ancestral group will not be phylogenetically complete (i.e. is not a clade), and so will represent a paraphyletic taxon.
The most commonly cited example is that of reptiles. In the early 19th century, the French naturalist Latreille was the first to divide tetrapods into the four familiar classes of amphibians, reptiles, birds, and mammals. In this system, reptiles are characterized by traits such as laying membranous or shelled eggs, having skin covered in scales or scutes, and having a 'cold-blooded' metabolism. However, the ancestors of mammals and birds also had these traits and so birds and mammals can be said to "have evolved from reptiles", making the reptiles, when defined by these traits, a grade rather than a clade. In microbiology, taxa that are thus seen as excluded from their evolutionary grade parent group are called taxa in disguise.
Par
|
https://en.wikipedia.org/wiki/Handle%20decompositions%20of%203-manifolds
|
In mathematics, a handle decomposition of a 3-manifold allows simplification of the original 3-manifold into pieces which are easier to study.
Heegaard splittings
An important method used to decompose 3-manifolds into handlebodies is the Heegaard splitting, which gives a decomposition into two handlebodies of equal genus.
Examples
As an example: lens spaces are orientable 3-manifolds that allow decomposition into two solid tori, which are genus-one handlebodies. The genus-one non-orientable space is the union of two solid Klein bottles and corresponds to the twisted product of the 2-sphere and the 1-sphere, $S^2 \,\tilde{\times}\, S^1$.
Orientability
Each orientable 3-manifold is the union of exactly two orientable handlebodies; meanwhile, each non-orientable one needs three orientable handlebodies.
Heegaard genus
The minimal genus of the glueing boundary determines what is known as the Heegaard genus. For non-orientable spaces an interesting invariant is the tri-genus.
References
J.C. Gómez Larrañaga, W. Heil, V.M. Núñez. Stiefel-Whitney surfaces and decompositions of 3-manifolds into handlebodies, Topology Appl. 60 (1994), 267-280.
J.C. Gómez Larrañaga, W. Heil, V.M. Núñez. Stiefel-Whitney surfaces and the trigenus of non-orientable 3-manifolds, Manuscripta Math. 100 (1999), 405-422.
3-manifolds
Topology
|
https://en.wikipedia.org/wiki/Proof%20complexity
|
In logic and theoretical computer science, and specifically proof theory and computational complexity theory, proof complexity is the field aiming to understand and analyse the computational resources that are required to prove or refute statements. Research in proof complexity is predominantly concerned with proving proof-length lower and upper bounds in various propositional proof systems. For example, among the major challenges of proof complexity is showing that the Frege system, the usual propositional calculus, does not admit polynomial-size proofs of all tautologies. Here the size of the proof is simply the number of symbols in it, and a proof is said to be of polynomial size if it is polynomial in the size of the tautology it proves.
Systematic study of proof complexity began with the work of Stephen Cook and Robert Reckhow (1979) who provided the basic definition of a propositional proof system from the perspective of computational complexity. Specifically Cook and Reckhow observed that proving proof size lower bounds on stronger and stronger propositional proof systems can be viewed as a step towards separating NP from coNP (and thus P from NP), since the existence of a propositional proof system that admits polynomial size proofs for all tautologies is equivalent to NP=coNP.
Contemporary proof complexity research draws ideas and methods from many areas in computational complexity, algorithms and mathematics. Since many important algorithms and algorithmic techniques can be cast as proof search algorithms for certain proof systems, proving lower bounds on proof sizes in these systems implies run-time lower bounds on the corresponding algorithms. This connects proof complexity to more applied areas such as SAT solving.
Mathematical logic can also serve as a framework to study propositional proof sizes. First-order theories and, in particular, weak fragments of Peano arithmetic, which come under the name of bounded arithmetic, serve as uniform versions of
|
https://en.wikipedia.org/wiki/Sun%20outage
|
A Sun outage, Sun transit, or Sun fade is an interruption in or distortion of geostationary satellite signals caused by interference (background noise) of the Sun when it falls directly behind a satellite which an Earth station is trying to receive data from or transmit data to. It usually affects such satellites briefly twice per year, and Earth stations install temporary or permanent guards on their receiving systems to prevent equipment damage.
Sun outages occur before the March equinox (in February and March) and after the September equinox (in September and October) for the Northern Hemisphere, and occur after the March equinox and before the September equinox for the Southern Hemisphere. At these times, the apparent path of the Sun across the sky takes it directly behind the line of sight between an Earth station and a satellite. The Sun radiates strongly across the entire spectrum, including the microwave frequencies used to communicate with satellites (C band, Ku band, and Ka band), so the Sun swamps the signal from the satellite. The effects of a Sun outage range from partial degradation (increase in the error rate) to the total destruction of the signal. The effect sweeps from north to south from approximately 20 February to 20 April, and from south to north from approximately 20 August to 20 October, affecting any specific location for less than 12 minutes a day for a few consecutive days.
Effect on Indian stock exchanges
In India, the BSE (Bombay Stock Exchange) and NSE (National Stock Exchange) use VSATs (Very Small Aperture Terminals) for members (e.g. stockbrokers) to connect to their trading systems. VSATs depend upon satellites for connectivity between the terminals/systems. Hence, these exchanges are, with considerable predictability, affected by the annual Sun outages. Both typically close from 11:45 to 12:30 during "Sun outages" — times vary depending on the Earth's orbit and satellites' exact locations. The interference to satellites' s
|
https://en.wikipedia.org/wiki/Apple%2080-Column%20Text%20Card
|
The Apple 80-Column Text Card is an expansion card for the Apple IIe computer to give it the option of displaying 80 columns of text instead of 40 columns. Two models were available; the cheaper 80-column card has just enough extra RAM to double the video memory capacity, and the Extended 80-Column Text Card has an additional 64 kilobytes of RAM, bringing the computer's total RAM to 128 KB.
VisiCalc and Disk II made the Apple II very popular in small businesses, which asked the company for 80-column support, but Apple delayed improving the Apple II because for three years it expected that the unsuccessful Apple III would be the company's business computer. The 80-column cards were the alternative. The cards go in the IIe's Auxiliary Slot, which exists in addition to the seven standard Apple II peripheral slots present on all expandable Apple II series machines. Although in a separate slot, the card is closely associated with slot #3 of the seven standard slots, using some of the hardware and firmware functions that would have otherwise been allocated to slot 3, because third-party 80-column cards such as the Sup'R'Terminal had traditionally been placed in slot 3 on the earlier Apple II and Apple II Plus machines. Therefore the user can enter 80-column mode by issuing the command PR#3 or IN#3 in the BASIC prompt.
The "extended" version of the card features a jumper block (J1) that when installed enables the double high-resolution capability. Since early "Revision A" Apple IIe motherboards are incapable of supporting the bank switching needed for the enhanced graphics mode, the block needs to be removed to disable the feature.
As with many Apple II products, third party cards were also produced that perform a similar function, and some types of 80-column cards were available for the older Apple II models, which do not have a dedicated slot for this card.
Soon after the release of the Apple IIe, 80-column text support became a basic requirement of many software pa
|
https://en.wikipedia.org/wiki/TV80
|
The Sinclair TV80, also known as the Flat Screen Pocket TV or FTV1, was a pocket television released by Sinclair Research in September 1983. Unlike Sinclair's earlier attempts at a portable television, the TV80 used a flat CRT with a side-mounted electron gun instead of a conventional CRT; the picture was made to appear larger than it was by the use of a Fresnel lens. It was a commercial failure, and did not recoup the £4m it cost to develop; only 15,000 units were sold. New Scientist warned that the technology used by the device would be short-lived, in view of the liquid crystal display technology being developed by Casio.
References
External links
Television technology
Sinclair Research
Products introduced in 1984
|
https://en.wikipedia.org/wiki/Zoomed%20video%20port
|
In computing, a zoomed video port (often simply ZV port) is a unidirectional video bus allowing a device in a PC card slot to transfer video data directly into a VGA frame buffer, so as to allow laptops to display real-time video. The standard was created by the PCMCIA to allow devices such as TV tuners, video inputs and MPEG coprocessors to fit into a PC card form factor and provide a cheap solution for both the laptop manufacturer and consumer.
The ZV port is a direct connection between the PC card slot and VGA controller. Video data is transferred in real time without any buffering, removing the need for bus mastering or arbitration.
The ZV port was invented as an alternative to such methods as the VAFC (VESA Advanced Feature connector).
References
Definition and technical detail from winbookcorp.com
The Zoomed Video (ZV) Port for PC Cards (PCMCIA dissolved in 2009; Internet Archive of site is circa 2008.)
Computing input devices
Computer buses
|
https://en.wikipedia.org/wiki/Robust%20Header%20Compression
|
Robust Header Compression (ROHC) is a standardized method to compress the IP, UDP, UDP-Lite, RTP, and TCP headers of Internet packets.
The need for header compression
In streaming applications, the overhead of IP, UDP, and RTP is 40 bytes for IPv4, or 60 bytes for IPv6. For VoIP, this corresponds to around 60% of the total amount of data sent. Such large overheads may be tolerable in local wired links where capacity is often not an issue, but are excessive for wide area networks and wireless systems where bandwidth is scarce.
ROHC compresses these 40 bytes or 60 bytes of overhead typically into only one or three bytes, by placing a compressor before the link that has limited capacity, and a decompressor after that link. The compressor converts the large overhead to only a few bytes, while the decompressor does the opposite.
The ROHC compression scheme differs from other compression schemes, such as IETF RFC 1144 and RFC 2508, by the fact that it performs well over links where the packet loss rate is high, such as wireless links.
Main ROHC compression principles
The ROHC protocol takes advantage of information redundancy in the headers of the following:
one single network packet (e.g. the payload lengths in IP and UDP headers)
several network packets that belong to one single stream (e.g. the IP addresses)
Redundant information is transmitted in the first packets only. The next packets contain variable information, e.g. identifiers or sequence numbers. These fields are transmitted in a compressed form to save more bits.
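The following Java sketch is a toy illustration of this general idea only, not of the actual ROHC packet formats or profiles: a compressor sends the static fields (here, the IP addresses) once to establish a context, and thereafter sends only a small delta of a changing field (here, an RTP-style sequence number), which the decompressor applies to its own copy of the context.

```java
// Toy illustration of context-based header compression (not the ROHC wire format):
// static fields are sent once; later packets carry only a delta of the dynamic field.
import java.util.List;

record Header(String srcAddr, String dstAddr, int sequenceNumber) {}

class ToyCompressor {
    private Header context;

    /** Returns a "compressed" representation: full header first, then deltas. */
    public String compress(Header h) {
        if (context == null) {
            context = h;
            return "FULL " + h.srcAddr() + " " + h.dstAddr() + " " + h.sequenceNumber();
        }
        int delta = h.sequenceNumber() - context.sequenceNumber();
        context = h;
        return "DELTA " + delta;   // static fields omitted entirely
    }
}

class ToyDecompressor {
    private Header context;

    public Header decompress(String packet) {
        String[] parts = packet.split(" ");
        if (parts[0].equals("FULL")) {
            context = new Header(parts[1], parts[2], Integer.parseInt(parts[3]));
        } else {
            context = new Header(context.srcAddr(), context.dstAddr(),
                                 context.sequenceNumber() + Integer.parseInt(parts[1]));
        }
        return context;
    }
}

public class ToyHeaderCompressionDemo {
    public static void main(String[] args) {
        ToyCompressor c = new ToyCompressor();
        ToyDecompressor d = new ToyDecompressor();
        List<Header> stream = List.of(
                new Header("10.0.0.1", "10.0.0.2", 100),
                new Header("10.0.0.1", "10.0.0.2", 101),
                new Header("10.0.0.1", "10.0.0.2", 102));
        for (Header h : stream) {
            String wire = c.compress(h);
            System.out.println(wire + "  ->  " + d.decompress(wire));
        }
    }
}
```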
For better performance, the packets are classified into streams before being compressed. This classification takes advantage of inter-packet redundancy. The classification algorithm is not defined by the ROHC protocol itself but left to the equipment vendor's implementation. Once a stream of packets is classified, it is compressed according to the compression profile that fits best. A compression profile defines the way to compress the different fields in
|
https://en.wikipedia.org/wiki/Unbalanced%20line
|
In telecommunications and electrical engineering in general, an unbalanced line is a pair of conductors intended to carry electrical signals, which have unequal impedances along their lengths and to ground and other circuits. Examples of unbalanced lines are coaxial cable or the historic earth return system invented for the telegraph, but rarely used today. Unbalanced lines are to be contrasted with balanced lines, such as twin-lead or twisted pair which use two identical conductors to maintain impedance balance throughout the line. Balanced and unbalanced lines can be interfaced using a device called a balun.
The chief advantage of the unbalanced line format is cost efficiency. Multiple unbalanced lines can be provided in the same cable with one conductor per line plus a single common return conductor, typically the cable shielding. Likewise, multiple microstrip circuits can all use the same ground plane for the return path. This compares well with balanced cabling which requires two conductors for each line, nearly twice as many. Another benefit of unbalanced lines is that they do not require more expensive, balanced driver and receiver circuits to operate correctly.
Unbalanced lines are sometimes confused with single-ended signalling, but these are entirely separate concepts. The former is a cabling scheme while the latter is a signalling scheme. However, single-ended signalling is commonly sent over unbalanced lines. Unbalanced lines are not to be confused with single-wire transmission lines which do not use a return path at all.
General description
Any line that has a different impedance of the return path may be considered an unbalanced line. However, unbalanced lines usually consist of a conductor that is considered the signal line and another conductor that is grounded, or is ground itself. The ground conductor often takes the form of a ground plane or the screen of a cable. The ground conductor may be, and often is, common to multiple independe
|
https://en.wikipedia.org/wiki/Tier%202%20network
|
A Tier 2 network is an Internet service provider which engages in the practice of peering with other networks, but which also purchases IP transit to reach some portion of the Internet.
Tier 2 providers are the most common Internet service providers, as it is much easier to purchase transit from a Tier 1 network than to peer with them and attempt to become a Tier 1 carrier.
The term Tier 3 is sometimes also used to describe networks that solely purchase IP transit from other networks to reach the Internet.
List of large or important Tier 2 networks
See also
Peering point
Network access point
References
Internet architecture
|
https://en.wikipedia.org/wiki/Piophila
|
Piophila is a genus of small flies which includes the species known as the cheese fly. Both Piophila species feed on carrion, including human corpses.
Description
Piophila are small dark flies with unmarked wings. The setulae (fine hairs) on the thorax are confined to three distinct rows.
Species
There are two species in the genus Piophila:
Piophila casei (Linnaeus, 1758), the cheese fly
Piophila megastigmata J. McAlpine, 1978
References
Tephritoidea genera
Piophilidae
Space-flown life
|
https://en.wikipedia.org/wiki/Change%20control%20board
|
In software development, projects and programs, a change control board (CCB) is a committee that consists of Subject Matter Experts (SME, e.g. software engineers, testing experts, etc.) and Managers (e.g. Quality Assurance managers), who decide whether to implement proposed changes to a project. The main objective of a CCB is to ensure the client accepts the project. Factors affecting a CCB's decision can include the project's phase of development, budget, schedule, and quality goals.
Change control (see Scope management) is also part of Requirements engineering. CCBs are most associated with the waterfall method of software development, but can be seen as having analogues in some implementations of Agile software development.
The Change Control Board will review any proposed changes from the original baseline requirements that were agreed upon with the client. If any change is agreed upon by the committee, the change is communicated to the project team and the client, and the requirement is baselined with the change. The authority of the Change Control Board may vary from project to project (see e.g. Consensus-based decision making), but decisions reached by the Change Control Board are often accepted as final and binding.
A typical Change Control Board might consist of the development manager, the test lead, and a product manager. Less commonly, the client might directly advocate their interests as part of the Change Control Board.
See also
Change management (ITSM)
Change-advisory board
Project management
Requirements engineering
Configuration management
References
Software development process
|
https://en.wikipedia.org/wiki/Work%20at%20home%20parent
|
A work at home parent is someone who conducts remote work from home and integrates parenting into his or her working time and workspace. They are sometimes referred to as a WAHM (work at home mom) or a WAHD (work at home dad).
People work from home for a variety of reasons, including lower business expenses, personal health limitations, eliminating commuting, or to have a more flexible schedule. This flexibility can give workers more options when planning tasks, business and non-business, including parenting duties. While some remote workers opt for childcare outside the home, others integrate child rearing into their working time and workspace. The latter are considered work-at-home parents.
Many WAHPs start home businesses to care for their children while still creating income. The desire to care for one's own children, the incompatibility of a 9-to-5 work day with school hours or sick days, and the expense of childcare prompt many parents to change or leave their jobs in the workforce to be available to their children. Many WAHPs build a business schedule that can be integrated with their parenting duties.
Integrating business and parenting
An integration of parenting and business can take place in one or more of four key ways: combined uses of time, combined uses of space, normalizing children in business, and flexibility.
Combining uses of time involves some level of human multitasking, such as taking children on business errands, and the organized scheduling of business activities during the child's down times and vice versa. The WAHP combines uses of space by creating a home (or mobile) office that accommodates the child's presence.
Normalizing acknowledges the child's presence in the business environment. This can include letting key business partners know that parenting is a priority, establishing routines and rules for children in the office, and even having children help with business when appropriate.
Finally, the WAHP can utilize the inherent flexibi
|
https://en.wikipedia.org/wiki/Quaternionic%20projective%20space
|
In mathematics, quaternionic projective space is an extension of the ideas of real projective space and complex projective space to the case where coordinates lie in the ring of quaternions $\mathbb{H}$. Quaternionic projective space of dimension n is usually denoted by
$$\mathbb{HP}^n$$
and is a closed manifold of (real) dimension 4n. It is a homogeneous space for a Lie group action, in more than one way. The quaternionic projective line $\mathbb{HP}^1$ is homeomorphic to the 4-sphere.
In coordinates
Its direct construction is as a special case of the projective space over a division algebra. The homogeneous coordinates of a point can be written
$$[q_0 : q_1 : \cdots : q_n]$$
where the $q_i$ are quaternions, not all zero. Two sets of coordinates represent the same point if they are 'proportional' by a left multiplication by a non-zero quaternion c; that is, we identify all the
$$[c q_0 : c q_1 : \cdots : c q_n].$$
In the language of group actions, $\mathbb{HP}^n$ is the orbit space of $\mathbb{H}^{n+1} \setminus \{(0,\ldots,0)\}$ by the action of $\mathbb{H}^{\times}$, the multiplicative group of non-zero quaternions. By first projecting onto the unit sphere inside $\mathbb{H}^{n+1}$ one may also regard $\mathbb{HP}^n$ as the orbit space of $S^{4n+3}$ by the action of $\mathrm{Sp}(1)$, the group of unit quaternions. The sphere $S^{4n+3}$ then becomes a principal $\mathrm{Sp}(1)$-bundle over $\mathbb{HP}^n$:
$$\mathrm{Sp}(1) \to S^{4n+3} \to \mathbb{HP}^n.$$
This bundle is sometimes called a (generalized) Hopf fibration.
There is also a construction of $\mathbb{HP}^n$ by means of two-dimensional complex subspaces of $\mathbb{C}^{2n+2}$, meaning that $\mathbb{HP}^n$ lies inside a complex Grassmannian.
Topology
Homotopy theory
The space $\mathbb{HP}^{\infty}$, defined as the union of all finite $\mathbb{HP}^n$'s under inclusion, is the classifying space $BS^3$. The homotopy groups of $\mathbb{HP}^{\infty}$ are given by $\pi_i(\mathbb{HP}^{\infty}) = \pi_i(BS^3) = \pi_{i-1}(S^3)$. These groups are known to be very complex and in particular they are non-zero for infinitely many values of $i$. However, we do have that $\pi_i(\mathbb{HP}^{\infty}) \otimes \mathbb{Q} \cong \mathbb{Q}$ if $i = 4$, and $0$ otherwise.
It follows that rationally, i.e. after localisation of a space, $\mathbb{HP}^{\infty}$ is an Eilenberg–MacLane space $K(\mathbb{Q},4)$. That is, $\mathbb{HP}^{\infty}_{\mathbb{Q}} \simeq K(\mathbb{Q}, 4)$ (cf. the example $K(\mathbb{Z},2)$). See rational homotopy theory.
In general, $\mathbb{HP}^n$ has a cell structure with one cell in each dimension which is a multiple of 4, up to $4n$. Accordingly, its cohomology ring is $\mathbb{Z}[u]/(u^{n+1})$, where $u$ is a 4-dimensional generator. This is analogous to
|
https://en.wikipedia.org/wiki/Photobleaching
|
In optics, photobleaching (sometimes termed fading) is the photochemical alteration of a dye or a fluorophore molecule such that it is permanently unable to fluoresce. This is caused by cleaving of covalent bonds or non-specific reactions between the fluorophore and surrounding molecules. Such irreversible modifications in covalent bonds are caused by transition from a singlet state to the triplet state of the fluorophores. The number of excitation cycles to achieve full bleaching varies. In microscopy, photobleaching may complicate the observation of fluorescent molecules, since they will eventually be destroyed by the light exposure necessary to stimulate them into fluorescing. This is especially problematic in time-lapse microscopy.
However, photobleaching may also be used prior to applying the (primarily antibody-linked) fluorescent molecules, in an attempt to quench autofluorescence. This can help improve the signal-to-noise ratio.
Photobleaching may also be exploited to study the motion and/or diffusion of molecules, for example via the FRAP technique, in which movement of cellular components can be confirmed by observing a recovery of fluorescence at the site of photobleaching, or the FLIP technique, in which multiple rounds of photobleaching are performed so that the spread of fluorescence loss can be observed in the cell.
Loss of activity caused by photobleaching can be controlled by reducing the intensity or time-span of light exposure, by increasing the concentration of fluorophores, by reducing the frequency and thus the photon energy of the input light, or by employing more robust fluorophores that are less prone to bleaching (e.g. Cyanine Dyes, Alexa Fluors or DyLight Fluors, AttoDyes, Janelia Dyes and others). To a reasonable approximation, a given molecule will be destroyed after a constant exposure (intensity of emission X emission time X number of cycles) because, in a constant environment, each absorption-emission cycle has an equal probability of causing photobleach
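The constant-exposure statement can be phrased as a simple probabilistic model: if each absorption-emission cycle independently bleaches the fluorophore with some small probability $p$ (a modelling parameter, not a value given in the text), then
$$P(\text{unbleached after } N \text{ cycles}) = (1 - p)^N \approx e^{-pN},$$
so the expected number of cycles survived is $1/p$ and, to this approximation, bleaching depends only on the accumulated exposure (intensity of emission × emission time × number of cycles), not on how that exposure is distributed in time.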
|
https://en.wikipedia.org/wiki/Compaq%20Portable%20II
|
The Compaq Portable II is the fourth product in the Compaq Portable series to be brought out by Compaq Computer Corporation. Released in 1986 at a price of US$3499, the Portable II improved greatly upon its predecessor, the Compaq Portable 286, which had been Compaq's version of the PC AT in the original Compaq Portable chassis; the Portable 286 came equipped with a 6/8-MHz 286 and a high-speed 20-MB hard drive, while the Portable II included an 8 MHz processor, and was lighter and smaller than the previous Compaq Portables. There were four models of the Compaq Portable II. The basic Model 1 shipped with one 5.25" floppy drive and 256 KB of RAM. The Model 2 added a second 5.25" floppy drive and sold for $3599. The Model 3 shipped with a 10 MB hard disk in addition to one 5.25" floppy drive and 640 KB of RAM for $4799 at launch. The Model 4 upgraded the Model 3 with a 20 MB hard drive and sold for $4999. There also may have been a 4.1 MB hard drive included at one point. The Compaq Portable II was significantly lighter than its predecessors; the Model 1 weighed just 23.6 pounds compared to the 30.5 pounds of the Compaq Portable 286. Compaq shipped the system with only a small demo disk; MS-DOS 3.1 had to be purchased separately.
There are at least two reported cases of improperly serviced computers exploding when the non-rechargeable lithium battery on the motherboard was connected to the power supply. There were no recorded injuries. The Compaq Portable II was succeeded by the Compaq Portable III in 1987.
Hardware
The Compaq Portable II had room for additional aftermarket upgrades. Compaq manufactured four memory expansion boards: 512 KB and 2048 KB ISA memory cards, and 512 KB and 1536 KB memory boards that attached to the back of the motherboard. With 640 KB installed on the motherboard and both the ISA card and the expansion board, the computer could be upgraded to a maximum of 4.2 MB of RAM. The motherboard also had space for an optional 80287 math coprocessor. There
|
https://en.wikipedia.org/wiki/Mercurial
|
Mercurial is a distributed revision control tool for software developers. It is supported on Microsoft Windows and Unix-like systems, such as FreeBSD, macOS, and Linux.
Mercurial's major design goals include high performance and scalability, decentralization, fully distributed collaborative development, robust handling of both plain text and binary files, and advanced branching and merging capabilities, while remaining conceptually simple. It includes an integrated web-interface. Mercurial has also taken steps to ease the transition for users of other version control systems, particularly Subversion. Mercurial is primarily a command-line driven program, but graphical user interface extensions are available, e.g. TortoiseHg, and several IDEs offer support for version control with Mercurial. All of Mercurial's operations are invoked as arguments to its driver program hg (a reference to Hg – the chemical symbol of the element mercury).
Olivia Mackall originated Mercurial and served as its lead developer until late 2016. Mercurial is released as free software under the GPL-2.0-or-later license. It is mainly implemented using the Python programming language, but includes a binary diff implementation written in C.
History
Mackall first announced Mercurial on 19 April 2005. The impetus for this was the announcement earlier that month by Bitmover that they were withdrawing the free version of BitKeeper because of the development of SourcePuller.
BitKeeper had been used for the version control requirements of the Linux kernel project. Mackall decided to write a distributed version control system as a replacement for use with the Linux kernel. This project started a few days after the now well-known Git project was initiated by Linus Torvalds with similar aims.
The Linux kernel project decided to use Git rather than Mercurial, but Mercurial is now used by many other projects (see below).
In an answer on the Mercurial mailing list, Olivia Mackall explained how the name
|
https://en.wikipedia.org/wiki/ISO/IEC%2010967
|
ISO/IEC 10967, Language independent arithmetic (LIA), is a series of standards on computer arithmetic. It is compatible with ISO/IEC/IEEE 60559:2011, better known as IEEE 754-2008, and much of the specification concerns IEEE 754 special values (though such values are not required by LIA itself, unless the parameter iec559 is true). It was developed by the working group ISO/IEC JTC1/SC22/WG11, which was disbanded in 2011.
LIA consists of three parts:
Part 1: Integer and floating point arithmetic, second edition published 2012.
Part 2: Elementary numerical functions, first edition published 2001.
Part 3: Complex integer and floating point arithmetic and complex elementary numerical functions, first edition published 2006.
Parts
Part 1
Part 1 deals with the basic integer and floating point datatypes (for multiple radices, including 2 and 10), but unlike IEEE 754-2008 not the representation of the values. Part 1 also deals with basic arithmetic, including comparisons, on values of such datatypes. The parameter iec559 is expected to be true for most implementations of LIA-1. Part 1 was revised, to the second edition, to become more in line with the specifications in parts 2 and 3.
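As an informal illustration (Java's double is an IEEE 754 binary64 type, so it exhibits the special values for which LIA-1 specifies behaviour when the iec559 parameter is true), the following sketch shows a few of those values arising from ordinary arithmetic:

```java
// Java's double is an IEEE 754 binary64 type, so the special values that LIA-1
// covers (when iec559 is true) are observable from ordinary arithmetic.
public class Lia1SpecialValuesDemo {
    public static void main(String[] args) {
        double posInf = 1.0 / 0.0;      // floating-point division by zero yields +infinity
        double nan = 0.0 / 0.0;         // an invalid operation yields NaN
        double negZero = -0.0;          // signed zero is a distinct bit pattern

        System.out.println(posInf);                 // Infinity
        System.out.println(Double.isNaN(nan));      // true
        System.out.println(nan == nan);             // false: NaN compares unequal to itself
        System.out.println(negZero == 0.0);         // true: -0.0 and 0.0 compare equal
        System.out.println(1.0 / negZero);          // -Infinity: the sign of zero is observable
    }
}
```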
Part 2
Part 2 deals with some additional "basic" operations on integer and floating point datatype values, but focuses primarily on specifying requirements on numerical versions of elementary functions. Much of the specifications in LIA-2 are inspired by the specifications in Ada for elementary functions.
Part 3
Part 3 generalizes parts 1 and 2 to deal with imaginary and complex datatypes and arithmetic and elementary functions on such values. Much of the specifications in LIA-3 are inspired by the specifications for imaginary and complex datatypes and operations in C, Ada and Common Lisp.
Bindings
Each of the parts provides suggested bindings for a number of programming languages. These are not part of the LIA standards, just suggestions, and are not complete. Authors of a programming l
|
https://en.wikipedia.org/wiki/ICL%202900%20Series
|
The ICL 2900 Series was a range of mainframe computer systems announced by the British manufacturer ICL on 9 October 1974. The company had started development under the name "New Range" immediately on its formation in 1968. The range was not designed to be compatible with any previous machines produced by the company, nor for compatibility with any competitor's machines: rather, it was conceived as a synthetic option, combining the best ideas available from a variety of sources.
In marketing terms, the 2900 Series was superseded by Series 39 in the mid-1980s; however, Series 39 was essentially a new set of machines implementing the 2900 Series architecture, as were subsequent ICL machines branded "Trimetra".
Origins
When ICL was formed in 1968 as a result of the merger of International Computers and Tabulators (ICT) with English Electric, Leo Marconi and Elliott Automation, the company considered several options for its future product line. These included enhancements to either ICT's 1900 Series or the English Electric System 4, and a development based on J. K. Iliffe's Basic Language Machine. The option finally selected was the so-called Synthetic Option: a new design conceptualized from scratch.
As the name implies, the design was influenced by many sources, including earlier ICL machines. The design of Burroughs mainframes was influential, although ICL rejected the concept of optimising the design for one high-level language. The Multics system provided other ideas, notably in the area of protection. However, the biggest single outside influence was probably the MU5 machine developed at Manchester University.
Architectural concepts
The virtual machine
The 2900 Series architecture uses the concept of a virtual machine as the set of resources available to a program. The concept of a virtual machine in the 2900 Series architecture differs from the term as used in other environments. Because each program runs in its own virtual machine, the concept may be like
|
https://en.wikipedia.org/wiki/L-notation
|
L-notation is an asymptotic notation analogous to big-O notation, denoted as $L_n[\alpha, c]$ for a bound variable $n$ tending to infinity. Like big-O notation, it is usually used to roughly convey the rate of growth of a function, such as the computational complexity of a particular algorithm.
Definition
It is defined as
$$L_n[\alpha, c] = e^{(c + o(1))\,(\ln n)^{\alpha}\,(\ln \ln n)^{1 - \alpha}},$$
where c is a positive constant, and $\alpha$ is a constant with $0 \le \alpha \le 1$.
L-notation is used mostly in computational number theory, to express the complexity of algorithms for difficult number theory problems, e.g. sieves for integer factorization and methods for solving discrete logarithms. The benefit of this notation is that it simplifies the analysis of these algorithms. The $e^{c (\ln n)^{\alpha} (\ln \ln n)^{1 - \alpha}}$ expresses the dominant term, and the $o(1)$ takes care of everything smaller.
When $\alpha$ is 0, then
$$L_n[0, c] = e^{(c + o(1)) \ln \ln n} = (\ln n)^{c + o(1)}$$
is a polylogarithmic function (a polynomial function of ln n);
when $\alpha$ is 1, then
$$L_n[1, c] = e^{(c + o(1)) \ln n} = n^{c + o(1)}$$
is a fully exponential function of ln n (and thereby polynomial in n).
If $\alpha$ is between 0 and 1, the function is subexponential of ln n (and superpolynomial).
Examples
Many general-purpose integer factorization algorithms have subexponential time complexities. The best is the general number field sieve, which has an expected running time of
$$L_n\!\left[1/3,\, c\right] \quad \text{for } c = \left(\tfrac{64}{9}\right)^{1/3} \approx 1.923.$$
The best such algorithm prior to the number field sieve was the quadratic sieve, which has running time
$$L_n\!\left[1/2,\, 1\right].$$
For the elliptic curve discrete logarithm problem, the fastest general purpose algorithm is the baby-step giant-step algorithm, which has a running time on the order of the square-root of the group order n. In L-notation this would be
$$L_n\!\left[1,\, 1/2\right].$$
The existence of the AKS primality test, which runs in polynomial time, means that the time complexity for primality testing is known to be at most
$$L_n\!\left[0,\, c\right] = (\ln n)^{c + o(1)},$$
where c has been proven to be at most 6.
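A small numeric sketch can make these growth rates concrete. The Java snippet below evaluates the exponent of the leading term, $c\,(\ln n)^{\alpha}(\ln\ln n)^{1-\alpha}$, ignoring the $o(1)$ correction, for the quadratic sieve and the number field sieve at a few input sizes; the bit lengths chosen are arbitrary.

```java
// Numeric sketch: the exponent ln(L_n[alpha, c]) = c * (ln n)^alpha * (ln ln n)^(1-alpha),
// ignoring the o(1) term, for the quadratic sieve (alpha = 1/2, c = 1) and the
// general number field sieve (alpha = 1/3, c = (64/9)^(1/3) ~ 1.923).
public class LNotationDemo {
    static double lnL(double lnN, double alpha, double c) {
        return c * Math.pow(lnN, alpha) * Math.pow(Math.log(lnN), 1 - alpha);
    }

    public static void main(String[] args) {
        double cNfs = Math.cbrt(64.0 / 9.0);          // ~1.923
        for (int bits : new int[] {256, 512, 1024, 2048}) {
            double lnN = bits * Math.log(2);          // ln n for an n of the given bit length
            System.out.printf("n ~ 2^%d: QS ~ exp(%.0f), GNFS ~ exp(%.0f)%n",
                    bits, lnL(lnN, 0.5, 1.0), lnL(lnN, 1.0 / 3.0, cNfs));
        }
    }
}
```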
History
L-notation has been defined in various forms throughout the literature. The first use of it came from Carl Pomerance in his paper "Analysis and comparison of some integer factoring algorithms". This form had only the c parameter: the $\alpha$ in the formula was for
|
https://en.wikipedia.org/wiki/Base36
|
Base36 is a binary-to-text encoding scheme that represents binary data in an ASCII string format by translating it into a radix-36 representation. The choice of 36 is convenient in that the digits can be represented using the Arabic numerals 0–9 and the Latin letters A–Z (the ISO basic Latin alphabet).
Each base36 digit needs less than 6 bits of information to be represented.
Conversion
Signed 32- and 64-bit integers will only hold at most 6 or 13 base-36 digits, respectively (that many base-36 digits can overflow the 32- and 64-bit integers). For example, the 64-bit signed integer maximum value of "9223372036854775807" is "1Y2P0IJ32E8E7" in base-36.
Similarly, the 32-bit signed integer maximum value of "2147483647" is "ZIK0ZJ" in base-36.
Standard implementations
The C standard library since C89 supports base36 numbers via the strtol and strtoul functions.
In the Common Lisp standard (ANSI INCITS 226-1994), functions like parse-integer support a radix of 2 to 36.
Java SE supports conversion from/to String for bases from 2 up to 36, for example via Integer.toString(int i, int radix) and Integer.parseInt(String s, int radix); a short usage sketch follows this list.
Just like Java, JavaScript also supports conversion from/to String to different bases from 2 up to 36.
PHP, like Java, supports conversion from/to String to different bases from 2 up to 36 using the base_convert function, available since PHP 4.
Go supports conversion to string to different bases from 2 up to 36 using the built-in strconv.FormatInt(), and strconv.FormatUint() functions, and conversions from string encoded in different bases from 2 up to 36 using the built-in strconv.ParseInt(), and strconv.ParseUint() functions.
Python allows conversions of strings from base 2 to base 36.
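As a concrete illustration of the Java support mentioned above, the following snippet round-trips the 32- and 64-bit maximum values quoted earlier through base 36 using only the standard library:

```java
// Round-trip the 32- and 64-bit signed maxima through base 36 with the radix-aware
// conversion methods of the Java standard library. Java emits lowercase digits,
// so the results are upper-cased here to match the usual presentation.
public class Base36Demo {
    public static void main(String[] args) {
        String max32 = Integer.toString(Integer.MAX_VALUE, 36).toUpperCase(); // from 2147483647
        String max64 = Long.toString(Long.MAX_VALUE, 36).toUpperCase();       // from 9223372036854775807

        System.out.println("2^31 - 1 in base 36: " + max32);
        System.out.println("2^63 - 1 in base 36: " + max64);

        // Parsing is case-insensitive and recovers the original values.
        System.out.println(Integer.parseInt(max32, 36) == Integer.MAX_VALUE); // true
        System.out.println(Long.parseLong(max64, 36) == Long.MAX_VALUE);      // true
    }
}
```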
See also
Uuencoding
References
External links
A discussion about the proper name for base 36 at the Wordwizard Clubhouse
The Prime Lexicon, a list of words that are prime numbers in base 36
A Binary-Octal-Decimal-Hexadecimal-Base36 converter written in PHP
A C# base 36 encoder and decoder
sample in C# that demonstrates
|
https://en.wikipedia.org/wiki/Sofrito
|
Sofrito (Spanish), sofregit (Catalan), soffritto (Italian), or refogado (Portuguese) is a basic preparation in Mediterranean, Latin American, Spanish, Italian and Portuguese cooking. It typically consists of aromatic ingredients cut into small pieces and sautéed or braised in cooking oil for a long period of time over a low heat.
In modern Spanish cuisine, sofrito consists of garlic, onion and peppers cooked in olive oil, and optionally tomatoes or carrots. This is known as refogado in Portuguese-speaking nations, where only garlic, onions, and olive oil are considered essential, tomato and bay laurel leaves being the other most common ingredients.
Mediterranean
The earliest mentioned recipe of , from around the middle of the 14th century, was made with only onion and oil.
In Italian cuisine, the mixture of chopped onions, carrots and celery is called battuto, and then, slowly cooked in olive oil, becomes soffritto. It may also contain garlic, shallot, or leek.
In Greek cuisine, sofrito refers to a dish that is found almost exclusively in Corfu. It is served less commonly in other regions of Greece and is often referred to as 'Corfu sofrito' outside of Corfu. It is made with veal or beef, slowly cooked with garlic, wine, herbs, sugar and wine vinegar to produce an umami sauce with softened meat. It is usually served with rice and potatoes.
Latin America
In Cuban cuisine, is prepared in a similar fashion, but the main components are Spanish onions, garlic, and green or red bell peppers. is also often used instead of or in addition to bell peppers. It is a base for beans, stews, rices, and other dishes, including and . Other secondary components include tomato sauce, dry white wine, cumin, bay leaf, and cilantro. (a kind of spicy, cured sausage), (salt pork) and ham are added for specific recipes, such as beans.
In Dominican cuisine, is also called , and is a liquid mixture containing vinegar, water, and sometimes tomato juice. A or is used for rice, stews, beans, and other dishes. A typical Dominican is made
|
https://en.wikipedia.org/wiki/Pair%20of%20pants%20%28mathematics%29
|
In mathematics, a pair of pants is a surface which is homeomorphic to the three-holed sphere. The name comes from considering one of the removed disks as the waist and the two others as the cuffs of a pair of pants.
Pairs of pants are used as building blocks for compact surfaces in various theories. Two important applications are to hyperbolic geometry, where decompositions of closed surfaces into pairs of pants are used to construct the Fenchel-Nielsen coordinates on Teichmüller space, and in topological quantum field theory where they are the simplest non-trivial cobordisms between 1-dimensional manifolds.
Pants and pants decomposition
Pants as topological surfaces
A pair of pants is any surface that is homeomorphic to a sphere with three holes, which formally is the result of removing from the sphere three open disks with pairwise disjoint closures. Thus a pair of pants is a compact surface of genus zero with three boundary components.
The Euler characteristic of a pair of pants is equal to −1, and the only other surface with this property is the punctured torus (a torus minus an open disk).
Pants decompositions
The importance of the pairs of pants in the study of surfaces stems from the following property: define the complexity of a connected compact surface $S$ of genus $g$ with $b$ boundary components to be $\xi(S) = \max(3g - 3 + b,\, 0)$, and for a non-connected surface take the sum over all components. Then the only surfaces with negative Euler characteristic and complexity zero are disjoint unions of pairs of pants. Furthermore, for any surface $S$ and any simple closed curve $\gamma$ on $S$ which is not homotopic to a boundary component, the compact surface obtained by cutting $S$ along $\gamma$ has a complexity that is strictly less than $\xi(S)$. In this sense, pairs of pants are the only "irreducible" surfaces among all surfaces of negative Euler characteristic.
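As a concrete instance of this counting (not stated in the excerpt above, but following directly from the definitions): for a closed orientable surface $S$ of genus 2,
$$\xi(S) = 3g - 3 + b = 3 \cdot 2 - 3 + 0 = 3, \qquad \chi(S) = 2 - 2g = -2,$$
so a pants decomposition cuts $S$ along 3 disjoint simple closed curves into $-\chi(S) = 2$ pairs of pants, each of Euler characteristic $-1$.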
By a recursion argument, this implies that for any surface there is a system of simple closed curves which cut the surface into pairs of pants. This is ca
|
https://en.wikipedia.org/wiki/Mid-range
|
In statistics, the mid-range or mid-extreme is a measure of central tendency of a sample defined as the arithmetic mean of the maximum and minimum values of the data set:
$$M = \frac{\max_i x_i + \min_i x_i}{2}.$$
The mid-range is closely related to the range, a measure of statistical dispersion defined as the difference between maximum and minimum values.
The two measures are complementary in the sense that if one knows the mid-range and the range, one can find the sample maximum and minimum values.
The mid-range is rarely used in practical statistical analysis, as it lacks efficiency as an estimator for most distributions of interest, because it ignores all intermediate points, and lacks robustness, as outliers change it significantly. Indeed, for many distributions it is one of the least efficient and least robust statistics. However, it finds some use in special cases: it is the maximally efficient estimator for the center of a uniform distribution, trimmed mid-ranges address robustness, and as an L-estimator, it is simple to understand and compute.
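A minimal sketch of the computation (the sample values below are arbitrary): the mid-range is simply the mean of the sample minimum and maximum, and adding a single large outlier moves it substantially.

```java
import java.util.Arrays;

// Mid-range = (max + min) / 2. The second call shows its sensitivity to a single outlier.
public class MidRangeDemo {
    static double midRange(double[] xs) {
        double min = Arrays.stream(xs).min().getAsDouble();
        double max = Arrays.stream(xs).max().getAsDouble();
        return (min + max) / 2.0;
    }

    public static void main(String[] args) {
        double[] sample = {2, 3, 5, 7, 11};
        System.out.println(midRange(sample));        // (2 + 11) / 2 = 6.5
        double[] withOutlier = {2, 3, 5, 7, 11, 1000};
        System.out.println(midRange(withOutlier));   // (2 + 1000) / 2 = 501.0
    }
}
```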
Robustness
The midrange is highly sensitive to outliers and ignores all but two data points. It is therefore a very non-robust statistic, having a breakdown point of 0, meaning that a single observation can change it arbitrarily. Further, it is highly influenced by outliers: increasing the sample maximum or decreasing the sample minimum by $x$ changes the mid-range by $x/2$, while it changes the sample mean, which also has breakdown point of 0, by only $x/n$. It is thus of little use in practical statistics, unless outliers are already handled.
A trimmed midrange is known as a midsummary – the n% trimmed midrange is the average of the n% and (100−n)% percentiles, and is more robust, having a breakdown point of n%. In the middle of these is the midhinge, which is the 25% midsummary. The median can be interpreted as the fully trimmed (50%) mid-range; this accords with the convention that the median of an even number of points is the mean of the two middle points.
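A minimal sketch of these estimators in Python (added for illustration, not part of the original text; it uses the standard library's quantile function and a made-up data set):

import statistics

def midrange(xs):
    # Arithmetic mean of the sample maximum and minimum.
    return (max(xs) + min(xs)) / 2

def midsummary(xs, p):
    # p% trimmed midrange: average of the p-th and (100 - p)-th percentiles.
    qs = statistics.quantiles(xs, n=100, method="inclusive")  # 99 cut points
    return (qs[p - 1] + qs[100 - p - 1]) / 2

data = [1, 2, 3, 4, 100]            # one large outlier
print(midrange(data))               # 50.5 -- dominated by the outlier
print(midsummary(data, 25))         # the midhinge; far less affected (3.0 here)
print(statistics.median(data))      # 3 -- the fully trimmed mid-range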
These trimmed mid
|
https://en.wikipedia.org/wiki/Programming%20complexity
|
Programming complexity (or software complexity) is a term that includes software properties that affect internal interactions. Several commentators distinguish between the terms "complex" and "complicated". Complicated implies being difficult to understand, but ultimately knowable. Complex, by contrast, describes the interactions between entities. As the number of entities increases, the number of interactions between them increases exponentially, making it impossible to know and understand them all. Similarly, higher levels of complexity in software increase the risk of unintentionally interfering with interactions, thus increasing the risk of introducing defects when changing the software. In more extreme cases, it can make modifying the software virtually impossible.
The idea of linking software complexity to software maintainability has been explored extensively by Professor Manny Lehman, who developed his Laws of Software Evolution. He and his co-author Les Belady explored numerous Software Metrics that could be used to measure the state of software, eventually concluding that the only practical solution is to use deterministic complexity models.
Measures
Several measures of software complexity have been proposed. Many of these, although yielding a good representation of complexity, do not lend themselves to easy measurement. Some of the more commonly used metrics are
McCabe's cyclomatic complexity metric
Halstead's software science metrics
Henry and Kafura introduced "Software Structure Metrics Based on Information Flow" in 1981, which measures complexity as a function of "fan-in" and "fan-out". They define fan-in of a procedure as the number of local flows into that procedure plus the number of data structures from which that procedure retrieves information. Fan-out is defined as the number of local flows out of that procedure plus the number of data structures that the procedure updates. Local flows relate to data passed to, and from procedures that cal
|
https://en.wikipedia.org/wiki/Heat%20gun
|
A heat gun is a device used to emit a stream of hot air, usually at temperatures between , with some hotter models running around , which can be held by hand. Heat guns usually have the form of an elongated body pointing at what is to be heated, with a handle fixed to it at right angles and a pistol grip trigger in the same pistol form factor as many other power tools.
Though it shares similarities with a hair dryer, it is not meant as a substitute for the latter, which safely spreads the heat out across its nozzle to prevent scalp burns and has a limited temperature range, while heat guns have a concentrated element and nozzle, along with higher temperatures, which can easily scald the scalp or set hair on fire.
Construction
A heat gun comprises a source of heat, usually an electrically heated element or a propane/liquefied petroleum gas flame; a mechanism to move the hot air, such as an electric fan, unless gas pressure is sufficient; a nozzle to direct the air, which may be a simple tube pointing in one direction, or specially shaped for purposes such as concentrating the heat on a small area or thawing a pipe but not the wall behind; a housing to contain the components and keep the operator safe; a mechanism to switch it on and off and control the temperature, such as a trigger; a handle; and a built-in or external stand if the gun is to be used hands-free. Gas-powered soldering irons sometimes have interchangeable hot-air blower tips to produce a very narrow stream of hot air suitable for working with surface-mount devices and shrinking heat-shrink tubing.
Focused infrared heaters are also used for localised heating.
Usage
Heat guns are used in physics, materials science, chemistry, engineering, and other laboratory and workshop settings. Different types of heat gun operating at different temperatures and with different airflow can be used to strip paint, shrink heat shrink tubing, shrink film, and shrink wrap packaging, dry out damp wood, bend and weld
|
https://en.wikipedia.org/wiki/Computational%20model
|
A computational model uses computer programs to simulate and study complex systems using an algorithmic or mechanistic approach. It is widely used in a diverse range of fields spanning from physics, engineering, chemistry and biology to economics, psychology, cognitive science and computer science.
The system under study is often a complex nonlinear system for which simple, intuitive analytical solutions are not readily available. Rather than deriving a mathematical analytical solution to the problem, experimentation with the model is done by adjusting the parameters of the system in the computer, and studying the differences in the outcome of the experiments. Operation theories of the model can be derived/deduced from these computational experiments.
Examples of common computational models are weather forecasting models, earth simulator models, flight simulator models, molecular protein folding models, Computational Engineering Models (CEM), and neural network models.
See also
Computational Engineering
Computational cognition
Reversible computing
Agent-based model
Artificial neural network
Computational linguistics
Computational human modeling
Decision field theory
Dynamical systems model of cognition
Membrane computing
Ontology (information science)
Programming language theory
Microscale and macroscale models
References
Models of computation
Mathematical modeling
|
https://en.wikipedia.org/wiki/Windows%20Firewall
|
Windows Firewall (officially called Microsoft Defender Firewall in Windows 10 version 2004 and later) is a firewall component of Microsoft Windows. It was first included in Windows XP SP2 and Windows Server 2003 SP1. Before the release of Windows XP Service Pack 2, it was known as the "Internet Connection Firewall."
Overview
When Windows XP was originally shipped in October 2001, it included a limited firewall called "Internet Connection Firewall". It was disabled by default due to concerns with backward compatibility, and the configuration screens were buried away in network configuration screens that many users never looked at. As a result, it was rarely used. In mid-2003, the Blaster worm attacked a large number of Windows machines, taking advantage of flaws in the RPC Windows service. Several months later, the Sasser worm did something similar. The ongoing prevalence of these worms through 2004 resulted in unpatched machines being infected within a matter of minutes. Because of these incidents, as well as other criticisms that Microsoft was not being active in protecting customers from threats, Microsoft decided to significantly improve both the functionality and the interface of Windows XP's built-in firewall, rebrand it as Windows Firewall, and switch it on by default beginning with Windows XP SP2.
One of three profiles is activated automatically for each network interface:
Public assumes that the network is shared with the world and is the most restrictive profile.
Private assumes that the network is isolated from the Internet and allows more inbound connections than public. A network is never assumed to be private unless designated as such by a local administrator.
Domain profile is the least restrictive. It allows more inbound connections to allow for file sharing etc. The domain profile is selected automatically when connected to a network with a domain trusted by the local computer.
Security log capabilities are included, which can record IP addresses and o
|
https://en.wikipedia.org/wiki/Algebra%20i%20Logika
|
Algebra i Logika (English: Algebra and Logic) is a peer-reviewed Russian mathematical journal founded in 1962 by Anatoly Ivanovich Malcev, published by the Siberian Fund for Algebra and Logic at Novosibirsk State University. An English translation of the journal has been published by Springer-Verlag as Algebra and Logic since 1968. It publishes papers presented at the meetings of the "Algebra and Logic" seminar at Novosibirsk State University. The journal is edited by academician Yury Yershov.
The journal is reviewed cover-to-cover in Mathematical Reviews and Zentralblatt MATH.
Abstracting and indexing
Algebra i Logika is indexed and abstracted in a number of bibliographic databases.
According to the Journal Citation Reports, the journal had a 2020 impact factor of 0.753.
References
External links
Algebra i Logika website
Algebra and Logic website
Mathematics journals
Academic journals established in 1962
Novosibirsk State University
Magazines published in Novosibirsk
Russian-language journals
|
https://en.wikipedia.org/wiki/Component-based%20software%20engineering
|
Component-based software engineering (CBSE), also called component-based development (CBD), is a style of software engineering that aims to build software out of loosely-coupled, modular components. It emphasizes the separation of concerns among different parts of a software system.
Definition and characteristics of components
An individual software component is a software package, a web service, a web resource, or a module that encapsulates a set of related functions or data.
Components communicate with each other via interfaces. Each component provides an interface (called a provided interface) through which other components can use it. When a component uses another component's interface, that interface is called a used interface.
In the UML illustrations in this article, provided interfaces are represented by lollipop-symbols, while used interfaces are represented by open socket symbols.
Components must be substitutable, meaning that a component must be replaceable by another one having the same interfaces without breaking the rest of the system.
Components should be reusable.
Component-based usability testing should be considered when software components directly interact with users.
Components should be:
fully documented
thoroughly tested
robust - with comprehensive input-validity checking
able to pass back appropriate error messages or return codes
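A minimal sketch of a provided interface and of substitutability, as described above (an added Python illustration with hypothetical component names, not tied to any particular component framework):

from abc import ABC, abstractmethod

class Storage(ABC):
    # Provided interface: any conforming component exposes these operations.
    @abstractmethod
    def put(self, key: str, value: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStorage(Storage):
    # One possible component implementing the Storage interface.
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data[key]

class Cache:
    # This component depends only on the interface it uses, not on a concrete
    # class, so any Storage implementation can be substituted without changes.
    def __init__(self, backend: Storage):
        self._backend = backend
    def store(self, key, value):
        self._backend.put(key, value)

cache = Cache(InMemoryStorage())
cache.store("greeting", b"hello")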
History
The idea that software should be componentized - built from prefabricated components - first became prominent with Douglas McIlroy's address at the NATO conference on software engineering in Garmisch, Germany, 1968, titled Mass Produced Software Components. The conference set out to counter the so-called software crisis. McIlroy's subsequent inclusion of pipes and filters into the Unix operating system was the first implementation of an infrastructure for this idea.
Brad Cox of Stepstone largely defined the modern concept of a software component. He called them Software ICs and set out to crea
|
https://en.wikipedia.org/wiki/Compression%20theorem
|
In computational complexity theory, the compression theorem is an important theorem about the complexity of computable functions.
The theorem states that there exists no largest complexity class, with computable boundary, which contains all computable functions.
Compression theorem
Given a Gödel numbering φ of the computable functions and a Blum complexity measure Φ, where the complexity class for a boundary function f is defined as
C(f) := { φ_i : φ_i is total computable and Φ_i(x) ≤ f(x) for all but finitely many x }.
Then there exists a total computable function g so that for all i
Dom(φ_i) = Dom(φ_{g(i)})
and
C(φ_i) ⊊ C(φ_{g(i)}).
References
.
.
Computational complexity theory
Structural complexity theory
Theorems in the foundations of mathematics
|
https://en.wikipedia.org/wiki/G.%20Mike%20Reed
|
George Michael ("Mike") Reed is an American computer scientist. He has contributed to theoretical computer science in general and CSP in particular.
Mike Reed has a doctorate in pure mathematics from Auburn University, United States, and a doctorate in computation from Oxford University, England. He has an interest in mathematical topology.
Reed was a Senior Research Associate at NASA Goddard Space Flight Center. From 1986 to 2005, he was at the Oxford University Computing Laboratory (now the Oxford University Department of Computer Science) in England where he was also a Fellow in Computation of St Edmund Hall, Oxford (1986–2005). In 2005, he became Director of UNU/IIST, Macau, part of the United Nations University.
References
External links
Year of birth missing (living people)
Living people
Auburn University alumni
Alumni of the University of Oxford
Members of the Department of Computer Science, University of Oxford
Fellows of St Edmund Hall, Oxford
Academic staff of United Nations University
American computer scientists
Formal methods people
Topologists
20th-century American mathematicians
21st-century American mathematicians
|
https://en.wikipedia.org/wiki/Marcel-Paul%20Sch%C3%BCtzenberger
|
Marcel-Paul "Marco" Schützenberger (24 October 1920 – 29 July 1996) was a French mathematician and Doctor of Medicine. He worked in the fields of formal language, combinatorics, and information theory. In addition to his formal results in mathematics, he was "deeply involved in [a] struggle against the votaries of [neo-]Darwinism", a stance which has resulted in some mixed reactions from his peers and from critics of his stance on evolution. Several notable theorems and objects in mathematics as well as computer science bear his name (for example Schutzenberger group or the Chomsky–Schützenberger hierarchy). Paul Schützenberger was his great-grandfather.
In the late 1940s, he was briefly married to the psychologist Anne Ancelin Schützenberger.
Contributions to medicine and biology
Schützenberger's first doctorate, in medicine, was awarded in 1948 from the Faculté de Médecine de Paris. His doctoral thesis, on the statistical study of biological sex at birth, was distinguished by the Baron Larrey Prize from the French Academy of Medicine.
Biologist Jaques Besson, a co-author with Schützenberger on a biological topic, while noting that Schützenberger is perhaps most remembered for work in pure mathematical fields, credits him for likely being responsible for the introduction of statistical sequential analysis in French hospital practice.
Contributions to mathematics, computer science, and linguistics
Schützenberger's second doctorate was awarded in 1953 from Université Paris III. This work, developed from earlier results is counted amongst the early influential French academic work in information theory. His later impact in both linguistics and combinatorics is reflected by two theorems in formal linguistics (the Chomsky–Schützenberger enumeration theorem and the Chomsky–Schützenberger representation theorem), and one in combinatorics (the Schützenberger theorem). With Alain Lascoux, Schützenberger is credited with the foundation of the notion of the plactic mo
|
https://en.wikipedia.org/wiki/John%20Fitzgerald%20%28computer%20scientist%29
|
John S. Fitzgerald FBCS (born 1965) is a British computer scientist. He is a professor at Newcastle University. He was the head of the School of Computing before taking on the role of Dean of Strategic Projects in the university’s Faculty of Science, Agriculture and Engineering. His research interests are in the area of dependable computer systems and formal methods, with a background in the VDM. He is a former Chair of Formal Methods Europe and committee member of BCS-FACS.
Education
Fitzgerald was born in Belfast, Northern Ireland, and was educated at Bangor Grammar School and the Victoria University of Manchester. He holds the BSc in Computing and Information Systems and the PhD degrees from the Department of Computer Science at Manchester.
Selected books
Bicarregui, J.C., Fitzgerald, J.S. and Lindsay, P.A. et al., Proof in VDM: a Practitioner's Guide. Springer-Verlag Formal Approaches to Computing and Information Technology (FACIT), 1994. .
Fitzgerald, J.S. and Larsen, P.G., Modelling Systems: Practical Tools and Techniques in Software Engineering. Cambridge University Press, 1998. . (Japanese Edition pub. Iwanami Shoten, 2003. .)
Fitzgerald, J.S., Larsen, P.G., Mukherjee, P. et al., Validated Designs for Object-oriented Systems. Springer-Verlag, 2005. .
See also
Colleagues at Newcastle University:
Cliff Jones
Brian Randell
References
External links
Home page
1965 births
Living people
Scientists from Belfast
Alumni of the Victoria University of Manchester
British computer scientists
Formal methods people
Academics of Newcastle University
Computer science writers
Fellows of the British Computer Society
People educated at Bangor Grammar School
|
https://en.wikipedia.org/wiki/Marconi%20Prize
|
The Marconi Prize is an annual award recognizing achievements and advancements made in the field of communications (radio, mobile, wireless, telecommunications, data communications, networks, and the Internet). The prize is awarded by the Marconi Society, and it includes a $100,000 honorarium and a work of sculpture. Recipients of the prize are honored at the Marconi Society's annual symposium and gala.
Occasionally, the Marconi Society Lifetime Achievement Award is bestowed on legendary late-career individuals, recognizing their transformative contributions and remarkable impact on the field of communications and on the development of the careers of students, colleagues and peers throughout their lifetimes. So far, the recipients include Claude E. Shannon (2000, died in 2001), William O. Baker (2003, died in 2005), Gordon E. Moore (2005), Amos E. Joel Jr. (2009, died in 2008), Robert W. Galvin (2011, died in 2011), and Thomas Kailath (2017).
Criteria
The Marconi Prize is awarded based on the candidate’s contributions in the following areas:
The significance of the impact of the nominee’s work on widely-used technology.
The scientific importance of the nominee’s work in setting the stage for, influencing, and advancing the field beyond the nominee’s own achievements.
The nominee’s contributions to innovation and entrepreneurship by introducing completely new ideas, methods, or technologies. These may include forming, leading, or advising organizations, mentoring students on moving ideas from research to implementation, or fostering new industries/enabling scale implementation.
The social and humanitarian impact of the nominee’s contributions to the design, development, and/or deployment of new communication technologies or communications public policies that promote social development and/or inclusiveness.
Marconi Fellow
The Marconi Prize winners are also named as Marconi Fellows. The foundation and the prize are named in honor of Guglielmo Marconi, a Nob
|
https://en.wikipedia.org/wiki/Early%20completion
|
Early completion is a property of some classes of asynchronous circuit. It means that the output of a circuit may be available as soon as sufficient inputs have arrived to allow it to be determined. For example, if all of the inputs to a mux have arrived, and all are the same, but the select line has not yet arrived, the circuit can still produce an output. Since all the inputs are identical, the select line is irrelevant.
Example: an asynchronous ripple-carry adder
A ripple carry adder is a simple adder circuit, but slow because the carry signal has to propagate through each stage of the adder:
This diagram shows a 5-bit ripple carry adder in action. There is a five-stage long carry path, so every time two numbers are added with this adder, it needs to wait for the carry to propagate through all five stages.
By switching to dual-rail signalling for the carry bit, each stage can signal its carry out as soon as it is known. If both inputs to a stage are 1, then the carry out will be 1 no matter what the carry in is. If both inputs are 0, then the carry out will be 0. This early completion cuts down on the maximum length of the carry chain in most cases:
For the input shown in the picture, two of the carry-out bits can be known as soon as the inputs arrive. This means that the maximum carry chain length is three, not five. If the adder uses dual-rail signalling for inputs and outputs, it can indicate completion as soon as all the carry chains have completed.
On average, an n-bit asynchronous ripple carry adder will finish in O(log n) time. By extending this approach to carry look-ahead adders, it is possible to add in O(log log n) time.
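The following is an illustrative Python sketch of the dual-rail early-completion idea (added here, not from the original article; the helper name is made up): each stage whose two input bits agree resolves its carry-out immediately, and only a stage whose bits differ has to wait for the incoming carry, so the longest run of "differing" stages determines how far a carry must actually ripple.

def carry_chain_lengths(a_bits, b_bits):
    # a_bits, b_bits: least-significant-first lists of 0/1.
    # Returns, for each stage, how many consecutive earlier stages the carry
    # had to ripple through before this stage's carry-out could be produced.
    lengths = []
    run = 0
    for a, b in zip(a_bits, b_bits):
        if a == b:
            # Early completion: carry-out = a (= b), independent of carry-in.
            run = 0
        else:
            # a != b: carry-out equals carry-in, so the chain grows by one stage.
            run += 1
        lengths.append(run)
    return lengths

# 5-bit example: stages where the input bits agree cut the carry chain.
print(carry_chain_lengths([1, 0, 1, 1, 0], [1, 1, 0, 0, 0]))  # [0, 1, 2, 3, 0]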
External links
"Self-timed carry-lookahead adders" by Fu-Chiung Cheng, Stephen H. Unger, Michael Theobald.
Digital electronics
|
https://en.wikipedia.org/wiki/Clock%20gating
|
In computer architecture, clock gating is a popular power management technique used in many synchronous circuits for reducing dynamic power dissipation, by removing the clock signal when the circuit is not in use or is ignoring the clock signal. Clock gating saves power by pruning the clock tree, at the cost of adding more logic to a circuit. Pruning the clock disables portions of the circuitry so that the flip-flops in them do not have to switch states. Switching states consumes power. When not being switched, the switching power consumption goes to zero, and only leakage currents are incurred.
Although asynchronous circuits by definition do not have a global "clock", the term perfect clock gating is used to illustrate how various clock gating techniques are simply approximations of the data-dependent behavior exhibited by asynchronous circuitry. As the granularity on which one gates the clock of a synchronous circuit approaches zero, the power consumption of that circuit approaches that of an asynchronous circuit: the circuit only generates logic transitions when it is actively computing.
Details
An alternative solution to clock gating is to use clock-enable (CE) logic on the synchronous data path, employing an input multiplexer; e.g., for D-type flip-flops, in C/Verilog-style notation: Dff = CE ? D : Q;
where Dff is the D-input of the D-type flip-flop, D is the module's information input (without the CE input), and Q is the D-type flip-flop output. This type of clock gating is free of race conditions and is preferred for FPGA designs and for clock gating of small circuits. In FPGAs, every D-type flip-flop has an additional CE input signal.
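A small behavioural sketch of that clock-enable multiplexer (an added Python illustration, not from the original article): on every active clock edge the flip-flop reloads either the new data D or its own output Q, depending on CE, which has the same externally visible effect as gating the clock off while CE is low.

class DffWithCE:
    # Behavioural model of a D flip-flop whose data input is the mux CE ? D : Q.
    def __init__(self):
        self.q = 0
    def clock(self, d, ce):
        # Evaluated once per active clock edge.
        self.q = d if ce else self.q
        return self.q

ff = DffWithCE()
for d, ce in [(1, 1), (0, 0), (0, 1), (1, 0)]:
    print(ff.clock(d, ce))   # 1, 1, 0, 0 -- state only changes when CE is high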
Clock gating works by taking the enable conditions attached to registers, and uses them to gate the clocks. A design must contain these enable conditions in order to use and benefit from clock gating. This clock gating process can also save significant die area as well as power, since it removes large numbers of muxes and replaces them with clock gating l
|
https://en.wikipedia.org/wiki/Quasi-delay-insensitive%20circuit
|
In digital logic design, an asynchronous circuit is quasi delay-insensitive (QDI) when it operates correctly independently of gate and wire delays, with the weakest exception necessary to be Turing-complete.
Overview
Pros
Robust to process variation, temperature fluctuation, circuit redesign, and FPGA remapping.
Natural event sequencing facilitates complex control circuitry.
Automatic clock gating and compute-dependent cycle time can save dynamic power and increase throughput by optimizing for average-case workload characteristics instead of worst-case.
Cons
Delay insensitive encodings generally require twice as many wires for the same data.
Communication protocols and encodings generally require twice as many devices for the same functionality.
Chips
QDI circuits have been used to manufacture a large number of research chips, a small selection of which follows.
Caltech's asynchronous microprocessor
Tokyo University's TITAC and TITAC-2 processors
Theory
The simplest QDI circuit is a ring oscillator implemented using a cycle of inverters. Each gate drives two events on its output node: either the pull-up network drives the node's voltage from GND to Vdd, or the pull-down network drives it from Vdd to GND. This gives the ring oscillator six events in total.
Multiple cycles may be connected using a multi-input gate. A c-element, which waits for its inputs to match before copying the value to its output, may be used to synchronize multiple cycles. If one cycle reaches the c-element before another, it is forced to wait. Synchronizing three or more of these cycles creates a pipeline allowing the cycles to trigger one after another.
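A behavioural sketch of the c-element just described (an added Python illustration, not taken from the article): the output copies the inputs only when they agree, and otherwise holds its previous value, which is what forces an earlier cycle to wait for a later one.

class CElement:
    def __init__(self, init=0):
        self.out = init
    def update(self, a, b):
        # Copy the inputs to the output only when they match; otherwise hold state.
        if a == b:
            self.out = a
        return self.out

c = CElement()
print(c.update(1, 0))  # 0 -- inputs disagree, output holds its old value
print(c.update(1, 1))  # 1 -- both cycles have arrived, output fires
print(c.update(0, 1))  # 1 -- holds until both inputs return to 0
print(c.update(0, 0))  # 0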
If cycles are known to be mutually exclusive, then they may be connected using combinational logic (AND, OR). This allows the active cycle to continue regardless of the inactive cycles, and is generally used to implement delay insensitive encodings.
For larger systems, this is too much to manage. So, they are partitioned into processes. Each
|
https://en.wikipedia.org/wiki/Larmor%20formula
|
In electrodynamics, the Larmor formula is used to calculate the total power radiated by a nonrelativistic point charge as it accelerates. It was first derived by J. J. Larmor in 1897, in the context of the wave theory of light.
When any charged particle (such as an electron, a proton, or an ion) accelerates, energy is radiated in the form of electromagnetic waves. For a particle whose velocity is small relative to the speed of light (i.e., nonrelativistic), the total power that the particle radiates (when considered as a point charge) can be calculated by the Larmor formula:
P = (2/3) q²a² / (4πε₀c³) = q²a² / (6πε₀c³)   (SI units)
P = (2/3) q²a² / c³   (Gaussian units)
where a is the proper acceleration, q is the charge, and c is the speed of light. A relativistic generalization is given by the Liénard–Wiechert potentials.
In either unit system, the power radiated by a single electron can be expressed in terms of the classical electron radius r_e and electron mass m_e as:
P = (2/3) m_e r_e a² / c.
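As a quick numerical illustration of the SI-unit formula (added code, not part of the original text; the acceleration value is an arbitrary example):

import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
C    = 2.99792458e8       # speed of light, m/s
E    = 1.602176634e-19    # elementary charge, C

def larmor_power(q, a):
    # Total radiated power of a nonrelativistic point charge (SI units).
    return q**2 * a**2 / (6 * math.pi * EPS0 * C**3)

# Example: an electron undergoing an (arbitrary) acceleration of 1e20 m/s^2.
print(larmor_power(E, 1e20))   # ~ 5.7e-14 W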
One implication is that an electron orbiting around a nucleus, as in the Bohr model, should lose energy, fall to the nucleus and the atom should collapse. This puzzle was not solved until quantum theory was introduced.
Derivation
Derivation 1: Mathematical approach (using CGS units)
We first need to find the form of the electric and magnetic fields. The fields can be written (for a fuller derivation see Liénard–Wiechert potential)
E(r, t) = q (n − β)(1 − β²) / [(1 − β·n)³ R²] + (q/c) n × [(n − β) × β̇] / [(1 − β·n)³ R]
and
B(r, t) = n × E(r, t),
where β is the charge's velocity divided by c, β̇ is the charge's acceleration divided by c, n is a unit vector in the r − r₀ direction, R is the magnitude of r − r₀, and r₀ is the charge's location. The terms on the right are evaluated at the retarded time t_ret = t − R/c.
The right-hand side is the sum of the electric fields associated with the velocity and the acceleration of the charged particle. The velocity field depends only upon β while the acceleration field depends on both β and β̇ and the angular relationship between the two. Since the velocity field is proportional to 1/R², it falls off very quickly with distance. On the other hand, the acceleration field is proportional to 1/R, which means
|
https://en.wikipedia.org/wiki/Don%20Sannella
|
Donald T. Sannella FRSE is professor of computer science in the Laboratory for Foundations of Computer Science, at the School of Informatics, University of Edinburgh, Scotland.
Sannella graduated from Yale University, University of California, Berkeley and University of Edinburgh with degrees in computer science. His research interests include: algebraic specification and formal software development, correctness of modular systems, types and functional programming, resource certification for mobile code.
Sannella is founder of the European Joint Conferences on Theory and Practice of Software, a confederation of computer science conferences, held annually in Europe since 1998.
He is editor-in-chief of the journal Theoretical Computer Science,
and is co-founder and CEO of Contemplate Ltd. His father is Ted Sannella.
Honours and awards
In 2014 Sannella was elected a Fellow of the Royal Society of Edinburgh.
Personal life
Don Sannella loves to ski and is often found out on the slopes.
References
External links
Official home page
Personal home page
Publications
Year of birth missing (living people)
Living people
Scottish computer scientists
Alumni of the University of Edinburgh
Academics of the University of Edinburgh
Formal methods people
Academic journal editors
Place of birth missing (living people)
Yale University alumni
University of California alumni
Fellows of the Royal Society of Edinburgh
|
https://en.wikipedia.org/wiki/%CE%98%20%28set%20theory%29
|
In set theory, Θ (pronounced like the letter theta) is the least nonzero ordinal α such that there is no surjection from the reals onto α.
If the axiom of choice (AC) holds (or even if the reals can be wellordered), then Θ is simply , the cardinal successor of the cardinality of the continuum. However, Θ is often studied in contexts where the axiom of choice fails, such as models of the axiom of determinacy.
Θ is also the supremum of the lengths of all prewellorderings of the reals.
Proof of existence
It may not be obvious that it can be proven, without using AC, that there even exists a nonzero ordinal onto which there is no surjection from the reals (if there is such an ordinal, then there must be a least one because the ordinals are wellordered). However, suppose there were no such ordinal. Then to every ordinal α we could associate the set of all prewellorderings of the reals having length α. This would give an injection from the class of all ordinals into the set of all sets of orderings on the reals (which can be seen to be a set via repeated application of the powerset axiom). Now the axiom of replacement shows that the class of all ordinals is in fact a set. But that is impossible, by the Burali-Forti paradox.
Cardinal numbers
Descriptive set theory
Determinacy
|
https://en.wikipedia.org/wiki/AD%2B
|
In set theory, AD+ is an extension, proposed by W. Hugh Woodin, to the axiom of determinacy. The axiom, which is to be understood in the context of ZF plus DC (the axiom of dependent choice for real numbers), states two things:
Every set of reals is ∞-Borel.
For any ordinal λ less than Θ, any subset A of ω^ω, and any continuous function π : λ^ω → ω^ω, the preimage π⁻¹[A] is determined. (Here λ^ω is to be given the product topology, starting with the discrete topology on λ.)
The second clause by itself is referred to as ordinal determinacy.
See also
Axiom of projective determinacy
Axiom of real determinacy
Suslin's problem
Topological game
References
Axioms of set theory
Determinacy
|
https://en.wikipedia.org/wiki/Squirrel%20%28programming%20language%29
|
Squirrel is a high level imperative, object-oriented programming language, designed to be a lightweight scripting language that fits in the size, memory bandwidth, and real-time requirements of applications like video games.
MirthKit, a simple toolkit for making and distributing open source, cross-platform 2D games, uses Squirrel for its platform. It is used extensively by Code::Blocks for scripting and was also used in Final Fantasy Crystal Chronicles: My Life as a King. It is also used in Left 4 Dead 2, Portal 2 and Thimbleweed Park for scripted events and in NewDark, an unofficial Thief 2: The Metal Age engine update, to facilitate additional, simplified means of scripting mission events, aside of the regular C scripting.
Language features
Dynamic typing
Delegation
Classes, inheritance
Higher order functions
Generators
Cooperative threads (coroutines)
Tail recursion
Exception handling
Automatic memory management (mainly reference counting with backup garbage collector)
Weak references
Both compiler and virtual machine fit together in about 7k lines of C++ code
Optional 16-bit character strings
Syntax
Squirrel uses a C-like syntax.
Factorial in Squirrel
function factorial(x)
{
    if (x <= 1) {
        return 1;
    }
    else {
        return x * factorial(x-1);
    }
}
Generators
function not_a_random_number_generator(max) {
    local last = 42;
    local IM = 139968;
    local IA = 3877;
    local IC = 29573;
    for(;;) { // loops forever
        yield (max * (last = (last * IA + IC) % IM) / IM);
    }
}

local randtor = not_a_random_number_generator(100);
for(local i = 0; i < 10; i += 1)
    print(">"+resume randtor+"\n");
Classes and inheritance
class BaseVector {
    constructor(...)
    {
        if(vargv.len() >= 3) {
            x = vargv[0];
            y = vargv[1];
            z = vargv[2];
        }
    }
    x = 0;
    y = 0;
    z = 0;
}

class Vector3 extends BaseVector {
    function _add(other)
    {
        if(other instanceof ::Vector3)
            return ::Vector3(x+other.x,y+other.y,z+other.z);
        else
            throw "wro
|
https://en.wikipedia.org/wiki/IEEE%20802.1D
|
IEEE 802.1D is the Ethernet MAC bridges standard which includes bridging, Spanning Tree Protocol and others. It is standardized by the IEEE 802.1 working group. It includes details specific to linking many of the other 802 projects including the widely deployed 802.3 (Ethernet), 802.11 (Wireless LAN) and 802.16 (WiMax) standards.
Bridges using virtual LANs (VLANs) have never been part of 802.1D, but were instead specified in a separate standard, 802.1Q, originally published in 1998.
By 2014, all the functionality defined by IEEE 802.1D has been incorporated into either IEEE 802.1Q-2014 (Bridges and Bridged Networks) or IEEE 802.1AC (MAC Service Definition). 802.1D is expected to be officially withdrawn in 2022.
Publishing history:
1990 — Original publication (802.1D-1990).
1993 — standard ISO/IEC 10038:1993.
1998 — Revised version (802.1D-1998, ISO/IEC 15802-3:1998), incorporating the extensions P802.1p, P802.12e, 802.1j-1996 and 802.6k-1992.
2004 — Revised version (802.1D-2004), incorporating the extensions 802.11c-1998, 802.1t-2001, 802.1w-2001, and removing the original Spanning Tree Protocol, instead incorporating the Rapid Spanning Tree Protocol (RSTP) from 802.1w-2001.
Amendments to 802.1D-2004:
2004 — Small amendment (802.17a-2004) to add in 802.17 bridging support.
2007 — Small amendment (802.16k-2007) to add in 802.16 bridging support.
2012 — Shortest Path Bridging (IEEE 802.1aq-2012, amendment to 802.1Q-2011).
See also
Spanning tree protocol
Multiple Spanning Tree Protocol
TRILL TRansparent Interconnection of Lots of Links
References
802.1D-2004 - IEEE Standard for Local and metropolitan area networks: Media Access Control (MAC) Bridges
802.1D Status
IEEE 802.01D
Ethernet standards
|
https://en.wikipedia.org/wiki/Dreambox
|
Dreambox is a series of Linux-powered DVB satellite, terrestrial and cable digital television receivers (set-top boxes), produced by German multimedia vendor Dream Multimedia.
History and description
The Linux-based software used by Dreambox was originally developed for the DBox2 by the Tuxbox project. The DBox2 was a proprietary design distributed by KirchMedia for their pay TV services. The bankruptcy of KirchMedia flooded the market with unsold boxes available for Linux enthusiasts. The Dreambox shares the basic design of the DBox2, including the Ethernet port and the PowerPC processor.
Its firmware is officially user-upgradable, since it is a Linux-based computer, as opposed to third-party "patching" of alternate receivers. All units support Dream's own DreamCrypt conditional access (CA) system, with software-emulated CA Modules (CAMs) available for many alternate CA systems. The built-in Ethernet interface allows networked computers to access the recordings on the internal hard disks on some Dreambox models. It also enables the receiver to store digital copies of DVB MPEG transport streams on distributed file systems or broadcast the streams as IPTV to VideoLAN and XBMC Media Center clients. Unlike many PC based PVR systems that use free-to-air type of DVB receiver cards, the built-in conditional access allows receiving and storing encrypted content.
In 2007, Dream Multimedia also introduced a non-Linux-based Dreambox receiver, the DM100, their only such model to date, still featuring an Ethernet port. It has a USB-B port for service instead of the RS-232 or mini-USB connectors found on other models. Unlike all other Dreamboxes, it features an STMicroelectronics CPU instead of PowerPC or MIPS.
Dreambox models
There are a number of different models of Dreambox available. The numbers are suffixed with -S for Satellite, -T for Terrestrial and -C for Cable:
(The comparison table of Dreambox models is not reproduced here.)
**HDMI via DVI to HDMI adapter.
Remark: The new 7020hd v2 has a new Flash with anoth
|
https://en.wikipedia.org/wiki/Electrocoagulation
|
Electrocoagulation (EC) is a technique used for wastewater treatment, wash water treatment, industrially processed water, and medical treatment. Electrocoagulation has become a rapidly growing area of wastewater treatment due to its ability to remove contaminants that are generally more difficult to remove by filtration or chemical treatment systems, such as emulsified oil, total petroleum hydrocarbons, refractory organics, suspended solids, and heavy metals. There are many brands of electrocoagulation devices available and they can range in complexity from a simple anode and cathode to much more complex devices with control over electrode potentials, passivation, anode consumption, cell REDOX potentials as well as the introduction of ultrasonic sound, ultraviolet light and a range of gases and reactants to achieve so-called Advanced Oxidation Processes for refractory or recalcitrant organic substances.
Water and Wastewater Treatment
With the latest technologies, reduction of electricity requirements, and miniaturization of the needed power supplies, EC systems have now become affordable for water treatment plants and industrial processes worldwide.
Background
Electrocoagulation ("electro", meaning to apply an electrical charge to water, and "coagulation", meaning the process of changing the particle surface charge, allowing suspended matter to form an agglomeration) is an advanced and economical water treatment technology. It effectively removes suspended solids to sub-micrometre levels, breaks emulsions such as oil and grease or latex, and oxidizes and eradicates heavy metals from water without the use of filters or the addition of separation chemicals
A wide range of wastewater treatment techniques are known, which includes biological processes for nitrification, denitrification and phosphorus removal, as well as a range of physico-chemical processes that require chemical addition. The commonly used physico-chemical treatment processes are filtration, air
|
https://en.wikipedia.org/wiki/Jim%20Woodcock
|
James Charles Paul Woodcock is a British computer scientist.
Woodcock gained his PhD from the University of Liverpool. Until 2001 he was Professor of Software Engineering at the Oxford University Computing Laboratory, where he was also a Fellow of Kellogg College. He then joined the University of Kent and is now based at the University of York, where, since October 2012, he has been head of the Department of Computer Science.
His research interests include: strong software engineering, Grand Challenge in dependable systems evolution, unifying theories of programming, formal specification, refinement, concurrency, state-rich systems, mobile and reconfigurable processes, nanotechnology, Grand Challenge in the railway domain. He has a background in formal methods, especially the Z notation and CSP.
Woodcock worked on applying the Z notation to the IBM CICS project, helping to gain a Queen's Award for Technological Achievement, and Mondex, helping to gain the highest ITSEC classification level.
Prof. Woodcock is editor-in-chief of the Formal Aspects of Computing journal.
Books
Jim Woodcock and Jim Davies, Using Z: Specification, Refinement, and Proof. Prentice-Hall International Series in Computer Science, 1996. .
Jim Woodcock and Martin Loomes, Software Engineering Mathematics: Formal Methods Demystified. Kindle Edition, Taylor & Francis, 2007.
References
External links
Official homepage
Personal homepage
Research profile
1956 births
Living people
Alumni of the University of Liverpool
British computer scientists
Formal methods people
Members of the Department of Computer Science, University of Oxford
Fellows of Kellogg College, Oxford
Academics of the University of Kent
Academics of the University of York
Fellows of the British Computer Society
Fellows of the Royal Academy of Engineering
Computer science writers
British textbook writers
Academic journal editors
|
https://en.wikipedia.org/wiki/Marching%20tetrahedra
|
Marching tetrahedra is an algorithm in the field of computer graphics to render implicit surfaces. It clarifies a minor ambiguity problem of the marching cubes algorithm with some cube configurations. It was originally introduced in 1991.
While the original marching cubes algorithm was protected by a software patent, marching tetrahedra offered an alternative algorithm that did not require a patent license. More than 20 years have passed since the patent filing date (June 5, 1985), and the marching cubes algorithm can now be used freely. Optionally, the minor improvements of marching tetrahedra may be used to correct the aforementioned ambiguity in some configurations.
In marching tetrahedra, each cube is split into six irregular tetrahedra by cutting the cube in half three times, cutting diagonally through each of the three pairs of opposing faces. In this way, the tetrahedra all share one of the main diagonals of the cube. Instead of the twelve edges of the cube, we now have nineteen edges: the original twelve, six face diagonals, and the main diagonal. Just like in marching cubes, the intersections of these edges with the isosurface are approximated by linearly interpolating the values at the grid points.
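The per-cube step can be sketched as follows (illustrative Python added here, not from the original article; the cube corners are indexed 0–7 by their (x, y, z) bits, the six tetrahedra listed all share the main diagonal 0–7, and triangle winding/orientation is ignored):

# Corner i of the unit cube has coordinates ((i >> 0) & 1, (i >> 1) & 1, (i >> 2) & 1).
CORNERS = [((i >> 0) & 1, (i >> 1) & 1, (i >> 2) & 1) for i in range(8)]

# Six tetrahedra sharing the main diagonal from corner 0 to corner 7.
TETRA = [(0, 1, 3, 7), (0, 1, 5, 7), (0, 2, 3, 7),
         (0, 2, 6, 7), (0, 4, 5, 7), (0, 4, 6, 7)]

def interp(p, q, vp, vq, iso):
    # Linearly interpolate the isosurface crossing along the edge p-q.
    t = (iso - vp) / (vq - vp)
    return tuple(a + t * (b - a) for a, b in zip(p, q))

def polygonise_cube(values, iso):
    # values: the scalar field sampled at the 8 cube corners; returns triangles.
    triangles = []
    for tet in TETRA:
        inside = [i for i in tet if values[i] < iso]
        outside = [i for i in tet if values[i] >= iso]
        if not inside or not outside:
            continue  # no intersection with this tetrahedron
        if len(inside) == 1 or len(outside) == 1:
            # One lone vertex: a single triangle on the three edges meeting it.
            lone, others = (inside, outside) if len(inside) == 1 else (outside, inside)
            a = lone[0]
            triangles.append(tuple(
                interp(CORNERS[a], CORNERS[b], values[a], values[b], iso)
                for b in others))
        else:
            # Two vertices on each side: a quad, split into two triangles.
            (a, b), (c, d) = inside, outside
            p_ac = interp(CORNERS[a], CORNERS[c], values[a], values[c], iso)
            p_ad = interp(CORNERS[a], CORNERS[d], values[a], values[d], iso)
            p_bc = interp(CORNERS[b], CORNERS[c], values[b], values[c], iso)
            p_bd = interp(CORNERS[b], CORNERS[d], values[b], values[d], iso)
            triangles.append((p_ac, p_ad, p_bd))
            triangles.append((p_ac, p_bd, p_bc))
    return triangles

# Sample field: the value at each corner is its number of set bits.
tris = polygonise_cube([bin(i).count("1") for i in range(8)], iso=1.5)
print(len(tris))  # 12 triangles for this sample field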
Adjacent cubes share all edges in the connecting face, including the same diagonal. This is an important property to prevent cracks in the rendered surface, because interpolation of the two distinct diagonals of a face usually gives slightly different intersection points. An added benefit is that up to five computed intersection points can be reused when handling the neighbor cube. This includes the computed surface normals and other graphics attributes at the intersection points.
Each tetrahedron has sixteen possible configurations, falling into three classes: no intersection, intersection in one triangle and intersection in two (adjacent) triangles. It is straightforward to enumerate all sixteen configurations and map them to vertex index lists definin
|
https://en.wikipedia.org/wiki/Servomotor
|
A servomotor (or servo motor or simply servo (to be differentiated from servomechanism, which may also be called a servo)) is a rotary actuator or linear actuator that allows for precise control of angular or linear position, velocity, and acceleration in a mechanical system. It consists of a suitable motor coupled to a sensor for position feedback. It also requires a relatively sophisticated controller, often a dedicated module designed specifically for use with servomotors.
Servomotors are not a specific class of motor, although the term servomotor is often used to refer to a motor suitable for use in a closed-loop control system.
Servomotors are used in applications such as robotics, CNC machinery, and automated manufacturing.
Mechanism
A servomotor is a closed-loop servomechanism that uses position feedback to control its motion and final position. The input to its control is a signal (either analog or digital) representing the position commanded for the output shaft.
The motor is paired with some type of position encoder to provide position and speed feedback. In the simplest case, only the position is measured. The measured position of the output is compared to the command position, the external input to the controller. If the output position differs from that required, an error signal is generated which then causes the motor to rotate in either direction, as needed to bring the output shaft to the appropriate position. As the positions approach, the error signal reduces to zero, and the motor stops.
The very simplest servomotors use position-only sensing via a potentiometer and bang-bang control of their motor; the motor always rotates at full speed (or is stopped). This type of servomotor is not widely used in industrial motion control, but it forms the basis of the simple and cheap servos used for radio-controlled models.
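The error-driven behaviour described above can be sketched as a toy simulation (added Python, not from the article; the speed, deadband and single-integrator plant model are made-up illustrative values):

def bang_bang_servo(command, position, steps=50, speed=5.0, deadband=3.0):
    # Simplest servo: drive the motor at full speed toward the commanded position
    # and stop once the measured error falls inside a small deadband.
    history = []
    for _ in range(steps):
        error = command - position
        if abs(error) <= deadband:
            drive = 0.0              # error is (near) zero: the motor stops
        elif error > 0:
            drive = speed
        else:
            drive = -speed
        position += drive            # crude plant model: position integrates the drive
        history.append(position)
    return history

print(bang_bang_servo(command=42.0, position=0.0)[:12])
# [5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0, 40.0, 40.0, 40.0, 40.0]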
More sophisticated servomotors make use of an absolute encoder (a type of rotary encoder) to calculate the shaft's position and inf
|
https://en.wikipedia.org/wiki/Electrochemical%20gradient
|
An electrochemical gradient is a gradient of electrochemical potential, usually for an ion that can move across a membrane. The gradient consists of two parts:
The chemical gradient, or difference in solute concentration across a membrane.
The electrical gradient, or difference in charge across a membrane.
When there are unequal concentrations of an ion across a permeable membrane, the ion will move across the membrane from the area of higher concentration to the area of lower concentration through simple diffusion. Ions also carry an electric charge that forms an electric potential across a membrane. If there is an unequal distribution of charges across the membrane, then the difference in electric potential generates a force that drives ion diffusion until the charges are balanced on both sides of the membrane.
Electrochemical gradients are essential to the operation of batteries and other electrochemical cells, photosynthesis and cellular respiration, and certain other biological processes.
Overview
Electrochemical energy is one of the many interchangeable forms of potential energy through which energy may be conserved. It appears in electroanalytical chemistry and has industrial applications such as batteries and fuel cells. In biology, electrochemical gradients allow cells to control the direction ions move across membranes. In mitochondria and chloroplasts, proton gradients generate a chemiosmotic potential used to synthesize ATP, and the sodium-potassium gradient helps neural synapses quickly transmit information.
An electrochemical gradient has two components: a differential concentration of electric charge across a membrane and a differential concentration of chemical species across that same membrane. In the former effect, the concentrated charge attracts charges of the opposite sign; in the latter, the concentrated species tends to diffuse across the membrane to equalize concentrations. The combination of these two phenomena determines the t
|
https://en.wikipedia.org/wiki/Infineon%20TriCore
|
TriCore is a 32-bit microcontroller architecture from Infineon. It unites the elements of a RISC processor core, a microcontroller and a DSP in one chip package.
History and background
In 1999, Infineon launched the first generation of AUDO (Automotive unified processor) which is based on what the company describes as a 32-bit ”unified RISC/MCU/DSP microcontroller core,” called TriCore, which is as of 2011 on its fourth generation, called AUDO MAX (version 1.6).
TriCore is a heterogeneous, asymmetric dual core architecture with a peripheral control processor that enables user modes and core system protection.
Infineon's AUDO families targets gasoline and diesel engine control units (ECUs), applications in hybrid and electric vehicles as well as transmission, active and passive safety and chassis applications. It also targets industrial applications, e.g. optimized motor control applications and signal processing.
Different models offer different combinations of memories, peripheral sets, frequencies, temperatures and packaging. Infineon also offers software claimed to help manufacturers meet SIL/ASIL safety standards. All members of the AUDO family are binary-compatible and share the same development tools. An AUTOSAR library that enables existing code to be integrated is also available.
Safety
Infineon's portfolio includes microcontrollers with additional hardware features as well as SafeTcore safety software and a watchdog IC.
AUDO families cover safety applications including active suspension and driver assistant systems and also EPS and chassis domain control. Some features of the product portfolio are memory protection, redundant peripherals, MemCheck units with integrated CRCs, ECC on memories, integrated test and debug functionality and FlexRay.
References
External links
Infineon official website for Microcontroller
Port of GCC 3.3 to TriCore by HighTec
Microcontrollers
Digital signal processors
|
https://en.wikipedia.org/wiki/QWK%20%28file%20format%29
|
QWK is a file-based offline mail reader format that was popular among bulletin board system (BBS) users, especially users of FidoNet and other networks that generated large volumes of mail. QWK was originally developed by Mark "Sparky" Herring in 1987 for systems running the popular PCBoard bulletin board system, but it was later adapted for other platforms. Herring died of a heart attack in 2020 after being swatted. During the height of bulletin board system popularity, several dozen offline mail readers supported the QWK format.
Description
Like other offline readers, QWK gathered up messages for a particular user using BBS-side QWK software, compressed them using an application such as PKZIP, and then transferred them to the user. This is usually accomplished via a "BBS door" program running on the BBS system. In the case of QWK, the messages were placed in a single large file that was bundled with several control files and then compressed into a single archive with the .QWK file extension, typically with the BBS's "id" name as the base filename. The file was normally sent to the user automatically using the self-starting feature of the ZModem protocol, although most QWK doors allowed a choice of other protocols.
Once the resulting file has been received by the user, the steps are reversed to extract the files from the archive and then open them in a client-side reader. Again, these individual steps are typically automated to a degree, meaning that the user simply has to invoke the door software on the BBS, wait for the download to complete, and then run the client. The various intermediary steps are automated. QWK originally did not include any functionality for uploading replies, but this was quickly addressed as QWK became more popular. QWK placed replies in a .REP file (again, typically with the BBS's "id" as the base name) that was exchanged automatically the next time the user called in.
QWK clients varied widely in functionality, but all of them off
|
https://en.wikipedia.org/wiki/HNN%20extension
|
In mathematics, the HNN extension is an important construction of combinatorial group theory.
Introduced in a 1949 paper Embedding Theorems for Groups by Graham Higman, Bernhard Neumann, and Hanna Neumann, it embeds a given group G into another group G' , in such a way that two given isomorphic subgroups of G are conjugate (through a given isomorphism) in G' .
Construction
Let G be a group with presentation G = ⟨S | R⟩, and let α : H → K be an isomorphism between two subgroups H and K of G. Let t be a new symbol not in S, and define
G∗α = ⟨S, t | R, t h t⁻¹ = α(h), ∀h ∈ H⟩.
The group G∗α is called the HNN extension of G relative to α. The original group G is called the base group for the construction, while the subgroups H and K are the associated subgroups. The new generator t is called the stable letter.
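A standard concrete example (added here for illustration): taking G = ⟨a⟩ ≅ Z with associated subgroups H = ⟨a^m⟩ and K = ⟨a^n⟩, and α(a^m) = a^n, the resulting HNN extension is the Baumslag–Solitar group
BS(m, n) = ⟨a, t | t a^m t⁻¹ = a^n⟩.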
Key properties
Since the presentation for G∗α contains all the generators and relations from the presentation for G, there is a natural homomorphism, induced by the identification of generators, which takes G to G∗α. Higman, Neumann, and Neumann proved that this morphism is injective, that is, an embedding of G into G∗α. A consequence is that two isomorphic subgroups of a given group are always conjugate in some overgroup; the desire to show this was the original motivation for the construction.
Britton's Lemma
A key property of HNN-extensions is a normal form theorem known as Britton's Lemma. Let G∗α be as above and let w be the following product in G∗α:
w = g0 t^(ε1) g1 t^(ε2) ⋯ gn−1 t^(εn) gn,   with each gi ∈ G and each εi = ±1.
Then Britton's Lemma can be stated as follows:
Britton's Lemma. If w = 1 in G∗α then
either n = 0 and g0 = 1 in G
or n > 0 and for some i ∈ {1, ..., n−1} one of the following holds:
εi = 1, εi+1 = −1, gi ∈ H,
εi = −1, εi+1 = 1, gi ∈ K.
In contrapositive terms, Britton's Lemma takes the following form:
Britton's Lemma (alternate form). If w is such that
either n = 0 and g0 ≠ 1 in G,
or n > 0 and the product w does not contain substrings of the form t h t⁻¹, where h ∈ H, nor of the form t⁻¹ k t, where k ∈ K,
then w ≠ 1 in G∗α.
Consequences of Britton's Lemma
Most basic properties of HNN-extensions follow from Britton's Lemm
|
https://en.wikipedia.org/wiki/Lacida
|
The Lacida, also called LCD, was a Polish rotor cipher machine. It was designed and produced before World War II by Poland's Cipher Bureau for prospective wartime use by Polish military higher commands.
History
The machine's name derived from the surname initials of Gwido Langer, Maksymilian Ciężki and Ludomir Danilewicz and / or his younger brother, Leonard Danilewicz. It was built in Warsaw, to the Cipher Bureau's specifications, by the AVA Radio Company.
In anticipation of war, before the September 1939 invasion of Poland, two LCDs were sent to France. From spring 1941, an LCD was used by the Polish Team Z at the Polish-, Spanish- and French-manned Cadix radio-intelligence and decryption center at Uzès, near France's Mediterranean coast.
Prior to the machine's production, it had never been subjected to rigorous decryption attempts. Now it was decided to remedy this oversight. In early July 1941, Polish cryptologists Marian Rejewski and Henryk Zygalski received LCD-enciphered messages that had earlier been transmitted to the staff of the Polish Commander-in-Chief, based in London. Breaking the first message, given to the two cryptologists on July 3, took them only a couple of hours. Further tests yielded similar results. Colonel Langer suspended the use of LCD at Cadix.
In 1974, Rejewski explained that the LCD had two serious flaws. It lacked a commutator ("plugboard"), which was one of the strong points of the German military Enigma machine. The LCD's other weakness involved the reflector and wiring. These shortcomings did not imply that the LCD, somewhat larger than the Enigma and more complicated (e.g., it had a switch for resetting to deciphering), was easy to solve. Indeed, the likelihood of its being broken by the German E-Dienst was judged slight. Theoretically it did exist, however.
See also
Biuro Szyfrów (Cipher Bureau)
References
Further reading
Władysław Kozaczuk, Enigma: How the German Machine Cipher Was Broken, and How It Was Read
|
https://en.wikipedia.org/wiki/Simulated%20pregnancy
|
A simulated pregnancy is a deliberate attempt to create the impression of pregnancy.
It should not be confused with false pregnancy, where a person mistakenly believes that they are pregnant.
Techniques
People who wish to look pregnant, generally for social, sexual, psychological or entertainment purposes, have the option of body suits and the like to wear under their clothes. It can also be done using pillows or pads, or small, lightweight, round balls, to simulate a pregnant abdomen.
See also
Couvade syndrome
References
Prosthetics
Human pregnancy
|
https://en.wikipedia.org/wiki/Tonicity
|
In chemical biology, tonicity is a measure of the effective osmotic pressure gradient; the water potential of two solutions separated by a partially-permeable cell membrane. Tonicity depends on the relative concentration of selective membrane-impermeable solutes across a cell membrane which determine the direction and extent of osmotic flux. It is commonly used when describing the swelling-versus-shrinking response of cells immersed in an external solution.
Unlike osmotic pressure, tonicity is influenced only by solutes that cannot cross the membrane, as only these exert an effective osmotic pressure. Solutes able to freely cross the membrane do not affect tonicity because they will always equilibrate with equal concentrations on both sides of the membrane without net solvent movement. It is also a factor affecting imbibition.
There are three classifications of tonicity that one solution can have relative to another: hypertonic, hypotonic, and isotonic. An example of a hypotonic solution is distilled water.
Hypertonic solution
A hypertonic solution has a greater concentration of non-permeating solutes than another solution. In biology, the tonicity of a solution usually refers to its solute concentration relative to that of another solution on the opposite side of a cell membrane; a solution outside of a cell is called hypertonic if it has a greater concentration of solutes than the cytosol inside the cell. When a cell is immersed in a hypertonic solution, osmotic pressure tends to force water to flow out of the cell in order to balance the concentrations of the solutes on either side of the cell membrane. The cytosol is conversely categorized as hypotonic, opposite of the outer solution.
When plant cells are in a hypertonic solution, the flexible cell membrane pulls away from the rigid cell wall, but remains joined to the cell wall at points called plasmodesmata. The cells often take on the appearance of a pincushion, and the plasmodesmata almost cease to function b
|
https://en.wikipedia.org/wiki/Interrupt%20vector%20table
|
An interrupt vector table (IVT) is a data structure that associates a list of interrupt handlers with a list of interrupt requests in a table of interrupt vectors. Each entry of the interrupt vector table, called an interrupt vector, is the address of an interrupt handler (also known as an interrupt service routine, or ISR). While the concept is common across processor architectures, IVTs may be implemented in architecture-specific fashions. For example, a dispatch table is one method of implementing an interrupt vector table.
On the original x86 architecture in real mode, interrupts are assigned a number between 0 and 255, and the interrupt vectors for each interrupt number are stored in the lowest 1024 bytes of main memory (256 vectors of four bytes each): interrupt 0 is stored from 0000:0000 to 0000:0003, interrupt 1 from 0000:0004 to 0000:0007, and so on.
Background
Most processors have an interrupt vector table, including chips from Intel, AMD, Infineon, Microchip, Atmel, NXP, ARM, etc.
Interrupt handlers
Handling methods
An interrupt vector table is used in the three most popular methods of finding the starting address of the interrupt service routine:
"Predefined"
The "predefined" method loads the program counter (PC) directly with the address of some entry inside the interrupt vector table. The jump table itself contains executable code. While in principle an extremely short interrupt handler could be stored entirely inside the interrupt vector table, in practice the code at each entry is a single jump instruction that jumps to the full interrupt service routine (ISR) for that interrupt. The Intel 8080, Atmel AVR and all 8051 and Microchip microcontrollers use the predefined approach.
"Fetch"
The "fetch" method loads the PC indirectly, using the address of some entry inside the interrupt vector table to pull an address out of that table, and then loading the PC with that address. Each and every entry of the IVT is the address of an interrupt service routine. All Motorola/Freescale microcontrollers use the fetch method.
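As a hedged illustration of the fetch method, the C sketch below models an interrupt vector table as an array of pointers to interrupt service routines and dispatches through it; the names, table size and dispatch function are hypothetical and not tied to any particular architecture.

#include <stdio.h>

/* Hypothetical sketch of a fetch-style vector table: the names, table size
   and dispatch loop are illustrative, not tied to any real architecture. */

#define NUM_VECTORS 4

typedef void (*isr_t)(void);        /* an interrupt service routine */

static void default_isr(void) { puts("unhandled interrupt"); }
static void timer_isr(void)   { puts("timer tick"); }
static void uart_isr(void)    { puts("character received"); }

/* The "interrupt vector table": entry n holds the address of the ISR for
   interrupt number n. */
static const isr_t vector_table[NUM_VECTORS] = {
    default_isr, timer_isr, uart_isr, default_isr
};

/* In the fetch method, hardware reads the vector for the pending interrupt
   number and loads the program counter from it; here that is simulated by
   an indirect function call. */
static void dispatch(unsigned irq)
{
    if (irq < NUM_VECTORS)
        vector_table[irq]();
}

int main(void)
{
    dispatch(1);    /* simulate a timer interrupt */
    dispatch(2);    /* simulate a UART interrupt  */
    return 0;
}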
"Interrupt acknowledge"
For the "i
|
https://en.wikipedia.org/wiki/In-system%20programming
|
In-system programming (ISP), also called in-circuit serial programming (ICSP), is the ability of some programmable logic devices, microcontrollers, chipsets and other embedded devices to be programmed while installed in a complete system, rather than requiring the chip to be programmed prior to installing it into the system. It also allows firmware updates to be delivered to the on-chip memory of microcontrollers and related processors without requiring specialist programming circuitry on the circuit board, and simplifies design work.
Overview
There is no standard for in-system programming protocols for programming microcontroller devices. Almost all manufacturers of microcontrollers support this feature, but all have implemented their own protocols, which often differ even for different devices from the same manufacturer. In general, modern protocols try to keep the number of pins used low, typically two. Some ISP interfaces manage with just a single pin, while others use up to four to implement a JTAG interface.
The primary advantage of in-system programming is that it allows manufacturers of electronic devices to integrate programming and testing into a single production phase, and save money, rather than requiring a separate programming stage prior to assembling the system. This may allow manufacturers to program the chips in their own system's production line instead of buying pre-programmed chips from a manufacturer or distributor, making it feasible to apply code or design changes in the middle of a production run. The other advantage is that production can always use the latest firmware, and new features as well as bug fixes can be implemented and put into production without the delay occurring when using pre-programmed microcontrollers.
Microcontrollers are typically soldered directly to a printed circuit board and usually do not have the circuitry or space for a large external programming cable to another computer.
Typically,
|
https://en.wikipedia.org/wiki/Open%20Content%20Alliance
|
The Open Content Alliance (OCA) was a consortium of organizations contributing to a permanent, publicly accessible archive of digitized texts. Its creation was announced in October 2005 by Yahoo!, the Internet Archive, the University of California, the University of Toronto and others. Scanning for the Open Content Alliance was administered by the Internet Archive, which also provided permanent storage and access through its website.
The OCA was, in part, a response to Google Book Search, which was announced in October 2004. OCA's approach to seeking permission from copyright holders differed significantly from that of Google Book Search. OCA digitized copyrighted works only after asking and receiving permission from the copyright holder ("opt-in"). By contrast, Google Book Search digitized copyrighted works unless explicitly told not to do so ("opt-out"), and contended that digitizing for the purposes of indexing is fair use.
Microsoft had a special relationship with the Open Content Alliance until May 2008. Microsoft joined the Open Content Alliance in October 2005 as part of its Live Book Search project. However, in May 2008 Microsoft announced it would be ending the Live Book Search project and no longer funding the scanning of books through the Internet Archive. Microsoft removed any contractual restrictions on the content they had scanned and they relinquished the scanning equipment to their digitization partners and libraries to continue digitization programs. Between about 2006 and 2008 Microsoft sponsored the scanning of over 750,000 books, 300,000 of which are now part of the Internet Archive's on-line collections.
Opposition to Google Book Settlement
Brewster Kahle, a founder of the Open Content Alliance, actively opposed the proposed Google Book Settlement until its defeat in March 2011.
Contributors
The following are contributors to the OCA:
Adobe Systems Incorporated
Boston Library Consortium
Boston Public Library
The Bancroft Library
The British
|
https://en.wikipedia.org/wiki/FOSS.IN
|
FOSS.IN, previously known as Linux Bangalore, was an annual free and open source software (FOSS) conference, held in Bangalore, India from 2001 to 2012. From 2001 to 2004, it was known as Linux Bangalore, before it took on a new name and wider focus. During its lifetime, it was one of the largest FOSS events in Asia, with participants from around the world. It focused on the technical and software side of FOSS, encouraging development and contribution to FOSS projects from India. The event was held every year in late November or early December.
History
Linux Bangalore
Linux Bangalore was India's premier Free and Open Source Software event, held annually in Bangalore. It featured talks, discussions, workshops, round-table meetings and demonstrations by Indian and international speakers, and covered a diverse spectrum of Linux and other FOSS technologies, including kernel programming, embedded systems, desktop environments, localization, databases, web applications, gaming, multimedia and community and user group development.
First held in 2001, the event saw the participation of thousands of delegates and replicated its success in 2002, 2003 and 2004. Linux Bangalore was a community-driven event, conceived, planned and built by the free and open source community of India, and facilitated by the Bangalore Linux User Group. The event was very popular among software developers, as reflected in the demographics of its participants.
At the conclusion of LB/2004, it was announced that the name of the event would change and the scope would expand to cater to a wider range of topics. On August 12, 2005, it was announced that the name of the event would be changed to FOSS.IN.
FOSS.IN
While the Linux Bangalore conferences focused around Linux, FOSS.IN broadened the scope to all free and open source software technologies. It was founded by Atul Chitnis.
FOSS.IN/2005
FOSS.IN/2005 was held from November 29 to December 2, 2005, at the Bangalore Palac
|
https://en.wikipedia.org/wiki/Openfiler
|
Openfiler is an operating system that provides file-based network-attached storage (NAS) and block-based storage area network (SAN) functionality. It was created by Xinit Systems, and is based on the CentOS Linux distribution. It is free software licensed under the GNU GPLv2.
History
The Openfiler codebase was started at Xinit Systems in 2001. The company created a project and donated the codebase to it in October 2003.
The first public release of Openfiler was made in May 2004. The latest release was published in 2011.
Although there has been no formal announcement, there is no evidence that Openfiler has been actively developed since 2015. DistroWatch has listed Openfiler as discontinued. The official website states that paid support is still available.
Criticism
Though some users have run Openfiler for years with few problems, the author of a 2013 article on the SpiceWorks website recommended against using Openfiler, citing lack of features, lack of support, and risk of data loss.
See also
TrueNAS, a FreeBSD based free and open-source NAS solution
Unraid
OpenMediaVault
XigmaNAS
NetApp filer, a commercial proprietary filer
NexentaStor - Advanced enterprise-level NAS software solution (Debian/OpenSolaris-based)
NAS4Free - network-attached storage (NAS) server software
Gluster
Zentyal
List of NAS manufacturers
Comparison of iSCSI targets
File area network
Disk enclosure
OpenWrt
References
Further reading
External links
Computer storage devices
Free file transfer software
Software appliances
Network-attached storage
|
https://en.wikipedia.org/wiki/Bit%20banging
|
In computer engineering and electrical engineering, bit banging is a "term of art" for any method of data transmission that employs software as a substitute for dedicated hardware to generate transmitted signals or process received signals. Software directly sets and samples the states of GPIOs (e.g., pins on a microcontroller), and is responsible for meeting all timing requirements and protocol sequencing of the signals. In contrast to bit banging, dedicated hardware (e.g., UART, SPI, I²C) satisfies these requirements and, if necessary, provides a data buffer to relax software timing requirements. Bit banging can be implemented at very low cost, and is commonly used in some embedded systems.
Bit banging allows a device to implement different protocols with minimal or no hardware changes. In some cases, bit banging is made feasible by newer, faster processors because more recent hardware operates much more quickly than hardware did when standard communications protocols were created.
C code example
The following C language code example transmits a byte of data on an SPI bus.
// transmit byte serially, MSB first
void send_8bit_serial_data(unsigned char data)
{
    int i;

    // select device (active low)
    output_low(SD_CS);

    // send bits 7..0
    for (i = 0; i < 8; i++)
    {
        // consider leftmost bit
        // set line high if bit is 1, low if bit is 0
        if (data & 0x80)
            output_high(SD_DI);
        else
            output_low(SD_DI);

        // pulse the clock state to indicate that bit value should be read
        output_low(SD_CLK);
        delay();
        output_high(SD_CLK);

        // shift byte left so next bit will be leftmost
        data <<= 1;
    }

    // deselect device
    output_high(SD_CS);
}
Considerations
The question whether to deploy bit banging or not is a trade-off between load, performance and reliability on one hand, and the availability of a hardware alternative on the other. The software emulation process consumes more
|
https://en.wikipedia.org/wiki/Low-force%20helix
|
A low-force helix (LFH-60) is a 60-pin electrical connector (four rows of 15 pins) with signals for two digital and analog connectors. Each of the pins is twisted approximately 45 degrees between the tip and the plastic frame which holds the pins in place. Hence "helix" in the name.
The DMS-59 is a derivative of the LFH60.
The LFH connector is typically used with workstations, because it can connect a single computer graphics source to up to four different monitors. The standard interface is a 60-pin LFH connector with two breakout VGA or DVI cables. This system provides users with flexibility for a variety of display configurations, though it forsakes standard DVI or VGA connectors, rendering the LFH connector unusable without an adapter.
It is also used in the HDCI (High Definition Camera Interface) found on Polycom HDX video conferencing systems.
Using the LFH interface requires a graphics card with multi-monitor capabilities and an LFH port. NVIDIA and Matrox used to manufacture such cards.
Another application of the LFH connector comes from Matrox, whose card outputs via two LFH cables to a single monitor, delivering 9.2 million pixels of resolution (3840 × 2400). This system provides large amounts of detailed information for professional applications such as aerospace and automotive visualization, computer-aided design, desktop publishing, digital photography, life sciences, mapping, oil and gas exploration, plant design and management, satellite imaging, space exploration, and transportation and logistics.
The LFH connector is used in some Cisco routers and WAN Interface Cards.
See also
DMS-59
Molex
Very-high-density cable interconnect (VHDCI), a connector standardized for SCSI that is often used to carry four digital/analog monitor signals
References
General
Molex DMS-59 Product Page
Electrical connectors
Digital display connectors
|
https://en.wikipedia.org/wiki/Delta%20method
|
In statistics, the delta method is a result concerning the approximate probability distribution for a function of an asymptotically normal statistical estimator from knowledge of the limiting variance of that estimator.
History
The delta method was derived from propagation of error, and the idea behind it was known in the early 20th century. Its statistical application can be traced as far back as 1928, to T. L. Kelley. A formal description of the method was presented by J. L. Doob in 1935. Robert Dorfman also described a version of it in 1938.
Univariate delta method
While the delta method generalizes easily to a multivariate setting, careful motivation of the technique is more easily demonstrated in univariate terms. Roughly, if there is a sequence of random variables $X_n$ satisfying
$$\sqrt{n}\,(X_n - \theta) \;\xrightarrow{D}\; \mathcal{N}(0, \sigma^2),$$
where $\theta$ and $\sigma^2$ are finite-valued constants and $\xrightarrow{D}$ denotes convergence in distribution, then
$$\sqrt{n}\,\bigl(g(X_n) - g(\theta)\bigr) \;\xrightarrow{D}\; \mathcal{N}\bigl(0, \sigma^2\,[g'(\theta)]^2\bigr)$$
for any function g satisfying the property that its first derivative, evaluated at $\theta$, exists and is non-zero valued.
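As a hedged numerical sanity check (an illustration with an arbitrarily chosen distribution, function and sample sizes, not part of the original article), the C program below takes the sample mean of Uniform(0,1) draws as the estimator, applies $g(x) = x^2$, and compares the empirical variance of $\sqrt{n}\,(g(\bar{X}_n) - g(\theta))$ with the delta-method prediction $\sigma^2 [g'(\theta)]^2$.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustration only: X_i ~ Uniform(0,1), so theta = 0.5 and sigma^2 = 1/12.
   With g(x) = x*x we have g'(theta) = 2*theta = 1, and the delta method
   predicts Var[ sqrt(n) * (g(mean) - g(theta)) ] -> sigma^2 * g'(theta)^2. */

#define N    2000     /* sample size per replication */
#define REPS 5000     /* number of replications      */

static double g(double x) { return x * x; }

int main(void)
{
    const double theta = 0.5, sigma2 = 1.0 / 12.0, gprime = 2.0 * theta;
    double sum = 0.0, sumsq = 0.0;

    srand(12345);
    for (int r = 0; r < REPS; r++) {
        double mean = 0.0;
        for (int i = 0; i < N; i++)
            mean += (double)rand() / RAND_MAX;
        mean /= N;

        double z = sqrt((double)N) * (g(mean) - g(theta));
        sum   += z;
        sumsq += z * z;
    }

    double empirical = sumsq / REPS - (sum / REPS) * (sum / REPS);
    printf("empirical variance   : %f\n", empirical);
    printf("delta-method variance: %f\n", sigma2 * gprime * gprime);
    return 0;
}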
Proof in the univariate case
Demonstration of this result is fairly straightforward under the assumption that $g'$ is continuous. To begin, we use the mean value theorem (i.e.: the first order approximation of a Taylor series using Taylor's theorem):
$$g(X_n) = g(\theta) + g'(\tilde{\theta})\,(X_n - \theta),$$
where $\tilde{\theta}$ lies between $X_n$ and $\theta$.
Note that since $X_n \xrightarrow{P} \theta$ and $|\tilde{\theta} - \theta| \le |X_n - \theta|$, it must be that $\tilde{\theta} \xrightarrow{P} \theta$, and since $g'$ is continuous, applying the continuous mapping theorem yields
$$g'(\tilde{\theta}) \;\xrightarrow{P}\; g'(\theta),$$
where $\xrightarrow{P}$ denotes convergence in probability.
Rearranging the terms and multiplying by $\sqrt{n}$ gives
$$\sqrt{n}\,\bigl(g(X_n) - g(\theta)\bigr) = g'(\tilde{\theta})\,\sqrt{n}\,(X_n - \theta).$$
Since
$$\sqrt{n}\,(X_n - \theta) \;\xrightarrow{D}\; \mathcal{N}(0, \sigma^2)$$
by assumption, it follows immediately from appeal to Slutsky's theorem that
$$\sqrt{n}\,\bigl(g(X_n) - g(\theta)\bigr) \;\xrightarrow{D}\; \mathcal{N}\bigl(0, \sigma^2\,[g'(\theta)]^2\bigr).$$
This concludes the proof.
Proof with an explicit order of approximation
Alternatively, one can add one more step at the end, to obtain the order of approximation:
This suggests that the error in the approximation converges to 0 in probability.
Multivariate delta method
By definition, a consistent estimator B converges in probability to its true value β, and often a central limit theorem can be applied to obtain asymptotic
|
https://en.wikipedia.org/wiki/Dynamic%20Logical%20Partitioning
|
Dynamic Logical Partitioning (DLPAR), is the capability of a logical partition (LPAR) to be reconfigured dynamically, without having to shut down the operating system that runs in the LPAR. DLPAR enables memory, CPU capacity, and I/O interfaces to be moved nondisruptively between LPARs within the same server.
DLPAR has been supported by the operating systems AIX and IBM i on almost all POWER4 and follow-on POWER systems. The Linux kernel for POWER also supported DLPAR, but its dynamic reconfiguration capabilities were limited to CPU capacity and PCI devices, not memory. In October 2009, seven years after the AIX announcement of DLPAR of memory, CPU and I/O slots, Linux finally added the capability to DLPAR memory on POWER systems. The fundamentals of DLPAR are described in the IBM Systems Journal paper titled "Dynamic reconfiguration: Basic building blocks for autonomic computing on IBM pSeries Servers".
Later on, the POWER5 processor added enhanced DLPAR capabilities, including micro-partitioning: up to 10 LPARs can be configured per processor, with a single multiprocessor server supporting a maximum of 254 LPARs (and thus up to 254 independent operating system instances).
There are many interesting applications of DLPAR capabilities. Primarily, it is used to build agile infrastructures, or to automate hardware system resource allocation, planning, and provisioning. This in turn results in increased system utilization. For example, memory, processor or I/O slots can be added, removed or moved to another LPAR, without rebooting the operating system or the application running in an LPAR. IBM DB2 is one such application (http://www.ibm.com/developerworks/eserver/articles/db2_dlpar.html): it is aware of DLPAR events and automatically tunes itself to changing LPAR resources.
The IBM Z mainframes and their operating systems, including Linux on IBM Z, support even more sophisticated forms of dynamic LPARs. Relevant LPAR-related features on those mainf
|
https://en.wikipedia.org/wiki/RNA%20editing
|
RNA editing (also RNA modification) is a molecular process through which some cells can make discrete changes to specific nucleotide sequences within an RNA molecule after it has been generated by RNA polymerase. It occurs in all living organisms and is one of the most evolutionarily conserved properties of RNAs. RNA editing may include the insertion, deletion, and base substitution of nucleotides within the RNA molecule. RNA editing is relatively rare, with common forms of RNA processing (e.g. splicing, 5'-capping, and 3'-polyadenylation) not usually considered as editing. It can affect the activity, localization as well as stability of RNAs, and has been linked with human diseases.
RNA editing has been observed in some tRNA, rRNA, mRNA, or miRNA molecules of eukaryotes and their viruses, archaea, and prokaryotes. RNA editing occurs in the cell nucleus, as well as within mitochondria and plastids. In vertebrates, editing is rare and usually consists of a small number of changes to the sequence of the affected molecules. In other organisms, such as squids, extensive editing (pan-editing) can occur; in some cases the majority of nucleotides in an mRNA sequence may result from editing. More than 160 types of RNA modifications have been described so far.
RNA-editing processes show great molecular diversity, and some appear to be evolutionarily recent acquisitions that arose independently. The diversity of RNA editing phenomena includes nucleobase modifications such as cytidine (C) to uridine (U) and adenosine (A) to inosine (I) deaminations, as well as non-template nucleotide additions and insertions. RNA editing in mRNAs effectively alters the amino acid sequence of the encoded protein so that it differs from that predicted by the genomic DNA sequence.
Detection of RNA editing
Next generation sequencing
To identify diverse post-transcriptional modifications of RNA molecules and determine the transcriptome-wide landscape of RNA modifications by means of next gener
|
https://en.wikipedia.org/wiki/Slim%20Devices
|
Slim Devices, Inc. was a consumer electronics company based in Mountain View, California, United States. Their main product was the Squeezebox network music player, which connects to a home ethernet or Wi-Fi network and allows the owner to stream digital audio over the network to a stereo. The company, founded in 2000, was originally most notable for their support of open-source software, namely their SlimServer software, which their products at that time all depended upon and which remains available as a free download, open to modification by any interested developer.
On 18 October 2006 Sean Adams, the CEO of Slim Devices, announced that the company was being fully acquired by Logitech.
Slim Devices was featured in the December 2006 issue of Fast Company magazine. The article focused on the company's business model and profiled the three key leaders: Sean Adams (CEO), Dean Blackketter (CTO), and Patrick Cosson (VP of Marketing).
References
Tew, Sarah. "Logitech leaves Squeezebox fans wondering what's next", CNET. September 24, 2012. Retrieved November 14, 2014.
Merritt, Rick. "Digital audio startup finds edge in open-source code", EE Times. August 9, 2004. Retrieved December 14, 2005.
Smith, Tony. "Slim Devices adds 802.11g to wireless MP3 player", The Register. March 11, 2005. Retrieved December 14, 2005.
Pogue, David. "Video review of Squeezebox 3", New York Times. February 9, 2006. Retrieved December 2, 2006.
Atkinson, John. "Slim Devices Squeezebox WiFi D/A processor", Stereophile. September 2006. Retrieved December 2, 2006.
Deutschman, Alan. "Ears Wide Open", Fast Company. December 2006. Retrieved January 6, 2007.
Footnotes
Electronics companies of the United States
Companies based in Mountain View, California
American companies established in 2000
Electronics companies established in 2000
2000 establishments in California
2006 mergers and acquisitions
Logitech
|
https://en.wikipedia.org/wiki/DataTAC
|
DataTAC is a wireless data network technology originally developed by Mobile Data International (MDI), which was later acquired by Motorola; Motorola jointly developed it with IBM and deployed it in the United States as ARDIS (Advanced Radio Data Information Services). DataTAC was also marketed in the mid-1990s as MobileData by Telecom Australia, and is still used by Bell Mobility as a paging network in Canada. The first public, open mobile data network using MDI DataTAC was launched in Hong Kong by Hutchison Mobile Data Limited (a subsidiary of Hutchison Telecom), which provided public end-to-end data services for enterprises such as FedEx and also offered a consumer mobile information service called MobileQuotes, with financial information, news, telebetting and stock data.
DataTAC is an open standard for point-to-point wireless data communications, similar to Mobitex. Like Mobitex, it is mainly used in vertical market applications. One of the early DataTAC devices was the Newton Messaging Card, a two-way pager connected to a PC card using the DataTAC network. The original BlackBerry devices, the RIM 850 and 857 also used the DataTAC network.
In North America, DataTAC is typically deployed in the 800 MHz band. DataTAC was also deployed in the same band by Telecom Australia (now Telstra).
The DataTAC network runs at speeds up to 19.2 kbit/s, which is not sufficient to handle most of the wireless data applications available today. The network runs 25 kHz channels in the 800 MHz frequency bands. Due to the lower frequency bands that DataTAC uses, in-building coverage is typically better than with newer, higher frequency networks.
In the 1990s a DataTAC network operators group was put together by Motorola called Worldwide Wireless Data Networks Operators Group (WWDNOG) chaired by Shahram Mehraban, Motorola's DataTAC system product manager.
References
Wireless networking
Motorola products
IBM products
|
https://en.wikipedia.org/wiki/GEOnet%20Names%20Server
|
The GEOnet Names Server (GNS), sometimes also referred to in official documentation as Geographic Names Data or geonames in domain and email addresses, is a service that provides access to the United States National Geospatial-Intelligence Agency's (NGA) and the US Board on Geographic Names's (BGN) database of geographic feature names and locations for places outside the US. The database is the official repository for the US Federal Government on foreign place-name decisions approved by the BGN. Approximately 20,000 of the database's features are updated monthly. Names are not deleted from the database, "except in cases of obvious duplication". The database contains search aids such as spelling variations and non-Roman script spellings in addition to its primary information about location, administrative division, and quality. The accuracy of the database has been criticised.
Accuracy
A 2008 survey of South Korea toponyms on GNS found that roughly 1% of them were actually Japanese names that had never been in common usage, even during the period of Japanese colonial rule in Korea, and had come from a 1946 US military map that had apparently been compiled with Japanese assistance. In addition to the Japanese toponyms, the same study noted that "There are many spelling errors and simple mis-understanding of the place names with similar characters" amongst South Korea toponyms on GNS, as well as extraneous names of Chinese and English origin.
See also
Geographic Names Information System (GNIS), a similar database for locations within the United States
References
External links
GEOnet Names Server (Archived at the Internet Archive)
GeoNet Designations: Codes and Definitions
Country files download page (broken link; Archived at the Internet Archive)
Place names
Public domain databases
Geocodes
National Geospatial-Intelligence Agency
Geographical databases
Gazetteers
|
https://en.wikipedia.org/wiki/Arbiter%20%28electronics%29
|
Arbiters are electronic devices that allocate access to shared resources.
Bus arbiter
There are multiple ways to perform computer bus arbitration, with the most popular varieties being:
dynamic centralized parallel where one central arbiter is used for all masters as discussed in this article;
centralized serial (or "daisy chain") where, upon accessing the bus, the active master passes the opportunity to the next one. In essence, each connected master contains its own arbiter;
distributed arbitration by self-selection (distributed bus arbitration) where the access is self-granted based on the decision made locally by using information from other masters;
distributed arbitration by collision detection where each master tries to access the bus on its own, but detects conflicts and retries the failed operations.
A bus arbiter is a device used in a multi-master bus system to decide which bus master will be allowed to control the bus for each bus cycle.
The most common kind of bus arbiter is the memory arbiter in a system bus system.
A memory arbiter is a device used in a shared memory system to decide, for each memory cycle, which CPU will be allowed to access that shared memory.
Some atomic instructions depend on the arbiter to prevent other CPUs from reading memory "halfway through" atomic read-modify-write instructions.
A memory arbiter is typically integrated into the memory controller/DMA controller.
Some systems, such as conventional PCI, have a single centralized bus arbitration device that one can point to as "the" bus arbiter.
Other systems use decentralized bus arbitration, where all the devices cooperate to decide who goes next.
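As an illustrative sketch of a centralized arbiter with fixed priorities (not modelled on any specific bus standard), the C fragment below makes the per-cycle decision described above: given a bitmask of pending requests, the lowest-numbered requesting master is granted the bus.

#include <stdio.h>

/* Illustrative model of a centralized fixed-priority bus arbiter (not any
   particular bus standard): bit i of 'requests' is set when master i wants
   the bus, and lower-numbered masters have higher priority. Returns the
   index of the master granted the bus this cycle, or -1 if none request. */
static int arbitrate(unsigned requests, int num_masters)
{
    for (int i = 0; i < num_masters; i++)
        if (requests & (1u << i))
            return i;
    return -1;
}

int main(void)
{
    /* masters 1 and 3 are requesting (bitmask 0b1010); master 1 is granted */
    printf("granted to master %d\n", arbitrate(0x0Au, 4));
    return 0;
}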
When every CPU connected to the memory arbiter has synchronized memory access cycles, the memory arbiter can be designed as a synchronous arbiter.
Otherwise the memory arbiter must be designed as an asynchronous arbiter.
Asynchronous arbiters
An important form of arbiter is used in asynchronous circuits to select th
|
https://en.wikipedia.org/wiki/Immerman%E2%80%93Szelepcs%C3%A9nyi%20theorem
|
In computational complexity theory, the Immerman–Szelepcsényi theorem states that nondeterministic space complexity classes are closed under complementation. It was proven independently by Neil Immerman and Róbert Szelepcsényi in 1987, for which they shared the 1995 Gödel Prize. In its general form the theorem states that NSPACE(s(n)) = co-NSPACE(s(n)) for any function s(n) ≥ log n. The result is equivalently stated as NL = co-NL; although this is the special case when s(n) = log n, it implies the general theorem by a standard padding argument. The result solved the second LBA problem.
In other words, if a nondeterministic machine can solve a problem, another machine with the same resource bounds can solve its complement problem (with the yes and no answers reversed) in the same asymptotic amount of space. No similar result is known for the time complexity classes, and indeed it is conjectured that NP is not equal to co-NP.
The principle used to prove the theorem has become known as inductive counting. It has also been used to prove other theorems in computational complexity, including the closure of LOGCFL under complementation and the existence of error-free randomized logspace algorithms for USTCON.
Proof sketch
The theorem can be proven by showing how to translate any nondeterministic Turing machine M into another nondeterministic Turing machine that solves the complementary decision problem under the same (asymptotic) space complexity, plus a constant number of pointers and counters, which needs only a logarithmic amount of space.
The idea is to simulate all the configurations of M on input w, and to check if any configuration is accepting. This can be done within the same space plus a constant number of pointers and counters to keep track of the configurations. If no configuration is accepting, the simulating Turing machine accepts the input w. This idea is elaborated below for logarithmic NSPACE class (NL), which generalizes to larger NSPACE classes via a
|
https://en.wikipedia.org/wiki/EDN%20%28magazine%29
|
EDN is an electronics industry website and formerly a magazine owned by AspenCore Media, an Arrow Electronics company. The editor-in-chief is Majeed Ahmad. EDN was published monthly until, in April 2013, EDN announced that the print edition would cease publication after the June 2013 issue.
History
The first issue of Electrical Design News, the magazine's original name, was published in May 1956 by Rogers Corporation of Englewood, Colorado. In January 1961, Cahners Publishing Company, Inc., of Boston, acquired Rogers Publishing Company. In February 1966, Cahners sold 40% of its company to International Publishing Company in London. In 1970, the Reed Group merged with International Publishing Corporation and changed its name to Reed International Limited.
Acquisition of EEE magazine
Cahners Publishing Company acquired Electronic Equipment Engineering, a monthly magazine, in March 1971 and discontinued it. In doing so, Cahners folded EEE's best features into EDN, and renamed the magazine EDN/EEE. At the time, George Harold Rostky (1926–2003) was editor-in-chief of EEE. Rostky joined EDN and eventually became editor-in-chief before leaving to join Electronic Engineering Times as editor-in-chief.
Taking EDN worldwide
Roy Forsberg later became editor-in-chief of EDN magazine. He was subsequently promoted to publisher, and Jon Titus, PhD, was named editor-in-chief. Forsberg and Titus established EDN Europe, EDN Asia and EDN China, creating one of the largest global circulations for a design engineering magazine. EDN's 25th anniversary issue was a 425-page folio.
Reed Limited acquires remaining interest in Cahners
In 1977, Reed acquired the remaining interest in Cahners, then known as Cahners Publications. In 1982, Reed International Limited changed its name to Reed International PLC. In 1992, Reed International merged with Elsevier NV, becoming Reed Elsevier PLC on January 1, 1993. Reed Business Media then removed the Cahners Business Publishing name to rebrand itself as Reed Business Info
|
https://en.wikipedia.org/wiki/Generation%E2%80%93recombination%20noise
|
Generation–recombination noise, or g–r noise, is a type of electrical signal noise caused statistically by the fluctuation of the generation and recombination of electrons in semiconductor-based photon detectors.
References
See also
Noise
Noise (audio) – residual low level "hiss or hum"
Noise (electronic) – related to electronic circuitry.
Noise figure – the ratio of the output noise power to attributable thermal noise.
Signal noise – in science, fluctuations in the signal being received.
Thermal noise – sets a fundamental lower limit to what can be measured.
Weighting filter
ITU-R 468 noise weighting
A-weighting
List of noise topics
Noise (electronics)
|
https://en.wikipedia.org/wiki/Exact%20cover
|
In the mathematical field of combinatorics, given a collection S of subsets of a set X, an exact cover is a subcollection S* of S such that each element in X is contained in exactly one subset in S*.
One says that each element in X is covered by exactly one subset in S*.
An exact cover is a kind of cover.
In other words, S* is a partition of X consisting of subsets contained in S.
The exact cover problem, the problem of finding an exact cover, is a kind of constraint satisfaction problem. The elements of S represent choices and the elements of X represent constraints.
An exact cover problem involves the relation contains between subsets and elements. But an exact cover problem can be represented by any heterogeneous relation between a set of choices and a set of constraints. For example, an exact cover problem is equivalent to an exact hitting set problem, an incidence matrix, or a bipartite graph.
In computer science, the exact cover problem is a decision problem to determine if an exact cover exists. The exact cover problem is NP-complete and is one of Karp's 21 NP-complete problems. It is NP-complete even when each subset in S contains exactly three elements; this restricted problem is known as exact cover by 3-sets, often abbreviated X3C.
Knuth's Algorithm X is an algorithm that finds all solutions to an exact cover problem. DLX is the name given to Algorithm X when it is implemented efficiently using Donald Knuth's Dancing Links technique on a computer.
The exact cover problem can be generalized slightly to involve not only exactly-once constraints but also at-most-once constraints.
Finding Pentomino tilings and solving Sudoku are noteworthy examples of exact cover problems. The N queens problem is a generalized exact cover problem.
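As a small worked sketch, assuming a hypothetical instance rather than one from the article, the C program below encodes X = {1,...,7} as a 7-bit mask and six candidate subsets of S as bitmasks, then searches recursively for subcollections that are pairwise disjoint and cover X; for this instance it finds the single exact cover {1,4}, {3,5,6}, {2,7}.

#include <stdio.h>

/* Hypothetical instance (not from the article): X = {1,...,7}, with
   element k encoded as bit (k-1) of an unsigned mask. */
#define FULL  0x7Fu          /* all seven elements of X */
#define NSETS 6              /* number of subsets in S  */

static const unsigned subset[NSETS] = {
    0x49u,   /* {1,4,7}   */
    0x09u,   /* {1,4}     */
    0x58u,   /* {4,5,7}   */
    0x34u,   /* {3,5,6}   */
    0x66u,   /* {2,3,6,7} */
    0x42u    /* {2,7}     */
};

/* Decide for each subset in turn whether to include it. 'covered' is the
   union of the subsets chosen so far; 'chosen' records which subsets were
   picked (bit i set means subset i, counted from 0, is in the cover). */
static void search(int i, unsigned covered, unsigned chosen)
{
    if (i == NSETS) {
        if (covered == FULL) {           /* every element covered exactly once */
            printf("exact cover found, subset indices:");
            for (int k = 0; k < NSETS; k++)
                if (chosen & (1u << k))
                    printf(" %d", k);
            printf("\n");
        }
        return;
    }
    /* include subset i only if it is disjoint from everything chosen so far */
    if ((covered & subset[i]) == 0u)
        search(i + 1, covered | subset[i], chosen | (1u << i));
    /* or skip subset i */
    search(i + 1, covered, chosen);
}

int main(void)
{
    search(0, 0u, 0u);   /* prints the cover {1,4}, {3,5,6}, {2,7} */
    return 0;
}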
Formal definition
Given a collection of subsets of a set , an exact cover of is a subcollection of that satisfies two conditions:
The intersection of any two distinct subsets in is empty, i.e., the subsets in are pairwise disjoint. In other words, e
|
https://en.wikipedia.org/wiki/Channel%20length%20modulation
|
Channel length modulation (CLM) is an effect in field effect transistors, a shortening of the length of the inverted channel region with increase in drain bias for large drain biases. The result of CLM is an increase in current with drain bias and a reduction of output resistance. It is one of several short-channel effects in MOSFET scaling. It also causes distortion in JFET amplifiers.
To understand the effect, first the notion of pinch-off of the channel is introduced. The channel is formed by attraction of carriers to the gate, and the current drawn through the channel is nearly a constant independent of drain voltage in saturation mode. However, near the drain, the gate and drain jointly determine the electric field pattern. Instead of flowing in a channel, beyond the pinch-off point the carriers flow in a subsurface pattern made possible because the drain and the gate both control the current. In the figure at the right, the channel is indicated by a dashed line and becomes weaker as the drain is approached, leaving a gap of uninverted silicon between the end of the formed inversion layer and the drain (the pinch-off region).
As the drain voltage increases, its control over the current extends further toward the source, so the uninverted region expands toward the source, shortening the length of the channel region, the effect called channel-length modulation. Because resistance is proportional to length, shortening the channel decreases its resistance, causing an increase in current with increase in drain bias for a MOSFET operating in saturation. The effect is more pronounced the shorter the source-to-drain separation, the deeper the drain junction, and the thicker the oxide insulator.
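In first-order hand analysis this behaviour is commonly captured, as an approximation rather than a claim made in the text above, by adding a channel-length-modulation parameter $\lambda$ to the square-law saturation current:
$$ I_D \approx \frac{\mu_n C_{ox}}{2}\,\frac{W}{L}\,(V_{GS}-V_{th})^2\,(1+\lambda V_{DS}), $$
which gives a finite small-signal output resistance of roughly $r_o \approx 1/(\lambda I_D)$; a larger $\lambda$ (a shorter channel) means a lower output resistance, consistent with the increase in current with drain bias described above.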
In the weak inversion region, the influence of the drain analogous to channel-length modulation leads to poorer device turn off behavior known as drain-induced barrier lowering, a drain induced lowering of threshold voltage.
In bipolar devices, a similar increase in current
|
https://en.wikipedia.org/wiki/Class-T%20amplifier
|
Class T was a registered trademark for a switching (class-D) audio amplifier, used for Tripath's amplifier technologies (patent filed on Jun 20, 1996). Similar designs have now been widely adopted by different manufacturers.
Amplifier
The covered products use a class-D amplifier combined with proprietary techniques to control the pulse-width modulation to produce what is claimed to be better performance than other class-D amplifier designs. Among the publicly disclosed differences is real-time control of the switching frequency depending on the input signal and amplified output. One of the amplifiers, the TA2020, was named one of the twenty-five chips that "shook the world" by IEEE Spectrum magazine.
The control signals in Class T amplifiers may be computed using digital signal processing or fully analog techniques. Currently available implementations use a loop similar to a higher order Delta-Sigma (ΔΣ) (or sigma-delta) modulator, with an internal digital clock to control the sample comparator. The two key aspects of this topology are that (1), feedback is taken directly from the switching node rather than the filtered output, and (2), the higher order loop provides much higher loop gain at high audio frequencies than would be possible in a conventional single pole amplifier.
Financial difficulties caused Tripath to file for Chapter 11 bankruptcy protection on 8 February 2007. Tripath's stock and intellectual property were purchased later that year by Cirrus Logic.
Products and applications
Tripath used to sell the amplifiers as chips, or as chipsets, to be integrated into products by other companies in several countries. For example:
Sony, Panasonic and Blaupunkt use them in several car stereos and integrated home cinema systems
Apple Computer used them in their Power Mac G4 Cube, Power Mac G4 (Digital audio), eMac and iMac (Flat Panel) computers
Audio Research, an audio electronics company, formerly an exclusive tube circuit specialist, produced a Tr
|
https://en.wikipedia.org/wiki/Algorithmic%20information%20theory
|
Algorithmic information theory (AIT) is a branch of theoretical computer science that concerns itself with the relationship between computation and information of computably generated objects (as opposed to stochastically generated), such as strings or any other data structure. In other words, it is shown within algorithmic information theory that computational incompressibility "mimics" (except for a constant that only depends on the chosen universal programming language) the relations or inequalities found in information theory. According to Gregory Chaitin, it is "the result of putting Shannon's information theory and Turing's computability theory into a cocktail shaker and shaking vigorously."
Besides the formalization of a universal measure for irreducible information content of computably generated objects, some main achievements of AIT were to show that: in fact algorithmic complexity follows (in the self-delimited case) the same inequalities (except for a constant) that entropy does, as in classical information theory; randomness is incompressibility; and, within the realm of randomly generated software, the probability of occurrence of any data structure is of the order of the shortest program that generates it when running on a universal machine.
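As a compact restatement using standard definitions (not drawn verbatim from this article), the central quantity is the algorithmic (Kolmogorov) complexity of a string $x$ with respect to a universal prefix machine $U$,
$$ K_U(x) = \min\{\, |p| : U(p) = x \,\}, $$
and the invariance theorem states that the choice of reference machine changes this value only by an additive constant, $K_U(x) \le K_{U'}(x) + c_{U,U'}$. In this reading a string is random when it is incompressible, i.e. $K(x) \ge |x| - c$ for some small constant $c$, and the coding theorem relates the algorithmic probability $m(x) = \sum_{p\,:\,U(p)=x} 2^{-|p|}$ to program length via $-\log_2 m(x) = K(x) + O(1)$, which is the precise form of the statement above that a data structure's probability of occurrence is of the order of its shortest program.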
AIT principally studies measures of irreducible information content of strings (or other data structures). Because most mathematical objects can be described in terms of strings, or as the limit of a sequence of strings, it can be used to study a wide variety of mathematical objects, including integers. One of the main motivations behind AIT is the very study of the information carried by mathematical objects as in the field of metamathematics, e.g., as shown by the incompleteness results mentioned below. Other main motivations came from surpassing the limitations of classical information theory for single and fixed objects, formalizing the concept of randomness, and finding a meaningful probabilistic inference
|
https://en.wikipedia.org/wiki/Knightmare%20%281986%20video%20game%29
|
Knightmare is a 1986 vertically scrolling shooter video game developed and published by Konami for the MSX home computer. It was included in compilations for the MSX, PlayStation and Sega Saturn, followed by a port for mobile phones, and digital re-releases for the Virtual Console and Microsoft Windows. It is the first entry in the Knightmare trilogy. The game stars Popolon, a warrior who embarks on a quest to rescue the princess Aphrodite from the evil priest Hudnos. The player must fight waves of enemies while avoiding collision with their projectiles and obstacles along the way, and facing against bosses.
Knightmare was created by the MSX division at Konami under management of Shigeru Fukutake. The character of Popolon was conceived by a staffer who later became the project's lead designer and writer, as the process of making original titles for the platform revolved around the person who came up with the characters. Development proceeded with a team of four or five members, lasting somewhere between four and six months. The music was scored by Miki Higashino, best known for her work in the Gradius and Suikoden series, and Yoshinori Sasaki.
Knightmare proved popular among Japanese players, garnering generally positive reception from critics and retrospective commentators. It was followed by The Maze of Galious and Shalom: Knightmare III (1987), while Popolon and Aphrodite would later make appearances outside of the trilogy in other Konami titles. In the years since, fans have experimented with remaking and porting the title unofficially to other platforms.
Gameplay
Knightmare is a vertical-scrolling shoot 'em up game starring Popolon, a warrior who embarks on a quest to rescue the princess Aphrodite from the evil priest Hudnos. The player controls Popolon through eight increasingly difficult stages across a Greek-esque fantasy setting, populated with an assortment of enemies and obstacles, over a constantly scrolling background that never stops moving unti
|
https://en.wikipedia.org/wiki/Philip%20Wadler
|
Philip Lee Wadler (born April 8, 1956) is a UK-based American computer scientist known for his contributions to programming language design and type theory. He is the chair of theoretical computer science at the Laboratory for Foundations of Computer Science at the School of Informatics, University of Edinburgh. He has contributed to the theory behind functional programming and the use of monads; and the designs of the purely functional language Haskell and the XQuery declarative query language. In 1984, he created the Orwell language. Wadler was involved in adding generic types to Java 5.0. He is also author of "Theorems for free!", a paper that gave rise to much research on functional language optimization (see also Parametricity).
Education
Wadler received a Bachelor of Science degree in mathematics from Stanford University in 1977, and a Master of Science degree in computer science from Carnegie Mellon University in 1979. He completed his Doctor of Philosophy in computer science at Carnegie Mellon University in 1984. His thesis was entitled "Listlessness is better than laziness" and was supervised by Nico Habermann.
Research and career
Wadler's research interests are in programming languages.
Wadler was a research fellow at the Programming Research Group (part of the Oxford University Computing Laboratory) and St Cross College, Oxford during 1983–87. He was progressively lecturer, reader, and professor at the University of Glasgow from 1987 to 1996. Wadler was a member of technical staff at Bell Labs, Lucent Technologies (1996–99) and then at Avaya Labs (1999–2003). Since 2003, he has been professor of theoretical computer science in the School of Informatics at the University of Edinburgh.
Wadler was editor of the Journal of Functional Programming from 1990 to 2004.
Since 2003, Wadler has been a professor of theoretical computer science at the Laboratory for Foundations of Computer Science at the University of Edinburgh and is the chair of theoretical com
|
https://en.wikipedia.org/wiki/List%20of%20PDF%20software
|
This is a list of links to articles on software used to manage Portable Document Format (PDF) documents. The distinction between the various functions is not entirely clear-cut; for example, some viewers allow adding of annotations, signatures, etc. Some software allows redaction, removing content irreversibly for security. Extracting embedded text is a common feature, but other applications perform optical character recognition (OCR) to convert imaged text to machine-readable form, sometimes by using an external OCR module.
Terminology
Creators – to allow users to convert other file formats to PDF.
Readers – to allow users to open, read and print PDF files.
Editors – to allow users to edit or otherwise modify PDF files.
Converters – to allow users to convert PDF files to other formats.
Multi-platform
Development libraries
These are used by software developers to add and create PDF features.
Creators
These create files in their native formats, but then allow users to export them to PDF formats.
Viewers
These allow users to view (not edit or modify) any existing PDF file.
AmigaOS
Converters
Antiword: A free Microsoft Office Word reader for various operating systems; converts binary files from Word 2, 6, 7, 97, 2000, 2002 and 2003 to plain text or PostScript; available for AmigaOS 4, MorphOS, AROS x86
dvipdfm: a DVI to PDF translator with zlib support
Viewers
Xpdf: a multi-platform viewer for PDF files, Amiga version uses X11 engine Cygnix.
Linux and Unix
Converters
Collabora Online can be used as a web application, a command line tool, or a Java/Python library. Supported formats include OpenDocument, PDF, HTML, Microsoft Office formats (DOC/DOCX/RTF, XLS/XLSX, PPT/PPTX) and others.
Creators, editors and viewers
macOS
Converters
deskUNPDF for Mac: proprietary application from Docudesk to convert PDF files to Microsoft Office, LibreOffice, image, and data file formats
Creators
macOS: Creates PDF documents natively via print dialog
Editors
Adobe
|
https://en.wikipedia.org/wiki/The%20Goonies%20%28MSX%20video%20game%29
|
The Goonies is a 1986 platform game by Konami for the MSX based on the film of the same name. The music is a simple rendition of the song "The Goonies 'R' Good Enough", by Cyndi Lauper.
Gameplay
The Goonies is a platform and puzzle game, featuring five 'scenes'. After each successfully completed scene, a key word is given and thus the player can continue the game from this point at any time.
References
External links
1986 video games
Konami games
The Goonies video games
MSX games
MSX-only games
Video games developed in Japan
|
https://en.wikipedia.org/wiki/JSP%20model%201%20architecture
|
In the design of Java Web applications, there are two commonly used design models, referred to as Model 1 and Model 2.
In Model 1, a request is made to a JSP or servlet and then that JSP or servlet handles all responsibilities for the request, including processing the request, validating data, handling the business logic, and generating a response. The Model 1 architecture is commonly used in smaller, simple task applications due to its ease of development.
Although conceptually simple, this architecture is not conducive to large-scale application development because, inevitably, a great deal of functionality is duplicated in each JSP. Also, the Model 1 architecture unnecessarily ties together the business logic and presentation logic of the application. Combining business logic with presentation logic makes it hard to introduce a new 'view' or access point in an application. For example, in addition to an HTML interface, you might want to include a Wireless Markup Language (WML) interface for wireless access. In this case, using Model 1 will unnecessarily require the duplication of the business logic with each instance of the presentation code.
References
Software design patterns
Software architecture
|
https://en.wikipedia.org/wiki/King%27s%20Valley%20II
|
King's Valley II: The Seal of El Giza is a game for MSX1 and MSX2 computers by Konami. It is a sequel to King's Valley from 1985.
The MSX2 version only saw a release in Japan. The same goes for a very rare "contest" version. The contest, held by four Japanese MSX magazines (among them MSX.FAN and Beep), was about making levels with the game's built-in level editor. The winners of this contest received a gold cartridge with the twenty custom stages on it. Custom levels can be saved to either disk or tape, and the levels are interchangeable between the MSX1 and MSX2 versions.
Story
Far, far into the future, interplanetary archaeologist Vick XIII makes a shocking discovery.
The pyramids on earth are malfunctioning devices of alien origin with enough energy to destroy earth.
And it's up to Vick to switch off the core functions of El Giza.
Gameplay
The game consists of six pyramids each with its own wall engravings and color pattern; every pyramid contains 10 levels.
The idea of the game is to collect crystals called soul stones in each level, solving the different puzzles and evading or killing the enemies using the many tools and weapons available, in order to unlock the exit door that takes the player to the next level.
Versions
The later Konami game Castlevania: Portrait of Ruin for the Nintendo DS reuses the stage musics "In Search of the Secret Spell" and "Sandfall" for the Egyptian area of the game.
The MSX2 version was the same game except for minor changes: the music was remixed and some of the items and backgrounds were recolored.
Castlevania: Harmony of Despair uses a remix of the Stage Clear theme as the Stage Clear theme for Chapter 7: Beauty, Desire, Situation Dire (not found on the OST).
References
1988 video games
Ancient Egypt in fiction
Konami games
MSX games
MSX2 games
Puzzle video games
Video games scored by Kinuyo Yamashita
Video games scored by Michiru Yamane
Video games set in Egypt
Video games developed in Japan
|
https://en.wikipedia.org/wiki/Compunet
|
Compunet was a United Kingdom-based interactive service provider, catering primarily for the Commodore 64 but later for the Amiga and Atari ST. It was also known by its users as CNet. It ran from 1984 to May 1993.
Overview
Compunet hosted a wide range of content, and users were permitted to create their own sections within which they could upload their own graphics, articles and software. A custom editor existed in which the "frames" that made up the pages could be created either offline or when connected to the service. The editor's cache allowed users to quickly download a set of pages, then disconnect from the service in order to read them, thus saving on telephone costs.
The user interface used a horizontally scrolling menu system, known as the "duck shoot", and navigation was essentially "select and click" with the ability to jump directly to pages with the use of keywords. Content could be voted upon by the users.
The service had many features which were considerably ahead of their time, especially when compared to the Internet of today:
Pricing of content (Optional. Users could price their own content).
Voting on content quality.
"Upload anywhere" of content: programs, graphics and text (Unless a section was protected).
Software could be dongle protected (the custom modem doubled as the dongle in this instance).
WYSIWYG editing of content.
Chat room (known as Partyline), which allowed users to create their own rooms (similar principles have been shown in IRC).
The server hosted Multi-User Dungeon (MUD) (by Richard Bartle),
Federation II, and Realm. The first two of these games continue to run on the Internet today.
Games creator Jeff Minter and musician Rob Hubbard, along with various members of the demo scene, had a presence on the network.
History
In 1982, Commodore UK decided to construct a nationwide computer network for the use of teachers. The Commodore PET computer had been very successful. Nick Green developed the specification of
|
https://en.wikipedia.org/wiki/Transport%20triggered%20architecture
|
In computer architecture, a transport triggered architecture (TTA) is a kind of processor design in which programs directly control the internal transport buses of a processor. Computation happens as a side effect of data transports: writing data into a triggering port of a functional unit triggers the functional unit to start a computation. This is similar to what happens in a systolic array. Due to its modular structure, TTA is an ideal processor template for application-specific instruction set processors (ASIP) with customized datapath but without the inflexibility and design cost of fixed function hardware accelerators.
Typically a transport triggered processor has multiple transport buses and multiple functional units connected to the buses, which provides opportunities for instruction level parallelism. The parallelism is statically defined by the programmer. In this respect (and obviously due to the large instruction word width), the TTA architecture resembles the very long instruction word (VLIW) architecture. A TTA instruction word is composed of multiple slots, one slot per bus, and each slot determines the data transport that takes place on the corresponding bus. The fine-grained control allows some optimizations that are not possible in a conventional processor. For example, software can transfer data directly between functional units without using registers.
Transport triggering exposes some microarchitectural details that are normally hidden from programmers. This greatly simplifies the control logic of a processor, because many decisions normally done at run time are fixed at compile time. However, it also means that a binary compiled for one TTA processor will not run on another one without recompilation if there is even a small difference in the architecture between the two. The binary incompatibility problem, in addition to the complexity of implementing a full context switch, makes TTAs more suitable for embedded systems than for general purpos
|
https://en.wikipedia.org/wiki/CISPR
|
The Comité International Spécial des Perturbations Radioélectriques (CISPR; International Special Committee on Radio Interference) was founded in 1934 to set standards for controlling electromagnetic interference in electrical and electronic devices and is a part of the International Electrotechnical Commission (IEC).
Organization
CISPR is composed of six technical subcommittees and one management subcommittee, each responsible for a different area, defined as:
AA - Radio-interference measurements and statistical methods
B - Interference relating to industrial, scientific, and medical radio-frequency apparatus, to other (heavy) industrial equipment, overhead power lines, high voltage equipment, and electric traction
D - Electromagnetic disturbances related to electric/electronic equipment on vehicles and internal combustion engine-powered devices
F - Interference relating to household appliances, tools, lighting equipment, and similar apparatus
H - Limits for the protection of radio frequencies
I - Electromagnetic compatibility of information technology equipment, multimedia equipment, and receivers
S - Steering Committee
The IEC describes the structure, officers, work programme, and other relevant details of CISPR on the CISPR Dashboard.
Technical standards
CISPR's standards cover the measurement of radiated and conducted interference and immunity for some products.
CISPR standards include:
CISPR 11 - Industrial, scientific, and medical equipment - Radio-frequency disturbance characteristics - Limits and methods of measurement
CISPR 12 - Vehicles, boats, and internal combustion engines - Radio disturbance characteristics - Limits and methods of measurement for the protection of off-board receivers
CISPR 14-1 - Electromagnetic compatibility - Requirements for household appliances, electric tools, and similar apparatus - Part 1: Emission
CISPR 14-2 - Electromagnetic compatibility - Requirements for household appliances, electric tools, and similar apparatus - Part 2: Immunity - Product family standard
CISPR 15 - Limits and
|
https://en.wikipedia.org/wiki/Traffic%20flow
|
In mathematics and transportation engineering, traffic flow is the study of interactions between travellers (including pedestrians, cyclists, drivers, and their vehicles) and infrastructure (including highways, signage, and traffic control devices), with the aim of understanding and developing an optimal transport network with efficient movement of traffic and minimal traffic congestion problems.
History
Attempts to produce a mathematical theory of traffic flow date back to the 1920s, when the American economist Frank Knight first produced an analysis of traffic equilibrium, which was refined into Wardrop's first and second principles of equilibrium in 1952.
Nonetheless, even with the advent of significant computer processing power, to date there has been no satisfactory general theory that can be consistently applied to real flow conditions. Current traffic models use a mixture of empirical and theoretical techniques. These models are then developed into traffic forecasts, and take account of proposed local or major changes, such as increased vehicle use, changes in land use or changes in mode of transport (with people moving from bus to train or car, for example), and to identify areas of congestion where the network needs to be adjusted.
Overview
Traffic behaves in a complex and nonlinear way, depending on the interactions of a large number of vehicles. Due to the individual reactions of human drivers, vehicles do not interact simply following the laws of mechanics, but rather display cluster formation and shock wave propagation, both forward and backward, depending on vehicle density. Some mathematical models of traffic flow use a vertical queue assumption, in which the vehicles along a congested link do not spill back along the length of the link.
In a free-flowing network, traffic flow theory refers to the traffic stream variables of speed, flow, and concentration. These relationships are mainly concerned with uninterrupted traffic flow, primarily found on freeways and expressways.
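As a sketch of how these stream variables relate, the fundamental identity flow = density × speed can be combined with an assumed speed–density relationship; the linear Greenshields model used below is only one classical choice, and the numerical parameters are illustrative rather than measured values.

```python
# Fundamental relationship of traffic streams: flow q = density k * speed v.
# The linear Greenshields speed-density model below is an illustrative
# assumption; empirical studies fit measured speed-density data instead.

def greenshields_speed(density, free_flow_speed=100.0, jam_density=120.0):
    """Speed in km/h for a given density in vehicles/km."""
    return free_flow_speed * (1.0 - density / jam_density)

def flow(density):
    """Flow in vehicles/h, using q = k * v."""
    return density * greenshields_speed(density)

# Under this model, flow peaks when density is half the jam density.
for k in (0, 30, 60, 90, 120):
    print(f"density {k:3d} veh/km -> flow {flow(k):6.0f} veh/h")
```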
|
https://en.wikipedia.org/wiki/Router%20on%20a%20stick
|
A router on a stick, also known as a one-armed router, is a router that has a single physical or logical connection to a network. It is a method of inter-VLAN routing in which one router is connected to a switch via a single cable. The router has physical connections to the broadcast domains where one or more VLANs require routing between them.
Devices on separate VLANs, like devices on physically separate local area networks, are unable to communicate with each other directly. A one-armed router is therefore often used to forward traffic between locally attached hosts on separate logical routing domains or to facilitate routing table administration, distribution, and relay.
Details
One-armed routers that perform traffic forwarding are often implemented on VLANs. They use a single Ethernet network interface port that is part of two or more Virtual LANs, enabling them to be joined. A VLAN allows multiple virtual LANs to coexist on the same physical LAN. This means that two machines attached to the same switch but assigned to different VLANs cannot send Ethernet frames to each other even though they pass over the same wires. If they need to communicate, then a router must be placed between the two VLANs to forward packets, just as if the two LANs were physically isolated. The only difference is that the router in question may contain only a single Ethernet network interface controller (NIC) that is part of both VLANs. Hence, "one-armed". While uncommon, hosts on the same physical medium may be assigned addresses on different networks. A one-armed router could be assigned an address on each network and used to forward traffic between locally distinct networks and to remote networks through another gateway.
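The toy Python model below illustrates the one-armed idea: every frame arrives on the same trunk interface with an 802.1Q VLAN tag, and routing between VLANs simply sends the frame back out of that interface with a different tag. The addressing is hypothetical (taken from the 192.0.2.0/24 documentation range), and a real deployment would use 802.1Q subinterfaces configured on the router, not this simulation.

```python
# Toy model of inter-VLAN routing over a single trunk interface.
# Real deployments configure 802.1Q subinterfaces on the router instead;
# the subnets below come from the 192.0.2.0/24 documentation range.
from ipaddress import ip_address, ip_network

# One logical subinterface (and one subnet) per VLAN, all on a single NIC.
vlan_subnets = {
    10: ip_network("192.0.2.0/25"),
    20: ip_network("192.0.2.128/25"),
}

def route(frame):
    """Receive a tagged frame on the trunk and send it back out, re-tagged."""
    dst = ip_address(frame["dst_ip"])
    for vlan, subnet in vlan_subnets.items():
        if dst in subnet:
            # The same single port carries the outgoing frame: "one-armed".
            return {**frame, "vlan": vlan}
    raise ValueError("no route to host")

frame = {"vlan": 10, "dst_ip": "192.0.2.130", "payload": b"hello"}
print(route(frame))  # leaves on the same trunk, now tagged for VLAN 20
```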
One-armed routers are also used for administration purposes such as route collection, multi hop relay and looking glass servers.
All traffic goes over the trunk twice, so the theoretical maximum sum of upload and download speed is the line rate. For a two-armed configuration, uploading does not need to impact download speed.
|
https://en.wikipedia.org/wiki/Helium%20mass%20spectrometer
|
A helium mass spectrometer is an instrument commonly used to detect and locate small leaks. It was initially developed in the Manhattan Project during World War II to find extremely small leaks in the gas diffusion process of uranium enrichment plants. It typically uses a vacuum chamber in which a sealed container filled with helium is placed. Helium leaks out of the container, and the rate of the leak is detected by a mass spectrometer.
Detection technique
Helium is used as a tracer because it penetrates small leaks rapidly. Helium also has the properties of being non-toxic, chemically inert and present in the atmosphere only in minute quantities (5 ppm). Typically a helium leak detector will be used to measure leaks in the range of 10⁻⁵ to 10⁻¹² Pa·m³·s⁻¹.
A flow of 10⁻⁵ Pa·m³·s⁻¹ is about 0.006 ml per minute at standard conditions for temperature and pressure (STP).
A flow of 10⁻¹² Pa·m³·s⁻¹ is about 0.003 ml per century at STP.
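The order of magnitude of these figures can be sanity-checked by converting a throughput in Pa·m³·s⁻¹ to a volumetric flow at standard pressure. The short sketch below assumes a standard pressure of 101,325 Pa; it is a back-of-the-envelope illustration, not a statement about any particular instrument.

```python
# Convert a leak rate given as throughput (Pa·m³/s) into a volumetric flow
# at standard pressure. Back-of-the-envelope check only; assumes 101 325 Pa.

STANDARD_PRESSURE_PA = 101_325.0

def leak_rate_ml_per_minute(throughput_pa_m3_per_s):
    """Volumetric flow in ml/min for a throughput given in Pa·m³/s."""
    m3_per_s = throughput_pa_m3_per_s / STANDARD_PRESSURE_PA
    return m3_per_s * 1e6 * 60  # m³ -> ml, per second -> per minute

print(f"{leak_rate_ml_per_minute(1e-5):.4f} ml/min")  # ≈ 0.006 ml/min
```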
Types of leaks
Two types of leaks are typically distinguished when helium is used as a tracer for leak detection: the residual leak and the virtual leak. A residual leak is a real leak due to an imperfect seal, a puncture, or some other hole in the system. A virtual leak is the semblance of a leak in a vacuum system caused by outgassing of chemicals trapped on or adhered to the interior of a system that is actually sealed. As the gases are released into the chamber, they can create a false positive indication of a residual leak in the system.
Uses
Helium mass spectrometer leak detectors are used in production line industries such as refrigeration and air conditioning, automotive parts, carbonated beverage containers, food packages, and aerosol packaging, as well as in the manufacture of steam products, gas bottles, fire extinguishers, tire valves, and numerous other products, including all vacuum systems.
Test methods
Global helium spray
This method requires the part to be tested to be connected to a helium leak detector. The outer surface of the part to be tested is then sprayed with helium.
|
https://en.wikipedia.org/wiki/Vendor%20Independent%20Messaging
|
VIM (Vendor Independent Messaging) was a standard API for applications to integrate with e-mail on Windows 3.x, proposed by Lotus, Borland, IBM & Novell in the early 1990s. Its main competitor was Microsoft's MAPI, which was the eventual winner of the MAPI v. VIM war. Ultimately, the choice of VIM or MAPI did not make a huge difference: bridges meant that an MAPI client could access a VIM provider and vice versa, and the rise of Internet e-mail in the mid-1990s rendered the panoply of proprietary e-mail systems which VIM and MAPI were meant to cater to largely irrelevant.
Email
|
https://en.wikipedia.org/wiki/Bit%20manipulation
|
Bit manipulation is the act of algorithmically manipulating bits or other pieces of data shorter than a word. Computer programming tasks that require bit manipulation include low-level device control, error detection and correction algorithms, data compression, encryption algorithms, and optimization. For most other tasks, modern programming languages allow the programmer to work directly with abstractions instead of bits that represent those abstractions.
Source code that does bit manipulation makes use of the bitwise operations: AND, OR, XOR, NOT, and possibly other operations analogous to the boolean operators; there are also bit shifts and operations to count ones and zeros, find high and low one or zero, set, reset and test bits, extract and insert fields, mask and zero fields, gather and scatter bits to and from specified bit positions or fields.
Integer arithmetic operators can also effect bit-operations in conjunction with the other operators.
Bit manipulation, in some cases, can obviate or reduce the need to loop over a data structure and can give manyfold speed-ups, as bit manipulations are processed in parallel.
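A few of the operations named above, written in Python as a small illustrative sketch (the value and bit positions are arbitrary):

```python
# Common single-bit and bit-field manipulations, shown on an arbitrary value.

x = 0b1011_0010

set_bit2    = x | (1 << 2)        # set bit 2
clear_bit4  = x & ~(1 << 4)       # clear (reset) bit 4
toggle_bit0 = x ^ (1 << 0)        # flip bit 0
bit5_is_set = bool(x & (1 << 5))  # test bit 5

# Extract a 3-bit field starting at bit 4: shift it down, then mask.
field = (x >> 4) & 0b111

# Population count (number of one bits); int.bit_count() exists from Python 3.10.
ones = bin(x).count("1")

# Classic trick: x & (x - 1) clears the lowest set bit, so a nonzero power of
# two is exactly the case where the result is zero.
is_power_of_two = x != 0 and (x & (x - 1)) == 0

print(f"{set_bit2:#010b} {clear_bit4:#010b} {toggle_bit0:#010b}")
print(bit5_is_set, field, ones, is_power_of_two)
```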
Terminology
Bit twiddling, bit fiddling, bit bashing, and bit gymnastics are often used interchangeably with bit manipulation, but sometimes refer exclusively to clever or non-obvious ways or uses of bit manipulation, or to tedious or challenging low-level device-control data-manipulation tasks.
The term bit twiddling dates from early computing hardware, where computer operators would make adjustments by tweaking or twiddling computer controls. As computer programming languages evolved, programmers adopted the term to mean any handling of data that involved bit-level computation.
Bitwise operation
A bitwise operation operates on one or more bit patterns or binary numerals at the level of their individual bits. It is a fast, primitive action directly supported by the central processing unit (CPU), and is used to manipulate values for comparisons and calculations.
|
https://en.wikipedia.org/wiki/Municipal%20wireless%20network
|
A municipal wireless network is a citywide wireless network. This usually works by providing municipal broadband via Wi-Fi to large parts or all of a municipal area by deploying a wireless mesh network. The typical deployment design uses hundreds of wireless access points deployed outdoors, often on poles. The operator of the network acts as a wireless internet service provider.
Overview
Municipal wireless networks go far beyond the existing piggybacking opportunities available near public libraries and some coffee shops. The basic premise of carpeting an area with wireless service in urban centers is that it is more economical to the community to provide the service as a utility rather than to have individual households and businesses pay private firms for such a service. Such networks are capable of enhancing city management and public safety, especially when used directly by city employees in the field. They can also be a social service to those who cannot afford private high-speed services. When the network service is free and a small number of clients consume a majority of the available capacity, operating and regulating the network might prove difficult.
In 2003, Verge Wireless formed an agreement with Tropos Networks to build a municipal wireless network in the downtown area of Baton Rouge, Louisiana. Carlo MacDonald, the founder of Verge Wireless, suggested that such networks could give cities a way to improve economic development and give developers a way to build mobile applications that make use of faster bandwidth. Verge Wireless built networks for Baton Rouge, New Orleans, and other areas. Some applications include wireless security cameras, police mug shot software, and location-based advertising.
In 2007, some companies with existing cell sites offered high-speed wireless services where the laptop owner purchased a PC card or adapter based on EV-DO cellular data receivers or WiMAX rather than 802.11b/g. A few high-end laptops at that time featured built-in support for these services.
|